[
  {
    "path": ".github/Anatomy-of-Runbook.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n\n<h1>Anatomy of an unSkript Runbook</h1>\n\n\n<div class=\"warning\" style='padding:0.1em; background-color:#EFEFFF; color:#69337A'>\n<span>\n<p style='margin-top:1em; text-align:center'>\n<b>Runbook Definition</b></p>\n<p style='margin-left:1em;'>\nIn a computer system or network, a Runbook is a compilation of routine procedures and operations that the system administrator or operator carries out. System administrators in IT departments and NOCs use Runbooks as a reference. \n</p>\n<p style='margin-bottom:1em; margin-right:1em; text-align:right; font-family:Georgia'> <b>- Wikipedia</b> <i>(https://en.wikipedia.org/wiki/Runbook)</i>\n</p></span>\n</div>\n\n\n\n## unSkript Runbook\n\nAn unSkript Runbook is a collection of atomic routines called unSkript Actions. Think of them as building blocks (like Legos) with which you can construct any model you wish. Each Action is a modular Python function that accomplishes a well-defined task. Using these Actions, you can construct an unSkript Runbook to accomplish a given task. In that sense, an unSkript Runbook is a collection of Actions and/or informational text that together accomplish a pre-defined task. \n\n<img src=\"https://github.com/unskript/Awesome-CloudOps-Automation/blob/master/.github/images/anatomy.png\">\n<br>\n<img src=\"https://github.com/unskript/Awesome-CloudOps-Automation/blob/master/.github/images/ui.png\">\n<br>\n\n\n## Actions\n\nActions (sometimes referred to here as Legos) are the atomic parts of an unSkript Runbook. Here is a sample Action that performs a well-defined task. 
\n\n```python\nfrom typing import Dict\n\n\ndef aws_get_instance_details(handle, instance_id: str, region: str) -> Dict:\n    \"\"\"aws_get_instance_details Returns instance details.\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type instance_id: string\n        :param instance_id: The EC2 instance id.\n\n        :type region: string\n        :param region: Region for the instance.\n\n        :rtype: Dict with the instance details.\n    \"\"\"\n\n    ec2client = handle.client('ec2', region_name=region)\n    instances = []\n    response = ec2client.describe_instances(\n        Filters=[{\"Name\": \"instance-id\", \"Values\": [instance_id]}])\n    for reservation in response[\"Reservations\"]:\n        for instance in reservation[\"Instances\"]:\n            instances.append(instance)\n\n    return instances[0]\n```\n\nThis Action expects three parameters as inputs: \n  1. `handle` is an Object of type `Connector AWS`. \n  2. `instance_id` is the `aws ec2` instance identifier.\n  3. `region` is the `aws region` where the `aws ec2` instance can be found.\n\n\nActions depend on the respective connector. What this means is that we need to \ncreate an AWS connector before using this AWS Action. You can create any supported\nconnector by clicking on `Credentials` -> `Add New Credential`. \n"
  },
  {
    "path": ".github/CONTRIBUTING.md",
    "content": "# Contributing\nThanks for taking the time to contribute and for helping us make this project better! The following is a set of guidelines for contributing to the project.\nPlease note that we have a code of conduct; please follow it in all your interactions with the project.\n \n## Submitting issues\n\nWe have several forms for your issues:\n* [Bug Report](https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&projects=&template=bug_report.md&title=): Ensure you have followed the steps in the form so we can best assist you.\n* [Feature Request](https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&projects=&template=feature_request.md&title=): Do you have an awesome idea to make this project better?  We want to hear it!\n* [Action Creation](https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=action%2Ctriage&projects=&template=add_action.yml&title=%5BAction%5D%3A+): Do you have an idea for a new Action?\n* [RunBook idea](https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=runbook%2Ctriage&projects=&template=add_runbook.yml&title=%5BRunBook%5D%3A+): Have a request or idea for a RunBook that will make the life of an SRE better?  File it using this form.\n\n \n \n### Version control branching\n-----------------------------\n \n* Always **make a new branch** for your work. \n* **Base your new branch off of the master branch** on the main\n repository.\n \n\n### Create a new xRunBook\nYour RunBooks are stored locally at ```$HOME/Awesome-CloudOps-Automation/custom/runbooks```. Copy an existing xRunBook and rename it. It will appear in the Welcome page on refresh. Click to open it.\n\n  1. All created RunBooks have an .ipynb file. You'll need to create a .json file with metadata about your RunBook.  
Copy one from another RunBook in the repository, and update the values for each parameter.\n  2. Use the sanitize.py script to remove all parameters and outputs from your Runbook:\n  ```shell\n      python3 sanitize.py -f <ipynb file> \n  ```\n  3. Copy the saved RunBook (.json and .ipynb) files from the Custom folder into the folder of the Connector used, and submit a PR!\n  \n\n### Create a new Action\n\n#### Create a new Action inside an existing RunBook\n\n   1. If you will not use external credentials, click *+Add Action* at the top of the menu.\n   2. If you will be using an existing credential, add an existing Action for that service (like AWS), and edit the code to create your new Action.\n   3. If the service you'd like to build for does not have credentials yet, please [file an issue](https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=Credential%2Ctriage&template=add_credential.yml&title=%5BCredential%5D%3A+).\n   \n#### Creating and connecting your Action\n\n1. [Creating Custom Actions](https://docs.unskript.com/unskript-product-documentation/guides/actions/create-custom-actions) describes the steps to create your own Action.\n2.  To submit to OSS, follow the [Submit An Action](https://docs.unskript.com/unskript-product-documentation/guides/contribute-to-open-source#actions) instructions.  \n\n \n## Support Channels\n---\nWhether you are a user or contributor, official support channels include:\n- GitHub issues: https://github.com/unskript/Awesome-CloudOps-Automation/issues/new\n- Slack: https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation\n \n### Extending Docker\n You can use our base Docker image and extend its functionality to fit your needs. Follow this [document](./README_extending_docker.md) to create and build your own custom Docker image."
  },
  {
    "path": ".github/DEVELOPERGUIDE.md",
    "content": "\n<p align=\"center\">\n  <a href=\"https://github.com/unskript/Awesome-CloudOps-Automation\">\n    <img src=\"https://unskript.com/assets/favicon.png\" alt=\"Logo\" width=\"80\" height=\"80\">\n  </a>\n<p align=\"center\">\n  <h3 align=\"center\">Action Development Guide</h3>\n  <p align=\"center\">\n    CloudOps automation made simple!\n    <br />\n    <br />\n    <a href=\"https://unskript.com/blog\">Visit our blog</a>\n    ·\n    <a href=\"https://www.youtube.com/channel/UCvtSYNHVvuogq2u-F7UDMkw\">YouTube Tutorials</a>\n    ·\n    <a href=\"https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&template=bug_report.md&title=\">Report Bug</a>\n    ·\n    <a href=\"https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&template=feature_request.md&title=\">Request Feature</a>\n  </p>\n</p>\n\n\n# Actions\n\nActions are the atomic units of xRunBooks.  All xRunBooks are composed of Actions, and each Action is a step that progresses the xRunBook.\n\nIn this document, we'll walk through the anatomy of a Lego/Action, how they are created, and how they work.\n\n> TL;DR: If you build your Action with the open-source Docker build and save it to your computer, these files will be generated for you. You'll have to modify two files before making a contribution: \n  * the JSON file (to update the parameters)\n  * the README (tell us what your Action does and how it works)\n\n\n# Lego Authoring Guidelines\n\n## Directory Structure\n\nThe directory structure is:\n\n1. CONNECTOR is a directory of xRunBooks and Legos/Actions that are run for a particular service/API/etc. (for example: Redis, AWS or Slack)\n2. 
Inside the CONNECTOR directory will be two files for each xRunBook (a JSON file and the actual xRunBook in the .ipynb file), and the legos subdirectory will hold all of the Actions.\n\n```\nCONNECTOR\n    |- __init__.py\n    |-  RUNBOOKS \n    |-  legos\n          |- __init__.py\n          |- LEGO1\n          |     |- __init__.py\n          |     |- README.md\n          |     |- LEGO1.json \n          |     |- LEGO1.py\n          |     |- LEGO1_SUPPORTING.png/gif/jpeg \n          | \n          |- LEGO2\n                |- __init__.py\n                |- README.md\n                |- LEGO2.json\n                |- LEGO2.py\n                |- LEGO2_SUPPORTING.png/gif/jpeg \n```\n\nHere's an example structure for AWS, with a Lego called aws_delete_volume:\n```\nAWS\n |- Resize_PVC.ipynb\n |- __init__.py\n |- legos\n      |- __init__.py\n      |- aws_delete_volume\n             |- __init__.py\n             |- README.md\n             |- aws_delete_volume.json\n             |- aws_delete_volume.py\n\n```\n\n 1. Every directory under the CONNECTOR will have an __init__.py file (essential to mark it as a module/sub-module).  \n\n 2. Every CONNECTOR will have a legos directory. (Example: AWS/legos)\n\n 3. Under the legos directory, every Lego will have a directory of the same name. Example: aws_delete_volume will have aws_delete_volume.py underneath it. \n\n 4. Every Lego directory will have:\n    1. [README.md](#readmemd)\n    2. [JSON File](#json-file)\n    3. [py file](#python-file) \n    \n    You may have additional files if your README has images.\n\n\n## README.md\n\nThe README.md explains what the Action is supposed to do. It should contain:\n\n  1. **Action Title** \n      ```\n        Example:\n          <h2>Delete AWS EBS Volume </h2>\n      ```\n\n  2.  
**Description**: explains what the Lego is intended to do.\n\n      ```\n      This Action deletes an AWS EBS volume and returns a list of deletion statuses.\n      ```\n\n  3. **Action Details**: here we explain the Action signature and the different input fields to the Action.  It's also nice to add an example of how the Action might be used:\n\n      ```\n      aws_delete_volumes(handle: object, volume_id: str, region: str)\n\n      handle: Object of type unSkript AWS Connector\n      volume_id: Volume ID needed to delete a particular volume\n      region: Used to filter the volume for a specific region\n      ```\n        \n      Example Usage:\n\n           aws_delete_volumes(handle,\n                           \"vol-039ce61146a4d7901\",\n                           \"us-west-2\")\n    \n 4. **Action Input**: explains how many parameters are needed for the Action, which of them are mandatory, and which of them are optional. \n\n ```\n\nThis Action takes three inputs: handle, volume_id and region. All three are required.\n ```\n\n 5. **Action Output**: A sample output from the Action upon completion.  Be sure to remove sensitive values. \n\n\n## Action JSON file\n\nIf you created your Action with the Docker build of unSkript, the JSON file is generated for you. 
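If you are writing or editing the metadata file by hand instead, a quick sanity check can catch a missing field before you submit. Here is an illustrative sketch (the `check_action_json` helper is hypothetical, not part of unSkript; the field names are the mandatory keys listed in this guide):

```python
# Illustrative helper (not part of unSkript): verify that an Action's
# metadata JSON contains every mandatory field described in this guide.
import json

REQUIRED_FIELDS = [
    'action_title',
    'action_description',
    'action_type',
    'action_entry_function',
    'action_needs_credential',
    'action_output_type',
    'action_supports_poll',
    'action_supports_iteration',
    'action_categories',
]

def check_action_json(path):
    '''Return the list of mandatory fields missing from the JSON file.'''
    with open(path) as f:
        metadata = json.load(f)
    return [field for field in REQUIRED_FIELDS if field not in metadata]
```

Running it against your Action's .json file and getting back an empty list means all mandatory fields are present.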
\n\nHere is an example JSON file:\n\n```json\n{\n    \"action_title\": \"Delete AWS EBS Volume by Volume ID\",\n    \"action_description\": \"Delete AWS Volume by Volume ID\",\n    \"action_type\": \"LEGO_TYPE_AWS\",\n    \"action_entry_function\": \"aws_delete_volumes\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\": []\n  }\n  \n```\nAll of these fields are mandatory.\n\n* **Action Title**: The human-readable title of your Lego\n* **Action Description**: a text description of what the Lego does\n* **Action Type**: the connector type this Lego runs against (LEGO_TYPE_AWS in the example above)\n* **Action Entry Function**: the name of the Python function that is called when the Action runs (aws_delete_volumes in the example above)\n* **Action Needs Credential**: Boolean - are the credentials for this connector required?\n* **Action Output Type**: A string? A list? What does the output look like?\n* **Action Supports Poll**: can we poll this Action if it takes a while to finish?\n* **action_supports_iteration**: Can we run this Action many times with multiple inputs?\n* **action_categories**: categories that appear in the documentation - for added visibility of your Action.\n\n\n## Python file\n\nThis is the Python file that is run in the xRunBook.  Examples can be found in the various Lego directories in this repository.\n\nThe fastest way to create the Python file is to create your Action in the Docker build of unSkript. When you save your Action (from the three-dot menu next to the \"Run Action\" button), it will be saved locally on your computer.\n\n## __init__.py\n\nThis can be copied from any other Action directory and pasted in.\n\n\n\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/add_action.yml",
    "content": "name: Create New Action\ndescription: File an Action Request\ntitle: \"[Action]: \"\nlabels: [\"action\", \"triage\"]\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        Thanks for taking the time to suggest a new Action!\n  - type: input\n    id: contact\n    attributes:\n      label: Contact Details\n      description: How can we get in touch with you if we need more info?\n      placeholder: ex. email@example.com\n    validations:\n      required: false\n  - type: checkboxes\n    id: healthcheck\n    attributes:\n      label: HealthCheck\n      description: Check Actions can be used in the Healthcheck flow\n      options:\n        - label: Is this Action a Check Action?\n  - type: textarea\n    id: Actionname\n    attributes:\n      label: Action Name\n      description: What is the Name for your Action?\n      placeholder: List all Open GitHub Pull Requests\n    validations:\n      required: true\n  - type: textarea\n    id: Actioninputs\n    attributes:\n      label: Action Inputs\n      description: What variables (and variable types) do you expect for this Action?\n      placeholder: region instance_id\n    validations:\n      required: true\n  - type: textarea\n    id: Actionoutputs\n    attributes:\n      label: Action Outputs\n      description: What do you want to see in the Action output (and its type)?\n      placeholder: list of all IP addresses\n    validations:\n      required: true\n  - type: textarea\n    id: comments\n    attributes:\n      label: Comments\n      description: Do you have any additional information for this Action?\n    validations:\n      required: false\n  - type: checkboxes\n    id: terms\n    attributes:\n      label: Code of Conduct\n      description: By submitting this issue, you agree to follow our [Code of Conduct](https://example.com)\n      options:\n        - label: I agree to follow this project's Code of Conduct\n          required: true\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/add_chatgpt_action.yml",
    "content": "---\nname: Create New Action with files\ndescription: File an Action Request with files\ntitle: \"[Action]: \"\nlabels:\n  - action\n  - triage\nbody:\n  - type: markdown\n    attributes:\n      value: >\n        Thanks for taking the time to suggest a new Action! If you have used\n        ChatGPT to generate the files for an action, you can submit them here.\n  - type: input\n    id: contact\n    attributes:\n      label: Contact Details\n      description: How can we get in touch with you if we need more info?\n      placeholder: ex. email@example.com\n    validations:\n      required: false\n  - type: textarea\n    id: Actionname\n    attributes:\n      label: Action Name\n      description: What is the Name for your Action?\n      placeholder: List all Open GitHub Pull Requests\n    validations:\n      required: true\n  - type: textarea\n    id: Actionreadme\n    attributes:\n      label: Action Readme\n      description: Paste in your README file (in Markdown)\n      placeholder: null\n    validations:\n      required: true\n  - type: textarea\n    id: Actionjson\n    attributes:\n      label: Action json\n      description: Paste in your JSON file\n      placeholder: null\n    validations:\n      required: true\n  - type: textarea\n    id: Actionpy\n    attributes:\n      label: Action python\n      description: Paste in your Python file\n    validations:\n      required: true\n  - type: textarea\n    id: Actionoutputs\n    attributes:\n      label: Action Outputs\n      description: What do you want to see in the Action output (and its type)?\n      placeholder: null\n    validations:\n      required: true\n  - type: textarea\n    id: comments\n    attributes:\n      label: Comments\n      description: Do you have any additional information for this Action?\n    validations:\n      required: false\n  - type: checkboxes\n    id: terms\n    attributes:\n      label: Code of Conduct\n      description: By submitting this issue, you agree to follow 
our [Code of\n        Conduct](https://example.com)\n      options:\n        - label: I agree to follow this project's Code of Conduct\n          required: true\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/add_credential.yml",
    "content": "name: Create New Credential\ndescription: File a Credential Request\ntitle: \"[Credential]: \"\nlabels: [\"Credential\", \"triage\"]\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        Thanks for taking the time to suggest a new Credential!\n  - type: input\n    id: contact\n    attributes:\n      label: Contact Details\n      description: How can we get in touch with you if we need more info?\n      placeholder: ex. email@example.com\n    validations:\n      required: false\n  - type: textarea\n    id: credentialname\n    attributes:\n      label: Credential Name\n      description: What is the service you would like to connect to unSkript?\n    validations:\n      required: true\n  - type: textarea\n    id: Credentialinputs\n    attributes:\n      label: Credential type\n      description: What is the authentication procedure with this service? Examples: API key, or key/secret\n    validations:\n      required: true\n  - type: textarea\n    id: comments\n    attributes:\n      label: Comments\n      description: Do you have any additional information for this Connection?\n    validations:\n      required: false\n  - type: checkboxes\n    id: terms\n    attributes:\n      label: Code of Conduct\n      description: By submitting this issue, you agree to follow our [Code of Conduct](https://example.com)\n      options:\n        - label: I agree to follow this project's Code of Conduct\n          required: true\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/add_runbook.yml",
    "content": "name: Create New RunBook\ndescription: File a RunBook Request\ntitle: \"[RunBook]: \"\nlabels: [\"runbook\", \"triage\"]\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        Thanks for taking the time to suggest a new runbook!\n  - type: input\n    id: contact\n    attributes:\n      label: Contact Details\n      description: How can we get in touch with you if we need more info?\n      placeholder: ex. email@example.com\n    validations:\n      required: false\n  - type: textarea\n    id: runbookname\n    attributes:\n      label: RunBook Name\n      description: What is the Name for your runbook?\n      placeholder: List all Open GitHub Pull Requests\n    validations:\n      required: true\n  - type: textarea\n    id: runbookinputs\n    attributes:\n      label: RunBook Inputs\n      description: What variables (and variable types) do you expect for this runbook?\n      placeholder: region instance_id\n    validations:\n      required: true\n  - type: textarea\n    id: runbookActions\n    attributes:\n      label: RunBook Actions\n      description: What actions should be in this runbook?  Do outputs from actions tie into other actions?\n    validations:\n      required: true\n  - type: textarea\n    id: comments\n    attributes:\n      label: Comments\n      description: Do you have any additional information for this Runbook?\n    validations:\n      required: false\n  - type: checkboxes\n    id: terms\n    attributes:\n      label: Code of Conduct\n      description: By submitting this issue, you agree to follow our [Code of Conduct](https://example.com)\n      options:\n        - label: I agree to follow this project's Code of Conduct\n          required: true\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n**Describe the bug**\nA clear and concise description of what the bug is.\n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Go to '...'\n2. Click on '....'\n3. Scroll down to '....'\n4. See error\n\n**Expected behavior**\nA clear and concise description of what you expected to happen.\n\n**Screenshots**\nIf applicable, add screenshots to help explain your problem.\n\n**Desktop (please complete the following information):**\n - OS: [e.g. iOS]\n - Browser [e.g. chrome, safari]\n - Version [e.g. 22]\n\n**Smartphone (please complete the following information):**\n - Device: [e.g. iPhone6]\n - OS: [e.g. iOS8.1]\n - Browser [e.g. stock browser, safari]\n - Version [e.g. 22]\n\n**Additional context**\nAdd any other context about the problem here.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "content": "blank_issues_enabled: false\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\n**Additional context**\nAdd any other context or screenshots about the feature request here.\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE/action_pr_template.yml",
    "content": "name: Action Pull Request\ndescription: Use this template to raise PR for your Action\nlabels:\n  - 'awesome-action'\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        Please include a summary of the change, motivation and context\n  - type: textarea\n    id: description\n    attributes:\n      label: Description \n      description: |\n        1. Describe the feature and how this change fits in it, e.g. this PR makes kafka message.max.bytes configurable to better support batching\n        2. Describe why this is better than the previous situation, e.g. this PR changes logic for retry on healthchecks to avoid false positives\n        3. Link relevant information about the bug (GitHub issue or Slack thread) and how this change solves it, e.g. this change fixes #99999 by adding a lock on read/write to avoid data races.\n      placeholder: |\n        ...\n    validations:\n      required: true\n  - type: textarea\n    id: testing\n    attributes:\n      label: Testing\n      description: Please describe the tests that you ran to verify your changes. Please summarize what you tested and what still needs to be tested, e.g. deployed and tested helm chart locally.\n      placeholder: ...\n    validations:\n      required: true\n  - type: markdown\n    attributes:\n      value: |\n        ### Checklist\n        - [ ] My changes generate no new warnings.\n        - [ ] I have added tests that prove my fix is effective or that my feature works.\n        - [ ] Any dependent changes have been merged and published.\n  - type: textarea\n    id: documentation\n    attributes: \n      label: Documentation\n      description: Make sure that you have documented corresponding changes in this repository.\n      placeholder: |\n        Include __important__ links regarding the implementation of this PR.\n        This usually includes an RFC or an aggregation of issues and/or individual \n        conversations that helped put this solution together. 
This helps ensure there is a good \n        aggregation of resources regarding the implementation."
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE/feature_request_template.yml",
    "content": "name: Feature request\ndescription: Use this template for requesting a feature/enhancement.\nlabels:\n  - 'feature-request'\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        Please provide the feature/enhancement request in as much detail as possible\n  - type: input\n    id: feature_name\n    attributes:\n      label: Feature Name\n      description: |\n        Describe the feature/enhancement you would like to have developed.\n        You can share any screenshots, diagrams, etc.\n      placeholder: |\n        My Awesome Feature\n    validations:\n      required: true\n  - type: textarea\n    id: feature\n    attributes:\n      label: Feature description\n      description: Describe the feature that you would like to see\n      placeholder: ...\n    validations:\n      required: true\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE/lego_pr_template.md",
    "content": "## Description\nPlease include a summary of the change, motivation and context.\n\n<!--\n- **on a feature**: describe the feature and how this change fits in it, e.g. this PR makes kafka message.max.bytes configurable to better support batching\n- **on a refactor**: describe why this is better than the previous situation, e.g. this PR changes logic for retry on healthchecks to avoid false positives\n- **on a bugfix**: link relevant information about the bug (github issue or slack thread) and how this change solves it e.g. this change fixes #99999 by adding a lock on read/write to avoid data races.\n-->\n\n\n### Testing\nPlease describe the tests that you ran to verify your changes. Please summarize what you tested and what still needs to be tested, e.g. deployed and tested helm chart locally. \n\n### Checklist:\n- [ ] My changes generate no new warnings.\n- [ ] I have added tests that prove my fix is effective or that my feature works.\n- [ ] Any dependent changes have been merged and published. \n\n### Documentation\nMake sure that you have documented corresponding changes in this repository. \n\n<!--\nInclude __important__ links regarding the implementation of this PR.\nThis usually includes an RFC or an aggregation of issues and/or individual conversations that helped put this solution together. This helps ensure there is a good aggregation of resources regarding the implementation.\n-->\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE/runbook_pr_template.md",
    "content": "## Description\nPlease describe what the runbook is accomplishing. \n\n>  Eg: This Runbook helps find and prune unused key pairs in AWS\n\n\n### Runbook Parameters\nPlease describe the parameters that must be filled in when this runbook is run.\n\n> Eg: This Runbook needs one parameter `region`\n>    region: string : the AWS Region in which to search for unused key pairs.\n\n### Runbook URL\n<!--\nThis is optional. If the Runbook was developed on, say, an unSkript-hosted tenant, please\nspecify the URL to the Runbook.  \n-->\n\n### Checks\nPlease include the list of checks this runbook has implemented. \n\n> Eg: This runbook implements a health check for MongoDB Server.\n\n### Checklist:\n- [ ] My runbook has parameters\n- [ ] Runbook parameters have default values\n- [ ] Have included Runbook URL\n- [ ] Have attached Screenshot of the Runbook\n- [ ] Runbook has checks included \n- [ ] Runbook has remediation included\n\n### Documentation\nMake sure that you have documented corresponding changes in this repository. \n\n<!--\nInclude __important__ links regarding the implementation of this PR.\nThis usually includes reference documentation about how to detect the issue\nand links (if any) to the remediation of the issue.\n-->\n"
  },
  {
    "path": ".github/code-of-conduct.md",
    "content": "# Unskript Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participation in our\ncommunity a harassment-free experience for everyone, regardless of age, body\nsize, visible or invisible disability, ethnicity, sex characteristics, gender\nidentity and expression, level of experience, education, socio-economic status,\nnationality, personal appearance, race, religion, or sexual identity\nand orientation.\n\nWe pledge to act and interact in ways that contribute to an open, welcoming,\ndiverse, inclusive, and healthy community.\n\n## Our Standards\n\nExamples of behavior that contributes to a positive environment for our\ncommunity include:\n\n* Demonstrating empathy and kindness toward other people\n* Being respectful of differing opinions, viewpoints, and experiences\n* Giving and gracefully accepting constructive feedback\n* Accepting responsibility and apologizing to those affected by our mistakes,\n  and learning from the experience\n* Focusing on what is best not just for us as individuals, but for the\n  overall community\n\nExamples of unacceptable behavior include:\n\n* The use of sexualized language or imagery, and sexual attention or\n  advances of any kind\n* Trolling, insulting or derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or email\n  address, without their explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\n  professional setting\n\n## Enforcement Responsibilities\n\nCommunity leaders are responsible for clarifying and enforcing our standards of\nacceptable behavior and will take appropriate and fair corrective action in\nresponse to any behavior that they deem inappropriate, threatening, offensive,\nor harmful.\n\nCommunity leaders have the right and responsibility to remove, edit, or reject\ncomments, commits, code, wiki edits, issues, and other 
contributions that are\nnot aligned to this Code of Conduct, and will communicate reasons for moderation\ndecisions when appropriate.\n\n## Scope\n\nThis Code of Conduct applies within all community spaces, and also applies when\nan individual is officially representing the community in public spaces.\nExamples of representing our community include using an official e-mail address,\nposting via an official social media account, or acting as an appointed\nrepresentative at an online or offline event.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported to the community leaders responsible for enforcement at [community@unskript.com](mailto:community@unskript.com).\nAll complaints will be reviewed and investigated promptly and fairly.\n\nAll community leaders are obligated to respect the privacy and security of the\nreporter of any incident.\n\n## Enforcement Guidelines\n\nCommunity leaders will follow these Community Impact Guidelines in determining\nthe consequences for any action they deem in violation of this Code of Conduct:\n\n### 1. Correction\n\n**Community Impact**: Use of inappropriate language or other behavior deemed\nunprofessional or unwelcome in the community.\n\n**Consequence**: A private, written warning from community leaders, providing\nclarity around the nature of the violation and an explanation of why the\nbehavior was inappropriate. A public apology may be requested.\n\n### 2. Warning\n\n**Community Impact**: A violation through a single incident or series\nof actions.\n\n**Consequence**: A warning with consequences for continued behavior. No\ninteraction with the people involved, including unsolicited interaction with\nthose enforcing the Code of Conduct, for a specified period of time. This\nincludes avoiding interactions in community spaces as well as external channels\nlike social media. Violating these terms may lead to a temporary or\npermanent ban.\n\n### 3. 
Temporary Ban\n\n**Community Impact**: A serious violation of community standards, including\nsustained inappropriate behavior.\n\n**Consequence**: A temporary ban from any sort of interaction or public\ncommunication with the community for a specified period of time. No public or\nprivate interaction with the people involved, including unsolicited interaction\nwith those enforcing the Code of Conduct, is allowed during this period.\nViolating these terms may lead to a permanent ban.\n\n### 4. Permanent Ban\n\n**Community Impact**: Demonstrating a pattern of violation of community\nstandards, including sustained inappropriate behavior,  harassment of an\nindividual, or aggression toward or disparagement of classes of individuals.\n\n**Consequence**: A permanent ban from any sort of public interaction within\nthe community.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage],\nversion 2.0, available at\nhttps://www.contributor-covenant.org/version/2/0/code_of_conduct.html.\n\nCommunity Impact Guidelines were inspired by [Mozilla's code of conduct\nenforcement ladder](https://github.com/mozilla/diversity).\n\n[homepage]: https://www.contributor-covenant.org\n\nFor answers to common questions about this code of conduct, see the FAQ at\nhttps://www.contributor-covenant.org/faq. Translations are available at\nhttps://www.contributor-covenant.org/translations.\n\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "version: 2\nupdates:\n  - package-ecosystem: github-actions\n    directory: /\n    schedule:\n      interval: daily\n"
  },
  {
    "path": ".github/guidelines-to-creating-runbook.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n\n<h1>Guideline for creating reusable xRunBooks </h1>\n\n<br>\n\n## 1 Introduction\n\nA Runbook is a collection of `Actions` which accomplish a well defined task. A Runbook is intended to be written once and used multiple times. So re-usability means the runbook needs to be `parameterized`.  Parameterization of a runbook is the process wherein we define what are the Inputs expected to the Runbook. For instance, if we are authoring a Runbook to list and delete all unused key-pairs in AWS, then we can think of `AWS region` as an `input parameter` for this runbook. \n\nThis document lists such guidelines when creating  re-usable runbooks.\n\n\n## 2 Guidelines\n\n1. To make runbook portable and re-usable we should not hard-code any values like `aws region` in the runbook. It should instead be taken as an input parameter to the runbook.\n2. It is customary to have a `markdown` cell preceding each `Action` cell where we explain what is being done in the `Action` cell. Like any good code, a Runbook with well described `markdown` cells increases readability of the runbook.\n3. A Runbook should have a clear `Steps` markdown cell where every step that is taken in the runbook is clearly explained. \n4. A Runbook can have unSkript defined `Action` and/or custom `Action` cells. But every `Action` cell should be preceded with a `markdown` cell explaining the intent of the `Action` cell.\n5. A Runbook shall list all the outputs clearly formatted and easy to read and understand. \n6. A runbook shall have a `Conclusion` markdown cell which summarizes what was done in the runbook. We may also include any links to help in debugging the issue that the runbook set out to solve. \n\n\n## 3 Runbook Etiquette\n\n1. Runbook names should use the \"_\" to replace spaces.\n2.  
Make sure there are no hard-coded values in the runbook. No magic variables in the runbook. Any variable being used should be well documented in the `Action` cell or in the `Markdown` cell.\n3. Keep the structure of the runbook in the form of `Markdown` followed by `Action`\n4. A Remediation section would help user know what are the next steps to take to resolve the issue at hand. \n5. If a remediation is known Eg: Pruning un-used key-pairs in a region, then the runbook should provide an `Action` to achieve the desired remediation to the user.  If it is not known, then the Runbook should offer Links to where further troubleshooting can be done. "
  },
  {
    "path": ".github/hfest_2022_resource.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n\n\n\n<h1>Hacktoberfest 2022 Resource</h1>\n\n## Google cloud resource available for testing\n\n### Storage Buckets (Object store) \nYour computer's filesystem uses directories to organize data which are then stored in files. Similarly, data that is kept on the Cloud are in the form of objects, which are then gathered in buckets(a bucket is basically a container used to store objects).\n  \n  ###### Examples\n  1. gs://hacktoberfest_bucket_1/\n  2. gs://hacktoberfest_bucket_2/\n  3. gs://hacktoberfest_public_bucket/\n\n\n### Compute instances (Compute Engine Instances) \nCompute Engine instances can run public images for Linux and Windows Servers, as well as private custom images created or imported from existing systems. Additionally, Docker containers, which are launched automatically on instances running the Container-Optimized OS public image, can be deployed.\n\n```NAME                    ZONE        MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS\nhacktoberfest-tagged    us-west1-b  e2-micro                   10.138.0.4                RUNNING\nhacktoberfest-untagged  us-west1-b  e2-micro                   10.138.0.3                RUNNING\n```\n\n### Identity Access Management (IAM) user\nIAM (Identity and Access Management) allows administrators to authorize who can take actions on specific resources, giving you complete control and visibility over Google Cloud resources.\n\n```\nDISPLAY NAME                            EMAIL                                                               DISABLED\n\nhacktober-test-user                     hacktober-test-user@hacktoberfest-2022.iam.gserviceaccount.com      False\n\n```\n\n### Filestore\nApplications that need a file system interface and a shared file system for data can use Filestore, a managed file storage 
service. The user gets a native experience for setting up managed network detached storage with their virtual machine in the compute engine and Google Kubernetes Engine. For many applications, it is the ideal choice since it provides low latency for file operations.\n\n```\nINSTANCE_NAME            LOCATION    TIER       CAPACITY_GB  FILE_SHARE_NAME  IP_ADDRESS      STATE     CREATE_TIME\nhacktoberfest-filestore  us-west1-b  BASIC_HDD  1024         hacktoberfest    10.101.128.210  CREATING  2022-10-11T23:12:25\n```\n\n\n### GKE (Google Kubernetes Engine) Cluster\nGoogle Kubernetes Engine (GKE) is a managed, production-ready environment for running containerized applications.\n\n```\nNAME                   LOCATION    MASTER_VERSION    MASTER_IP      MACHINE_TYPE  NODE_VERSION      NUM_NODES  STATUS\nhacktoberfest-cluster  us-west1-b  1.22.12-gke.2300  XX.XXX.XXX.XX  e2-medium     1.22.12-gke.2300  3          RUNNING\n\n```\n"
  },
  {
    "path": ".github/images/actionShield.json",
    "content": "{\"schemaVersion\": 1, \"label\": \"Action Count\", \"message\": \"539\", \"color\": \"green\"}"
  },
  {
    "path": ".github/images/runbookShield.json",
    "content": "{\"schemaVersion\": 1, \"label\": \"RunBook Count\", \"message\": \"81\", \"color\": \"orange\"}"
  },
  {
    "path": ".github/pull_request_template.md",
    "content": "## Description\nPlease include a summary of the change, motivation and context.\n\n<!--\n- **on a feature**: describe the feature and how this change fits in it, e.g. this PR makes kafka message.max.bytes configurable to better support batching\n- **on a refactor**: describe why this is better than previous situation e.g. this PR changes logic for retry on healthchecks to avoid false positives\n- **on a bugfix**: link relevant information about the bug (github issue or slack thread) and how this change solves it e.g. this change fixes #99999 by adding a lock on read/write to avoid data races.\n-->\n\n\n### Testing\nPlease describe the tests that you ran to verify your changes. Please summarize what did you test and what needs to be tested e.g. deployed and tested helm chart locally. \n\n### Checklist:\n- [ ] My changes generate no new warnings.\n- [ ] I have added tests that prove my fix is effective or that my feature works.\n- [ ] Any dependent changes have been merged and published. \n\n### Documentation\nMake sure that you have documented corresponding changes in this repository. \n\n<!--\nInclude __important__ links regarding the implementation of this PR.\nThis usually includes and RFC or an aggregation of issues and/or individual conversations that helped put this solution together. This helps ensure there is a good aggregation of resources regarding the implementation.\n-->\n"
  },
  {
    "path": ".github/workflows/all_module_test.yml",
    "content": "name: All Modules Import Test\non:\n  pull_request:\n    types: [ opened, reopened, edited, ready_for_review ]\n  push:\n\npermissions:\n    id-token: write\n    contents: read\n\nenv:\n    GITHUB_TOKEN: ${{ secrets.BUILDER_PAT_ENCODED }}\n\njobs:\n  all-module-import-test: \n    runs-on: ubuntu-latest \n    strategy:\n      fail-fast: true\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@1f99358870fe1c846a3ccba386cc2b2246836776 # v2.2.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n      \n      - uses: actions/checkout@v3\n\n      - name: Set up Python 3.11\n        uses: actions/setup-python@v4\n        with:\n          python-version: '3.11'\n\n      - name: Configure AWS Credentials\n        uses: aws-actions/configure-aws-credentials@v1\n        with:\n          aws-region: ${{ secrets.GHACTION_AWS_REGION }}\n          role-to-assume: ${{ secrets.GHACTION_AWS_ROLE }}\n          role-session-name: ${{ secrets.GHACTION_AWS_SESSION_NAME }}\n\n     \n      - name: Install system dependencies\n        run: |\n          pip install shyaml\n          pip install --upgrade pip\n          sudo apt update\n          sudo apt install -y wget\n          \n          # Install NumPy first with a compatible version\n          pip install numpy>=1.22.0\n          \n          # Install PyArrow with binary wheel - no build required\n          pip install pyarrow --only-binary=pyarrow\n          \n          # Continue with other dependencies\n          aws s3 cp ${{ secrets.BUILD_REQUIREMENTS }} /tmp/requirements.txt\n          pip install --no-cache-dir -r /tmp/requirements.txt || true\n          \n          # Install main and sub modules\n          aws s3 cp ${{ secrets.MAIN_MODULE_BUILD_PACKAGE }} /tmp/main_module.tar.gz\n          pip install --no-cache-dir /tmp/main_module.tar.gz\n          aws s3 cp ${{ secrets.SUB_MODULE_BUILD_PACKAGE }} 
/tmp/sub_module.tar.gz\n          pip install --no-cache-dir /tmp/sub_module.tar.gz\n          \n          # Additional dependencies\n          pip install --no-cache-dir matplotlib>=3.7.1\n          pip install setuptools wheel cython\n        \n      - name: Run All Modules Check\n        run: /usr/bin/env python all_modules_test.py\n"
  },
  {
    "path": ".github/workflows/build-and-release-docker-lite.yml",
    "content": "name: Build and Release Docker Lite\n\non:\n  workflow_call:\n    inputs:\n      enabled:\n        required: true\n        type: boolean\n      release_tag:\n        required: true\n        type: string\n      unskript_branch:\n        required: false\n        default: \"master\"\n        type: string\n      awesome_branch:\n        required: false\n        default: \"master\"\n        type: string\n      devops_branch:\n        required: false\n        default: \"master\"\n        type: string\n      build_number:\n        required: true\n        type: string\n      latest:\n        required: false\n        default: true\n        type: boolean\n\n  workflow_dispatch:\n    inputs:\n      enabled:\n        description: 'Workflow Enable Flag'\n        required: true\n        default: false\n        type: boolean\n      unskript_branch:\n        description: 'unSkript Branch name'\n        required: true\n        default: master\n        type: string\n      awesome_branch:\n        description: 'unSkript submodule awesome Branch name'\n        required: true\n        default: master\n        type: string\n      gotty_branch:\n        description: 'gotty Branch name'\n        required: true\n        default: master\n        type: string\n      devops_branch:\n        description: 'Devops Branch name'\n        required: false\n        default: master\n        type: string\n      build_number:\n        description: 'Docker build number'\n        required: true\n        type: string\n      build_target:\n          required: true \n          default: 'build-amd64'\n          options:\n            - build-amd64\n            - build-both \n            - build-arm64\n          type: choice\n      latest:\n        description: 'Docker Latest tag Branch name'\n        default: false\n        type: boolean\n\nconcurrency:\n  group: ${{ github.ref }}\n  cancel-in-progress: true\n\nenv:\n  DOCKER_REGISTRY: docker.io\n  DOCKER_IMAGE: unskript/awesome-runbooks\n  
DOCKER_USERNAME: ${{ secrets.DOCKER_USER }}\n  DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}\n  USERNAME: ${{ secrets.BUILD_USER }}\n  DOCKER_TARGET: linux/amd64, linux/arm64 \n\npermissions:\n  contents: read\n\njobs:\n  build-unskript:\n    runs-on: ubuntu-latest\n    if: ${{ inputs.enabled }}\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n\n      - name: Get current date\n        id: date\n        run: echo \"::set-output name=date::$(date +'%Y-%m-%d-%s')\"\n\n      - name: Configure APT and Install Dependencies\n        run: |\n          # Remove existing lists and clean apt cache\n          sudo rm -rf /var/lib/apt/lists/*\n          sudo apt-get clean\n          \n          # Configure main repository\n          echo \"deb http://archive.ubuntu.com/ubuntu $(lsb_release -cs) main restricted universe multiverse\" | sudo tee /etc/apt/sources.list\n          echo \"deb http://archive.ubuntu.com/ubuntu $(lsb_release -cs)-updates main restricted universe multiverse\" | sudo tee -a /etc/apt/sources.list\n          \n          # Update package lists\n          sudo apt-get update -y\n          \n          # Install ODBC packages\n          sudo apt-get install -y --no-install-recommends \\\n            unixodbc-dev \\\n            unixodbc \\\n            unixodbc-common \\\n            libodbcinst2\n          \n          # Clean up\n          sudo apt-get clean\n          sudo rm -rf /var/lib/apt/lists/*\n\n      - name: Set up Python 3.x\n        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # v2.3.3\n        with:\n          python-version: '3.9'\n\n      - name: Install Python dependencies\n        run: |\n          pip install shyaml\n          sudo apt-get update -y\n          sudo 
apt-get install -y --no-install-recommends unixodbc-dev unixodbc unixodbc-common libodbcinst2\n          sudo apt-get clean\n          sudo rm -rf /var/lib/apt/lists/*\n\n      # - name: Set up Python 3.x\n      #   uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # v2.3.3\n      #   with:\n      #     # Restricting python version to 3.9\n      #     python-version: '3.9'\n\n      # - name: Install system dependencies\n      #   run: |\n      #     pip install shyaml\n      #     #sudo apt-get install -y --allow-downgrades unixodbc-dev=2.3.9 unixodbc=2.3.9 odbcinst1debian2=2.3.9 odbcinst=2.3.9\n      #     sudo apt-get install -y --allow-downgrades unixodbc-common libodbcinst2\n          \n      - name: Checkout Code\n        run: |\n          wget -O /tmp/pandas-2.0.1.tar.gz https://files.pythonhosted.org/packages/6c/e0/73987b6ecc7246e02ab557240843f93fd5adf45d1355abb458aa1f2a0932/pandas-2.0.1.tar.gz\n          sudo pip install /tmp/pandas-2.0.1.tar.gz\n          cd $HOME\n          git clone https://${{ env.USERNAME }}:${{ secrets.BUILDER_PAT }}@github.com/unskript/unskript.git unskript\n          cd unskript\n          git checkout ${{ inputs.unskript_branch }}\n\n          # We use --upgrade-strategy only-if-needed and --use-deprecated=legacy-resolver to avoid\n          # a PIP dependency loop. Since we ask for an ubuntu-latest runner, it is a moving target\n          # for us, so we want to pin the packages that unskript needs to compile.\n          /usr/bin/env python -m pip install -r ./requirements.txt --upgrade --upgrade-strategy only-if-needed --use-deprecated=legacy-resolver\n          /usr/bin/env python -m pip install --upgrade protobuf\n          # We need to restrict URLLIB3 to 1.26.6 because from version 2.x onwards the DEFAULT_CIPHERS\n          # variable is deprecated. 
Unfortunately our boto3 and botocore that is needed for our\n          # unskript package does not work with the latest version of URLLIB3. Hence we need to\n          # restrict it to a fixed version of 1.26.6\n          /usr/bin/env python -m pip install --upgrade urllib3==1.26.6\n          /usr/bin/env python -m pip install --upgrade types-urllib3==1.26.13\n          /usr/bin/env python -m pip install google-api-python-client==2.77.0\n          /usr/bin/env python -m pip install --upgrade numpy==1.23.4\n          make awesome-submodule\n          cd awesome\n          git checkout ${{ inputs.awesome_branch }}\n          cd ..\n          make legoschema\n          [ -f \"setup-full.py\" ] &&  cp \"setup-full.py\" ./setup.py\n          /usr/bin/env python ./setup.py bdist_wheel\n          mv dist/code*tar /tmp\n          mv dist/unskript-0.1.0-py2.py3-none-any.whl /tmp\n\n      - uses: actions/upload-artifact@v4\n        with:\n          name: schema-${{ github.run_id }}\n          path: /tmp/code_snippet_schemas.tar\n\n      - uses: actions/upload-artifact@v4\n        with:\n          name: unskript-${{ github.run_id }}\n          path: /tmp/unskript-0.1.0-py2.py3-none-any.whl\n\n  build-gotty:\n    runs-on: \"ubuntu-latest\"\n    if: ${{ inputs.enabled }}\n    strategy:\n      fail-fast: false\n    steps:\n    - name: Harden Runner\n      uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n      with:\n        egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n   \n    - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n\n    - name: Set up Go\n      uses: actions/setup-go@v3\n      with:\n        go-version: 1.21\n    \n    - name: Checkout Code\n      run: |\n        cd $HOME\n        pwd\n        git clone https://${{ env.USERNAME }}:${{ secrets.BUILDER_PAT }}@github.com/unskript/gotty.git gotty\n\n    - name: \"Build & test\"\n      run: | \n        cd $HOME/gotty\n     
   git checkout ${{ inputs.gotty_branch }}\n        make tools test release-artifacts\n        ls -l $HOME/gotty/builds/pkg\n        mkdir -p /tmp/linux_amd64\n        mkdir -p /tmp/linux_arm64\n        mv $HOME/gotty/builds/pkg/linux_amd64/gotty /tmp/linux_amd64\n        mv $HOME/gotty/builds/pkg/linux_arm64/gotty /tmp/linux_arm64\n        ls -l /tmp\n\n    - name: Upload build artifacts\n      uses: actions/upload-artifact@v4\n      with:\n        name: gotty-linux-amd64-${{ github.run_id }}\n        path: /tmp/linux_amd64/gotty\n    - name: Upload build artifacts\n      uses: actions/upload-artifact@v4\n      with:\n        name: gotty-linux-arm64-${{ github.run_id }}\n        path: /tmp/linux_arm64/gotty\n\n  build-docker:\n    runs-on: ubuntu-latest\n    if: ${{ inputs.enabled }}\n    needs: [build-unskript, build-gotty]\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n\n      # - name: Update Spy2 to workaround for repo timeout issue\n      #   run: |\n      #      sudo gem install apt-spy2\n      #      sudo apt-spy2 fix --commit --launchpad --country=US\n      #      sudo apt-get update\n\n\n      - name: Set up Python 3.x\n        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # v2.3.3\n        with:\n          python-version: '3.9'\n\n      - name: Install system dependencies\n        run: |\n          sudo apt update --fix-missing\n          pip install shyaml\n\n      - name: Setup Docker Buildx\n        uses: crazy-max/ghaction-docker-buildx@126d331dc69f4a1aa02452e374835e6a5d565613 # v3.3.1\n        with:\n          version: latest\n          config: .github/buildkitd.toml\n\n      - name: Prepare Docker Buildx\n      
  id: prepare\n        run: |\n          echo ::set-output name=docker_platform::${DOCKER_TARGET}\n          echo ::set-output name=docker_image::${DOCKER_REGISTRY}/${DOCKER_IMAGE}\n          echo ::set-output name=version::${GITHUB_RUN_NUMBER}\n\n      - name: Docker Login\n        run: |\n          echo \"${DOCKER_PASSWORD}\" | docker login --username \"${DOCKER_USERNAME}\" --password-stdin\n\n\n      - name: Checkout Code\n        run: |\n          cd $HOME\n          pwd\n          git clone https://${{ env.USERNAME }}:${{ secrets.BUILDER_PAT }}@github.com/unskript/devops.git devops\n\n\n      - uses: actions/download-artifact@v4\n        with:\n          name: schema-${{ github.run_id }}\n          path: /tmp/unskript\n\n      - uses: actions/download-artifact@v4\n        with:\n          name: unskript-${{ github.run_id }}\n          path: /tmp/unskript\n      - uses: actions/download-artifact@v4\n        with:\n          name: gotty-linux-amd64-${{ github.run_id }}\n          path: /tmp/gotty/linux_amd64\n      - uses: actions/download-artifact@v4\n        with:\n          name: gotty-linux-arm64-${{ github.run_id }}\n          path: /tmp/gotty/linux_arm64\n\n      - name: Prepare to Build\n        run: |\n          cd $HOME/devops/dockers/jupyterlab/oss_docker_lite\n\n          ls -l /tmp/unskript/\n          tar xf /tmp/unskript/code_snippet_schemas.tar\n          cd downloads\n          mv /tmp/unskript/*.whl .\n          mkdir -p gotty/linux_amd64\n          mkdir -p gotty/linux_arm64\n          ls -l /tmp/gotty\n          ls -l  /tmp/gotty/linux_amd64\n          mv /tmp/gotty/linux_amd64/gotty gotty/linux_amd64\n          mv /tmp/gotty/linux_arm64/gotty gotty/linux_arm64\n          if [[ ${{ inputs.latest }} == \"true\" ]]; then\n              bt=\"${{ env.DOCKER_IMAGE }}:minimal-${{ inputs.build_number }} -t ${{ env.DOCKER_IMAGE }}:minimal-latest\"\n          else\n              bt=\"${{ env.DOCKER_IMAGE }}:minimal-${{ inputs.build_number }}\"\n        
  fi\n          echo \"BUILD_TAGS=$bt\" >> $GITHUB_ENV\n\n      - uses: geekyeggo/delete-artifact@54ab544f12cdb7b71613a16a2b5a37a9ade990af # v2.0.0\n        with:\n          name: |\n            unskript-${{ github.run_id }}\n            schema-${{ github.run_id }}\n            gotty-linux-amd64-${{ github.run_id }}\n            gotty-linux-arm64-${{ github.run_id }}\n      - name: Copy unskript-ctl files\n        run: |\n          # docker buildx create --name mybuilder\n          # docker buildx use mybuilder\n          # docker buildx inspect --bootstrap\n          # docker buildx ls\n          cd $HOME/devops/dockers/jupyterlab/oss_docker_lite\n          git checkout ${{ inputs.devops_branch }}\n          export BUILD_NUMBER=${{ inputs.build_number }}\n          make copy\n          cd $HOME/devops/dockers/jupyterlab/oss_docker_lite/\n          git clone https://${{ env.USERNAME }}:${{ secrets.BUILDER_PAT }}@github.com/unskript/unskript.git unskript\n          cd unskript\n          git checkout ${{ inputs.unskript_branch }}\n          make awesome-submodule\n          cd awesome\n          git checkout ${{ inputs.awesome_branch }}\n          cd ..\n          make syncrunbooks\n          #sed -i \"s/BUILD_NUMBER = .*/BUILD_NUMBER = \\\"${{ inputs.build_number }}\\\"/g\" awesome/unskript-ctl/unskript_ctl_version.py\n          # Lets ADD BUILD_NUMBER TO THE version file\n          echo \"BUILD_NUMBER = \\\"${{ inputs.build_number }}\\\"\" >> awesome/unskript-ctl/unskript_ctl_version.py\n          cp -Rf awesome/unskript-ctl/* $HOME/devops/dockers/jupyterlab/oss_docker_lite/\n          cp -Rf awesome/bin/* $HOME/devops/dockers/jupyterlab/oss_docker_lite/\n          cd $HOME/devops/dockers/jupyterlab/oss_docker_lite/\n          # docker buildx build --platform linux/amd64,linux/arm64 --push -t ${{ env.BUILD_TAGS }} .\n      \n      - name: Docker Build & Push\n        run: |\n          if [ \"${{ inputs.build_target }}\" = \"build-amd64\" ]; then\n            cd 
$HOME/devops/dockers/jupyterlab/oss_docker_lite/\n            docker buildx build --cache-from type=gha --cache-to type=gha,mode=max  --platform linux/amd64 --push -t ${{ env.BUILD_TAGS }} .\n          elif [ \"${{ inputs.build_target }}\" = \"build-arm64\" ]; then\n            cd $HOME/devops/dockers/jupyterlab/oss_docker_lite/\n            docker buildx build --cache-from type=gha --cache-to type=gha,mode=max  --platform linux/arm64 --push -t ${{ env.BUILD_TAGS }} .\n          elif [ \"${{ inputs.build_target }}\" = \"build-both\" ]; then\n              cd $HOME/devops/dockers/jupyterlab/oss_docker_lite/\n              docker buildx build --cache-from type=gha --cache-to type=gha,mode=max  --platform linux/amd64,linux/arm64 --push -t ${{ env.BUILD_TAGS }} .\n          fi\n      - name: Docker Scout\n        id: docker-scout\n        uses: docker/scout-action@v1\n        with:\n          command: cves\n          image: ${{ env.BUILD_TAGS }}\n          only-severities: critical,high\n          exit-code: true\n        \n      - name: Validate result of Docker Scout\n        if: failure()\n        run: |\n            sudo apt install -y jq curl\n            TOKEN=$(curl -s -H \"Content-Type: application/json\" -X POST -d '{\"username\": \"${{ env.DOCKER_USERNAME }}\", \"password\": \"${{ env.DOCKER_PASSWORD }}\"}' https://hub.docker.com/v2/users/login/ | jq -r .token)\n            REPO_LIST=$(curl -s -H \"Authorization: JWT ${TOKEN}\" https://hub.docker.com/v2/repositories/unskript/?page_size=200 | jq -r '.results | if . 
then .[] | .name else empty end')\n            for i in ${REPO_LIST}\n            do\n                curl -X DELETE -s -H \"Authorization: JWT ${TOKEN}\" https://hub.docker.com/v2/repositories/unskript/${i}/tags/${{ inputs.build_number }}/\n                echo \"DELETED IMAGE THAT FAILED DOCKER SCOUT TEST\"\n            done\n\n  cleanup:\n    runs-on: ubuntu-latest\n    if: ${{ inputs.enabled }}\n    needs: [build-docker]\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n      - name: Install system dependencies\n        run: |\n          pip install shyaml\n\n\n      - uses: geekyeggo/delete-artifact@54ab544f12cdb7b71613a16a2b5a37a9ade990af # v2.0.0\n        with:\n          name: |\n            unskript-${{ github.run_id }}\n            schema-${{ github.run_id }}\n            gotty-linux-amd64-${{ github.run_id }}\n            gotty-linux-arm64-${{ github.run_id }}\n"
  },
  {
    "path": ".github/workflows/build-and-release-docker.yml",
    "content": "name: Build and Release Docker\n\non:\n  workflow_call:\n    inputs:\n      enabled:\n        required: true\n        type: boolean\n      release_tag:\n        required: true\n        type: string\n      elyra_branch:\n        required: false\n        default: \"master\"\n        type: string\n      unskript_branch:\n        required: false\n        default: \"master\"\n        type: string\n      celltoolbar_branch:\n        required: false\n        default: \"master\"\n        type: string\n      snippet_branch:\n        required: false\n        default: \"master\"\n        type: string\n      awesome_branch:\n        required: false\n        default: \"master\"\n        type: string\n      devops_branch:\n        required: false\n        default: \"master\"\n        type: string\n      build_number:\n        required: true\n        type: string\n      latest:\n        required: false\n        default: true\n        type: boolean\n\n  workflow_dispatch:\n    inputs:\n      enabled:\n        description: 'Workflow Enable Flag'\n        required: true\n        default: false\n        type: boolean\n      elyra_branch:\n        description: 'Elyra Branch name'\n        required: true\n        default: master\n        type: string\n      unskript_branch:\n        description: 'unSkript Branch name'\n        required: true\n        default: master\n        type: string\n      celltoolbar_branch:\n        description: 'Celltoolbar Branch name'\n        required: true\n        default: master\n        type: string\n      snippet_branch:\n        description: 'Code Snippets Branch name'\n        required: true\n        default: master\n        type: string\n      awesome_branch:\n        description: 'unSkript submodule awesome Branch name'\n        required: true\n        default: master\n        type: string\n      devops_branch:\n        description: 'Devops Branch name'\n        required: false\n        default: master\n        type: string\n      
build_number:\n        description: 'Docker build number'\n        required: true\n        type: string\n      latest:\n        description: 'Docker Latest tag Branch name'\n        default: false\n        type: boolean\n\nconcurrency:\n  group: ${{ github.ref }}-full-build\n  cancel-in-progress: true\n\nenv:\n  DOCKER_REGISTRY: docker.io\n  DOCKER_IMAGE: unskript/awesome-runbooks\n  DOCKER_USERNAME: ${{ secrets.DOCKER_USER }}\n  DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}\n  USERNAME: ${{ secrets.BUILD_USER }}\n  DOCKER_TARGET: linux/amd64, linux/arm64\n\npermissions:\n  contents: read\n\njobs:\n  build-elyra:\n    runs-on: ubuntu-latest\n    if: ${{ inputs.enabled }}\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n      - name: Get current date\n        id: date\n        run: echo \"::set-output name=date::$(date +'%Y-%m-%d-%s')\"\n      - name: Install system dependencies\n        run: |\n          pip install shyaml\n\n      - name: Update Spy2 to workaround for repo timeout issue\n        run: |\n           sudo gem install apt-spy2\n           sudo apt-spy2 fix --commit --launchpad --country=US\n           sudo apt-get update\n\n      - name: Set up Python 3.x\n        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # v2.3.3\n        with:\n          python-version: '3.7'\n\n      - name: Checkout & Build Code\n        run: |\n          cd $HOME\n          git clone https://${{ env.USERNAME }}:${{ secrets.BUILDER_PAT }}@github.com/unskript/elyra.git elyra\n          cd elyra\n          git checkout ${{ inputs.elyra_branch }}\n          python3 ./setup.py sdist\n          mv dist/elyra*.tar.gz /tmp\n\n      - uses: 
actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2\n        with:\n          name: elyra-${{ github.run_id }}\n          path: /tmp/elyra-3.0.0.dev0.tar.gz\n\n  build-unskript:\n    runs-on: ubuntu-latest\n    if: ${{ inputs.enabled }}\n    needs: [build-elyra]\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n      - uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2\n        with:\n          name: elyra-${{ github.run_id }}\n          path: /tmp/elyra\n      - name: Get current date\n        id: date\n        run: echo \"date=$(date +'%Y-%m-%d-%s')\" >> $GITHUB_OUTPUT\n\n      - name: Update Spy2 to work around repo timeout issue\n        run: |\n           sudo gem install apt-spy2\n           sudo apt-spy2 fix --commit --launchpad --country=US\n           sudo apt-get update\n\n\n      - name: Set up Python 3.x\n        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # v2.3.3\n        with:\n          # Restricting python version to 3.9\n          python-version: '3.9'\n\n      - name: Install system dependencies\n        run: |\n          pip install shyaml\n          sudo apt-get install -y --allow-downgrades unixodbc-dev=2.3.7 unixodbc=2.3.7 odbcinst1debian2=2.3.7 odbcinst=2.3.7\n\n\n      - name: Checkout Code\n        run: |\n          wget -O /tmp/pandas-2.0.1.tar.gz https://files.pythonhosted.org/packages/6c/e0/73987b6ecc7246e02ab557240843f93fd5adf45d1355abb458aa1f2a0932/pandas-2.0.1.tar.gz\n          sudo pip install /tmp/pandas-2.0.1.tar.gz\n          cd $HOME\n          git clone https://${{ env.USERNAME }}:${{ secrets.BUILDER_PAT 
}}@github.com/unskript/unskript.git unskript\n          cd unskript\n          git checkout ${{ inputs.unskript_branch }}\n          /usr/bin/env python -m pip install /tmp/elyra/*.tar.gz\n          # We use --upgrade-strategy only-if-needed and --use-deprecated=legacy-resolver to avoid\n          # a pip dependency loop. Since the runner tracks ubuntu-latest, its packages are a\n          # moving target, so we pin the packages that unskript needs in order to build.\n          /usr/bin/env python -m pip install -r ./requirements.txt --upgrade --upgrade-strategy only-if-needed --use-deprecated=legacy-resolver\n          /usr/bin/env python -m pip install --upgrade protobuf\n          # We need to restrict urllib3 to 1.26.6 because from version 2.x onwards the DEFAULT_CIPHERS\n          # variable is deprecated. Unfortunately the boto3 and botocore versions that our\n          # unskript package needs do not work with the latest version of urllib3. 
Hence we need to\n          # restrict it to a fixed version of 1.26.6\n          /usr/bin/env python -m pip install --upgrade urllib3==1.26.6\n          /usr/bin/env python -m pip install --upgrade types-urllib3==1.26.13\n          /usr/bin/env python -m pip install google-api-python-client==2.77.0\n          make awesome-submodule\n          cd awesome\n          git checkout ${{ inputs.awesome_branch }}\n          cd ..\n          make legoschema\n          [ -f \"setup-full.py\" ] && cp \"setup-full.py\" ./setup.py\n          /usr/bin/env python ./setup.py bdist_wheel\n          mv dist/code*tar /tmp\n          mv dist/unskript-0.1.0-py2.py3-none-any.whl /tmp\n\n      - uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2\n        with:\n          name: schema-${{ github.run_id }}\n          path: /tmp/code_snippet_schemas.tar\n\n      - uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2\n        with:\n          name: unskript-${{ github.run_id }}\n          path: /tmp/unskript-0.1.0-py2.py3-none-any.whl\n\n\n  build-jlab-celltoolbar:\n    runs-on: ubuntu-20.04\n    if: ${{ inputs.enabled }}\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n      - name: Get current date\n        id: date\n        run: echo \"date=$(date +'%Y-%m-%d-%s')\" >> $GITHUB_OUTPUT\n\n      - name: Update Spy2 to work around repo timeout issue\n        run: |\n           sudo gem install apt-spy2\n           sudo apt-spy2 fix --commit --launchpad --country=US\n           sudo apt-get update\n\n      - name: Set up Python 3.x\n        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # v2.3.3\n   
     with:\n          python-version: '3.7'\n\n      - name: Install system dependencies\n        run: |\n          pip install shyaml\n          sudo apt install -y nodejs npm\n          pip install jupyterlab\n\n      - name: Checkout Code\n        run: |\n          cd $HOME\n          export NODE_OPTIONS=--openssl-legacy-provider\n          git clone https://${{ env.USERNAME }}:${{ secrets.BUILDER_PAT }}@github.com/unskript/jlab-enhanced-cell-toolbar.git jlab-enhanced-cell-toolbar\n          cd jlab-enhanced-cell-toolbar\n          git checkout ${{ inputs.celltoolbar_branch }}\n          /usr/bin/env python -m pip install jupyter-packaging\n          /usr/bin/env python -m pip install --upgrade markupsafe\n          /usr/bin/env python -m pip install --upgrade jinja2\n          jlpm install\n          /usr/bin/env python ./setup.py bdist_wheel\n          mv dist/*.whl /tmp\n\n\n      - uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2\n        with:\n          name: toolbar-${{ github.run_id }}\n          path: /tmp/jlab_enhanced_cell_toolbar-3.4.0-py3-none-any.whl\n\n  build-code-snippets:\n    runs-on: ubuntu-20.04\n    if: ${{ inputs.enabled }}\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n      - name: Get current date\n        id: date\n        run: echo \"date=$(date +'%Y-%m-%d-%s')\" >> $GITHUB_OUTPUT\n\n      - name: Update Spy2 to work around repo timeout issue\n        run: |\n           sudo gem install apt-spy2\n           sudo apt-spy2 fix --commit --launchpad --country=US\n           sudo apt-get update\n\n      - name: Set up Python 3.x\n        uses: 
actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # v2.3.3\n        with:\n          python-version: '3.7'\n\n      - name: Install system dependencies\n        run: |\n          pip install shyaml\n          sudo apt install -y nodejs npm\n          pip install jupyterlab\n\n      - name: Checkout Code\n        run: |\n          cd $HOME\n          git clone https://${{ env.USERNAME }}:${{ secrets.BUILDER_PAT }}@github.com/unskript/jupyterlab-code-snippets.git jupyterlab-code-snippets\n          cd jupyterlab-code-snippets\n          git checkout ${{ inputs.snippet_branch }}\n          /usr/bin/env python -m pip install jupyter-packaging\n          /usr/bin/env python -m pip install --upgrade markupsafe\n          /usr/bin/env python -m pip install --upgrade jinja2\n          jlpm install\n          /usr/bin/env python ./setup.py bdist_wheel\n          mv dist/*.whl /tmp\n\n\n      - uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2\n        with:\n          name: code-snippets-${{ github.run_id }}\n          path: /tmp/jupyterlab_code_snippets-2.1.1-py3-none-any.whl\n\n  build-docker:\n    runs-on: ubuntu-latest\n    if: ${{ inputs.enabled }}\n    needs: [build-unskript, build-elyra, build-jlab-celltoolbar, build-code-snippets]\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n\n      - name: Update Spy2 to work around repo timeout issue\n        run: |\n           sudo gem install apt-spy2\n           sudo apt-spy2 fix --commit --launchpad --country=US\n           sudo apt-get update\n\n\n      - name: Set up Python 3.x\n        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # v2.3.3\n   
     with:\n          python-version: '3.7'\n\n      - name: Install system dependencies\n        run: |\n          sudo apt update --fix-missing\n          pip install shyaml\n\n      - name: Setup Docker Buildx\n        uses: crazy-max/ghaction-docker-buildx@126d331dc69f4a1aa02452e374835e6a5d565613 # v3.3.1\n        with:\n          version: latest\n          config: .github/buildkitd.toml\n\n      - name: Prepare Docker Buildx\n        id: prepare\n        run: |\n          echo \"docker_platform=${DOCKER_TARGET}\" >> $GITHUB_OUTPUT\n          echo \"docker_image=${DOCKER_REGISTRY}/${DOCKER_IMAGE}\" >> $GITHUB_OUTPUT\n          echo \"version=${GITHUB_RUN_NUMBER}\" >> $GITHUB_OUTPUT\n\n      - name: Docker Login\n        run: |\n          echo \"${DOCKER_PASSWORD}\" | docker login --username \"${DOCKER_USERNAME}\" --password-stdin\n\n\n      - name: Checkout Code\n        run: |\n          cd $HOME\n          pwd\n          git clone https://${{ env.USERNAME }}:${{ secrets.BUILDER_PAT }}@github.com/unskript/devops.git devops\n\n      - uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2\n        with:\n          name: code-snippets-${{ github.run_id }}\n          path: /tmp/code_snippet\n\n      - uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2\n        with:\n          name: toolbar-${{ github.run_id }}\n          path: /tmp/jlab_enhanced_cell_toolbar\n\n      - uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2\n        with:\n          name: schema-${{ github.run_id }}\n          path: /tmp/unskript\n\n      - uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2\n        with:\n          name: unskript-${{ github.run_id }}\n          path: /tmp/unskript\n\n      - uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2\n        with:\n          name: elyra-${{ github.run_id }}\n          path: /tmp/elyra\n\n      - name: 
Prepare to Build\n        run: |\n          cd $HOME/devops/dockers/jupyterlab/oss_docker\n\n          ls -l /tmp/unskript/\n          ls -l /tmp/elyra/\n          ls -l /tmp/jlab_enhanced_cell_toolbar/\n          ls -l /tmp/code_snippet/\n          tar xf /tmp/unskript/code_snippet_schemas.tar\n          cd downloads\n          mv /tmp/unskript/*.whl .\n          mv /tmp/code_snippet/*.whl .\n          mv /tmp/jlab_enhanced_cell_toolbar/*.whl .\n          mv /tmp/elyra/elyra*tar.gz .\n          if [[ ${{ inputs.latest }} == \"true\" ]]; then\n              bt=\"${{ env.DOCKER_IMAGE }}:${{ inputs.build_number }} -t ${{ env.DOCKER_IMAGE }}:latest\"\n          else\n              bt=\"${{ env.DOCKER_IMAGE }}:${{ inputs.build_number }}\"\n          fi\n          echo \"BUILD_TAGS=$bt\" >> $GITHUB_ENV\n\n      - uses: geekyeggo/delete-artifact@54ab544f12cdb7b71613a16a2b5a37a9ade990af # v2.0.0\n        with:\n          name: |\n            elyra-${{ github.run_id }}\n            toolbar-${{ github.run_id }}\n            code-snippets-${{ github.run_id }}\n            unskript-${{ github.run_id }}\n            schema-${{ github.run_id }}\n\n      - name: Build & Push\n        run: |\n          docker buildx create --name mybuilder\n          docker buildx use mybuilder\n          docker buildx inspect --bootstrap\n          docker buildx ls\n          cd $HOME/devops/dockers/jupyterlab/oss_docker\n          git checkout ${{ inputs.devops_branch }}\n          export BUILD_NUMBER=${{ inputs.build_number }}\n          make copy\n          cd $HOME/devops/dockers/jupyterlab/oss_docker/\n          git clone https://${{ env.USERNAME }}:${{ secrets.BUILDER_PAT }}@github.com/unskript/unskript.git unskript\n          cd unskript\n          git checkout ${{ inputs.unskript_branch }}\n          make awesome-submodule\n          cd awesome\n          git checkout ${{ inputs.awesome_branch }}\n          cd ..\n          make syncrunbooks\n          cp -Rf awesome/unskript-ctl/* 
$HOME/devops/dockers/jupyterlab/oss_docker/\n          cp -Rf awesome/bin/* $HOME/devops/dockers/jupyterlab/oss_docker/\n          cp awesome/build/templates/Welcome_template.ipynb $HOME/devops/dockers/jupyterlab/oss_docker/runbooks/template.ipynb\n          cp awesome/build/templates/Welcome.ipynb $HOME/devops/dockers/jupyterlab/oss_docker/runbooks/Welcome.ipynb\n          cd $HOME/devops/dockers/jupyterlab/oss_docker/\n          cp $HOME/devops/dockers/jupyterlab/common/install_utils.sh .\n          docker buildx build --platform linux/amd64,linux/arm64 --push -t ${{ env.BUILD_TAGS }} .\n\n      - name: Docker Scout\n        id: docker-scout\n        uses: docker/scout-action@v1\n        with:\n          command: cves\n          # BUILD_TAGS can hold two tags when 'latest' is set, so scan the build-number tag only\n          image: ${{ env.DOCKER_IMAGE }}:${{ inputs.build_number }}\n          only-severities: critical,high\n          exit-code: true\n      \n      - name: Validate result of Docker Scout\n        if: failure()\n        run: |\n            sudo apt install -y jq curl\n            TOKEN=$(curl -s -H \"Content-Type: application/json\" -X POST -d '{\"username\": \"${{ env.DOCKER_USERNAME }}\", \"password\": \"${{ env.DOCKER_PASSWORD }}\"}' https://hub.docker.com/v2/users/login/ | jq -r .token)\n            REPO_LIST=$(curl -s -H \"Authorization: JWT ${TOKEN}\" https://hub.docker.com/v2/repositories/unskript/?page_size=200 | jq -r '.results | if . 
then .[] | .name else empty end')\n            for i in ${REPO_LIST}\n            do\n                curl -X DELETE -s -H \"Authorization: JWT ${TOKEN}\" https://hub.docker.com/v2/repositories/unskript/${i}/tags/${{ inputs.build_number }}/\n                echo \"DELETED IMAGE THAT FAILED DOCKER SCOUT TEST\"\n            done\n            \n  cleanup:\n    runs-on: ubuntu-latest\n    if: ${{ inputs.enabled }}\n    needs: [build-docker]\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n      - name: Install system dependencies\n        run: |\n          pip install shyaml\n\n\n      - uses: geekyeggo/delete-artifact@54ab544f12cdb7b71613a16a2b5a37a9ade990af # v2.0.0\n        with:\n          name: |\n            elyra-${{ github.run_id }}\n            toolbar-${{ github.run_id }}\n            code-snippets-${{ github.run_id }}\n            unskript-${{ github.run_id }}\n            schema-${{ github.run_id }}\n"
  },
  {
    "path": ".github/workflows/codeql.yml",
    "content": "# For most projects, this workflow file will not need changing; you simply need\n# to commit it to your repository.\n#\n# You may wish to alter this file to override the set of languages analyzed,\n# or to provide custom queries or build logic.\n#\n# ******** NOTE ********\n# We have attempted to detect the languages in your repository. Please check\n# the `language` matrix defined below to confirm you have the correct set of\n# supported CodeQL languages.\n#\nname: \"CodeQL\"\n\non:\n  push:\n    branches: [\"master\"]\n  pull_request:\n    # The branches below must be a subset of the branches above\n    branches: [\"master\"]\n  schedule:\n    - cron: \"0 0 * * 1\"\n\npermissions:\n  contents: read\n\njobs:\n  analyze:\n    name: Analyze\n    runs-on: ubuntu-latest\n    permissions:\n      actions: read\n      contents: read\n      security-events: write\n\n    strategy:\n      fail-fast: false\n      matrix:\n        language: [\"python\"]\n        # CodeQL supports [ $supported-codeql-languages ]\n        # Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - name: Checkout repository\n        uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n\n      # Initializes the CodeQL tools for scanning.\n      - name: Initialize CodeQL\n        uses: github/codeql-action/init@46ed16ded91731b2df79a2893d3aea8e9f03b5c4 # v2.20.3\n        with:\n          languages: ${{ matrix.language }}\n          # If you wish to specify custom queries, you can do so here or in a config file.\n          # By default, queries listed here will override any specified in a config file.\n          # Prefix the list here with \"+\" to use these queries and those in the 
config file.\n\n      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).\n      # If this step fails, then you should remove it and run the build manually (see below)\n      - name: Autobuild\n        uses: github/codeql-action/autobuild@46ed16ded91731b2df79a2893d3aea8e9f03b5c4 # v2.20.3\n\n      # ℹ️ Command-line programs to run using the OS shell.\n      # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun\n\n      #   If the Autobuild fails above, remove it and uncomment the following three lines.\n      #   Modify them (or add more) as needed to build your project; refer to the example below for guidance.\n\n      # - run: |\n      #   echo \"Run, Build Application using script\"\n      #   ./location_of_script_within_repo/buildscript.sh\n\n      - name: Perform CodeQL Analysis\n        uses: github/codeql-action/analyze@46ed16ded91731b2df79a2893d3aea8e9f03b5c4 # v2.20.3\n        with:\n          category: \"/language:${{matrix.language}}\"\n"
  },
  {
    "path": ".github/workflows/dependency-review.yml",
    "content": "# Dependency Review Action\n#\n# This Action will scan dependency manifest files that change as part of a Pull Request,\n# surfacing known-vulnerable versions of the packages declared or updated in the PR.\n# Once installed, if the workflow run is marked as required, \n# PRs introducing known-vulnerable packages will be blocked from merging.\n#\n# Source repository: https://github.com/actions/dependency-review-action\nname: 'Dependency Review'\non: [pull_request]\n\npermissions:\n  contents: read\n\njobs:\n  dependency-review:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - name: 'Checkout Repository'\n        uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n      - name: 'Dependency Review'\n        uses: actions/dependency-review-action@f6fff72a3217f580d5afd49a46826795305b63c7 # v3.0.8\n"
  },
  {
    "path": ".github/workflows/generate_readme.yaml",
    "content": "name: Generate Readme\non:\n  workflow_dispatch:\njobs:\n  generate-readme:\n    if: \"!startsWith(github.event.head_commit.message, 'generateReadme:')\"\n    runs-on: ubuntu-latest\n    steps:\n    - name: Harden Runner\n      uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n      with:\n        egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n    - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n      with:\n        token: '${{ secrets.GENERATE_README }}'\n    - run: \"pip install MarkupSafe==2.0.1\"\n    - run: \"pip install notebook\"\n    - run: \"pip install papermill\"\n    - run: \"pip install Markdown==3.3.7\"\n    - run: \"jupyter nbconvert --to notebook --execute generate_readme.ipynb\"\n    - uses: EndBug/add-and-commit@1bad3abcf0d6ec49a5857d124b0bfb52dc7bb081 # v9.1.3\n      with:\n        message: 'generateReadme: Refresh'\n  copy-file:\n    runs-on: ubuntu-latest\n    steps:\n    - name: Harden Runner\n      uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n      with:\n        egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n    - name: Checkout\n      uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n\n    - name: Pushes action and runbook lists to docs\n      uses: dmnemec/copy_file_to_another_repo_action@bbebd3da22e4a37d04dca5f782edd5201cb97083 # main\n      env:\n        API_TOKEN_GITHUB: '${{ secrets.GENERATE_README }}'\n      with:\n        source_file: 'lists/Action_list.md'\n        destination_repo: 'unskript/docs'\n        destination_folder: 'lists'\n        user_email: 'doug.sillars@gmail.com'\n        user_name: 'dougsillars'\n        commit_message: 'a new list of actions!'\n  copy-file2:\n    runs-on: ubuntu-latest\n    steps:\n    - name: Harden Runner\n      uses: 
step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n      with:\n        egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n    - name: Checkout\n      uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n\n    - name: Pushes action and runbook lists to docs\n      uses: dmnemec/copy_file_to_another_repo_action@bbebd3da22e4a37d04dca5f782edd5201cb97083 # main\n      env:\n        API_TOKEN_GITHUB: '${{ secrets.GENERATE_README }}'\n      with:\n        source_file: 'lists/.'\n        destination_repo: 'unskript/docs'\n        destination_folder: 'lists'\n        user_email: 'doug.sillars@gmail.com'\n        user_name: 'dougsillars'\n        commit_message: 'a new list of runbooks!'\n"
  },
  {
    "path": ".github/workflows/lint-test.yaml",
    "content": "name: Lint\n\non:\n  # Trigger the workflow on push or pull request,\n  # but only for the master branch\n  push:\n    branches:\n      - master\n  pull_request:\n    branches:\n      - master\n\n#on:\n#  workflow_dispatch:\n\njobs:\n  run-linters:\n    name: Run linters\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Check out Git repository\n        uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n\n      - name: Set up Python\n        uses: actions/setup-python@v4\n        with:\n          python-version: '3.9'\n\n      - name: Install Python dependencies\n        run: pip install pylint\n\n      - name: Run linters\n        uses: wearerequired/lint-action@v2\n        with:\n          pylint: true\n\n"
  },
  {
    "path": ".github/workflows/make-release.yaml",
    "content": "name: Make Release\n\non:\n  workflow_dispatch: \n  \njobs:\n  make-release:\n    runs-on: ubuntu-latest\n    outputs:\n      GITHUB_ONLY_TAG: ${{ steps.sanitize_tag.outputs.GITHUB_ONLY_TAG }}\n      GITHUB_CHANGELOG: ${{ steps.tag_version.outputs.changelog }}\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0 \n      - run: git fetch --prune --unshallow\n      - name: Bump version and push\n        id: tag_version\n        uses: mathieudutour/github-tag-action@fcfbdceb3093f6d85a3b194740f8c6cec632f4e2 # v6.1\n        with:\n          github_token: ${{ secrets.BUILDER_PAT }}\n          dry_run: true\n          default_bump: minor\n          default_prerelease_bump: minor\n          append_to_pre_release_tag: \"\"\n          tag_prefix: \"\"\n\n      - name: Sanitize Tag\n        id: sanitize_tag\n        env:\n          CHANGELOG: ${{ steps.tag_version.outputs.changelog }}\n        run: |\n          TAG_NAME=$(echo ${{ steps.tag_version.outputs.new_tag }} | cut -d '-' -f 1)\n          echo \"GITHUB_ONLY_TAG=$TAG_NAME\" >> $GITHUB_ENV\n          # The changelog can span multiple lines, so write it with heredoc delimiters\n          {\n            echo \"GITHUB_CHANGELOG<<CHANGELOG_EOF\"\n            echo \"$CHANGELOG\"\n            echo \"CHANGELOG_EOF\"\n          } >> $GITHUB_ENV\n          echo \"TAGNAME: $TAG_NAME\"\n          echo \"GITHUB_ONLY_TAG=$TAG_NAME\" >> $GITHUB_OUTPUT\n          {\n            echo \"GITHUB_CHANGELOG<<CHANGELOG_EOF\"\n            echo \"$CHANGELOG\"\n            echo \"CHANGELOG_EOF\"\n          } >> $GITHUB_OUTPUT\n      \n      - name: Create Docker ReleaseNotes\n        id: create_docker_rn\n        run: |\n          echo \"## Awesome Docker\" > /tmp/docker_rn.md\n          echo \"\" >> /tmp/docker_rn.md\n          echo \"\" >> /tmp/docker_rn.md\n          echo \"Please find the Latest build [Here](https://hub.docker.com/r/unskript/awesome-runbooks/tags)\" >> /tmp/docker_rn.md\n          echo \"\" >> /tmp/docker_rn.md\n          echo \"${{ 
steps.tag_version.outputs.changelog }}\" >> /tmp/docker_rn.md\n          cat /tmp/docker_rn.md\n\n      - name: Create a GitHub release\n        uses: ncipollo/release-action@a2e71bdd4e7dab70ca26a852f29600c98b33153e # v1.12.0\n        with:\n          tag: ${{ steps.sanitize_tag.outputs.GITHUB_ONLY_TAG }}\n          name: Release ${{ steps.sanitize_tag.outputs.GITHUB_ONLY_TAG }}\n          bodyFile: \"/tmp/docker_rn.md\"\n          generateReleaseNotes: true\n          makeLatest: legacy\n          omitBody: false\n          omitBodyDuringUpdate: false\n          omitDraftDuringUpdate: false\n          omitName: false\n          omitNameDuringUpdate: false\n          omitPrereleaseDuringUpdate: false\n          removeArtifacts: false\n          replacesArtifacts: true\n          skipIfReleaseExists: false\n          updateOnlyUnreleased: false\n\n  build-docker: \n    needs: make-release\n    uses: \"./.github/workflows/build-and-release-docker.yml\"\n    with:\n      enabled: true\n      release_tag: \"${{ needs.make-release.outputs.GITHUB_ONLY_TAG }}\"\n      build_number: \"${{ needs.make-release.outputs.GITHUB_ONLY_TAG }}\"\n      elyra_branch: \"master\"\n      unskript_branch: \"master\"\n      celltoolbar_branch: \"master\"\n      snippet_branch: \"master\"\n    secrets: inherit"
  },
  {
    "path": ".github/workflows/run-legoschema.yml",
    "content": "name: Run Legoschema\non:\n  pull_request:\n    types: [opened, reopened, edited, ready_for_review]\n  push:\n\nconcurrency:\n  group: ${{ github.ref }}\n  cancel-in-progress: false\n\npermissions:\n  contents: read\n\njobs:\n  run-validator:\n    runs-on: ubuntu-latest\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n      - name: Get current date\n        id: date\n        run: echo \"date=$(date +'%Y-%m-%d-%s')\" >> $GITHUB_OUTPUT\n      - name: Install system dependencies\n        run: |\n          pip install shyaml\n\n      - name: Set up Python 3.x\n        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # v2.3.3\n        with:\n          python-version: '3.9'\n\n      - name: Run Validator\n        run: |\n          /usr/bin/env python ./validator.py\n"
  },
  {
    "path": ".github/workflows/sanitize-runbook.yml",
    "content": "name: Sanitize Runbook\non:\n  pull_request:\n    types: [ opened, reopened, edited, ready_for_review ]\n  push:\n\nenv:\n  PR_NUMBER: ${{ github.event.number }}\n\njobs:\n  sanitize-runbooks:\n    runs-on: ubuntu-latest\n    strategy:\n      fail-fast: true\n\n    steps:\n      - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n      - name: Get current date\n        id: date\n        run: echo \"date=$(date +'%Y-%m-%d-%s')\" >> $GITHUB_OUTPUT\n\n      - name: Install system dependencies\n        run: |\n          pip install shyaml nbformat URLExtract\n\n      - name: All Runbooks\n        id: files\n        run: |\n          echo \"all_runbook_files=$(find . -mindepth 2 -maxdepth 2 -name \\*.ipynb | tr '\\n' ' ')\" >> $GITHUB_OUTPUT\n\n      - name: Run Sanitize\n        id: sanity\n        run: |\n          echo \"Running sanitize script on ${{ steps.files.outputs.all_runbook_files }}\"\n          /usr/bin/env python ./sanitize.py -v ${{ steps.files.outputs.all_runbook_files }}\n\n      - name: Run Region Test\n        run: |\n          /usr/bin/env python ./region_test.py\n\n      - name: Checkout Repository\n        uses: actions/checkout@v3\n      - uses: actions/setup-python@v4\n        with:\n          python-version: '3.9'\n\n      - name: Pytype Python Checker\n        uses: theahura/pytypes-action@main\n        with:\n          args: --generate-config pytype.toml\n\n      - name: Run Static Analysis on the Runbooks\n        run: |\n          echo \"TODO: static analysis for runbooks is not implemented yet\"\n"
  },
  {
    "path": ".github/workflows/scorecards.yml",
    "content": "# This workflow uses actions that are not certified by GitHub. They are provided\n# by a third-party and are governed by separate terms of service, privacy\n# policy, and support documentation.\n\nname: Scorecard supply-chain security\non:\n  # For Branch-Protection check. Only the default branch is supported. See\n  # https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection\n  branch_protection_rule:\n  # To guarantee Maintained check is occasionally updated. See\n  # https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained\n  schedule:\n    - cron: '20 7 * * 2'\n  push:\n    branches: [\"master\"]\n\n# Declare default permissions as read only.\npermissions: read-all\n\njobs:\n  analysis:\n    name: Scorecard analysis\n    runs-on: ubuntu-latest\n    permissions:\n      # Needed to upload the results to code-scanning dashboard.\n      security-events: write\n      # Needed to publish results and get a badge (see publish_results below).\n      id-token: write\n      contents: read\n      actions: read\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@55d479fb1c5bcad5a4f9099a5d9f37c8857b2845 # v2.4.1\n        with:\n          egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs\n\n      - name: \"Checkout code\"\n        uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.0\n        with:\n          persist-credentials: false\n\n      - name: \"Run analysis\"\n        uses: ossf/scorecard-action@08b4669551908b1024bb425080c797723083c031 # v2.2.0\n        with:\n          results_file: results.sarif\n          results_format: sarif\n          # (Optional) \"write\" PAT token. 
Uncomment the `repo_token` line below if:\n          # - you want to enable the Branch-Protection check on a *public* repository, or\n          # - you are installing Scorecards on a *private* repository\n          # To create the PAT, follow the steps in https://github.com/ossf/scorecard-action#authentication-with-pat.\n          # repo_token: ${{ secrets.SCORECARD_TOKEN }}\n\n          # Public repositories:\n          #   - Publish results to OpenSSF REST API for easy access by consumers\n          #   - Allows the repository to include the Scorecard badge.\n          #   - See https://github.com/ossf/scorecard-action#publishing-results.\n          # For private repositories:\n          #   - `publish_results` will always be set to `false`, regardless\n          #     of the value entered here.\n          publish_results: true\n\n      # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF\n      # format to the repository Actions tab.\n      - name: \"Upload artifact\"\n        uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2\n        with:\n          name: SARIF file\n          path: results.sarif\n          retention-days: 5\n\n      # Upload the results to GitHub's code scanning dashboard.\n      - name: \"Upload to code-scanning\"\n        uses: github/codeql-action/upload-sarif@46ed16ded91731b2df79a2893d3aea8e9f03b5c4 # v2.20.3\n        with:\n          sarif_file: results.sarif\n"
  },
  {
    "path": ".gitignore",
    "content": "*.DS_Store\n.ipynb_checkpoints\n__pycache__\n\n# Ignore temp files (build artifacts) that gets generated\n# for custom Actions\ncustom/*\nlib/*\n\n\n# Ignore Temp files\n*.jsonx\nall_*legos.py\n\npyvenv.cfg\n\n"
  },
  {
    "path": ".pylintrc",
    "content": "# This Pylint rcfile contains a best-effort configuration to uphold the\n# best-practices and style described in the Google Python style guide:\n#   https://google.github.io/styleguide/pyguide.html\n#\n# Its canonical open-source location is:\n#   https://google.github.io/styleguide/pylintrc\n\n[MASTER]\n\n# Files or directories to be skipped. They should be base names, not paths.\nignore=third_party\n\n# Files or directories matching the regex patterns are skipped. The regex\n# matches against base names, not paths.\nignore-patterns=\n\n# Pickle collected data for later comparisons.\npersistent=no\n\n# List of plugins (as comma separated values of python modules names) to load,\n# usually to register additional checkers.\nload-plugins=\n\n# Use multiple processes to speed up Pylint.\njobs=4\n\n# Allow loading of arbitrary C extensions. Extensions are imported into the\n# active Python interpreter and may run arbitrary code.\nunsafe-load-any-extension=no\n\n\n[MESSAGES CONTROL]\n\n# Only show warnings with the listed confidence levels. Leave empty to show\n# all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED\nconfidence=\n\n# Enable the message, report, category or checker with the given id(s). You can\n# either give multiple identifier separated by comma (,) or put this option\n# multiple time (only on the command line, not in the configuration file where\n# it should appear only once). See also the \"--disable\" option for examples.\n#enable=\n\n# Disable the message, report, category or checker with the given id(s). You\n# can either give multiple identifiers separated by comma (,) or put this\n# option multiple times (only on the command line, not in the configuration\n# file where it should appear only once).You can also use \"--disable=all\" to\n# disable everything first and then reenable specific checks. For example, if\n# you want to run only the similarities checker, you can use \"--disable=all\n# --enable=similarities\". 
If you want to run only the classes checker, but have\n# no Warning level messages displayed, use\"--disable=all --enable=classes\n# --disable=W\"\ndisable=abstract-method,\n        apply-builtin,\n        arguments-differ,\n        attribute-defined-outside-init,\n        backtick,\n        bad-option-value,\n        basestring-builtin,\n        buffer-builtin,\n        c-extension-no-member,\n        consider-using-enumerate,\n        cmp-builtin,\n        cmp-method,\n        coerce-builtin,\n        coerce-method,\n        delslice-method,\n        div-method,\n        duplicate-code,\n        eq-without-hash,\n        execfile-builtin,\n        file-builtin,\n        filter-builtin-not-iterating,\n        fixme,\n        getslice-method,\n        global-statement,\n        hex-method,\n        idiv-method,\n        implicit-str-concat,\n        import-error,\n        import-self,\n        import-star-module-level,\n        inconsistent-return-statements,\n        input-builtin,\n        intern-builtin,\n        invalid-str-codec,\n        locally-disabled,\n        long-builtin,\n        long-suffix,\n        map-builtin-not-iterating,\n        misplaced-comparison-constant,\n        missing-function-docstring,\n        metaclass-assignment,\n        next-method-called,\n        next-method-defined,\n        no-absolute-import,\n        no-else-break,\n        no-else-continue,\n        no-else-raise,\n        no-else-return,\n        no-init,  # added\n        no-member,\n        no-name-in-module,\n        no-self-use,\n        nonzero-method,\n        oct-method,\n        old-division,\n        old-ne-operator,\n        old-octal-literal,\n        old-raise-syntax,\n        parameter-unpacking,\n        print-statement,\n        raising-string,\n        range-builtin-not-iterating,\n        raw_input-builtin,\n        rdiv-method,\n        reduce-builtin,\n        relative-import,\n        reload-builtin,\n        round-builtin,\n        setslice-method,\n  
      signature-differs,\n        standarderror-builtin,\n        suppressed-message,\n        sys-max-int,\n        too-few-public-methods,\n        too-many-ancestors,\n        too-many-arguments,\n        too-many-boolean-expressions,\n        too-many-branches,\n        too-many-instance-attributes,\n        too-many-locals,\n        too-many-nested-blocks,\n        too-many-public-methods,\n        too-many-return-statements,\n        too-many-statements,\n        trailing-newlines,\n        unichr-builtin,\n        unicode-builtin,\n        unnecessary-pass,\n        unpacking-in-except,\n        useless-else-on-loop,\n        useless-object-inheritance,\n        import-outside-toplevel,\n        useless-suppression,\n        using-cmp-argument,\n        wrong-import-order,\n        xrange-builtin,\n        zip-builtin-not-iterating,\n\tW,\n        C0103,\n        C0115,\n        C0114,\n\tC0303,\n        C0304,\n        C0301,\n        C0412,\n        E0203,\n        R1732\n        \n\n\n[REPORTS]\n\n# Set the output format. Available formats are text, parseable, colorized, msvs\n# (visual studio) and html. You can also give a reporter class, eg\n# mypackage.mymodule.MyReporterClass.\noutput-format=text\n\n# Tells whether to display a full report or only the messages\nreports=no\n\n# Python expression which should return a note less than 10 (10 is the highest\n# note). You have access to the variables errors warning, statement which\n# respectively contain the number of errors / warnings messages and the total\n# number of statements analyzed. This is used by the global evaluation report\n# (RP0004).\nevaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)\n\n# Template used to display messages. This is a python new-style format string\n# used to format the message information. 
See doc for all details\n#msg-template=\n\n\n[BASIC]\n\n# Good variable names which should always be accepted, separated by a comma\ngood-names=main,_\n\n# Bad variable names which should always be refused, separated by a comma\nbad-names=\n\n# Colon-delimited sets of names that determine each other's naming style when\n# the name regexes allow several styles.\nname-group=\n\n# Include a hint for the correct naming format with invalid-name\ninclude-naming-hint=no\n\n# List of decorators that produce properties, such as abc.abstractproperty. Add\n# to this list to register other decorators that produce valid properties.\nproperty-classes=abc.abstractproperty,cached_property.cached_property,cached_property.threaded_cached_property,cached_property.cached_property_with_ttl,cached_property.threaded_cached_property_with_ttl\n\n# Regular expression matching correct function names\nfunction-rgx=^(?:(?P<exempt>setUp|tearDown|setUpModule|tearDownModule)|(?P<camel_case>_?[A-Z][a-zA-Z0-9]*)|(?P<snake_case>_?[a-z][a-z0-9_]*))$\n\n# Regular expression matching correct variable names\nvariable-rgx=^[a-z][a-z0-9_]*$\n\n# Regular expression matching correct constant names\nconst-rgx=^(_?[A-Z][A-Z0-9_]*|__[a-z0-9_]+__|_?[a-z][a-z0-9_]*)$\n\n# Regular expression matching correct attribute names\nattr-rgx=^_{0,2}[a-z][a-z0-9_]*$\n\n# Regular expression matching correct argument names\nargument-rgx=^[a-z][a-z0-9_]*$\n\n# Regular expression matching correct class attribute names\nclass-attribute-rgx=^(_?[A-Z][A-Z0-9_]*|__[a-z0-9_]+__|_?[a-z][a-z0-9_]*)$\n\n# Regular expression matching correct inline iteration names\ninlinevar-rgx=^[a-z][a-z0-9_]*$\n\n# Regular expression matching correct class names\nclass-rgx=^_?[A-Z][a-zA-Z0-9]*$\n\n# Regular expression matching correct module names\n#module-rgx=^(_?[a-z][a-z0-9_]*|__init__)$\n\n# Regular expression matching correct method 
names\nmethod-rgx=(?x)^(?:(?P<exempt>_[a-z0-9_]+__|runTest|setUp|tearDown|setUpTestCase|tearDownTestCase|setupSelf|tearDownClass|setUpClass|(test|assert)_*[A-Z0-9][a-zA-Z0-9_]*|next)|(?P<camel_case>_{0,2}[A-Z][a-zA-Z0-9_]*)|(?P<snake_case>_{0,2}[a-z][a-z0-9_]*))$\n\n# Regular expression which should only match function or class names that do\n# not require a docstring.\nno-docstring-rgx=(__.*__|main|test.*|.*test|.*Test)$\n\n# Minimum line length for functions/classes that require docstrings, shorter\n# ones are exempt.\ndocstring-min-length=10\n\n\n[TYPECHECK]\n\n# List of decorators that produce context managers, such as\n# contextlib.contextmanager. Add to this list to register other decorators that\n# produce valid context managers.\ncontextmanager-decorators=contextlib.contextmanager,contextlib2.contextmanager\n\n# Tells whether missing members accessed in mixin class should be ignored. A\n# mixin class is detected if its name ends with \"mixin\" (case insensitive).\nignore-mixin-members=yes\n\n# List of module names for which member attributes should not be checked\n# (useful for modules/projects where namespaces are manipulated during runtime\n# and thus existing member attributes cannot be deduced by static analysis. It\n# supports qualified module names, as well as Unix pattern matching.\nignored-modules=\n\n# List of class names for which member attributes should not be checked (useful\n# for classes with dynamically set attributes). This supports the use of\n# qualified names.\nignored-classes=optparse.Values,thread._local,_thread._local\n\n# List of members which are set dynamically and missed by pylint inference\n# system, and so shouldn't trigger E1101 when accessed. 
Python regular\n# expressions are accepted.\ngenerated-members=\n\n\n[FORMAT]\n\n# Maximum number of characters on a single line.\n#max-line-length=80\nmax-line-length=100\n\n# TODO(https://github.com/PyCQA/pylint/issues/3352): Direct pylint to exempt\n# lines made too long by directives to pytype.\n\n# Regexp for a line that is allowed to be longer than the limit.\nignore-long-lines=(?x)(\n  ^\\s*(\\#\\ )?<?https?://\\S+>?$|\n  ^\\s*(from\\s+\\S+\\s+)?import\\s+.+$)\n\n# Allow the body of an if to be on the same line as the test if there is no\n# else.\nsingle-line-if-stmt=yes\n\n# Maximum number of lines in a module\nmax-module-lines=99999\n\n# String used as indentation unit.  The internal Google style guide mandates 2\n# spaces.  Google's externaly-published style guide says 4, consistent with\n# PEP 8.  Here, we use 2 spaces, for conformity with many open-sourced Google\n# projects (like TensorFlow).\nindent-string='  '\n\n# Number of spaces of indent required inside a hanging  or continued line.\nindent-after-paren=4\n\n# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.\nexpected-line-ending-format=\n\n\n[MISCELLANEOUS]\n\n# List of note tags to take in consideration, separated by a comma.\nnotes=TODO\n\n\n[STRING]\n\n# This flag controls whether inconsistent-quotes generates a warning when the\n# character used as a quote delimiter is used inconsistently within a module.\ncheck-quote-consistency=yes\n\n\n[VARIABLES]\n\n# Tells whether we should check for unused import in __init__ files.\ninit-import=no\n\n# A regular expression matching the name of dummy variables (i.e. expectedly\n# not used).\ndummy-variables-rgx=^\\*{0,2}(_$|unused_|dummy_)\n\n# List of additional names supposed to be defined in builtins. Remember that\n# you should avoid to define new builtins when possible.\nadditional-builtins=\n\n# List of strings which can identify a callback function by name. 
A callback\n# name must start or end with one of those strings.\ncallbacks=cb_,_cb\n\n# List of qualified module names which can have objects that can redefine\n# builtins.\nredefining-builtins-modules=six,six.moves,past.builtins,future.builtins,functools\n\n\n[LOGGING]\n\n# Logging modules to check that the string format arguments are in logging\n# function parameter format\nlogging-modules=logging,absl.logging,tensorflow.io.logging\n\n\n[SIMILARITIES]\n\n# Minimum lines number of a similarity.\nmin-similarity-lines=4\n\n# Ignore comments when computing similarities.\nignore-comments=yes\n\n# Ignore docstrings when computing similarities.\nignore-docstrings=yes\n\n# Ignore imports when computing similarities.\nignore-imports=no\n\n\n[SPELLING]\n\n# Spelling dictionary name. Available dictionaries: none. To make it working\n# install python-enchant package.\nspelling-dict=\n\n# List of comma separated words that should not be checked.\nspelling-ignore-words=\n\n# A path to a file that contains private dictionary; one word per line.\nspelling-private-dict-file=\n\n# Tells whether to store unknown words to indicated private dictionary in\n# --spelling-private-dict-file option instead of raising a message.\nspelling-store-unknown-words=no\n\n\n[IMPORTS]\n\n# Deprecated modules which should not be used, separated by a comma\ndeprecated-modules=regsub,\n                   TERMIOS,\n                   Bastion,\n                   rexec,\n                   sets\n\n# Create a graph of every (i.e. 
internal and external) dependencies in the\n# given file (report RP0402 must not be disabled)\nimport-graph=\n\n# Create a graph of external dependencies in the given file (report RP0402 must\n# not be disabled)\next-import-graph=\n\n# Create a graph of internal dependencies in the given file (report RP0402 must\n# not be disabled)\nint-import-graph=\n\n# Force import order to recognize a module as part of the standard\n# compatibility libraries.\nknown-standard-library=\n\n# Force import order to recognize a module as part of a third party library.\nknown-third-party=enchant, absl\n\n# Analyse import fallback blocks. This can be used to support both Python 2 and\n# 3 compatible code, which means that the block might have code that exists\n# only in one or another interpreter, leading to false positives when analysed.\nanalyse-fallback-blocks=no\n\n\n[CLASSES]\n\n# List of method names used to declare (i.e. assign) instance attributes.\ndefining-attr-methods=__init__,\n                      __new__,\n                      setUp\n\n# List of member names, which should be excluded from the protected access\n# warning.\nexclude-protected=_asdict,\n                  _fields,\n                  _replace,\n                  _source,\n                  _make\n\n# List of valid names for the first argument in a class method.\nvalid-classmethod-first-arg=cls,\n                            class_\n\n# List of valid names for the first argument in a metaclass class method.\nvalid-metaclass-classmethod-first-arg=mcs\n\n\n[EXCEPTIONS]\n\n# Exceptions that will emit a warning when being caught. Defaults to\n# \"Exception\"\novergeneral-exceptions=builtins.StandardError,\n                       builtins.Exception,\n                       builtins.BaseException\n\n"
  },
  {
    "path": ".vscode/settings.json",
    "content": "{\n    \"cSpell.words\": [\n        \"SECOPS\"\n    ]\n}"
  },
  {
    "path": "AWS/.gitignore",
    "content": ".DS_Store"
  },
  {
    "path": "AWS/AWS_Access_Key_Rotation.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8ca4bd16-bef4-4d7c-96eb-59eeb2315864\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Check and Rotate Expiring Access Keys for all IAM Users </em></strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"AWS-Access-Key-Rotation\\\"><u>AWS Access Key Rotation</u><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#AWS-Access-Key-Rotation\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<p>1) <a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">List all Expiring Access Key</a><br>2)&nbsp;<a href=\\\"#3\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Create AWS Access Key</a><br>3) <a href=\\\"#4\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Update AWS Access Key</a><br>4)&nbsp;<a href=\\\"#5\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Delete AWS Access Key</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c1ec70e9-0b9c-4c05-b5e0-5ebfd4263c4f\",\n   \"metadata\": {\n    \"jupyter\": {\n     
\"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-Expiring-Access-Keys\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>List all Expiring Access Keys<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#List-all-Expiring-Access-Keys\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Using unSkript's AWS List Expiring Access Keys action we will list those users whose Access Keys past the given threshold number of days i.e. expiring.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Action takes the following parameters: <code>threshold_days</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Action captures the following output: <code>expiring_users</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"dae47429-ca5a-4834-bb46-ac9b2a37527f\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_IAM\"\n    ],\n    \"actionDescription\": \"List Expiring IAM User Access Keys\",\n    \"actionEntryFunction\": \"aws_list_expiring_access_keys\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"924025582b6c1b3ea3c8c834f1ee430a2df8bd42c5119191cb5c5da3121f1d18\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": [\n     \"expiring\",\n     \"access\",\n     \"aws\"\n    ],\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS List Expiring Access Keys\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": [\n     \"list\"\n 
   ],\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"c3a4e091801f8197429f073a0612e2cc373b6630ce4426d73617b8e101bc5d6a\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"List Expiring IAM User Access Keys\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"threshold_days\": {\n       \"constant\": false,\n       \"value\": \"int(threshold_days)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"threshold_days\": {\n        \"default\": 90,\n        \"description\": \"Threshold number(in days) to check for expiry. Eg: 30\",\n        \"title\": \"Threshold Days\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_list_expiring_access_keys\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS List Expiring Access Keys\",\n    \"orderProperties\": [\n     \"threshold_days\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"expiring_users\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_list_expiring_access_keys\"\n    ],\n    \"uuid\": \"c3a4e091801f8197429f073a0612e2cc373b6630ce4426d73617b8e101bc5d6a\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Tuple\\n\",\n    \"import datetime\\n\",\n    \"import dateutil\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.legos.aws.aws_list_all_iam_users.aws_list_all_iam_users import 
aws_list_all_iam_users\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_expiring_access_keys_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_expiring_access_keys(handle, threshold_days: int = 90)-> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_list_expiring_access_keys returns all the IAM user access keys which are\\n\",\n    \"       about to expire given a threshold number of days\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :type threshold_days: int\\n\",\n    \"        :param threshold_days: Threshold number of days to check for expiry. Eg: 30 - lists\\n\",\n    \"        all access Keys which are expiring within 30 days\\n\",\n    \"\\n\",\n    \"        :rtype: Status, List of expiring access keys and Error if any\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result =[]\\n\",\n    \"    all_users=[]\\n\",\n    \"    try:\\n\",\n    \"        all_users = aws_list_all_iam_users(handle=handle)\\n\",\n    \"    except Exception as error:\\n\",\n    \"        raise error\\n\",\n    \"\\n\",\n    \"    for each_user in all_users:\\n\",\n    \"        try:\\n\",\n    \"            iamClient = handle.client('iam')\\n\",\n    \"            final_result={}\\n\",\n    \"            response = iamClient.list_access_keys(UserName=each_user)\\n\",\n    \"            for x in response[\\\"AccessKeyMetadata\\\"]:\\n\",\n    \"                if len(response[\\\"AccessKeyMetadata\\\"])!= 0:\\n\",\n    \"                    create_date = x[\\\"CreateDate\\\"]\\n\",\n    \"                    right_now = datetime.datetime.now(dateutil.tz.tzlocal())\\n\",\n    \"                    diff = right_now-create_date\\n\",\n    \"                    days_remaining = diff.days\\n\",\n   
 \"                    if days_remaining > threshold_days:\\n\",\n    \"                        final_result[\\\"username\\\"] = x[\\\"UserName\\\"]\\n\",\n    \"                        final_result[\\\"access_key_id\\\"] = x[\\\"AccessKeyId\\\"]\\n\",\n    \"            if len(final_result)!=0:\\n\",\n    \"                result.append(final_result)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            raise e\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"threshold_days\\\": \\\"int(threshold_days)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"expiring_users\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_expiring_access_keys, lego_printer=aws_list_expiring_access_keys_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"af12fb80-4786-4dc6-b1b9-c7fdc372563e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-of-Expiring-Users\\\">List of Expiring Users</h3>\\n\",\n    \"<p>This action lists the usernames of expiring Access Keys using the output from Step 2.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 21,\n   \"id\": \"3828def9-f4b1-4e75-9f1b-6b70fed35ae8\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T10:36:03.614Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n 
    \"source_hidden\": true\n    },\n    \"name\": \"Create List of Object of Expiring Users\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Object of Expiring Users\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_expiring_users = []\\n\",\n    \"if expiring_users[0] == False:\\n\",\n    \"    if len(expiring_users[1])!=0:\\n\",\n    \"        all_expiring_users=expiring_users[1]\\n\",\n    \"print(all_expiring_users)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e1956e5a-c097-4dd7-a0da-ae45fc98c4db\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1B\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1B\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-of-Expiring-Users-and-Access-Keys\\\"><a id=\\\"3\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>List of Expiring Users and Access Keys</h3>\\n\",\n    \"<p>This action simply creates another list containing a dictionary of the user and their old access key. 
The output from this action is required for Step 4 and Step 5.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 22,\n   \"id\": \"a407845b-41f9-4ca5-9387-a2cfb0e6e46f\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T10:36:12.088Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Expiring Users\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Expiring Users\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"expiring_usernames = []\\n\",\n    \"for each_user in all_expiring_users:\\n\",\n    \"    for k,v in each_user.items():\\n\",\n    \"        if k=='username':\\n\",\n    \"            expiring_usernames.append(v)\\n\",\n    \"print(expiring_usernames)\\n\",\n    \"task.configure(outputName=\\\"expiring_usernames\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b3a132e5-42c9-46a2-9788-9f8648dc71f6\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-AWS-Access-Keys\\\"><a id=\\\"3\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Create AWS Access Keys</h3>\\n\",\n    \"<p>Using unSkript's AWS Create Access Key action we will create a new Access Key for the users from Step 2.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Action takes the following parameters: <code>aws_username</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"f75a8726-d158-4afd-a667-0abd6f9717dc\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_SECOPS\",\n     
\"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_IAM\",\n     \"CATEGORY_TYPE_IAM\"\n    ],\n    \"actionDescription\": \"Create a new Access Key for the User\",\n    \"actionEntryFunction\": \"aws_create_access_key\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": true,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Create Access Key\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"492b9b6807e5254512715555e3ec52a97e006c04a28511710e5bc1b0c45ffdd7\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Create a new Access Key for the User\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"aws_username\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"aws_username\": {\n        \"description\": \"Username of the IAM User\",\n        \"title\": \"Username\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"aws_username\"\n      ],\n      \"title\": \"aws_create_access_key\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"aws_username\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"expiring_usernames\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": 
\"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Create Access Key\",\n    \"orderProperties\": [\n     \"aws_username\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"new_access_keys\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(expiring_usernames)!=0\",\n    \"tags\": [\n     \"aws_create_access_key\"\n    ],\n    \"uuid\": \"492b9b6807e5254512715555e3ec52a97e006c04a28511710e5bc1b0c45ffdd7\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_access_key_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_access_key(\\n\",\n    \"    handle,\\n\",\n    \"    aws_username: str\\n\",\n    \") -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_create_access_key creates a new access key for the given user.\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :type aws_username: str\\n\",\n    \"        :param aws_username: Username of the IAM user to be looked up\\n\",\n    \"\\n\",\n    \"        :rtype: Result Dictionary of result\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    iamClient = handle.client('iam')\\n\",\n    \"    result = iamClient.create_access_key(UserName=aws_username)\\n\",\n    \"    retVal = {}\\n\",\n    \"    temp_list = []\\n\",\n    \"    for key, value in result.items():\\n\",\n    \"        if key not in 
temp_list:\\n\",\n    \"            temp_list.append(key)\\n\",\n    \"            retVal[key] = value\\n\",\n    \"    return retVal\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"aws_username\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"expiring_usernames\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"aws_username\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(expiring_usernames)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"new_access_keys\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_create_access_key, lego_printer=aws_create_access_key_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e6797aa7-a0c2-4842-8482-da22a5363fe8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Update-AWS-Access-Key\\\"><a id=\\\"4\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Update AWS Access Key</h3>\\n\",\n    \"<p>Using the AWS Update Access Key action we will update the status of the old Access Key to <strong>\\\"Inactive\\\"</strong>. 
This step is required to delete the old access key as one user cannot have two Access Keys.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>aws_username</code>, <code>aws_access_key_id</code> and <code>status</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"aef7b261-1f5a-4402-ae02-22841fc4569b\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_IAM\"\n    ],\n    \"actionDescription\": \"Update status of the Access Key\",\n    \"actionEntryFunction\": \"aws_update_access_key\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": true,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Update Access Key\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"0297f6c80f0416d10484fa2593510515eef2900add97924e3e73beaab5fea819\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Update status of the Access Key\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"aws_access_key_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"access_key_id\\\\\\\\\\\")\\\"\"\n      },\n      \"aws_username\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"username\\\\\\\\\\\")\\\"\"\n      },\n      \"status\": {\n       \"constant\": true,\n       \"value\": 
\"Inactive\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"definitions\": {\n       \"AccessKeyStatus\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"Active\",\n         \"Inactive\"\n        ],\n        \"title\": \"AccessKeyStatus\",\n        \"type\": \"string\"\n       }\n      },\n      \"properties\": {\n       \"aws_access_key_id\": {\n        \"description\": \"Old Access Key ID of the User\",\n        \"title\": \"Access Key ID\",\n        \"type\": \"string\"\n       },\n       \"aws_username\": {\n        \"description\": \"Username of the IAM User\",\n        \"title\": \"Username\",\n        \"type\": \"string\"\n       },\n       \"status\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/AccessKeyStatus\"\n         }\n        ],\n        \"description\": \"Status to set for the Access Key\",\n        \"title\": \"Status\",\n        \"type\": \"enum\"\n       }\n      },\n      \"required\": [\n       \"aws_username\",\n       \"aws_access_key_id\",\n       \"status\"\n      ],\n      \"title\": \"aws_update_access_key\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"aws_access_key_id\": \"access_key_id\",\n       \"aws_username\": \"username\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_expiring_users\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Update Access Key\",\n    \"orderProperties\": [\n     \"aws_username\",\n     \"aws_access_key_id\",\n     \"status\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_expiring_users)!=0\",\n    \"tags\": [\n     \"aws_update_access_key\"\n    ],\n    \"uuid\": 
\"0297f6c80f0416d10484fa2593510515eef2900add97924e3e73beaab5fea819\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.enums.aws_access_key_enums import AccessKeyStatus\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_update_access_key_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(\\\"Access Key status successfully changed\\\")\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_update_access_key(\\n\",\n    \"    handle,\\n\",\n    \"    aws_username: str,\\n\",\n    \"    aws_access_key_id: str,\\n\",\n    \"    status: AccessKeyStatus\\n\",\n    \") -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_update_access_key updates the status of an access key to Inactive/Active\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :type aws_username: str\\n\",\n    \"        :param aws_username: Username of the IAM user to be looked up\\n\",\n    \"\\n\",\n    \"        :type aws_access_key_id: str\\n\",\n    \"        :param aws_access_key_id: Old Access Key ID of the user of which the status\\n\",\n    \"        needs to be updated\\n\",\n    \"\\n\",\n    \"        :type status: AccessKeyStatus\\n\",\n    \"        :param status: Status to set for the Access Key\\n\",\n    \"\\n\",\n    \"        :rtype: Result Dictionary of result\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    iamClient = handle.client('iam')\\n\",\n    \"    result = iamClient.update_access_key(\\n\",\n    \"        UserName=aws_username,\\n\",\n    \"    
    AccessKeyId=aws_access_key_id,\\n\",\n    \"        Status=status\\n\",\n    \"        )\\n\",\n    \"    retVal = {}\\n\",\n    \"    temp_list = []\\n\",\n    \"    for key, value in result.items():\\n\",\n    \"        if key not in temp_list:\\n\",\n    \"            temp_list.append(key)\\n\",\n    \"            retVal[key] = value\\n\",\n    \"    return retVal\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=False)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"aws_username\\\": \\\"\\\\\\\\\\\"iter.get(\\\\\\\\\\\\\\\\\\\\\\\\\\\"username\\\\\\\\\\\\\\\\\\\\\\\\\\\")\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"aws_access_key_id\\\": \\\"\\\\\\\\\\\"iter.get(\\\\\\\\\\\\\\\\\\\\\\\\\\\"access_key_id\\\\\\\\\\\\\\\\\\\\\\\\\\\")\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"status\\\": \\\"AccessKeyStatus.Inactive\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_expiring_users\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"aws_access_key_id\\\",\\\"aws_username\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_expiring_users)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_update_access_key, lego_printer=aws_update_access_key_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"aa81394c-749e-4b32-bf6c-a866369f2cf5\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 4\",\n    \"orderProperties\": 
[],\n    \"tags\": [],\n    \"title\": \"Step 4\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-AWS-Access-Key\\\"><a id=\\\"5\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete AWS Access Key</h3>\\n\",\n    \"<p>Finally, we will delete the the old (Inactive) Access Key for the IAM Users</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>aws_username</code> and <code>aws_access_key_id</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e12a42d2-1eb8-4737-b0d7-4dd80c688fca\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_IAM\",\n     \"CATEGORY_TYPE_IAM\"\n    ],\n    \"actionDescription\": \"Delete an Access Key for a User\",\n    \"actionEntryFunction\": \"aws_delete_access_key\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": true,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Delete Access Key\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"4ce21d2ac0824cafdddbb4245ffcb1d4c34786ed68c075fb1041eb8c7e22f01d\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Delete an Access Key for a User\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"aws_access_key_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"access_key_id\\\\\\\\\\\")\\\"\"\n      },\n      
\"aws_username\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"username\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"aws_access_key_id\": {\n        \"description\": \"Old Access Key ID of the User\",\n        \"title\": \"Access Key ID\",\n        \"type\": \"string\"\n       },\n       \"aws_username\": {\n        \"description\": \"Username of the IAM User\",\n        \"title\": \"Username\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"aws_username\",\n       \"aws_access_key_id\"\n      ],\n      \"title\": \"aws_delete_access_key\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"aws_access_key_id\": \"access_key_id\",\n       \"aws_username\": \"username\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_expiring_users\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Delete Access Key\",\n    \"orderProperties\": [\n     \"aws_username\",\n     \"aws_access_key_id\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_expiring_users)!=0\",\n    \"tags\": [\n     \"aws_delete_access_key\"\n    ],\n    \"uuid\": \"4ce21d2ac0824cafdddbb4245ffcb1d4c34786ed68c075fb1041eb8c7e22f01d\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def 
aws_delete_access_key_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(\\\"Access Key successfully deleted\\\")\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_access_key(\\n\",\n    \"    handle,\\n\",\n    \"    aws_username: str,\\n\",\n    \"    aws_access_key_id: str,\\n\",\n    \") -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_access_key deletes the given access key.\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :type aws_username: str\\n\",\n    \"        :param aws_username: Username of the IAM user to be looked up\\n\",\n    \"\\n\",\n    \"        :type aws_access_key_id: str\\n\",\n    \"        :param aws_access_key_id: Old Access Key ID of the user which needs to be deleted\\n\",\n    \"\\n\",\n    \"        :rtype: Result Status Dictionary of result\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    iamClient = handle.client('iam')\\n\",\n    \"    result = iamClient.delete_access_key(UserName=aws_username, AccessKeyId=aws_access_key_id)\\n\",\n    \"    retVal = {}\\n\",\n    \"    temp_list = []\\n\",\n    \"    for key, value in result.items():\\n\",\n    \"        if key not in temp_list:\\n\",\n    \"            temp_list.append(key)\\n\",\n    \"            retVal[key] = value\\n\",\n    \"    return retVal\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"aws_username\\\": \\\"iter.get(\\\\\\\\\\\"username\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"aws_access_key_id\\\": \\\"iter.get(\\\\\\\\\\\"access_key_id\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": 
false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_expiring_users\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"aws_username\\\",\\\"aws_access_key_id\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_expiring_users)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_access_key, lego_printer=aws_delete_access_key_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"d87557cc-2feb-47ce-89f2-5ee1d7375c88\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we were able to perform AWS Access Key rotation for IAM users whose Access Keys were expiring by using unSkript's AWS actions. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Access Key Rotation for IAM users\",\n   \"parameters\": [\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"threshold_days\": {\n     \"description\": \"Threshold number of days to check if an access key has expired. Eg: 45\",\n     \"title\": \"threshold_days\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Access_Key_Rotation.json",
    "content": "{\n  \"name\": \"AWS Access Key Rotation for IAM users\",\n  \"description\": \"This runbook can be used to configure AWS Access Key rotation. Changing access keys (which consist of an access key ID and a secret access key) on a regular schedule is a well-known security best practice because it shortens the period an access key is active and therefore reduces the business impact if they are compromised. Having an established process that is run regularly also ensures the operational steps around key rotation are verified, so changing a key is never a scary step.\",  \n  \"uuid\": \"a79201f821993867e23dd9603ed7ef5123325353d717c566f902f7ca6e471f5c\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/AWS_Add_Lifecycle_Policy_To_S3_Buckets.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find S3 buckets without any lifecycle policies and attach one to them</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-RDS-Instances-with-Low-CPU-Utilization\\\"><u>Add Lifecycle Policy To AWS S3 Buckets</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find S3 Buckets without Lifecycle Policies</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Attach Lifecycle Policy</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 17,\n   \"id\": \"e1f146c9-5180-4459-9c82-cf0e1da02785\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-16T08:46:04.477Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == None:\\n\",\n    \"    region = ''\\n\",\n    \"if 
bucket_names and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the S3 Bucket names!\\\")\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Find-RDS-Instances-with-Low-CPU-Utilization\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find S3 Buckets without Lifecycle Policies</h3>\\n\",\n    \"<p>Using Find AWS S3 buckets without lifecycle policies, we can identify buckets that do not have any configured lifecycle rules for managing object lifecycle. By examining the presence or absence of lifecycle policies, you can gain insights into the data management practices of your S3 buckets. This information can be valuable for optimizing storage costs and ensuring efficient data lifecycle management.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>buckets_without_policy</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 18,\n   \"id\": \"cc2e3052-9a34-4e09-ab57-c868197a5f62\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_S3\"\n    ],\n    \"actionDescription\": \"S3 lifecycle policies enable you to automatically transition objects to different storage classes or delete them when they are no longer needed. This action finds all S3 buckets without lifecycle policies. 
\",\n    \"actionEntryFunction\": \"aws_find_s3_buckets_without_lifecycle_policies\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Find S3 Buckets without Lifecycle Policies\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"01cb410b7247b1803c9d41cfd23853bf405b7a603ef52a9d535ed675ed961909\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"S3 lifecycle policies enable you to automatically transition objects to different storage classes or delete them when they are no longer needed. This action finds all S3 buckets without lifecycle policies. 
\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-16T08:46:51.456Z\"\n    },\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region of S3 buckets.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_find_s3_buckets_without_lifecycle_policies\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Find S3 Buckets without Lifecycle Policies\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"buckets_without_policy\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not bucket_names\",\n    \"tags\": [\n     \"aws_find_s3_buckets_without_lifecycle_policies\"\n    ],\n    \"title\": \"AWS Find S3 Buckets without Lifecycle Policies\",\n    \"uuid\": \"01cb410b7247b1803c9d41cfd23853bf405b7a603ef52a9d535ed675ed961909\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from unskript.legos.aws.aws_get_s3_buckets.aws_get_s3_buckets import aws_get_s3_buckets\\n\",\n    \"from typing import List, Optional, Tuple\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n   
 \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_s3_buckets_without_lifecycle_policies_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_s3_buckets_without_lifecycle_policies(handle, region: str=\\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_find_s3_buckets_without_lifecycle_policies List all the S3 buckets without lifecycle policies\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region of the bucket\\n\",\n    \"\\n\",\n    \"        :rtype: Status, List of all the S3 buckets without lifecycle policies with regions\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            s3Session = handle.resource(\\\"s3\\\", region_name=reg)\\n\",\n    \"            response = aws_get_s3_buckets(handle, region=reg)\\n\",\n    \"            for bucket in response:\\n\",\n    \"                bucket_region = s3Session.meta.client.get_bucket_location(Bucket=bucket)['LocationConstraint']\\n\",\n    \"                if bucket_region == None:\\n\",\n    \"                    bucket_region = 'us-east-1'\\n\",\n    \"                if bucket_region != reg:\\n\",\n    \"                    continue\\n\",\n    \"                bucket_lifecycle_configuration = s3Session.BucketLifecycleConfiguration(bucket)\\n\",\n    \"                try:\\n\",\n    \"                    if bucket_lifecycle_configuration.rules:\\n\",\n    \"                        continue\\n\",\n    \"                
except Exception:\\n\",\n    \"                    bucket_details = {}\\n\",\n    \"                    bucket_details[\\\"bucket_name\\\"] = bucket\\n\",\n    \"                    bucket_details[\\\"region\\\"] = reg\\n\",\n    \"                    result.append(bucket_details)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not bucket_names\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"buckets_without_policy\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_find_s3_buckets_without_lifecycle_policies, lego_printer=aws_find_s3_buckets_without_lifecycle_policies_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"199591ef-cb3a-49b7-b515-3c6998050320\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Low-CPU-Utilization-RDS-Instances\\\">Create List of Buckets with No Lifecycle Policy</h3>\\n\",\n    \"<p>This action gets the list of&nbsp; S3 buckets from the tuple output in Step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: 
<code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_buckets_without_policy</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 19,\n   \"id\": \"6a10e980-9f17-4436-9166-90ea130aa316\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-16T08:46:53.557Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Buckets with No Lifecycle Policy\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Buckets with No Lifecycle Policy\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_buckets_without_policy = []\\n\",\n    \"try:\\n\",\n    \"    for res in buckets_without_policy:\\n\",\n    \"        if type(res)==bool:\\n\",\n    \"            if res == False:\\n\",\n    \"                continue\\n\",\n    \"        elif type(res)==list:\\n\",\n    \"            if len(res)!=0:\\n\",\n    \"                all_buckets_without_policy=res\\n\",\n    \"except Exception:\\n\",\n    \"    for buck in bucket_names:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"region\\\"] = region\\n\",\n    \"        data_dict[\\\"bucket_name\\\"] = buck\\n\",\n    \"        all_buckets_without_policy.append(data_dict)\\n\",\n    \"print(all_buckets_without_policy)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"978d3b61-2fd9-461d-89bd-534d2dcf3b63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-RDS-Instance\\\"><a id=\\\"2\\\" target=\\\"_self\\\" 
rel=\\\"nofollow\\\"></a>Attach Lifecycle Policy</h3>\\n\",\n    \"<p>This action attaches a new lifecycle policy to the S3 buckets found in Step 1. From the listed input parameters, <code>expiration_days and noncurrent_days</code> have a default value of 30 days.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>bucket_name, region, expiration_days, prefix, noncurrent_days</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 21,\n   \"id\": \"8fcae72a-d600-4a8a-b103-6fa0afade0f9\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_S3\"\n    ],\n    \"actionDescription\": \"Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration.\",\n    \"actionEntryFunction\": \"aws_add_lifecycle_configuration_to_s3_bucket\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Add Lifecycle Configuration to AWS S3 Bucket\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"a55496e5f6dcbcdaeb22e734eea5363d34e60fa5c580b252ca16b022c0dbaf8f\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-16T08:47:36.364Z\"\n    },\n 
   \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"bucket_name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"bucket_name\\\\\\\\\\\")\\\"\"\n      },\n      \"expiration_days\": {\n       \"constant\": false,\n       \"value\": \"expiration_days\"\n      },\n      \"noncurrent_days\": {\n       \"constant\": false,\n       \"value\": \"noncurrent_days\"\n      },\n      \"prefix\": {\n       \"constant\": false,\n       \"value\": \"prefix\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"bucket_name\": {\n        \"description\": \"The name of the bucket for which to set the configuration.\",\n        \"title\": \"Bucket Name\",\n        \"type\": \"string\"\n       },\n       \"expiration_days\": {\n        \"default\": 30,\n        \"description\": \"Specifies the expiration for the lifecycle of the object in the form of days. 
Eg: 30 (days)\",\n        \"title\": \"Expiration Days\",\n        \"type\": \"number\"\n       },\n       \"noncurrent_days\": {\n        \"default\": 30,\n        \"description\": \"Specifies the number of days an object is noncurrent before Amazon S3 permanently deletes the noncurrent object versions\",\n        \"title\": \"Noncurrent Days\",\n        \"type\": \"number\"\n       },\n       \"prefix\": {\n        \"default\": \"\",\n        \"description\": \"Prefix identifying one or more objects to which the rule applies.\",\n        \"title\": \"Prefix\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"bucket_name\"\n      ],\n      \"title\": \"aws_add_lifecycle_configuration_to_s3_bucket\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"bucket_name\": \"bucket_name\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_buckets_without_policy\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Add Lifecycle Configuration to AWS S3 Bucket\",\n    \"orderProperties\": [\n     \"region\",\n     \"bucket_name\",\n     \"expiration_days\",\n     \"prefix\",\n     \"noncurrent_days\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_buckets_without_policy)!=0\",\n    \"tags\": [\n     \"aws_add_lifecycle_configuration_to_s3_bucket\"\n    ],\n    \"uuid\": \"a55496e5f6dcbcdaeb22e734eea5363d34e60fa5c580b252ca16b022c0dbaf8f\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    
\"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict, Optional\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_add_lifecycle_configuration_to_s3_bucket_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_add_lifecycle_configuration_to_s3_bucket(handle, region: str, bucket_name:str, expiration_days:int=30, prefix:str='', noncurrent_days:int=30) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_add_lifecycle_configuration_to_s3_bucket returns response of adding lifecycle configuration\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: location of the bucket\\n\",\n    \"\\n\",\n    \"        :type bucket_name: string\\n\",\n    \"        :param bucket_name: The name of the bucket for which to set the configuration.\\n\",\n    \"\\n\",\n    \"        :type expiration_days: int\\n\",\n    \"        :param expiration_days: Specifies the expiration for the lifecycle of the object in the form of days. 
Eg: 30 (days)\\n\",\n    \"\\n\",\n    \"        :type prefix: string\\n\",\n    \"        :param prefix: Prefix identifying one or more objects to which the rule applies.\\n\",\n    \"\\n\",\n    \"        :type noncurrent_days: int\\n\",\n    \"        :param noncurrent_days: Specifies the number of days an object is noncurrent before Amazon S3 permanently deletes the noncurrent object versions.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict of the response of adding lifecycle configuration\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    s3Client = handle.client(\\\"s3\\\", region_name=region)\\n\",\n    \"    try:\\n\",\n    \"        lifecycle_config = {\\n\",\n    \"            'Rules': [\\n\",\n    \"                {\\n\",\n    \"                    'Expiration': {\\n\",\n    \"                        'Days': expiration_days,\\n\",\n    \"                    },\\n\",\n    \"                    'Filter': {\\n\",\n    \"                        'Prefix': prefix\\n\",\n    \"                    },\\n\",\n    \"                    'Status': 'Enabled',\\n\",\n    \"                    'NoncurrentVersionExpiration': {\\n\",\n    \"                        'NoncurrentDays': noncurrent_days\\n\",\n    \"                    }\\n\",\n    \"                }\\n\",\n    \"            ]\\n\",\n    \"        }\\n\",\n    \"        response = s3Client.put_bucket_lifecycle_configuration(\\n\",\n    \"            Bucket=bucket_name,\\n\",\n    \"            LifecycleConfiguration=lifecycle_config\\n\",\n    \"        )\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise e\\n\",\n    \"    return response\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"expiration_days\\\": \\\"int(expiration_days)\\\",\\n\",\n    \"    \\\"prefix\\\": \\\"prefix\\\",\\n\",\n    \"    \\\"noncurrent_days\\\": 
\\\"int(noncurrent_days)\\\",\\n\",\n    \"    \\\"bucket_name\\\": \\\"iter.get(\\\\\\\\\\\"bucket_name\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_buckets_without_policy\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"bucket_name\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_buckets_without_policy)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_add_lifecycle_configuration_to_s3_bucket, lego_printer=aws_add_lifecycle_configuration_to_s3_bucket_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to find AWS S3 buckets without lifecycle policies and attach one to them. 
To view the full platform capabilities of unSkript, please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Add Lifecycle Policy to S3 Buckets\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"bucket_names\": {\n     \"description\": \"List of S3 buckets to attach the lifecycle policy to.\",\n     \"title\": \"bucket_names\",\n     \"type\": \"array\"\n    },\n    \"expiration_days\": {\n     \"default\": 30,\n     \"description\": \"Specifies the expiration of the lifecycle of the S3 bucket. By default it is considered to be 30 days. 
\",\n     \"title\": \"expiration_days\",\n     \"type\": \"number\"\n    },\n    \"noncurrent_days\": {\n     \"default\": 30,\n     \"description\": \"Specifies the number of days an object is noncurrent before Amazon S3 permanently deletes the noncurrent object versions.\",\n     \"title\": \"noncurrent_days\",\n     \"type\": \"number\"\n    },\n    \"prefix\": {\n     \"default\": \" \",\n     \"description\": \"Prefix identifying one or more objects to which the rule applies.\",\n     \"title\": \"prefix\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"description\": \"AWS region to find the S3 buckets\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [\n    \"region\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Add_Lifecycle_Policy_To_S3_Buckets.json",
    "content": "{\n    \"name\": \"Add Lifecycle Policy to S3 Buckets\",\n    \"description\": \"Attaching lifecycle policies to AWS S3 buckets enables us to automate the management of object lifecycle in your storage buckets. By configuring lifecycle policies, you can define rules that determine the actions to be taken on objects based on their age or other criteria. This includes transitioning objects to different storage classes, such as moving infrequently accessed data to lower-cost storage tiers or archiving them to Glacier, as well as setting expiration dates for objects. By attaching lifecycle policies to your S3 buckets, you can optimize storage costs by automatically moving data to the most cost-effective storage tier based on its lifecycle. Additionally, it allows you to efficiently manage data retention and comply with regulatory requirements or business policies regarding data expiration. This runbook helps us find all the buckets without any lifecycle policy and attach one to them.\",\n    \"uuid\": \"3d74913836e037a001f718b48f1e19010394b90afc2422d0572ab5c515521075\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Add_Mandatory_tags_to_EC2.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"79251bc7-c6cd-4344-a8d5-754bf62eb17e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Enforce Mandatory Tags Across All AWS Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Enforce Mandatory Tags Across All AWS Resources\"\n   },\n   \"source\": [\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b> This runbook demonstrates how to build a process around Mandatory Tags Across All AWS Resources using unSkript legos.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Enforce Mandatory Tags Across All AWS Resources</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"   1. **AWS Get Untagged Resources**: List all the Untagged Resources instanceIDs in the given region. This will also return data about the instances to help identify who the owners are.\\n\",\n    \"   2. **Get tag keys of all Resources**: This is a similar list to #1, but includes all instances, and all of the tags for each instance.\\n\",\n    \"   3. **AWS Attach Tags to Resources**: This action takes in an instance ID and a tag key:value pair.  Run this action as many times as needed to fully tag your instances.\\n\",\n    \"   4. **Compare tag keys against required list**: This final Action looks through all instances, and compares the Tag Keys with the required tag list.  
If an instance is not in compliance, it is exported.\\n\",\n    \"   \\n\",\n    \" The eventual goal is that after all required instances are labelled in step 3, step 4 will have only instances that are no longer needed, and can be removed from AWS.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a49a1258-79d2-4846-8731-4ed74b36d6bc\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Get Untagged Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Get Untagged Resources\"\n   },\n   \"source\": [\n    \"Here we will use the unSkript AWS Get Untagged Resources Lego. This Lego takes region: str as input, which is used to find all untagged resources.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 24,\n   \"id\": \"0ec169e9-f3f2-400d-9b58-e4a598769e61\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"aee6cabb55096d5cf6098faa7e4a94135e8f5b0572b36d4b3252d7745fae595b\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Get Untagged Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-14T23:05:39.107Z\"\n    },\n    \"id\": 187,\n    \"index\": 187,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\"\n      ],\n      \"title\": 
\"aws_get_untagged_resources\",\n      \"type\": \"object\"\n     }\n    ],\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Untagged Resources\",\n    \"nouns\": [\n     \"aws\",\n     \"resources\"\n    ],\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"UntaggedResources\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"aws_get_untagged_resources\"\n    ],\n    \"title\": \"AWS Get Untagged Resources\",\n    \"trusted\": true,\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_untagged_resources_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_untagged_resources(handle, region: str) -> List:\\n\",\n    \" \\n\",\n    \"    print(\\\"region\\\",region)\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    #res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"    res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"    result = []\\n\",\n    \"    for reservation in res:\\n\",\n    \"        for instance in reservation['Instances']:       \\n\",\n    \"            try:\\n\",\n    \"                #has tags\\n\",\n    \"                tagged_instance = instance['Tags']\\n\",\n    \"            except Exception as e:\\n\",\n    \"                
#no tags\\n\",\n    \"                result.append({\\\"instance\\\":instance['InstanceId'],\\\"type\\\":instance['InstanceType'],\\\"imageId\\\":instance['ImageId'], \\\"launched\\\":instance['LaunchTime'] })\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"UntaggedResources\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_untagged_resources, lego_printer=aws_get_untagged_resources_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"be97efa2-dbb5-40b2-8d07-cc000278ba84\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Get Tag Keys Of All Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Get Tag Keys Of All Resources\"\n   },\n   \"source\": [\n    \"Here we will use the unSkript Get Tag Keys Of All Resources Lego. This Lego takes region: str as input. 
This input is used to find out all Tag Keys of Resources.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 65,\n   \"id\": \"363de8c8-6aa8-40f4-8856-a62a2f0a69f5\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"db00e432f32042fe9e14ba89a69a4fb86f88f8554c5d45af4cd287a6e5e01532\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Get Tags of All Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-15T01:37:23.541Z\"\n    },\n    \"id\": 132,\n    \"index\": 132,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\"\n      ],\n      \"title\": \"aws_resources_tags\",\n      \"type\": \"object\"\n     }\n    ],\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Tag Keys of All Resources\",\n    \"nouns\": [\n     \"aws\",\n     \"resources\"\n    ],\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_resources_tags\"\n    ],\n    \"title\": \"AWS Get Tag Keys of All Resources\",\n    \"trusted\": true,\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from 
pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_resources_tags_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_resources_tags(handle, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_resources_tags Returns a List of all Resources Tags.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter resources.\\n\",\n    \"\\n\",\n    \"        :rtype: List of all Resources Tags.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    #res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"    res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"    result = []\\n\",\n    \"    for reservation in res:\\n\",\n    \"        for instance in reservation['Instances']:       \\n\",\n    \"            try:\\n\",\n    \"                #has tags\\n\",\n    \"                tagged_instance = instance['Tags']\\n\",\n    \"                result.append({\\\"instance\\\":instance['InstanceId'],\\\"type\\\":instance['InstanceType'],\\\"imageId\\\":instance['ImageId'], \\\"launched\\\":instance['LaunchTime'], \\\"tags\\\": tagged_instance})\\n\",\n    \"            except Exception as e:\\n\",\n    \"                #no tags\\n\",\n    \"                result.append({\\\"instance\\\":instance['InstanceId'],\\\"type\\\":instance['InstanceType'],\\\"imageId\\\":instance['ImageId'], \\\"launched\\\":instance['LaunchTime'], \\\"tags\\\": []})\\n\",\n    
\"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_resources_tags, lego_printer=aws_resources_tags_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ce65fdd0-ee64-42d0-90a6-0fe1c0f54608\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Attach Tags to Resources\"\n   },\n   \"source\": [\n    \"Here we will use the unSkript AWS Attach Tags to Resources Lego. This Lego takes handle, instanceId: str, tag_key: str, tag_value: str, region: str as input, which is used to attach mandatory tags to all untagged resources.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 52,\n   \"id\": \"e7815002-3aaf-4b3b-a3fe-12d1c3b1edba\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"878cb7819ecb4687ecfa8c6143365d10fe6b127adeb4a27fd71d06a3a2243d22\",\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Attach Tags to Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-15T00:11:14.333Z\"\n    },\n    \"id\": 167,\n    \"index\": 167,\n    \"inputData\": [\n     {\n      \"instanceId\": {\n       \"constant\": false,\n       \"value\": 
\"\\\"i-0ec9048cb5520b225\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"tag_key\": {\n       \"constant\": false,\n       \"value\": \"\\\"Environment\\\"\"\n      },\n      \"tag_value\": {\n       \"constant\": false,\n       \"value\": \"\\\"test\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instanceId\": {\n        \"default\": \"\\\"i-0ec9048cb5520b225\\\"\",\n        \"description\": \"instance ID\",\n        \"title\": \"instanceId\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"tag_key\": {\n        \"default\": \"\",\n        \"description\": \"Resource Tag Key.\",\n        \"title\": \"Tag Key\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n        \"default\": \"\",\n        \"description\": \"Resource Tag Value.\",\n        \"title\": \"Tag Value\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"resource_arn\",\n       \"tag_key\",\n       \"tag_value\",\n       \"region\"\n      ],\n      \"title\": \"aws_tag_resources\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": false,\n      \"iter_item\": \"resource_arn\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"UntaggedResources[\"\n      }\n     }\n    ],\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"nouns\": [\n     \"aws\",\n     \"resources\"\n    ],\n    \"orderProperties\": [\n     \"tag_key\",\n     \"tag_value\",\n     \"region\",\n     \"instanceId\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_tag_resources\"\n    ],\n    \"title\": \"AWS Attach Tags to 
Resources\",\n    \"trusted\": true,\n    \"verbs\": [\n     \"dict\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_tag_resources_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_tag_resources(handle, instanceId: str, tag_key: str, tag_value: str, region: str) -> Dict:\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.create_tags(\\n\",\n    \"            Resources=[\\n\",\n    \"                instanceId\\n\",\n    \"            ],\\n\",\n    \"            Tags=[\\n\",\n    \"                {\\n\",\n    \"                    'Key': tag_key,\\n\",\n    \"                    'Value': tag_value\\n\",\n    \"                },\\n\",\n    \"            ]\\n\",\n    \"        )\\n\",\n    \"        result = response\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result[\\\"error\\\"] = error\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(continueOnError=False)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": false,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    
\"    \\\"iter_list\\\": \\\"UntaggedResources[\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"resource_arn\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_tag_resources, lego_printer=aws_tag_resources_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 63,\n   \"id\": \"5bc81313-934a-476b-88a4-2c2629c3f759\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"aee6cabb55096d5cf6098faa7e4a94135e8f5b0572b36d4b3252d7745fae595b\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Get Untagged Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-15T00:44:19.650Z\"\n    },\n    \"id\": 179,\n    \"index\": 179,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"us-west-2\\\"\"\n      },\n      \"requiredTags\": {\n       \"constant\": false,\n       \"value\": \"[\\\"CostCenter\\\", \\\"Environment\\\"]\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"requiredTags\": {\n        \"default\": \"[\\\"CostCenter\\\", \\\"Environment\\\"]\",\n        \"description\": \"a list of required tags for EC2 Instances\",\n        \"title\": \"requiredTags\",\n        \"type\": \"array\"\n       }\n      },\n      \"required\": [\n       \"region\"\n      
],\n      \"title\": \"aws_get_untagged_resources\",\n      \"type\": \"object\"\n     }\n    ],\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"compare tag keys against required list\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"requiredTags\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_get_untagged_resources\"\n    ],\n    \"title\": \"compare tag keys against required list\",\n    \"trusted\": true,\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_resources_out_of_compliance_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_resources_out_of_compliance(handle, region: str, requiredTags: list) -> List:\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    #res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"    res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"    result = []\\n\",\n    \"    for reservation in res:\\n\",\n    \"        for instance in reservation['Instances']:       \\n\",\n    \"            try:\\n\",\n    \"                #has tags\\n\",\n    \"                allTags = True\\n\",\n    \"                keyList = []\\n\",\n    \"                tagged_instance = instance['Tags']\\n\",\n    \"                #print(tagged_instance)\\n\",\n    \"                #get all the keys for the 
instance\\n\",\n    \"                for kv in tagged_instance:\\n\",\n    \"                    key = kv[\\\"Key\\\"]\\n\",\n    \"                    keyList.append(key)\\n\",\n    \"                #see if the required tags are represented in the keylist\\n\",\n    \"                #if they are not - the instance is not in compliance\\n\",\n    \"                for required in requiredTags:\\n\",\n    \"                        if required not in keyList:\\n\",\n    \"                            allTags = False\\n\",\n    \"                if not allTags:\\n\",\n    \"                    # instance is not in compliance\\n\",\n    \"                    result.append({\\\"instance\\\":instance['InstanceId'],\\\"type\\\":instance['InstanceType'],\\\"imageId\\\":instance['ImageId'], \\\"launched\\\":instance['LaunchTime'], \\\"tags\\\": tagged_instance})\\n\",\n    \"                \\n\",\n    \"            except Exception as e:\\n\",\n    \"                #no tags\\n\",\n    \"                result.append({\\\"instance\\\":instance['InstanceId'],\\\"type\\\":instance['InstanceType'],\\\"imageId\\\":instance['ImageId'], \\\"launched\\\":instance['LaunchTime'], \\\"tags\\\": []})\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"\\\\\\\\\\\"us-west-2\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"requiredTags\\\": \\\"[\\\\\\\\\\\"CostCenter\\\\\\\\\\\", \\\\\\\\\\\"Environment\\\\\\\\\\\"]\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_resources_out_of_compliance, lego_printer=aws_get_resources_out_of_compliance_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a8280ac4-d504-44d2-b5ea-d97f7ca672c8\",\n   \"metadata\": {\n    \"jupyter\": {\n    
 \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this Runbook, we demonstrated the use of unSkript's AWS Legos to attach tags. This Runbook gets the list of all untagged resources in a given region, discovers the tag keys of that region, and attaches mandatory tags to all the untagged resources. To view the full platform capabilities of unSkript, please visit https://unskript.com\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Add Mandatory tags to EC2\",\n   \"parameters\": [\n    \"Region\",\n    \"Tag_Key\",\n    \"Tag_Value\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 762)\",\n   \"name\": \"python_kubernetes\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"Region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"Resources Region\",\n     \"title\": \"Region\",\n     \"type\": \"string\"\n    },\n    \"Tag_Key\": {\n     \"default\": \"Description\",\n     \"description\": \"Mandatory Tag key for resources (only use when the tag needs to be attached to all the resources)\",\n     \"title\": \"Tag_Key\",\n     \"type\": \"string\"\n    },\n    \"Tag_Value\": {\n     \"default\": \"Unskript\",\n     \"description\": \"Mandatory Tag Value for resources (only use when the tag needs to be attached to all the resources)\",\n     \"title\": \"Tag_Value\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Add_Mandatory_tags_to_EC2.json",
    "content": "{\n  \"name\": \"AWS Add Mandatory tags to EC2\",\n  \"description\": \"This xRunBook is a set of example actions that could be used to establish mandatory tagging for EC2 instances: first testing instances for compliance and creating reports of instances that are missing the required tags. There is also an action to add tags to an instance, to help bring it into tag compliance.\",\n  \"uuid\": \"70e8223d93ea22200942a614b7565faf63bfe6d14de352e60804f8b5dc6fbbcd\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/AWS_Add_Tag_Across_Selected_AWS_Resources.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"79251bc7-c6cd-4344-a8d5-754bf62eb17e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Add tag to selected AWS Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Add tag to selected AWS Resources\"\n   },\n   \"source\": [\n    \"<p><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"></p>\\n\",\n    \"<h1 id=\\\"-unSkript-Runbooks-\\\">unSkript Runbooks <a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#-unSkript-Runbooks-\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\"><strong>&nbsp;This runbook adds tags to selected AWS Resources.</strong></div>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Enforce-Mandatory-Tags-Across-All-AWS-Resources\\\">Enforce Mandatory Tags Across All AWS Resources<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Enforce-Mandatory-Tags-Across-All-AWS-Resources\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>List all the Resource ARNs that do not have this tag in the given region.</li>\\n\",\n    \"<li>Select the resources to tag, using the value from the input parameters.</li>\\n\",\n    \"<li>Add Tags to the selected AWS Resources.</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a49a1258-79d2-4846-8731-4ed74b36d6bc\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Get Untagged Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Get Untagged Resources\"\n   },\n   \"source\": [\n    
\"Here we will use the unSkript AWS Get Untagged Resources Lego. This Lego takes region: str as input, which is used to find all Untagged Resources.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 167,\n   \"id\": \"0ec169e9-f3f2-400d-9b58-e4a598769e61\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": true,\n    \"action_uuid\": \"aee6cabb55096d5cf6098faa7e4a94135e8f5b0572b36d4b3252d7745fae595b\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"customCell\": true,\n    \"description\": \"AWS Get Untagged Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-04T03:38:42.589Z\"\n    },\n    \"id\": 187,\n    \"index\": 187,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"tag\": {\n       \"constant\": false,\n       \"value\": \"Tag_Key\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"tag\": {\n        \"default\": \"\\\"Tag_Key\\\"\",\n        \"description\": \"The Tag to search for\",\n        \"title\": \"tag\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"tag\"\n      ],\n      \"title\": \"aws_get_resources_missing_tag\",\n      \"type\": \"object\"\n     }\n    ],\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Resources Missing Tag\",\n    \"nouns\": [\n     \"aws\",\n     \"resources\"\n    ],\n    
\"orderProperties\": [\n     \"region\",\n     \"tag\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"UntaggedResources\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"service_id_enabled\": false,\n    \"tags\": [\n     \"aws_get_untagged_resources\"\n    ],\n    \"title\": \"AWS Get Resources Missing Tag\",\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_resources_missing_tag_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(f\\\"there are {len(output)} resources missing tag {Tag_Key}. 
We can fix a max of 20.\\\" )\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_resources_missing_tag(handle, region: str, tag:str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_get_resources_missing_tag Returns an List of Untagged Resources.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter resources.\\n\",\n    \"\\n\",\n    \"        :rtype: List of untagged resources.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\\n\",\n    \"    result = []\\n\",\n    \"\\n\",\n    \"    arnKeywordsToIgnore = [\\\"sqlworkbench\\\",\\n\",\n    \"                           \\\"AutoScalingManagedRule\\\",\\n\",\n    \"                           \\\"sagarProxy\\\",\\n\",\n    \"                           \\\"fsap-0f4d1bbd83f172783\\\",\\n\",\n    \"                           \\\"experiment\\\"]\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = aws_get_paginator(ec2Client, \\\"get_resources\\\", \\\"ResourceTagMappingList\\\")\\n\",\n    \"        for resources in response:\\n\",\n    \"            if not resources[\\\"Tags\\\"]:\\n\",\n    \"                #no tags at all!!\\n\",\n    \"                arnIgnore = False\\n\",\n    \"                for substring in arnKeywordsToIgnore:\\n\",\n    \"                    if substring in resources[\\\"ResourceARN\\\"]:\\n\",\n    \"                        arnIgnore = True\\n\",\n    \"                if not arnIgnore:\\n\",\n    \"                    # instance is missing tag\\n\",\n    \"                    result.append(resources[\\\"ResourceARN\\\"])\\n\",\n    \"            else:\\n\",\n    \"                #has tags\\n\",\n    \"                allTags = True\\n\",\n    \"                keyList = []\\n\",\n    \"   
             tagged_instance = resources['Tags']\\n\",\n    \"                #print(tagged_instance)\\n\",\n    \"                #get all the keys for the instance\\n\",\n    \"                for kv in tagged_instance:\\n\",\n    \"                    key = kv[\\\"Key\\\"]\\n\",\n    \"                    keyList.append(key)\\n\",\n    \"                #see if the required tags are represented in the keylist\\n\",\n    \"                #if they are not - the instance is not in compliance\\n\",\n    \"                if tag not in keyList:\\n\",\n    \"                    allTags = False\\n\",\n    \"                if not allTags:\\n\",\n    \"                    arnIgnore = False\\n\",\n    \"                    for substring in arnKeywordsToIgnore:\\n\",\n    \"                        if substring in resources[\\\"ResourceARN\\\"]:\\n\",\n    \"                            arnIgnore = True\\n\",\n    \"                    if not arnIgnore:\\n\",\n    \"                        # instance is missing tag\\n\",\n    \"                        result.append(resources[\\\"ResourceARN\\\"])\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result.append({\\\"error\\\":error})\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"tag\\\": \\\"Tag_Key\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"UntaggedResources\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_resources_missing_tag, lego_printer=aws_get_resources_missing_tag_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 165,\n   \"id\": \"de6350ed-9d0c-45fe-8917-5e95d370eed7\",\n   \"metadata\": {\n 
   \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-04T03:32:16.862Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Select Resources to Tag\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Select Resources to Tag\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import ipywidgets as widgets\\n\",\n    \"\\n\",\n    \"# Maximum number of checkboxes to display\\n\",\n    \"#the API has a max of 20 to update at once.\\n\",\n    \"max_checkboxes = 20\\n\",\n    \"checked = False\\n\",\n    \"# Create checkboxes\\n\",\n    \"checkboxes = [widgets.Checkbox(value=checked, description=untagged,style=dict(description_width='initial'),layout=dict(width='200%') ) for untagged in UntaggedResources[:max_checkboxes]]\\n\",\n    \"\\n\",\n    \"# Create a VBox container to display the checkboxes vertically\\n\",\n    \"checkboxes_container = widgets.VBox(checkboxes)\\n\",\n    \"\\n\",\n    \"# Display the checkboxes\\n\",\n    \"display(checkboxes_container)\\n\",\n    \"\\n\",\n    \"# List to store the checked states\\n\",\n    \"checked_list = []\\n\",\n    \"firstRun = True\\n\",\n    \"\\n\",\n    \"# Function to update the checked_list\\n\",\n    \"def update_checked_list(**kwargs):\\n\",\n    \"    checked_list.clear()\\n\",\n    \"    checked_list.extend([untagged for untagged, value in kwargs.items() if value])\\n\",\n    \"    global firstRun\\n\",\n    \"    if not firstRun:\\n\",\n    \"        print(\\\"Checked items:\\\", checked_list)\\n\",\n    \"    firstRun = False\\n\",\n    \"    \\n\",\n    \"# Create a dictionary of Checkbox widgets and their names\\n\",\n    \"checkbox_dict = {untagged: checkbox for untagged, checkbox in zip(UntaggedResources, checkboxes)}\\n\",\n    \"\\n\",\n    \"# Create the interactive_output widget\\n\",\n    \"output = widgets.interactive_output(update_checked_list, checkbox_dict)\\n\",\n    
\"\\n\",\n    \"# Print the checked list initially\\n\",\n    \"update_checked_list(**{name: checkbox.value for name, checkbox in checkbox_dict.items()})\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# Display the output\\n\",\n    \"display(output)\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ce65fdd0-ee64-42d0-90a6-0fe1c0f54608\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Attach Tags to Resources\"\n   },\n   \"source\": [\n    \"Here we will use the unSkript AWS Attach Tags to Resources Lego. This Lego takes handle, resource_arn: list, tag_key: str, tag_value: str, region: str as inputs. These inputs are used to attach mandatory tags to all untagged Resources.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 149,\n   \"id\": \"b0bf6aee-2b72-4348-8c38-fe3783619da6\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"878cb7819ecb4687ecfa8c6143365d10fe6b127adeb4a27fd71d06a3a2243d22\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Attach Tags to Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-04T03:20:41.106Z\"\n    },\n    \"id\": 260,\n    \"index\": 260,\n    \"inputData\": [\n     {\n   
   \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"resource_arn\": {\n       \"constant\": false,\n       \"value\": \"checked_list\"\n      },\n      \"tag_key\": {\n       \"constant\": false,\n       \"value\": \"Tag_Key\"\n      },\n      \"tag_value\": {\n       \"constant\": false,\n       \"value\": \"\\\"01/01/2025\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"resource_arn\": {\n        \"description\": \"Resource ARNs.\",\n        \"items\": {},\n        \"title\": \"Resource ARN\",\n        \"type\": \"array\"\n       },\n       \"tag_key\": {\n        \"description\": \"Resource Tag Key.\",\n        \"title\": \"Tag Key\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n        \"description\": \"Resource Tag Value.\",\n        \"title\": \"Tag Value\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"resource_arn\",\n       \"tag_key\",\n       \"tag_value\"\n      ],\n      \"title\": \"aws_attach_tags_to_resources\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"iterData\": [\n     {\n      \"iter_enabled\": false,\n      \"iter_item\": \"resource_arn\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"checked_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"resource_arn\",\n     \"tag_key\",\n     \"tag_value\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_attach_tags_to_resources\"\n    ],\n    
\"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_attach_tags_to_resources_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_attach_tags_to_resources(\\n\",\n    \"    handle,\\n\",\n    \"    resource_arn: list,\\n\",\n    \"    tag_key: str,\\n\",\n    \"    tag_value: str,\\n\",\n    \"    region: str\\n\",\n    \"    ) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_attach_tags_to_resources Returns an Dict of resource info.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type resource_arn: list\\n\",\n    \"        :param resource_arn: Resource ARNs.\\n\",\n    \"\\n\",\n    \"        :type tag_key: str\\n\",\n    \"        :param tag_key: Resource Tag Key.\\n\",\n    \"\\n\",\n    \"        :type tag_value: str\\n\",\n    \"        :param tag_value: Resource Tag value.\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter resources.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict of resource info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.tag_resources(\\n\",\n    \"            ResourceARNList=resource_arn,\\n\",\n    \"            Tags={tag_key: tag_value}\\n\",\n    \"            )\\n\",\n    \"        result = response\\n\",\n    \"\\n\",\n    \"    except Exception as 
error:\\n\",\n    \"        result[\\\"error\\\"] = error\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=False)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"resource_arn\\\": \\\"checked_list\\\",\\n\",\n    \"    \\\"tag_key\\\": \\\"Tag_Key\\\",\\n\",\n    \"    \\\"tag_value\\\": \\\"\\\\\\\\\\\"01/01/2025\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": false,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"checked_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"resource_arn\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_attach_tags_to_resources, lego_printer=aws_attach_tags_to_resources_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a8280ac4-d504-44d2-b5ea-d97f7ca672c8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this Runbook, we demonstrated the use of unSkript's AWS legos to attach tags. This Runbook gets the list of all untagged resources of a given region, discovers tag keys of the given region and attaches mandatory tags to all the untagged resource. 
To view the full platform capabilities of unSkript, please visit https://unskript.com\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Update Resources about to expire\",\n   \"parameters\": [\n    \"Region\",\n    \"Tag_Key\",\n    \"Tag_Value\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1185)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"outputParameterSchema\": null,\n  \"parameterSchema\": {\n   \"definitions\": null,\n   \"properties\": {\n    \"Region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"Resources Region\",\n     \"title\": \"Region\",\n     \"type\": \"string\"\n    },\n    \"Tag_Key\": {\n     \"default\": \"owner\",\n     \"description\": \"Mandatory Tag key for resources (only use when the tag needs to be attached to all the resources)\",\n     \"title\": \"Tag_Key\",\n     \"type\": \"string\"\n    },\n    \"Tag_Value\": {\n     \"description\": \"Mandatory Tag Value for resources (only use when the tag needs to be attached to all the resources)\",\n     \"title\": \"Tag_Value\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Add_Tag_Across_Selected_AWS_Resources.json",
    "content": "{\n    \"name\": \"AWS Update Resources about to expire\",\n    \"description\": \"This xRunBook finds resources that have an expiration tag that is about to expire. It can either send a Slack message in 'auto' mode, or be used to manually remediate the issue interactively.\",\n    \"uuid\": \"a79201f821993867e23dd9603ed7ef5523324353d717c566f902f7ac6e471f5e\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_CLOUDOPS\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Bulk_Update_Resource_Tag.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"79251bc7-c6cd-4344-a8d5-754bf62eb17e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Update Tags for AWS Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Update Tags for AWS Resources\"\n   },\n   \"source\": [\n    \"<p><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"></p>\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks-&para;\\\">unSkript Runbooks <a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#-unSkript-Runbooks-\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks-&para;\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\"><strong>&nbsp;This runbook demonstrates how to update Tags for AWS Resources using unSkript Legos.</strong></div>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Enforce-Mandatory-Tags-Across-All-AWS-Resources&para;\\\">Update Tags for selected AWS Resources<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Enforce-Mandatory-Tags-Across-All-AWS-Resources\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Enforce-Mandatory-Tags-Across-All-AWS-Resources&para;\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview&para;\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>List all the Resource ARNs in the given region with the selected tag.</li>\\n\",\n    \"<li>We'll print a list of tagged resources along with the 
current value of the tag. Select and change as desired.</li>\\n\",\n    \"<li>Update the Selected tags at AWS.</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 15,\n   \"id\": \"0ec169e9-f3f2-400d-9b58-e4a598769e61\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": true,\n    \"action_uuid\": \"aee6cabb55096d5cf6098faa7e4a94135e8f5b0572b36d4b3252d7745fae595b\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"customCell\": true,\n    \"description\": \"AWS Get Untagged Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-09T16:26:07.832Z\"\n    },\n    \"id\": 187,\n    \"index\": 187,\n    \"inputData\": [\n     {\n      \"new_owner\": {\n       \"constant\": false,\n       \"value\": \"new_value\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"tag\": {\n       \"constant\": false,\n       \"value\": \"Tag_Key\"\n      },\n      \"tag_value\": {\n       \"constant\": false,\n       \"value\": \"current_value\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"new_owner\": {\n        \"default\": \"\\\"new_owner\\\"\",\n        \"description\": \"The new Owner of the discovered resources\",\n        \"title\": \"new_owner\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"tag\": {\n        \"default\": \"\\\"Tag_Key\\\"\",\n        \"description\": \"The Tag to search for\",\n       
 \"title\": \"tag\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n        \"default\": \"\\\"current_owner\\\"\",\n        \"description\": \"This Action will pull all resources with this value as owner.\",\n        \"title\": \"tag_value\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"new_owner\",\n       \"region\",\n       \"tag\",\n       \"tag_value\"\n      ],\n      \"title\": \"aws_get_resources_with_tag\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Resources With Tag\",\n    \"nouns\": [\n     \"aws\",\n     \"resources\"\n    ],\n    \"orderProperties\": [\n     \"region\",\n     \"tag\",\n     \"tag_value\",\n     \"new_owner\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"taggedResources\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"service_id_enabled\": false,\n    \"tags\": [\n     \"aws_get_untagged_resources\"\n    ],\n    \"title\": \"AWS Get Resources With Tag\",\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_resources_with_tag_printer(output):\\n\",\n    \"\\n\",\n    \"    pprint.pprint(f\\\"There are {len(output)} resources with tag {Tag_Key} with value {tag_value}.\\\" )\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_resources_with_tag(handle, 
 region: str, tag: str, tag_value: str, new_owner: str) -> List:\n",\n    "\n",\n    "\n",\n    "    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\n",\n    "    result = []\n",\n    "\n",\n    "\n",\n    "    try:\n",\n    "        response = aws_get_paginator(ec2Client, \"get_resources\", \"ResourceTagMappingList\")\n",\n    "        for resources in response:\n",\n    "            if resources[\"Tags\"]:\n",\n    "                # has tags: check every key/value pair on the resource\n",\n    "                for kv in resources['Tags']:\n",\n    "                    key = kv[\"Key\"]\n",\n    "                    value = kv[\"Value\"]\n",\n    "                    if tag == key and tag_value == value:\n",\n    "                        result.append(resources[\"ResourceARN\"])\n",\n    "\n",\n    "    except Exception as error:\n",\n    "        result.append({\"error\": error})\n",\n    "    pprint.pprint(f\"There are {len(result)} resources with tag {tag} with value {tag_value}. 
If you continue, we'll replace {tag_value} with {new_owner}\" )\n",\n    "    return result\n",\n    "\n",\n    "\n",\n    "task = Task(Workflow())\n",\n    "task.configure(printOutput=True)\n",\n    "task.configure(inputParamsJson='''{\n",\n    "    \"new_owner\": \"new_value\",\n",\n    "    \"region\": \"Region\",\n",\n    "    \"tag\": \"Tag_Key\",\n",\n    "    \"tag_value\": \"current_value\"\n",\n    "    }''')\n",\n    "task.configure(outputName=\"taggedResources\")\n",\n    "\n",\n    "(err, hdl, args) = task.validate(vars=vars())\n",\n    "if err is None:\n",\n    "    task.execute(aws_get_resources_with_tag, lego_printer=aws_get_resources_with_tag_printer, hdl=hdl, args=args)"\n   ]\n  },\n  {\n   "cell_type": "markdown",\n   "id": "ce65fdd0-ee64-42d0-90a6-0fe1c0f54608",\n   "metadata": {\n    "jupyter": {\n     "source_hidden": false\n    },\n    "name": "AWS Attach Tags to Resources",\n    "orderProperties": [],\n    "tags": [],\n    "title": "AWS Attach Tags to Resources"\n   },\n   "source": [\n    "Here we will use unSkript's AWS Attach Tags to Resources Lego. This Lego takes handle, resource_arn: list, tag_key: str, tag_value: str, region: str as input. 
These inputs are used to apply the new tag value to each of the resources collected in the previous step."\n   ]\n  },\n  {\n   "cell_type": "code",\n   "execution_count": 14,\n   "id": "cdba6167-a94a-4985-9caf-50c5de52f8a5",\n   "metadata": {\n    "actionBashCommand": false,\n    "actionCategories": [\n     "CATEGORY_TYPE_COST_OPT",\n     "CATEGORY_TYPE_DEVOPS",\n     "CATEGORY_TYPE_AWS"\n    ],\n    "actionDescription": "AWS Attach Tags to Resources",\n    "actionEntryFunction": "aws_attach_tags_to_resources",\n    "actionIsCheck": false,\n    "actionIsRemediation": false,\n    "actionNeedsCredential": true,\n    "actionNextHop": null,\n    "actionNextHopParameterMapping": null,\n    "actionNouns": null,\n    "actionOutputType": "ACTION_OUTPUT_TYPE_DICT",\n    "actionSupportsIteration": true,\n    "actionSupportsPoll": true,\n    "actionTitle": "AWS Attach Tags to Resources",\n    "actionType": "LEGO_TYPE_AWS",\n    "actionVerbs": null,\n    "actionVersion": "1.0.0",\n    "action_modified": false,\n    "action_uuid": "878cb7819ecb4687ecfa8c6143365d10fe6b127adeb4a27fd71d06a3a2243d22",\n    "continueOnError": false,\n    "credentialsJson": {},\n    "description": "AWS Attach Tags to Resources",\n    "execution_data": {\n     "last_date_success_run_cell": "2023-06-09T16:26:01.445Z"\n    },\n    "id": 2,\n    "index": 2,\n    "inputData": [\n     {\n      "region": {\n       "constant": false,\n       "value": "Region"\n      },\n      "resource_arn": {\n       "constant": false,\n       "value": "taggedResources"\n      },\n      "tag_key": {\n       "constant": false,\n       "value": "Tag_Key"\n      },\n      "tag_value": {\n       "constant": false,\n       "value": "new_value"\n      }\n     }\n    ],\n    "inputschema": [\n     {\n      "properties": {\n       "region": {\n        "description": "AWS Region.",\n        "title": "Region",\n        "type": 
\"string\"\n       },\n       \"resource_arn\": {\n        \"description\": \"Resource ARNs.\",\n        \"items\": {},\n        \"title\": \"Resource ARN\",\n        \"type\": \"array\"\n       },\n       \"tag_key\": {\n        \"description\": \"Resource Tag Key.\",\n        \"title\": \"Tag Key\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n        \"description\": \"Resource Tag Value.\",\n        \"title\": \"Tag Value\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"resource_arn\",\n       \"tag_key\",\n       \"tag_value\"\n      ],\n      \"title\": \"aws_attach_tags_to_resources\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": false,\n      \"iter_item\": \"resource_arn\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"taggedResources\"\n      }\n     }\n    ],\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"orderProperties\": [\n     \"resource_arn\",\n     \"tag_key\",\n     \"tag_value\",\n     \"region\"\n    ],\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_attach_tags_to_resources\"\n    ],\n    \"uuid\": \"878cb7819ecb4687ecfa8c6143365d10fe6b127adeb4a27fd71d06a3a2243d22\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from typing import List\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"class InputSchema(BaseModel):\\n\",\n    \"    region: str = Field(..., description='AWS Region.', title='Region')\\n\",\n    \"    
resource_arn: List = Field(..., description='Resource ARNs.', title='Resource ARN')\\n\",\n    \"    tag_key: str = Field(..., description='Resource Tag Key.', title='Tag Key')\\n\",\n    \"    tag_value: str = Field(..., description='Resource Tag Value.', title='Tag Value')\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# This API has a limit of 20 ARNs per api call...\\n\",\n    \"#we'll need to break up the list into chunks of 20\\n\",\n    \"def break_list(long_list, max_size):\\n\",\n    \"    return [long_list[i:i + max_size] for i in range(0, len(long_list), max_size)]\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def aws_attach_tags_to_resources_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"def aws_attach_tags_to_resources(\\n\",\n    \"    handle,\\n\",\n    \"    resource_arn: list,\\n\",\n    \"    tag_key: str,\\n\",\n    \"    tag_value: str,\\n\",\n    \"    region: str\\n\",\n    \"    ) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_attach_tags_to_resources Returns an Dict of resource info.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type resource_arn: list\\n\",\n    \"        :param resource_arn: Resource ARNs.\\n\",\n    \"\\n\",\n    \"        :type tag_key: str\\n\",\n    \"        :param tag_key: Resource Tag Key.\\n\",\n    \"\\n\",\n    \"        :type tag_value: str\\n\",\n    \"        :param tag_value: Resource Tag value.\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter resources.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict of resource info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\\n\",\n    \"    result = {}\\n\",\n    \"\\n\",\n    \"    #break the ARN list into groups of 20 to 
send through the API\\n\",\n    \"    list_of_lists = break_list(resource_arn, 20)\\n\",\n    \"\\n\",\n    \"    for index, smallerList in enumerate(list_of_lists):\\n\",\n    \"\\n\",\n    \"        try:\\n\",\n    \"            response = ec2Client.tag_resources(\\n\",\n    \"                ResourceARNList=smallerList,\\n\",\n    \"                Tags={tag_key: tag_value}\\n\",\n    \"                )\\n\",\n    \"            result[index] = response\\n\",\n    \"\\n\",\n    \"        except Exception as error:\\n\",\n    \"            result[f\\\"{index} error\\\"] = error\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=False)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"resource_arn\\\": \\\"taggedResources\\\",\\n\",\n    \"    \\\"tag_key\\\": \\\"Tag_Key\\\",\\n\",\n    \"    \\\"tag_value\\\": \\\"new_value\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": false,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"taggedResources\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"resource_arn\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_attach_tags_to_resources, lego_printer=aws_attach_tags_to_resources_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a8280ac4-d504-44d2-b5ea-d97f7ca672c8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this 
Runbook, we demonstrated the use of unSkript's AWS Legos to update resource tags in bulk. The Runbook finds every resource in the given region that carries the specified tag key and value, then updates that tag to the new value on each matching resource. To view the full platform capabilities of unSkript please visit https://unskript.com"\n   ]\n  }\n ],\n "metadata": {\n  "execution_data": {\n   "runbook_name": "AWS Bulk Update Resource Tag",\n   "parameters": [\n    "Region",\n    "Tag_Key",\n    "Tag_Value"\n   ]\n  },\n  "kernelspec": {\n   "display_name": "unSkript (Build: 1185)",\n   "language": "python",\n   "name": "python3"\n  },\n  "language_info": {\n   "codemirror_mode": {\n    "name": "ipython",\n    "version": 3\n   },\n   "file_extension": ".py",\n   "mimetype": "text/x-python",\n   "name": "python",\n   "nbconvert_exporter": "python",\n   "pygments_lexer": "ipython3",\n   "version": "3.9.6"\n  },\n  "outputParameterSchema": null,\n  "parameterSchema": {\n   "definitions": null,\n   "properties": {\n    "Region": {\n     "default": "us-west-2",\n     "description": "Resources Region",\n     "title": "Region",\n     "type": "string"\n    },\n    "Tag_Key": {\n     "default": "owner",\n     "description": "This is the Key to the owner tag. 
If you use a different tag key than \"owner\", change this value.",\n     "title": "Tag_Key",\n     "type": "string"\n    },\n    "current_value": {\n     "description": "The current tag value.",\n     "title": "current_value",\n     "type": "string"\n    },\n    "new_value": {\n     "description": "The new value for the tag.",\n     "title": "new_value",\n     "type": "string"\n    }\n   },\n   "required": [],\n   "title": "Schema",\n   "type": "object"\n  },\n  "parameterValues": {\n   "Region": "us-west-2",\n   "Tag_Key": "owner",\n   "current_value": "unskript",\n   "new_value": "unskript1"\n  }\n },\n "nbformat": 4,\n "nbformat_minor": 5\n}"
  },
  {
    "path": "AWS/AWS_Bulk_Update_Resource_Tag.json",
    "content": "{\n    \"name\": \"AWS Bulk Update Resource Tag\",\n    \"description\": \"This runbook will find all AWS Resources tagged with a given key:value tag.  It will then update the tag's value to a new value. This can be used to bulk update the owner of resources, or any other reason you might need to change the tag value for many AWS resources.\",\n    \"uuid\": \"32ce1935204c64d816fd1f01f4fe41e8d8bd47725b899535c6acee703a7bcf0d\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_COST_OPT\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Change_EBS_Volume_To_GP3_Type.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"6b5fc373-33cc-4aa1-8a91-95d195fca904\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong\\\"><em>Change EBS volumes that are not GP3 Type to GP3 Type</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Secure-Publicly-accessible-Amazon-RDS-Snapshot\\\"><u><strong\\\">Change AWS EBS volume to GP3 Type</strong></u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p><br>1)<a href=\\\"#1\\\"> Get AWS EBS Volume Without GP3 Type</a><br>2)<a href=\\\"#2\\\"> Modify EBS Volume to GP3</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 29,\n   \"id\": \"20a19712-dd1b-44cf-9ff9-97d0fa59c4b3\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-18T12:30:28.332Z\"\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if ebs_volume_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the EBS Volumes!\\\")\\n\",\n    \"if region == None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   
\"cell_type\": \"markdown\",\n   \"id\": \"920ff80f-0083-40e9-96d1-f4ca61b603ef\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-AWS-EBS-Volume-Without-GP3-Type\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get AWS EBS Volume Without GP3 Type<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-AWS-EBS-Volume-Without-GP3-Type\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Using unSkript's Get AWS EBS Volume Without GP3 Type action we will fetch all the EBS Volumes that are not of General Purpose Type-3.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region(Optional)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_volumes</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"bc4e94de-bb36-4db2-8017-ab96ae205959\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_IAM\",\n     \"CATEGORY_TYPE_SECOPS\"\n    ],\n    \"actionDescription\": \"AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\",\n    \"actionEntryFunction\": \"aws_get_ebs_volumes_without_gp3_type\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"2475714639442a9adcd0a87f7d193f6e8a6bbb9537d1eb6b03a6befb8ef84b19\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    
\"actionTitle\": \"Get AWS EBS Volume Without GP3 Type\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"ef1a281f6f6d0f44406bc73758705fd814b740952f9a82a2735d8db6fb6d834f\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"description\": \"AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\",\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_ebs_volumes_without_gp3_type\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS EBS Volume Without GP3 Type\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"all_volumes\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not ebs_volume_ids\",\n    \"tags\": [\n     \"aws_get_ebs_volumes_without_gp3_type\"\n    ],\n    \"uuid\": \"ef1a281f6f6d0f44406bc73758705fd814b740952f9a82a2735d8db6fb6d834f\",\n    \"version\": \"1.0.0\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n   
 \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_ebs_volumes_without_gp3_type_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_ebs_volumes_without_gp3_type(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_ebs_volumes_without_gp3_type Returns an array of ebs volumes.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Used to filter the volume for specific region.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple with status result and list of EBS Volume without GP3 type.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result=[]\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            # Filtering the volume by region\\n\",\n    \"            ec2Client = handle.resource('ec2', region_name=reg)\\n\",\n    \"            volumes = ec2Client.volumes.all()\\n\",\n    \"\\n\",\n    \"            # collecting the volumes which has zero attachments\\n\",\n    \"            for volume in volumes:\\n\",\n    \"                volume_dict = {}\\n\",\n    \"                if volume.volume_type != \\\"gp3\\\":\\n\",\n    \"                    volume_dict[\\\"region\\\"] = reg\\n\",\n    \"                    volume_dict[\\\"volume_id\\\"] = volume.id\\n\",\n    \"                    volume_dict[\\\"volume_type\\\"] = volume.volume_type\\n\",\n    \"                    
 result.append(volume_dict)\n",\n    "        except Exception:\n",\n    "            pass\n",\n    "\n",\n    "    if len(result) != 0:\n",\n    "        return (False, result)\n",\n    "    return (True, None)\n",\n    "\n",\n    "\n",\n    "task = Task(Workflow())\n",\n    "task.configure(inputParamsJson='''{\n",\n    "    \"region\": \"region\"\n",\n    "    }''')\n",\n    "task.configure(conditionsJson='''{\n",\n    "    \"condition_enabled\": true,\n",\n    "    \"condition_cfg\": \"not ebs_volume_ids\",\n",\n    "    \"condition_result\": true\n",\n    "    }''')\n",\n    "\n",\n    "task.configure(outputName=\"all_volumes\")\n",\n    "\n",\n    "task.configure(printOutput=True)\n",\n    "(err, hdl, args) = task.validate(vars=vars())\n",\n    "if err is None:\n",\n    "    task.execute(aws_get_ebs_volumes_without_gp3_type, lego_printer=aws_get_ebs_volumes_without_gp3_type_printer, hdl=hdl, args=args)"\n   ]\n  },\n  {\n   "attachments": {},\n   "cell_type": "markdown",\n   "id": "da849c25-f2f5-45b0-9502-33e35a7e54a5",\n   "metadata": {\n    "jupyter": {\n     "source_hidden": false\n    },\n    "name": "Step 1 Extension",\n    "orderProperties": [],\n    "tags": [],\n    "title": "Step 1 Extension"\n   },\n   "source": [\n    "<h3 id=\"Create-List-of-Volumes-without-GP3-Type\">Create List of Volumes without GP3 Type<a class=\"jp-InternalAnchorLink\" href=\"#Create-List-of-Volumes-without-GP3-Type\" target=\"_self\">&para;</a></h3>\n",\n    "<p>This action builds the list of EBS volumes that are not of gp3 type, either from the previous check or from the user-supplied volume IDs.</p>\n",\n    "<blockquote>\n",\n    "<p>This action takes the following parameters: <code>None</code></p>\n",\n    "</blockquote>\n",\n    "<blockquote>\n",\n    "<p>This action captures the following output:&nbsp;<code>all_non_gp3_volumes</code></p>\n",\n    "</blockquote>"\n   ]\n 
 },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b1284613-b251-4ba3-83a8-db49cfb3bcab\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-18T13:26:21.555Z\"\n    },\n    \"name\": \"Create List of Volumes without GP3 Type\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Volumes without GP3 Type\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_non_gp3_volumes = []\\n\",\n    \"dummy = []\\n\",\n    \"try:\\n\",\n    \"    if all_volumes[0] == False:\\n\",\n    \"        for volume in all_volumes[1]:\\n\",\n    \"            all_non_gp3_volumes.append(volume)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if ebs_volume_ids:\\n\",\n    \"        for vol_id in ebs_volume_ids:\\n\",\n    \"            data_dict = {}\\n\",\n    \"            data_dict[\\\"region\\\"] = region\\n\",\n    \"            data_dict[\\\"volume_id\\\"] = vol_id\\n\",\n    \"            all_non_gp3_volumes.append(data_dict)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"17285759-9cfa-4966-9354-4ff9342b2bd2\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-EBS-Volume-to-GP3&para;\\\">Modify EBS Volume to GP3</h3>\\n\",\n    \"<p>Using unSkript's Modify EBS Volume to GP3 action we will modify the EBS volume type to GP3.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region</code>, &nbsp;<code>volume_type,&nbsp;volume_id</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": 
\"25b754cf-7a86-43e6-8727-b66434953158\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"05bba1d41c46a68afc0f11b423dc140bd431315f52489b334d00ff3a938205ba\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\",\n    \"id\": 347,\n    \"index\": 347,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      },\n      \"volume_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"volume_id\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"volume_id\": {\n        \"description\": \"EBS Volume ID.\",\n        \"title\": \"Volume ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"volume_id\"\n      ],\n      \"title\": \"aws_modify_ebs_volume_to_gp3\",\n      \"type\": 
\"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"region\": \"region\",\n       \"volume_id\": \"volume_id\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_non_gp3_volumes\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Modify EBS Volume to GP3\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"volume_id\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"modified_volumes\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_non_gp3_volumes)!=0\",\n    \"tags\": [],\n    \"title\": \"AWS Modify EBS Volume to GP3\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_modify_ebs_volume_to_gp3_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_modify_ebs_volume_to_gp3(handle, region: str, volume_id: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_modify_ebs_volume_to_gp3 returns an array of modified details for EBS volumes.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Used to filter the volume for specific region.\\n\",\n    
\"\\n\",\n    \"        :type volume_id: string\\n\",\n    \"        :param volume_id: EBS Volume ID.\\n\",\n    \"\\n\",\n    \"        :rtype: List of modified details for EBS volumes\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    try:\\n\",\n    \"        ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"        volumes = ec2Client.modify_volume(VolumeId=volume_id, VolumeType='gp3')\\n\",\n    \"        result.append(volumes)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        result.append({\\\"error\\\": e})\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"volume_id\\\": \\\"iter.get(\\\\\\\\\\\"volume_id\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_non_gp3_volumes\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"region\\\",\\\"volume_id\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_non_gp3_volumes)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"modified_volumes\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_modify_ebs_volume_to_gp3, lego_printer=aws_modify_ebs_volume_to_gp3_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": 
\"05396b66-dec6-4132-ac6c-49d5deefa68b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we were able to change the type of those EBS volumes that weren't type GP3 to type GP3 by using unSkript's AWS actions. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Change AWS EBS Volume To GP3 Type\",\n   \"parameters\": [\n    \"ebs_volume_ids\",\n    \"ebs_volume_type\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"ebs_volume_ids\": {\n     \"description\": \"List of EBS volume ID's \",\n     \"title\": \"ebs_volume_ids\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region(s) to get EBS volumes. 
Eg: us-west-2\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Change_EBS_Volume_To_GP3_Type.json",
    "content": "{\n    \"name\": \"Change AWS EBS Volume To GP3 Type\",\n    \"description\": \"This runbook can be used to change the type of an EBS volume to GP3 (General Purpose 3). The GP3 volume type has a number of advantages over its predecessors: GP3 volumes are ideal for a wide variety of applications that require high performance at low cost.\",\n    \"uuid\": \"2475714639442a9adcd0a87f7d193f6e8a6bbb9537d1eb6b03a6befb8ef84b19\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }\n  "
  },
  {
    "path": "AWS/AWS_Change_Route53_TTL.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"82eebdfd-c880-40df-bd6d-5b546c92164b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&para;\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective&para;\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find Route53 records with a low TTL (Time To Live) and change them to a higher TTL value</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Change-AWS-Route53-TTL-value\\\"><strong><u>Change AWS Route53 TTL value</u></strong></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Get TTL under X hours</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Change the TTL value</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"889af0ca-5f1f-45e2-8f24-d779c8cc0086\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-18T08:03:49.533Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if hosted_zone_id and not record_name and not record_type:\\n\",\n    \"    raise 
SystemExit(\\\"Provide a Record Name and Record Type!\\\")\\n\",\n    \"elif record_name and not hosted_zone_id and not record_type:\\n\",\n    \"    raise SystemExit(\\\"Provide a Hosted Zone ID and Record Type!\\\")\\n\",\n    \"elif record_type and not hosted_zone_id and not record_name:\\n\",\n    \"    raise SystemExit(\\\"Provide a Hosted Zone ID and Record Name!\\\")\\n\",\n    \"elif record_type and hosted_zone_id and not record_name:\\n\",\n    \"    raise SystemExit(\\\"Provide a Record Name!\\\")\\n\",\n    \"elif record_name and hosted_zone_id and not record_type:\\n\",\n    \"    raise SystemExit(\\\"Provide a Record Type!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2020e8d0-ba3b-4c71-84b2-10917465a27e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-TTLs-under-X-hours\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get TTLs under X hours</h3>\\n\",\n    \"<p>Using unSkript's Get Route53 TTL Under Hours action, we will find the hosted zones and records that have a TTL under the given threshold (in hours). 
A lower TTL means more queries arrive at the name servers because the cached values expire sooner.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>threshold(in hours)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>lower_ttl_records</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"8372128f-d195-47f3-b3a7-4482ae7e9764\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_ROUTE53\"\n    ],\n    \"actionDescription\": \"AWS: Check for short Route 53 TTL\",\n    \"actionEntryFunction\": \"aws_get_ttl_under_given_hours\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS: Check for short Route 53 TTL\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"a885ef62f7614e282856fdc37f0654f67b4ec7e7651350ea0dbb123788e705df\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"AWS: Check for short Route 53 TTL\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-18T08:06:27.645Z\"\n    },\n    \"id\": 2,\n    \"index\": 2,\n    \"inputData\": [\n     {\n      \"threshold\": {\n       \"constant\": false,\n       \"value\": \"int(threshold_ttl)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      
\"properties\": {\n       \"threshold\": {\n        \"default\": 1,\n        \"description\": \"(In hours) A threshold in hours to verify route 53 TTL is within the threshold.\",\n        \"title\": \"Threshold (In hours)\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_ttl_under_given_hours\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS: Check for short Route 53 TTL\",\n    \"orderProperties\": [\n     \"threshold\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"lower_ttl_records\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not hosted_zone_id and not record_name and not record_type\",\n    \"tags\": [\n     \"aws_get_ttl_under_given_hours\"\n    ],\n    \"uuid\": \"a885ef62f7614e282856fdc37f0654f67b4ec7e7651350ea0dbb123788e705df\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Tuple, Optional\\n\",\n    \"import pprint\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_get_ttl_for_route53_records.aws_get_ttl_for_route53_records import aws_get_ttl_for_route53_records\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_ttl_under_given_hours_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_ttl_under_given_hours(handle, threshold: int = 1) 
-> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_ttl_under_given_hours Returns TTL for records in a hosted zone\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) method.\\n\",\n    \"\\n\",\n    \"        :type threshold: int\\n\",\n    \"        :param threshold: Threshold (in hours) used to find Route 53 records whose TTL is below it.\\n\",\n    \"\\n\",\n    \"        :rtype: List of details with the record type, record name and record TTL.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    try:\\n\",\n    \"        route_client = handle.client('route53')\\n\",\n    \"        seconds = threshold * 3600\\n\",\n    \"        hosted_zones = aws_get_paginator(route_client, \\\"list_hosted_zones\\\", \\\"HostedZones\\\")\\n\",\n    \"        for zone in hosted_zones:\\n\",\n    \"            record_ttl_data = aws_get_ttl_for_route53_records(handle, zone['Id'])\\n\",\n    \"            for record_ttl in record_ttl_data:\\n\",\n    \"                if isinstance(record_ttl['record_ttl'], str):\\n\",\n    \"                    continue\\n\",\n    \"                elif record_ttl['record_ttl'] < seconds:\\n\",\n    \"                    records = {}\\n\",\n    \"                    records[\\\"hosted_zone_id\\\"] = zone['Id']\\n\",\n    \"                    records[\\\"record_name\\\"] = record_ttl['record_name']\\n\",\n    \"                    records[\\\"record_type\\\"] = record_ttl['record_type']\\n\",\n    \"                    records[\\\"record_ttl\\\"] = record_ttl['record_ttl']\\n\",\n    \"                    result.append(records)\\n\",\n    \"    except Exception:\\n\",\n    \"        pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    
\"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"threshold\\\": \\\"int(threshold_ttl)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not hosted_zone_id and not record_name and not record_type\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"lower_ttl_records\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_ttl_under_given_hours, lego_printer=aws_get_ttl_under_given_hours_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a311041f-620a-4b6b-914f-e52c6c3a71f4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Unused-Log-Streams&para;\\\">Create List of Lower TTL records</h3>\\n\",\n    \"<p>This action filters the output from Step 1 to get the non empty values</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_lower_ttl_records</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b85ce542-bdf0-44d2-9e75-213002d5c036\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-18T08:06:39.524Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Lower TTL records\",\n    \"orderProperties\": [],\n    
\"tags\": [],\n    \"title\": \"Create List of Lower TTL records\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# print(lower_ttl_records)\\n\",\n    \"all_lower_ttl_records = []\\n\",\n    \"try:\\n\",\n    \"    if lower_ttl_records[0] == False:\\n\",\n    \"        if len(lower_ttl_records[1])!=0:\\n\",\n    \"            all_lower_ttl_records=lower_ttl_records[1]\\n\",\n    \"except Exception:\\n\",\n    \"    data_dict = {}\\n\",\n    \"    data_dict[\\\"hosted_zone_id\\\"] = hosted_zone_id\\n\",\n    \"    data_dict[\\\"record_name\\\"] = record_name\\n\",\n    \"    data_dict[\\\"record_type\\\"] = record_type\\n\",\n    \"    all_lower_ttl_records.append(data_dict)\\n\",\n    \"print(all_lower_ttl_records)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9fb3704a-9b19-49c4-96ab-a982217bbcd3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Change-TTL-Value\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Change TTL Value</h3>\\n\",\n    \"<p>This action changes the TTL value for a record that has a lower value to a higher one. 
By default<span style=\\\"color: rgb(45, 194, 107);\\\"> 86400 </span>seconds is considered if no value is given,</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>hosted_zone_id, record_name, record_type, new_ttl</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"70e72194-c276-4f44-a9a9-d90b37488a94\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_ROUTE53\"\n    ],\n    \"actionDescription\": \"Update TTL for an existing record in a hosted zone.\",\n    \"actionEntryFunction\": \"aws_update_ttl_for_route53_records\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Update TTL for Route53 Record\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"a79077024e9d76970c61eb8d40f26129820fbed3cbec6b03e5610dbace0d2224\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Update TTL for an existing record in a hosted zone.\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"hosted_zone_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"hosted_zone_id\\\\\\\\\\\")\\\"\"\n      },\n      \"new_ttl\": {\n       \"constant\": false,\n       \"value\": \"int(new_ttl)\"\n      },\n      \"record_name\": {\n       \"constant\": false,\n       \"value\": 
\"\\\"iter.get(\\\\\\\\\\\"record_name\\\\\\\\\\\")\\\"\"\n      },\n      \"record_type\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"record_type\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"definitions\": {\n       \"Route53RecordType\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"A\",\n         \"AAAA\",\n         \"CAA\",\n         \"CNAME\",\n         \"DS\",\n         \"MX\",\n         \"NAPTR\",\n         \"NS\",\n         \"PTR\",\n         \"SOA\",\n         \"SPF\",\n         \"SRV\",\n         \"TXT\"\n        ],\n        \"title\": \"Route53RecordType\",\n        \"type\": \"string\"\n       }\n      },\n      \"properties\": {\n       \"hosted_zone_id\": {\n        \"description\": \"ID of the hosted zone in Route53\",\n        \"title\": \"Hosted Zone ID\",\n        \"type\": \"string\"\n       },\n       \"new_ttl\": {\n        \"description\": \"New TTL value for a record. Eg: 300\",\n        \"title\": \"New TTL\",\n        \"type\": \"integer\"\n       },\n       \"record_name\": {\n        \"description\": \"Name of record in a hosted zone. 
Eg: example.com\",\n        \"title\": \"Record Name\",\n        \"type\": \"string\"\n       },\n       \"record_type\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/Route53RecordType\"\n         }\n        ],\n        \"description\": \"Record Type of the record.\",\n        \"title\": \"Record Type\",\n        \"type\": \"enum\"\n       }\n      },\n      \"required\": [\n       \"hosted_zone_id\",\n       \"new_ttl\",\n       \"record_name\",\n       \"record_type\"\n      ],\n      \"title\": \"aws_update_ttl_for_route53_records\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"hosted_zone_id\": \"hosted_zone_id\",\n       \"record_name\": \"record_name\",\n       \"record_type\": \"record_type\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_lower_ttl_records\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Update TTL for Route53 Record\",\n    \"orderProperties\": [\n     \"hosted_zone_id\",\n     \"new_ttl\",\n     \"record_name\",\n     \"record_type\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_lower_ttl_records)!=0\",\n    \"tags\": [\n     \"aws_update_ttl_for_route53_records\"\n    ],\n    \"uuid\": \"a79077024e9d76970c61eb8d40f26129820fbed3cbec6b03e5610dbace0d2224\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.enums.aws_route53_record_type_enums import Route53RecordType\\n\",\n    \"from typing import Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, 
Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_update_ttl_for_route53_records_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_update_ttl_for_route53_records(handle, hosted_zone_id: str, record_name: str, record_type:Route53RecordType, new_ttl:int) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_update_ttl_for_route53_records updates the TTL for a Route53 record in a hosted zone.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) method.\\n\",\n    \"\\n\",\n    \"        :type hosted_zone_id: string\\n\",\n    \"        :param hosted_zone_id: ID of the hosted zone in Route53\\n\",\n    \"\\n\",\n    \"        :type record_name: string\\n\",\n    \"        :param record_name: Name of record in a hosted zone. Eg: example.com\\n\",\n    \"\\n\",\n    \"        :type record_type: string\\n\",\n    \"        :param record_type: Record Type of the record.\\n\",\n    \"\\n\",\n    \"        :type new_ttl: int\\n\",\n    \"        :param new_ttl: New TTL value for a record. 
Eg: 300\\n\",\n    \"\\n\",\n    \"        :rtype: Response of updation on new TTL\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    route53Client = handle.client('route53')\\n\",\n    \"    new_ttl_value = int(new_ttl)\\n\",\n    \"\\n\",\n    \"    response = route53Client.change_resource_record_sets(\\n\",\n    \"        HostedZoneId=hosted_zone_id,\\n\",\n    \"        ChangeBatch={\\n\",\n    \"            'Changes': [\\n\",\n    \"                {\\n\",\n    \"                    'Action': 'UPSERT',\\n\",\n    \"                    'ResourceRecordSet': {\\n\",\n    \"                        'Name': record_name,\\n\",\n    \"                        'Type': record_type,\\n\",\n    \"                        'TTL': new_ttl_value\\n\",\n    \"                    }\\n\",\n    \"                }\\n\",\n    \"            ]\\n\",\n    \"        }\\n\",\n    \"    )\\n\",\n    \"    return response\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"hosted_zone_id\\\": \\\"\\\\\\\\\\\"iter.get(\\\\\\\\\\\\\\\\\\\\\\\\\\\"hosted_zone_id\\\\\\\\\\\\\\\\\\\\\\\\\\\")\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"new_ttl\\\": \\\"int(new_ttl)\\\",\\n\",\n    \"    \\\"record_name\\\": \\\"\\\\\\\\\\\"iter.get(\\\\\\\\\\\\\\\\\\\\\\\\\\\"record_name\\\\\\\\\\\\\\\\\\\\\\\\\\\")\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"record_type\\\": \\\"\\\\\\\\\\\"iter.get(\\\\\\\\\\\\\\\\\\\\\\\\\\\"record_type\\\\\\\\\\\\\\\\\\\\\\\\\\\")\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_lower_ttl_records\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"hosted_zone_id\\\",\\\"record_name\\\",\\\"record_type\\\"]\\n\",\n    \"    }''')\\n\",\n    
\"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_lower_ttl_records)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_update_ttl_for_route53_records, lego_printer=aws_update_ttl_for_route53_records_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9c7430c8-3660-45bd-90ef-9ceab77e3daa\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we were able to change the TTL (time to live) to a higher value. As a result, the name servers receive fewer queries, which helps reduce your AWS costs. 
To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Change AWS Route53 TTL\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"hosted_zone_id\": {\n     \"description\": \"The ID of the hosted zone that contains the resource record sets that you want to change.\",\n     \"title\": \"hosted_zone_id\",\n     \"type\": \"string\"\n    },\n    \"new_ttl\": {\n     \"default\": 86400,\n     \"description\": \"New TTL value (in seconds) that needs to be updated for the records. 
\",\n     \"title\": \"new_ttl\",\n     \"type\": \"number\"\n    },\n    \"record_name\": {\n     \"description\": \"Record name for a particular hosted zone.\",\n     \"title\": \"record_name\",\n     \"type\": \"string\"\n    },\n    \"record_type\": {\n     \"description\": \"Record type of the record name hosted in a particular zone.\",\n     \"enum\": [\n      \"A\",\n      \"AAAA\",\n      \"CAA\",\n      \"CNAME\",\n      \"DS\",\n      \"MX\",\n      \"NAPTR\",\n      \"NS\",\n      \"PTR\",\n      \"SOA\",\n      \"SPF\",\n      \"SRV\",\n      \"TXT\"\n     ],\n     \"enumNames\": [\n      \"A\",\n      \"AAAA\",\n      \"CAA\",\n      \"CNAME\",\n      \"DS\",\n      \"MX\",\n      \"NAPTR\",\n      \"NS\",\n      \"PTR\",\n      \"SOA\",\n      \"SPF\",\n      \"SRV\",\n      \"TXT\"\n     ],\n     \"title\": \"record_type\",\n     \"type\": \"string\"\n    },\n    \"threshold_ttl\": {\n     \"default\": 1,\n     \"description\": \"Threshold (in hours) to check if the TTL is lower than the given value. Eg: 1 checks for all records whose TTL is less than 3600 seconds (1 hour)\",\n     \"title\": \"threshold_ttl\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Change_Route53_TTL.json",
    "content": "{\n    \"name\": \"Change AWS Route53 TTL\",\n    \"description\": \"For a record in a hosted zone, a lower TTL means that more queries arrive at the name servers because the cached values expire sooner. If you configure a higher TTL for your records, intermediate resolvers cache the records for a longer time. As a result, the name servers receive fewer queries, which reduces the charges for the DNS queries answered. However, a higher TTL slows the propagation of record changes because the previous values are cached for longer periods. This Runbook can be used to configure a higher TTL value.\",\n    \"uuid\": \"a0773e52a3a3a8688e47a9e10eba1c680913d28a9a8c4466113181534bd1f972\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Create_New_IAM_User_With_Policy.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"8a97b231-94d6-4e10-a24c-6eac9a4572e4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Add New IAM User\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Add New IAM User\"\n   },\n   \"source\": [\n    \"<center>\\n\",\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <h3> Objective</h3> <br>\\n\",\n    \"    <b>To add a new IAM user using unSkript actions.</b>\\n\",\n    \"</div>\\n\",\n    \"</center>\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Add New IAM User</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"   1)[Create IAM User](#1)</br>\\n\",\n    \"   2)[Create login profile](#2)</br>\\n\",\n    \"   3)[Check the caller identity](#3)</br>\\n\",\n    \"   4)[Post slack message](#4)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"07e691b1-dd70-4c51-b871-47f608ecd89b\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-01-06T13:27:50.928Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Gathering Information\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Gathering Information\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"tag_key = \\\"Name\\\"\\n\",\n    \"tag_value = username\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"6cb8f37f-8bf2-41a0-b1ae-d946038ea808\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": 
\"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Creating-an-IAM-User\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Creating an IAM User</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Create New IAM User</strong> action. This action creates an IAM user in AWS and assigns the given tag to the user.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>user_name</code>, <code>tag_key</code>, <code>tag_value</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>user_details</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"9fe78a10-d76f-4961-8e5c-bf381c5b3cc9\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"3f71dd060d5955f5dc9104dbaf418bf957b2222c510cb3afd09ded8e41e433d9\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Create New IAM User\",\n    \"id\": 222,\n    \"index\": 222,\n    \"inputData\": [\n     {\n      \"tag_key\": {\n       \"constant\": false,\n       \"value\": \"tag_key\"\n      },\n      \"tag_value\": {\n       \"constant\": false,\n       \"value\": \"tag_value\"\n      },\n      \"user_name\": {\n       \"constant\": false,\n       \"value\": \"username\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"tag_key\": {\n        \"description\": \"Tag Key to new IAM User.\",\n        \"title\": \"Tag Key\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n        \"description\": \"Tag 
Value to new IAM User.\",\n        \"title\": \"Tag Value\",\n        \"type\": \"string\"\n       },\n       \"user_name\": {\n        \"description\": \"IAM User Name.\",\n        \"title\": \"User Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"user_name\",\n       \"tag_key\",\n       \"tag_value\"\n      ],\n      \"title\": \"aws_create_iam_user\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Create New IAM User\",\n    \"nouns\": [\n     \"aws\",\n     \"IAM\",\n     \"user\"\n    ],\n    \"orderProperties\": [\n     \"user_name\",\n     \"tag_key\",\n     \"tag_value\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"user_details\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_create_iam_user\"\n    ],\n    \"title\": \"Create New IAM User\",\n    \"verbs\": [\n     \"create\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_iam_user_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_iam_user(handle, user_name: str, tag_key: str, tag_value: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_create_iam_user Creates new IAM User.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) 
method\\n\",\n    \"\\n\",\n    \"        :type user_name: string\\n\",\n    \"        :param user_name: Name of new IAM User.\\n\",\n    \"\\n\",\n    \"        :type tag_key: string\\n\",\n    \"        :param tag_key: Tag Key assign to new User.\\n\",\n    \"\\n\",\n    \"        :type tag_value: string\\n\",\n    \"        :param tag_value: Tag Value assign to new User.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the stopped instances state info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client(\\\"iam\\\")\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.create_user(\\n\",\n    \"            UserName=user_name,\\n\",\n    \"            Tags=[\\n\",\n    \"                {\\n\",\n    \"                    'Key': tag_key,\\n\",\n    \"                    'Value': tag_value\\n\",\n    \"                }])\\n\",\n    \"        result = response\\n\",\n    \"    except ClientError as error:\\n\",\n    \"        if error.response['Error']['Code'] == 'EntityAlreadyExists':\\n\",\n    \"            result = error.response\\n\",\n    \"        else:\\n\",\n    \"            result = error.response\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"tag_key\\\": \\\"tag_key\\\",\\n\",\n    \"    \\\"tag_value\\\": \\\"tag_value\\\",\\n\",\n    \"    \\\"user_name\\\": \\\"username\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"user_details\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_create_iam_user, lego_printer=aws_create_iam_user_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"c174d638-f107-450f-ab2d-d28cf097a722\",\n 
  \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-login-Profile\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Create login Profile</h3>\\n\",\n    \"<p>This action only executes when step 1 successfully creates a user. In this action, we will pass the newly created username and temporary password, which will create an user profile for the user in AWS.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>user_name</code>, <code>password</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>profile_details</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"35887cbc-bdb1-4f3b-8f59-a2bb78e9b605\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"7b52e5fdfddd113a1c489d95d5fd8c9a98043c6ea721588531db6a5261434975\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Create Login profile for IAM User\",\n    \"id\": 166,\n    \"index\": 166,\n    \"inputData\": [\n     {\n      \"password\": {\n       \"constant\": false,\n       \"value\": \"password\"\n      },\n      \"user_name\": {\n       \"constant\": false,\n       \"value\": \"username\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"password\": {\n        \"description\": \"Password for IAM User.\",\n        \"title\": \"Password\",\n        \"type\": 
\"string\"\n       },\n       \"user_name\": {\n        \"description\": \"IAM User Name.\",\n        \"title\": \"User Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"user_name\",\n       \"password\"\n      ],\n      \"title\": \"aws_create_user_login_profile\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Create Login profile for IAM User\",\n    \"nouns\": [\n     \"aws\",\n     \"IAM\",\n     \"login\"\n    ],\n    \"orderProperties\": [\n     \"user_name\",\n     \"password\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"profile_details\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"'User' in UserInfo\",\n    \"tags\": [\n     \"aws_create_user_login_profile\"\n    ],\n    \"verbs\": [\n     \"create\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_user_login_profile_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_user_login_profile(handle, user_name: str, password: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_create_user_login_profile Create login profile for IAM User.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) 
method.\\n\",\n    \"\\n\",\n    \"        :type user_name: string\\n\",\n    \"        :param user_name: Name of new IAM User.\\n\",\n    \"\\n\",\n    \"        :type password: string\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the Profile Creation status info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client(\\\"iam\\\")\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.create_login_profile(\\n\",\n    \"            UserName=user_name,\\n\",\n    \"            PasswordResetRequired=True)\\n\",\n    \"\\n\",\n    \"        result = response\\n\",\n    \"    except ClientError as error:\\n\",\n    \"        if error.response['Error']['Code'] == 'EntityAlreadyExists':\\n\",\n    \"            result = error.response\\n\",\n    \"        else:\\n\",\n    \"            result = error.response\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"password\\\": \\\"password\\\",\\n\",\n    \"    \\\"user_name\\\": \\\"username\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"'User' in UserInfo\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"profile_details\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_create_user_login_profile, lego_printer=aws_create_user_login_profile_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"29511895-d1cc-4a01-9990-8928642b5006\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": 
\"Step-3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Check-Caller-Identity\\\"><a id=\\\"3\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Check Caller Identity</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Get Caller Identity Action</strong> action. These Action does not take any inputs. shows the caller's identity for the current user.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>caller_details</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"dd1e1542-ddd7-4b86-86a2-17e999458fbd\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"afacde59-a401-4a8b-901d-46c4b3970b78\",\n    \"createTime\": \"2022-07-27T16:51:48Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"v0.0.0\",\n    \"description\": \"Test\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-02T16:44:27.574Z\"\n    },\n    \"id\": 100001,\n    \"index\": 100001,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"required\": [\n       \"instance_ids\",\n       \"region\"\n      ],\n      \"title\": \"aws_restart_ec2_instances_test\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get Caller Identity \",\n    \"nouns\": [],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"caller_details\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"title\": \"Get Caller Identity \",\n    \"verbs\": []\n   },\n   
\"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_caller_identity_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_caller_identity(handle) -> Dict:\\n\",\n    \"    ec2Client = handle.client('sts')\\n\",\n    \"    response = ec2Client.get_caller_identity()\\n\",\n    \"\\n\",\n    \"    return response\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(outputName=\\\"caller_details\\\")\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_caller_identity, lego_printer=aws_get_caller_identity_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"d1f05583-fa8c-4f8c-a357-3f6154df4620\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-4\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-4\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Post-Slack-Message\\\"><a id=\\\"4\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Post Slack Message</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Post Slack Message</strong> action. 
This action sends a message to the Slack channel with the newly created username.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>channel</code>, <code>message</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>send_status</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"8cacd129-1fed-4c9e-9f2f-70da41c43c88\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-02T16:45:12.785Z\"\n    },\n    \"id\": 62,\n    \"index\": 62,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"channel\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"\\\"New IAM user {}\\\".format(user_name)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"default\": \"\",\n        \"description\": \"Name of the Slack channel where the message is to be posted\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"default\": \"\",\n        \"description\": \"Message to be sent\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": 
\"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [\n     \"slack\",\n     \"message\"\n    ],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"send_status\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"'User' in UserInfo and not channel\",\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"verbs\": [\n     \"post\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message_printer(output):\\n\",\n    \"    if output is not None:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"    else:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> str:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return f\\\"Successfuly Sent Message on Channel: #{channel}\\\"\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack 
channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        if e.response['error'] == 'channel_not_found':\\n\",\n    \"            raise Exception('Channel Not Found')\\n\",\n    \"        elif e.response['error'] == 'duplicate_channel_not_found':\\n\",\n    \"            raise Exception('Channel associated with the message_id not valid')\\n\",\n    \"        elif e.response['error'] == 'not_in_channel':\\n\",\n    \"            raise Exception('Cannot post message to channel user is not in')\\n\",\n    \"        elif e.response['error'] == 'is_archived':\\n\",\n    \"            raise Exception('Channel has been archived')\\n\",\n    \"        elif e.response['error'] == 'msg_too_long':\\n\",\n    \"            raise Exception('Message text is too long')\\n\",\n    \"        elif e.response['error'] == 'no_text':\\n\",\n    \"            raise Exception('Message text was not provided')\\n\",\n    \"        elif e.response['error'] == 'restricted_action':\\n\",\n    \"            raise Exception('Workspace preference prevents user from posting')\\n\",\n    \"        elif e.response['error'] == 'restricted_action_read_only_channel':\\n\",\n    \"            raise Exception('Cannot Post message, read-only channel')\\n\",\n    \"        elif e.response['error'] == 'team_access_not_granted':\\n\",\n    \"            raise Exception('The token used is not granted access to the workspace')\\n\",\n    \"        elif e.response['error'] == 'not_authed':\\n\",\n    \"            raise Exception('No Authentication token provided')\\n\",\n    \"        elif e.response['error'] == 'invalid_auth':\\n\",\n    \"            raise Exception('Some aspect of Authentication cannot be validated. 
Request denied')\\n\",\n    \"        elif e.response['error'] == 'access_denied':\\n\",\n    \"            raise Exception('Access to a resource specified in the request denied')\\n\",\n    \"        elif e.response['error'] == 'account_inactive':\\n\",\n    \"            raise Exception('Authentication token is for a deleted user')\\n\",\n    \"        elif e.response['error'] == 'token_revoked':\\n\",\n    \"            raise Exception('Authentication token for a deleted user has been revoked')\\n\",\n    \"        elif e.response['error'] == 'no_permission':\\n\",\n    \"            raise Exception('The workspace token used does not have the necessary permission to send message')\\n\",\n    \"        elif e.response['error'] == 'ratelimited':\\n\",\n    \"            raise Exception('The request has been ratelimited. Retry sending message later')\\n\",\n    \"        elif e.response['error'] == 'service_unavailable':\\n\",\n    \"            raise Exception('The service is temporarily unavailable')\\n\",\n    \"        elif e.response['error'] == 'fatal_error':\\n\",\n    \"            raise Exception('The server encountered a catastrophic error while sending message')\\n\",\n    \"        elif e.response['error'] == 'internal_error':\\n\",\n    \"            raise Exception('The server could not complete the operation, likely due to a transient issue')\\n\",\n    \"        elif e.response['error'] == 'request_timeout':\\n\",\n    \"            raise Exception('Sending message error via POST: either message was missing or truncated')\\n\",\n    \"        else:\\n\",\n    \"            raise Exception(f'Failed Sending Message to slack channel {channel} Error: {e.response[\\\"error\\\"]}')\\n\",\n    \"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return f\\\"Unable to send 
message on {channel}\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"channel\\\",\\n\",\n    \"    \\\"message\\\": \\\"\\\\\\\\\\\"New IAM user {}\\\\\\\\\\\".format(user_name)\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"'User' in UserInfo and not channel\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"send_status\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(slack_post_message, lego_printer=slack_post_message_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"e9df5398-15b1-4279-92b8-d4c62372afed\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"In this Runbook, we demonstrated the use of unSkript's AWS and Slack actions to create a new IAM user and login profile, and to show the caller identity of the user. On success, a message about the user creation is posted to the Slack channel. 
To view the full platform capabilities of unSkript please visit https://us.app.unskript.io\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Create IAM User with policy\",\n   \"parameters\": [\n    \"channel\",\n    \"password\",\n    \"username\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"channel\": {\n     \"description\": \"Slack Channel Name to send the new User Information. Example random, general\",\n     \"title\": \"channel\",\n     \"type\": \"string\"\n    },\n    \"password\": {\n     \"description\": \"Login profile password for new IAM user.\",\n     \"format\": \"password\",\n     \"title\": \"password\",\n     \"type\": \"string\",\n     \"writeOnly\": true\n    },\n    \"username\": {\n     \"description\": \"Name of the user that needs to be created\",\n     \"title\": \"username\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [\n    \"username\",\n    \"password\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Create_New_IAM_User_With_Policy.json",
    "content": "{\n    \"name\": \"Create IAM User with policy\",\n    \"description\": \"Create new IAM user with a security Policy.  Sends confirmation to Slack.\",\n    \"uuid\": \"1ce85aa2153d808bd95a21a4545c51f239696bc41f55d30b6849cd8218381ffc\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Delete_EBS_Volumes_Attached_To_Stopped_Instances.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong><em>Find and Delete EBS (Elastic Block Storage) Volumes associated with stopped EC2 instances</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-EBS-Volumes-Attached-To-Stopped-Instances\\\"><u>Delete EBS Volumes Attached To Stopped Instances</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Get volumes for stopped instances</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete EBS volumes</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"e1f146c9-5180-4459-9c82-cf0e1da02785\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:48:03.721Z\"\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if volume_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the EBS Volume IDs!\\\")\\n\",\n    \"if region == 
None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-volumes-for-stopped-instances\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get volumes for stopped instances</h3>\\n\",\n    \"<p>Using unSkript's Get Stopped Instances EBS volumes action, we will find volumes which are associated with stopped instances.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>stopped_instances_volumes</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"3c1fe36b-eb58-4827-9c00-f6b03b8d7a4a\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\",\n     \"CATEGORY_TYPE_AWS_EBS\"\n    ],\n    \"actionDescription\": \"This action helps to list the volumes that are attached to stopped instances.\",\n    \"actionEntryFunction\": \"aws_get_stopped_instance_volumes\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"a9d17f4c9feb963b6096290eedb21af43d89e803cdcb1238dc11a544a3071a1e\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Get Stopped Instance 
Volumes\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"fee89ce72c745afdc666dc59d1a4f29ca3419640824684151b9464e96d1e27a7\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action helps to list the volumes that are attached to stopped instances.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:47:59.430Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_stopped_instance_volumes\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get Stopped Instance Volumes\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"stopped_instances_volumes\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not volume_ids\",\n    \"tags\": [],\n    \"uuid\": \"fee89ce72c745afdc666dc59d1a4f29ca3419640824684151b9464e96d1e27a7\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional, 
Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_stopped_instance_volumes_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_stopped_instance_volumes(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_stopped_instance_volumes Returns an array of volumes.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region to filter instances.\\n\",\n    \"\\n\",\n    \"        :rtype: Array of volumes.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            ec2Client = handle.client('ec2', region_name=reg)\\n\",\n    \"            res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"            for reservation in res:\\n\",\n    \"                for instance in reservation['Instances']:\\n\",\n    \"                    if instance['State']['Name'] == 'stopped':\\n\",\n    \"                        block_device_mappings = instance['BlockDeviceMappings']\\n\",\n    \"                        for mapping in block_device_mappings:\\n\",\n    \"                            if 'Ebs' in mapping:\\n\",\n    \"                                ebs_volume = {}\\n\",\n  
  \"                                volume_id = mapping['Ebs']['VolumeId']\\n\",\n    \"                                ebs_volume[\\\"volume_id\\\"] = volume_id\\n\",\n    \"                                ebs_volume[\\\"region\\\"] = reg\\n\",\n    \"                                result.append(ebs_volume)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not volume_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"stopped_instances_volumes\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_stopped_instance_volumes, lego_printer=aws_get_stopped_instance_volumes_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"199591ef-cb3a-49b7-b515-3c6998050320\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Stopped-Instance-Volumes\\\">Create List of Stopped Instance Volumes</h3>\\n\",\n    \"<p>This action filters regions that have no volumes associated with stopped instances and creates a list of those that have them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the 
following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_stopped_instances_volumes</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"6a10e980-9f17-4436-9166-90ea130aa316\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-21T09:16:07.861Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Stopped Instance Volumes\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Stopped Instance Volumes\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_stopped_instances_volumes = []\\n\",\n    \"try:\\n\",\n    \"    if stopped_instances_volumes[0] == False:\\n\",\n    \"        for instance in stopped_instances_volumes[1]:\\n\",\n    \"            all_stopped_instances_volumes.append(instance)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if volume_ids:\\n\",\n    \"        for vol_id in volume_ids:\\n\",\n    \"            data_dict = {}\\n\",\n    \"            data_dict[\\\"region\\\"] = region\\n\",\n    \"            data_dict[\\\"volume_id\\\"] = vol_id\\n\",\n    \"            all_stopped_instances_volumes.append(data_dict)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"978d3b61-2fd9-461d-89bd-534d2dcf3b63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-Low-Usage-EBS-Volumes\\\"><a id=\\\"2\\\" target=\\\"_self\\\" 
rel=\\\"nofollow\\\"></a>Delete EBS Volumes</h3>\\n\",\n    \"<p>This action deletes volumes found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>volume_id, region</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"a48df07b-4723-4413-a1fa-19bfb08ba48e\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"Delete AWS Volume by Volume ID\",\n    \"actionEntryFunction\": \"aws_delete_volume_by_id\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": true,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Delete AWS EBS Volume by Volume ID\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"e8cccc03e1af323982c0ab9f06c01127c0481ca81943eb7e82e46245140b1059\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Delete AWS Volume by Volume ID\",\n    \"id\": 2,\n    \"index\": 2,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      },\n      \"volume_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"volume_id\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS 
Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"volume_id\": {\n        \"description\": \"Volume ID.\",\n        \"title\": \"Volume ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"volume_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_delete_volume_by_id\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"region\": \"region\",\n       \"volume_id\": \"volume_id\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_stopped_instances_volumes\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Delete AWS EBS Volume by Volume ID\",\n    \"orderProperties\": [\n     \"volume_id\",\n     \"region\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_stopped_instances_volumes)!=0\",\n    \"tags\": [],\n    \"uuid\": \"e8cccc03e1af323982c0ab9f06c01127c0481ca81943eb7e82e46245140b1059\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2022 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_volume_by_id_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint({\\\"Output\\\": output})\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_volume_by_id(handle, volume_id: str, region: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_ebs_unattached_volumes Returns an array of ebs volumes.\\n\",\n 
   \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) method.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Used to filter the volume for specific region.\\n\",\n    \"\\n\",\n    \"        :type volume_id: string\\n\",\n    \"        :param volume_id: Volume ID needed to delete particular volume.\\n\",\n    \"\\n\",\n    \"        :rtype: Result of the API in the List form.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2',region_name=region)\\n\",\n    \"\\n\",\n    \"    # Adding logic for deletion criteria\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.delete_volume(VolumeId=volume_id,)\\n\",\n    \"        result.append(response)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        result.append(e)\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"volume_id\\\": \\\"iter.get(\\\\\\\\\\\"volume_id\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_stopped_instances_volumes\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"volume_id\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_stopped_instances_volumes)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = 
task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_volume_by_id, lego_printer=aws_delete_volume_by_id_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to get EBS volumes attached to EC2 instances that have been stopped and delete them. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete EBS Volume Attached to Stopped Instances\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"region\": {\n     \"description\": \"AWS Regions to get the EBS volumes from. e.g. us-west-2. 
If nothing is given all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"volume_ids\": {\n     \"description\": \"List of EBS Volume IDs.\",\n     \"title\": \"volume_ids\",\n     \"type\": \"array\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_EBS_Volumes_Attached_To_Stopped_Instances.json",
    "content": "{\n    \"name\": \"Delete EBS Volume Attached to Stopped Instances\",\n    \"description\": \"EBS (Elastic Block Storage) volumes are attached to EC2 Instances as storage devices. Unused (Unattached) EBS Volumes can keep accruing costs even when their associated EC2 instances are no longer running. These volumes need to be deleted if the instances they are attached to are no more required. This runbook helps us find such volumes and delete them.\",\n    \"uuid\": \"a9d17f4c9feb963b6096290eedb21af43d89e803cdcb1238dc11a544a3071a1e\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }\n\n  "
  },
  {
    "path": "AWS/AWS_Delete_EBS_Volumes_With_Low_Usage.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find and Delete EBS (Elastic Block Storage) Volumes with low usage</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-EBS-AWS-Volumes-Volumes-Low-Usage\\\"><u>Delete EBS Volumes With Low Usage</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find EBS volumes with low usage</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete EBS volumes</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e1f146c9-5180-4459-9c82-cf0e1da02785\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-20T17:25:02.809Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == None:\\n\",\n    \"    region = ''\\n\",\n    \"if 
volume_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the EBS Volume IDs!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Find-EBS-Volumes-with-low-usage\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find EBS volumes with low usage</h3>\\n\",\n    \"<p>Using unSkript's Find EBS volumes with low usage action, we will find volumes with low usage over a given threshold number of days using the <span style=\\\"color: rgb(53, 152, 219);\\\">VolumeUsage&nbsp;<span style=\\\"color: rgb(0, 0, 0);\\\">metric in Cloudwatch metric statistics.</span></span></p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threshold_days</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>low_usage_volumes</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"8cc5f039-fc8e-46ff-879b-977d6413e6df\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_IAM\",\n     \"CATEGORY_TYPE_SECOPS\"\n    ],\n    \"actionDescription\": \"This action lists low usage volumes from AWS that used <10% capacity over the given threshold days.\",\n    \"actionEntryFunction\": \"aws_get_ebs_volume_for_low_usage\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"c9e1563d58cd6e3778a6c3fb11643498e3cdf3965a18c09214423998d62847b8\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": 
\"ACTION_OUTPUT_TYPE_OBJECT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Get EBS Volumes for Low Usage\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"c4fcaf0f517e1f7522cfa0f551857a760298211e4cb65a485df40e7770b8fbcd\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action list low use volumes from AWS which used <10% capacity from the given threshold days.\",\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"threshold_days\": {\n       \"constant\": false,\n       \"value\": \"int(threshold)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"threshold_days\": {\n        \"default\": 10,\n        \"description\": \"(in days)\\u00a0The threshold to check the EBS volume usage less than the threshold.\",\n        \"title\": \"Threshold (In days)\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_ebs_volume_for_low_usage\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get EBS Volumes for Low Usage\",\n    \"orderProperties\": [\n     \"region\",\n     \"threshold_days\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"low_usage_volumes\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     
\"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not volume_ids\",\n    \"tags\": [\n     \"aws_get_ebs_volume_for_low_usage\"\n    ],\n    \"uuid\": \"c4fcaf0f517e1f7522cfa0f551857a760298211e4cb65a485df40e7770b8fbcd\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_ebs_volume_for_low_usage_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_ebs_volume_for_low_usage(handle, region: str = \\\"\\\", threshold_days: int = 10) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_ebs_volume_for_low_usage Returns an array of ebs volumes.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type threshold_days: int\\n\",\n    \"        :param threshold_days: (in days)\\u00a0The threshold to check the EBS volume usage\\n\",\n    \"        less than the threshold.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple with status result and list of EBS Volume.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in 
all_regions:\\n\",\n    \"        try:\\n\",\n    \"            # Filtering the volume by region\\n\",\n    \"            ec2Client = handle.client('ec2', region_name=reg)\\n\",\n    \"            response = aws_get_paginator(ec2Client, \\\"describe_volumes\\\", \\\"Volumes\\\")\\n\",\n    \"            now = datetime.utcnow()\\n\",\n    \"            days_ago = now - timedelta(days=threshold_days)\\n\",\n    \"            # collecting the volumes which has zero attachments\\n\",\n    \"            for volume in response:\\n\",\n    \"                ebs_volume = {}\\n\",\n    \"                volume_id = volume[\\\"VolumeId\\\"]\\n\",\n    \"                cloudwatch = handle.client('cloudwatch', region_name=reg)\\n\",\n    \"                cloudwatch_response = cloudwatch.get_metric_statistics(\\n\",\n    \"                                    Namespace='AWS/EBS',\\n\",\n    \"                                    MetricName='VolumeUsage',\\n\",\n    \"                                    Dimensions=[\\n\",\n    \"                                        {\\n\",\n    \"                                            'Name': 'VolumeId',\\n\",\n    \"                                            'Value': volume_id\\n\",\n    \"                                        }\\n\",\n    \"                                    ],\\n\",\n    \"                                    StartTime=days_ago,\\n\",\n    \"                                    EndTime=now,\\n\",\n    \"                                    Period=3600,\\n\",\n    \"                                    Statistics=['Average']\\n\",\n    \"                                )\\n\",\n    \"                for v in cloudwatch_response['Datapoints']:\\n\",\n    \"                    if v['Average'] < 10:\\n\",\n    \"                        volume_ids = v['Dimensions'][0]['Value']\\n\",\n    \"                        ebs_volume[\\\"volume_id\\\"] = volume_ids\\n\",\n    \"                        ebs_volume[\\\"region\\\"] = 
reg\\n\",\n    \"                        result.append(ebs_volume)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"threshold_days\\\": \\\"int(threshold)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not volume_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"low_usage_volumes\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_ebs_volume_for_low_usage, lego_printer=aws_get_ebs_volume_for_low_usage_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"199591ef-cb3a-49b7-b515-3c6998050320\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Low-Usage-Volumes&para;\\\">Create List of Low Usage Volumes</h3>\\n\",\n    \"<p>This action filters regions that have no low usage volumes and creates a list of those that have them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_low_usage_volumes</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   
\"execution_count\": null,\n   \"id\": \"6a10e980-9f17-4436-9166-90ea130aa316\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-20T17:26:22.391Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Low Usage Volumes\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Low Usage Volumes\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_low_usage_volumes = []\\n\",\n    \"try:\\n\",\n    \"    if low_usage_volumes[0] == False:\\n\",\n    \"        if len(low_usage_volumes[1])!=0:\\n\",\n    \"            all_low_usage_volumes=low_usage_volumes[1]\\n\",\n    \"except Exception:\\n\",\n    \"    for vol_id in volume_ids:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"region\\\"] = region\\n\",\n    \"        data_dict[\\\"volume_id\\\"] = vol_id\\n\",\n    \"        all_low_usage_volumes.append(data_dict)\\n\",\n    \"print(all_low_usage_volumes)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"978d3b61-2fd9-461d-89bd-534d2dcf3b63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-Low-Usage-EBS-Volumes\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete Low Usage EBS Volumes</h3>\\n\",\n    \"<p>This action deleted Low Usage Volumes found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>volume_id, region</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"d6058839-742f-4456-872a-e8e7b42dd51b\",\n   \"metadata\": {\n    
\"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"Delete AWS Volume by Volume ID\",\n    \"actionEntryFunction\": \"aws_delete_volume_by_id\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": true,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Delete AWS EBS Volume by Volume ID\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"e8cccc03e1af323982c0ab9f06c01127c0481ca81943eb7e82e46245140b1059\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Delete AWS Volume by Volume ID\",\n    \"id\": 2,\n    \"index\": 2,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      },\n      \"volume_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"volume_id\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"volume_id\": {\n        \"description\": \"Volume ID.\",\n        \"title\": \"Volume ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"volume_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_delete_volume_by_id\",\n      \"type\": \"object\"\n     }\n    ],\n    
\"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"region\": \"region\",\n       \"volume_id\": \"volume_id\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_low_usage_volumes\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Delete AWS EBS Volume by Volume ID\",\n    \"orderProperties\": [\n     \"volume_id\",\n     \"region\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_low_usage_volumes)!=0\",\n    \"tags\": [\n     \"aws_delete_volume_by_id\"\n    ],\n    \"uuid\": \"e8cccc03e1af323982c0ab9f06c01127c0481ca81943eb7e82e46245140b1059\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2022 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_volume_by_id_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint({\\\"Output\\\": output})\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_volume_by_id(handle, volume_id: str, region: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_ebs_unattached_volumes Returns an array of ebs volumes.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) 
method.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Used to filter the volume for specific region.\\n\",\n    \"\\n\",\n    \"        :type volume_id: string\\n\",\n    \"        :param volume_id: Volume ID needed to delete particular volume.\\n\",\n    \"\\n\",\n    \"        :rtype: Result of the API in the List form.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2',region_name=region)\\n\",\n    \"\\n\",\n    \"    # Adding logic for deletion criteria\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.delete_volume(VolumeId=volume_id,)\\n\",\n    \"        result.append(response)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        result.append(e)\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"volume_id\\\": \\\"iter.get(\\\\\\\\\\\"volume_id\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_low_usage_volumes\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"volume_id\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_low_usage_volumes)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_volume_by_id, 
lego_printer=aws_delete_volume_by_id_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to filter volumes with low usage over a given threshold number of days and delete them. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete EBS Volume With Low Usage\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"region\": {\n     \"description\": \"AWS Region to get the EBS volumes from. Eg: \\\"us-west-2\\\". 
If nothing is given, all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"threshold\": {\n     \"default\": 10,\n     \"description\": \"The threshold number of days over which to check for low volume usage\",\n     \"title\": \"threshold\",\n     \"type\": \"number\"\n    },\n    \"volume_ids\": {\n     \"description\": \"List of EBS Volume IDs.\",\n     \"title\": \"volume_ids\",\n     \"type\": \"array\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_EBS_Volumes_With_Low_Usage.json",
    "content": "{\n    \"name\": \"Delete EBS Volume With Low Usage\",\n    \"description\": \"This runbook can help us identify low usage Amazon Elastic Block Store (EBS) volumes and delete these volumes in order to lower the cost of your AWS bill. This is calculated using the VolumeUsage metric. It measures the percentage of the total storage space that is currently being used by an EBS volume. This metric is reported as a percentage value between 0 and 100.\",\n    \"uuid\": \"c9e1563d58cd6e3778a6c3fb11643498e3cdf3965a18c09214423998d62847b8\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Delete_ECS_Clusters_with_Low_CPU_Utilization.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find and Delete ECS (Elastic Container Service) Clusters with Low CPU Utilization</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-ECS-Clusters-with-Low-CPU-Utilization\\\"><u>Delete ECS Clusters with Low CPU Utilization</u><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Delete-ECS-Clusters-with-Low-CPU-Utilization\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find ECS Clusters with Low CPU Utilization</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete ECS Clusters</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e1f146c9-5180-4459-9c82-cf0e1da02785\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": 
{},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-27T10:58:26.965Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region is None:\\n\",\n    \"    region = ''\\n\",\n    \"if cluster_names and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the ECS Cluster names!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Find-ECS-Clusters-with-Low-CPU-Utilization\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find ECS Clusters with Low CPU Utilization</h3>\\n\",\n    \"<p>Using unSkript's Find ECS Clusters with Low CPU Utilization action, we will find clusters with a low CPU utilization given a threshold percentage using the&nbsp;<span style=\\\"color: rgb(53, 152, 219);\\\">CPUUtilization <span style=\\\"color: rgb(0, 0, 0);\\\">attribute found in the statistics list of the <em>describe_clusters</em> API.</span></span></p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threshold</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>low_cpu_clusters</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"479be1c9-ac15-43f7-9c17-0736c9c41a31\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     
\"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\",\n     \"CATEGORY_TYPE_AWS_EBC\"\n    ],\n    \"actionDescription\": \"This action searches for clusters that have low CPU utilization.\",\n    \"actionEntryFunction\": \"aws_list_clusters_with_low_utilization\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"6ad946fb1afd19286a8e7771e0f8e5566e4fdd54e3e2473385b5ac8e206e0a49\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS List ECS Clusters with Low CPU Utilization\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"25235bca4ec5a70c9c8a83bcdeff08c66bd9cb1a3a61a0e3136958631329d8ce\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action searches for clusters that have low CPU utilization.\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"threshold\": {\n       \"constant\": false,\n       \"value\": \"int(threshold)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"threshold\": {\n        \"default\": 10,\n        \"description\": \"Threshold to check for cpu utilization is less than threshold.\",\n        \"title\": \"Threshold (In percent)\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [],\n     
 \"title\": \"aws_list_clusters_with_low_utilization\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS List ECS Clusters with Low CPU Utilization\",\n    \"orderProperties\": [\n     \"region\",\n     \"threshold\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"low_cpu_clusters\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not cluster_names\",\n    \"tags\": [\n     \"aws_list_clusters_with_low_utilization\"\n    ],\n    \"uuid\": \"25235bca4ec5a70c9c8a83bcdeff08c66bd9cb1a3a61a0e3136958631329d8ce\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_clusters_with_low_utilization_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_clusters_with_low_utilization(handle, region: str = \\\"\\\", threshold: int = 10) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_list_clusters_with_low_utilization Returns an array of ecs clusters.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    
\"        :type threshold: int\\n\",\n    \"        :param threshold: (In percent) Threshold to check for cpu utilization\\n\",\n    \"        is less than threshold.\\n\",\n    \"\\n\",\n    \"        :rtype: List of clusters for low CPU utilization\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            ecs_Client = handle.client('ecs', region_name=reg)\\n\",\n    \"            response = aws_get_paginator(ecs_Client, \\\"list_clusters\\\", \\\"clusterArns\\\")\\n\",\n    \"            for cluster in response:\\n\",\n    \"                cluster_dict = {}\\n\",\n    \"                cluster_name = cluster.split('/')[1]\\n\",\n    \"                stats = ecs_Client.describe_clusters(clusters=[cluster])['clusters'][0]['statistics']\\n\",\n    \"                for stat in stats:\\n\",\n    \"                    if stat['name'] == 'CPUUtilization':\\n\",\n    \"                        cpu_utilization = int(stat['value'])\\n\",\n    \"                        if cpu_utilization < threshold:\\n\",\n    \"                            cluster_dict[\\\"cluster_name\\\"] = cluster_name\\n\",\n    \"                            cluster_dict[\\\"region\\\"] = reg\\n\",\n    \"                            result.append(cluster_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"threshold\\\": \\\"int(threshold)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    
\\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not cluster_names\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"low_cpu_clusters\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_clusters_with_low_utilization, lego_printer=aws_list_clusters_with_low_utilization_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"199591ef-cb3a-49b7-b515-3c6998050320\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Low-CPU-Utilization-Clusters\\\">Create List of Low CPU Utilization Clusters</h3>\\n\",\n    \"<p>This action filters regions that have no clusters with low CPU utilization and creates a list of those that have them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_low_cpu_clusters</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"6a10e980-9f17-4436-9166-90ea130aa316\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-27T10:59:05.263Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Low CPU Utilization Clusters\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Low CPU Utilization 
Clusters\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_low_cpu_clusters = []\\n\",\n    \"try:\\n\",\n    \"    if low_cpu_clusters[0] == False:\\n\",\n    \"        if len(low_cpu_clusters[1])!=0:\\n\",\n    \"            all_low_cpu_clusters=low_cpu_clusters[1]\\n\",\n    \"except Exception:\\n\",\n    \"    for name in cluster_names:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"region\\\"] = region\\n\",\n    \"        data_dict[\\\"cluster_name\\\"] = name\\n\",\n    \"        all_low_cpu_clusters.append(data_dict)\\n\",\n    \"print(all_low_cpu_clusters)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"978d3b61-2fd9-461d-89bd-534d2dcf3b63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-ECS-Cluster\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete ECS Cluster</h3>\\n\",\n    \"<p>This action deletes clusters found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>cluster_name, region</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b96c00e4-7519-49e3-bcd4-d1b7f921759c\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionDescription\": \"Delete AWS ECS Cluster\",\n    \"actionEntryFunction\": \"aws_delete_ecs_cluster\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    
\"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Delete ECS Cluster\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"1bdeb0fd1addf317585a71f771a1706ab9ae888f33dbddaeb126be1e454ff3a6\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Delete AWS ECS Cluster\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"cluster_name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"cluster_name\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"cluster_name\": {\n        \"description\": \"ECS Cluster name that needs to be deleted\",\n        \"title\": \"ECS Cluster Name\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"cluster_name\"\n      ],\n      \"title\": \"aws_delete_ecs_cluster\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"cluster_name\": \"cluster_name\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_low_cpu_clusters\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Delete ECS Cluster\",\n    \"orderProperties\": [\n     \"region\",\n     \"cluster_name\"\n    ],\n    \"printOutput\": 
true,\n    \"startcondition\": \"len(all_low_cpu_clusters)!=0\",\n    \"tags\": [\n     \"aws_delete_ecs_cluster\"\n    ],\n    \"uuid\": \"1bdeb0fd1addf317585a71f771a1706ab9ae888f33dbddaeb126be1e454ff3a6\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_ecs_cluster_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_ecs_cluster(handle, region: str, cluster_name: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_ecs_cluster Deletes the given ECS cluster.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type cluster_name: string\\n\",\n    \"        :param cluster_name: ECS Cluster name\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the delete_cluster API response.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        ecsClient = handle.client('ecs', region_name=region)\\n\",\n    \"        response = ecsClient.delete_cluster(cluster=cluster_name)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=False)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"cluster_name\\\": \\\"iter.get(\\\\\\\\\\\"cluster_name\\\\\\\\\\\")\\\"\\n\",\n    \"    
}''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_low_cpu_clusters\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"region\\\",\\\"cluster_name\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_low_cpu_clusters)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_ecs_cluster, lego_printer=aws_delete_ecs_cluster_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we were able to filter low CPU utilization ECS clusters given a threshold percentage and delete them. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete ECS Clusters with Low CPU Utilization\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"cluster_names\": {\n     \"description\": \"List of ECS cluster names\",\n     \"title\": \"cluster_names\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region to get the ECS clusters from. Eg: \\\"us-west-2\\\". If nothing is given, all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"threshold\": {\n     \"default\": 10,\n     \"description\": \"Threshold (in percent) to check for the CPU utilization of clusters below the given threshold.\",\n     \"title\": \"threshold\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_ECS_Clusters_with_Low_CPU_Utilization.json",
    "content": "{\n    \"name\": \"Delete ECS Clusters with Low CPU Utilization\",\n    \"description\": \"ECS clusters are a managed service that allows users to run Docker containers on AWS, making it easier to manage and scale containerized applications. However, running ECS clusters with low CPU utilization can result in wasted resources and unnecessary costs. AWS charges for the resources allocated to a cluster, regardless of whether they are fully utilized or not. By deleting clusters that are not being fully utilized, you can reduce the number of resources being allocated and lower the overall cost of running ECS. Furthermore, deleting unused or low-utilization clusters can also improve overall system performance by freeing up resources for other applications that require more processing power. This runbook helps us to identify such clusters and delete them.\",\n    \"uuid\": \"6ad946fb1afd19286a8e7771e0f8e5566e4fdd54e3e2473385b5ac8e206e0a49\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Delete_ELBs_With_No_Targets_Or_Instances.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"82eebdfd-c880-40df-bd6d-5b546c92164b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find and Delete Elastic Load Balancers that don't have any target groups or instances associated with them</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-ELBs-With-No-Target-Groups-or-Instances\\\"><strong><u>Delete ELBs With No Target Groups or Instances</u></strong></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find AWS ELBs with no targets or instances</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete Load Balancers</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"1290c59b-9107-46c0-8f0b-8dce39e91ef9\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-07-14T16:28:13.395Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region is None:\\n\",\n    \"    region = 
''\\n\",\n    \"if elb_arns and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the ELB ARNs!\\\")\\n\",\n    \"if elb_names and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the ELB names!\\\")\\n\",\n    \"if elb_arns and elb_names and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the ELB ARNs and ELB names!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2020e8d0-ba3b-4c71-84b2-10917465a27e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Find-AWS-ELBs-with-no-targets-or-instances\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find AWS ELBs with no targets or instances</h3>\\n\",\n    \"<p>Using unSkript's Find AWS ELBs with no targets or instances action, we will find ELBs that don't have any target groups in the case of Application Load Balancers or Network Load Balancers, and Classic Load Balancers that have no instances associated with them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>elbs_with_no_targets</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"9f8e20f7-82ce-46ce-8dd8-2be94cab9174\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_ELB\"\n    ],\n    \"actionDescription\": \"Find AWS ELBs with no targets or instances attached to them.\",\n    \"actionEntryFunction\": \"aws_find_elbs_with_no_targets_or_instances\",\n    \"actionIsCheck\": 
true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Find AWS ELBs with no targets or instances\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"ed9c71d09866b0a019abe4f10951f32f9484504e0e274eb3d248e8bc321cb257\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Find AWS ELBs with no targets or instances attached to them.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-07-14T16:27:55.801Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_find_elbs_with_no_targets_or_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Find AWS ELBs with no targets or instances\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"elbs_with_no_targets\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not elb_arns 
and not elb_names\",\n    \"tags\": [\n     \"aws_find_elbs_with_no_targets_or_instances\"\n    ],\n    \"uuid\": \"ed9c71d09866b0a019abe4f10951f32f9484504e0e274eb3d248e8bc321cb257\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_elbs_with_no_targets_or_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_elbs_with_no_targets_or_instances(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_find_elbs_with_no_targets_or_instances Returns details of ELBs with no target groups or instances\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: AWS Region\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple of status and details of ELBs with no targets or instances\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_load_balancers = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            elbv2Client = handle.client('elbv2', region_name=reg)\\n\",\n    \"            elbv2_response = 
aws_get_paginator(elbv2Client, \\\"describe_load_balancers\\\", \\\"LoadBalancers\\\")\\n\",\n    \"            elbClient = handle.client('elb', region_name=reg)\\n\",\n    \"            elb_response = elbClient.describe_load_balancers()\\n\",\n    \"            for lb in elbv2_response:\\n\",\n    \"                elb_dict = {}\\n\",\n    \"                elb_dict[\\\"load_balancer_name\\\"] = lb['LoadBalancerName']\\n\",\n    \"                elb_dict[\\\"load_balancer_arn\\\"] = lb['LoadBalancerArn']\\n\",\n    \"                elb_dict[\\\"load_balancer_type\\\"] = lb['Type']\\n\",\n    \"                elb_dict[\\\"load_balancer_dns\\\"] = lb['DNSName']\\n\",\n    \"                elb_dict[\\\"region\\\"] = reg\\n\",\n    \"                all_load_balancers.append(elb_dict)\\n\",\n    \"            for lb in elb_response['LoadBalancerDescriptions']:\\n\",\n    \"                elb_dict = {}\\n\",\n    \"                elb_dict[\\\"load_balancer_name\\\"] = lb['LoadBalancerName']\\n\",\n    \"                elb_dict[\\\"load_balancer_type\\\"] = 'classic'\\n\",\n    \"                elb_dict[\\\"load_balancer_dns\\\"] = lb['DNSName']\\n\",\n    \"                elb_dict[\\\"region\\\"] = reg\\n\",\n    \"                all_load_balancers.append(elb_dict)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"    for load_balancer in all_load_balancers:\\n\",\n    \"        if load_balancer['load_balancer_type']=='network' or load_balancer['load_balancer_type']=='application':\\n\",\n    \"            elbv2Client = handle.client('elbv2', region_name=load_balancer['region'])\\n\",\n    \"            target_groups = elbv2Client.describe_target_groups(\\n\",\n    \"                LoadBalancerArn=load_balancer['load_balancer_arn']\\n\",\n    \"            )\\n\",\n    \"            if len(target_groups['TargetGroups']) == 0:\\n\",\n    \"                    elb_dict = {}\\n\",\n    \"                    
elb_dict[\\\"elb_arn\\\"] = load_balancer['load_balancer_arn']\\n\",\n    \"                    elb_dict[\\\"elb_name\\\"] = load_balancer['load_balancer_name']\\n\",\n    \"                    elb_dict[\\\"region\\\"] = load_balancer['region']\\n\",\n    \"                    elb_dict[\\\"type\\\"] = load_balancer['load_balancer_type']\\n\",\n    \"                    result.append(elb_dict)\\n\",\n    \"        else:\\n\",\n    \"            elbClient = handle.client('elb', region_name=load_balancer['region'])\\n\",\n    \"            res = elbClient.describe_instance_health(\\n\",\n    \"                LoadBalancerName=load_balancer['load_balancer_name'],\\n\",\n    \"            )\\n\",\n    \"            if len(res['InstanceStates'])==0:\\n\",\n    \"                elb_dict = {}\\n\",\n    \"                elb_dict[\\\"elb_name\\\"] = load_balancer['load_balancer_name']\\n\",\n    \"                elb_dict[\\\"region\\\"] = load_balancer['region']\\n\",\n    \"                elb_dict[\\\"type\\\"] = load_balancer['load_balancer_type']\\n\",\n    \"                result.append(elb_dict)\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not elb_arns and not elb_names\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"elbs_with_no_targets\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    
task.execute(aws_find_elbs_with_no_targets_or_instances, lego_printer=aws_find_elbs_with_no_targets_or_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a311041f-620a-4b6b-914f-e52c6c3a71f4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-unattached-Elastic-IPs\\\">Create List of ELBs with no targets or instances</h3>\\n\",\n    \"<p>This action sorts the ELBs found in Step 1 into two lists: Classic ELBs and Network/Application ELBs with no targets or instances attached.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>elb_classic_list, elbv2_list</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b85ce542-bdf0-44d2-9e75-213002d5c036\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T06:12:51.827Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of ELBs with no targets or instances\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of ELBs with no targets or instances\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"elb_classic_list = []\\n\",\n    \"elbv2_list = []\\n\",\n    \"try:\\n\",\n    \"    for res in elbs_with_no_targets:\\n\",\n    \"        if type(res)==bool:\\n\",\n    \"            if res == False:\\n\",\n    \"                continue\\n\",\n    \"        elif type(res)==list:\\n\",\n    \"            if 
len(res)!=0:\\n\",\n    \"                for elb in res:\\n\",\n    \"                    if 'elb_arn' in elb.keys():\\n\",\n    \"                        elbv2_list.append(elb)\\n\",\n    \"                    else:\\n\",\n    \"                        elb_classic_list.append(elb)\\n\",\n    \"except Exception:\\n\",\n    \"    if elb_arns:\\n\",\n    \"        for arn in elb_arns:\\n\",\n    \"            data_dict = {}\\n\",\n    \"            data_dict[\\\"region\\\"] = region\\n\",\n    \"            data_dict[\\\"elb_arn\\\"] = arn\\n\",\n    \"            elbv2_list.append(data_dict)\\n\",\n    \"    if elb_names:\\n\",\n    \"        for name in elb_names:\\n\",\n    \"            data_dict = {}\\n\",\n    \"            data_dict[\\\"region\\\"] = region\\n\",\n    \"            data_dict[\\\"elb_name\\\"] = name\\n\",\n    \"            elb_classic_list.append(data_dict)\\n\",\n    \"print(\\\"Network/Application Load Balancers\\\",\\\"\\\\n\\\",elbv2_list, \\\"\\\\n\\\", \\\"Classic Load Balancers\\\", \\\"\\\\n\\\", elb_classic_list)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9fb3704a-9b19-49c4-96ab-a982217bbcd3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2 - Part 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2 - Part 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-Load-Balancers\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete Load Balancers</h3>\\n\",\n    \"<p>This action deletes Network and Application ELBs found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>region, elb_arn</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"a3e314ad-8dce-4a3b-bf68-29b33a1f7387\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     
\"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionDescription\": \"AWS Delete Load Balancer\",\n    \"actionEntryFunction\": \"aws_delete_load_balancer\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Delete Load Balancer\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"bb9ec9e116f23c18a3974ae84f985b60a62db4bf6a03bfe367b7881227ceac8b\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"AWS Delete Load Balancer\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"elb_arn\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"elb_arn\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"elb_arn\": {\n        \"description\": \"Load Balancer ARN of the ALB/NLB type Load Balancer.\",\n        \"title\": \"Load Balancer ARN (ALB/NLB type)\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"elb_arn\",\n       \"region\"\n      ],\n      \"title\": \"aws_delete_load_balancer\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       
\"elb_arn\": \"elb_arn\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"elbv2_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Delete Load Balancer\",\n    \"orderProperties\": [\n     \"elb_arn\",\n     \"region\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(elbv2_list)!=0\",\n    \"tags\": [\n     \"aws_delete_load_balancer\"\n    ],\n    \"uuid\": \"bb9ec9e116f23c18a3974ae84f985b60a62db4bf6a03bfe367b7881227ceac8b\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_load_balancer_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_load_balancer(handle, region: str, elb_arn: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_load_balancer dict of loadbalancers info.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type elb_arn: string\\n\",\n    \"        :param elb_arn: load balancer ARNs.\\n\",\n    \"\\n\",\n    \"        :rtype: dict of load balancers info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        elbv2Client = handle.client('elbv2', region_name=region)\\n\",\n    \"        response = elbv2Client.delete_load_balancer(LoadBalancerArn=elb_arn)\\n\",\n    \"        return response\\n\",\n    \"    
except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"elb_arn\\\": \\\"iter.get(\\\\\\\\\\\"elb_arn\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"elbv2_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"region\\\",\\\"elb_arn\\\"]\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(elbv2_list)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_load_balancer, lego_printer=aws_delete_load_balancer_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"21e8bef7-c3a3-47a4-9b63-ea57b3cd9043\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2 - Part 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2 - Part 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-Load-Balancers\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete Classic Load Balancers</h3>\\n\",\n    \"<p>This action deletes Classic ELBs found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>region, elb_name</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   
\"execution_count\": null,\n   \"id\": \"b700ed80-11dd-4aa8-b6e0-075cccf26b7b\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionDescription\": \"Delete Classic Elastic Load Balancers\",\n    \"actionEntryFunction\": \"aws_delete_classic_load_balancer\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Delete Classic Load Balancer\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"9b88908a212472ac94ac7ce98a854c1a16f853e87f9c5a8cd5db236b637ad5d3\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Delete Classic Elastic Load Balancers\",\n    \"id\": 2,\n    \"index\": 2,\n    \"inputData\": [\n     {\n      \"elb_name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"elb_name\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"elb_name\": {\n        \"description\": \"Name of classic ELB\",\n        \"title\": \"Classic Load Balancer Name\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"elb_name\"\n      ],\n      
\"title\": \"aws_delete_classic_load_balancer\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"elb_name\": \"elb_name\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"elb_classic_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Delete Classic Load Balancer\",\n    \"orderProperties\": [\n     \"region\",\n     \"elb_name\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(elb_classic_list)!=0\",\n    \"tags\": [\n     \"aws_delete_classic_load_balancer\"\n    ],\n    \"uuid\": \"9b88908a212472ac94ac7ce98a854c1a16f853e87f9c5a8cd5db236b637ad5d3\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_classic_load_balancer_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_classic_load_balancer(handle, region: str, elb_name: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_classic_load_balancer reponse of deleting a classic load balancer.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type elb_name: string\\n\",\n    \"        :param elb_name: Classic load balancer name.\\n\",\n    \"\\n\",\n    \"        :rtype: dict of 
deleted load balancer response.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        elbClient = handle.client('elb', region_name=region)\\n\",\n    \"        response = elbClient.delete_load_balancer(LoadBalancerName=elb_name)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"elb_name\\\": \\\"iter.get(\\\\\\\\\\\"elb_name\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"elb_classic_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"region\\\",\\\"elb_name\\\"]\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(elb_classic_list)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_classic_load_balancer, lego_printer=aws_delete_classic_load_balancer_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9c7430c8-3660-45bd-90ef-9ceab77e3daa\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we 
checked for AWS Elastic Load Balancers with no target groups or instances in our AWS account and deleted them to lower AWS costs. To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete AWS ELBs With No Targets Or Instances\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1234)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"elb_arns\": {\n     \"description\": \"List of ELB ARNs for type Network and Application Load Balancer\",\n     \"title\": \"elb_arns\",\n     \"type\": \"array\"\n    },\n    \"elb_names\": {\n     \"description\": \"List of ELB Names for Classic load balancers\",\n     \"title\": \"elb_names\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region to search for ELBs with no targets or instances. Eg: \\\"us-west-2\\\"\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_ELBs_With_No_Targets_Or_Instances.json",
    "content": "{\n    \"name\": \"Delete AWS ELBs With No Targets Or Instances\",\n    \"description\": \"ELBs are used to distribute incoming traffic across multiple targets or instances, but if those targets or instances are no longer in use, then the ELBs may be unnecessary and can be deleted to save costs. Deleting ELBs with no targets or instances is a simple but effective way to optimize costs in your AWS environment. By identifying and removing these unused ELBs, you can reduce the number of resources you are paying for and avoid unnecessary charges. This runbook helps you identify all ELB types (Network, Application, and Classic) that don't have any target groups or instances attached to them.\",\n    \"uuid\": \"2aba76792cb2802cae55deb60d28820522aeba93865572a1e9c7ddc5309e1312\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Delete_IAM_User.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8a97b231-94d6-4e10-a24c-6eac9a4572e4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"delete IAM User\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"delete IAM User\"\n   },\n   \"source\": [\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b> This runbook demonstrates how to delete an IAM user using unSkript Legos.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Delete IAM User</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"1) Delete the login profile for the given user.\\n\",\n    \"2) Get a list of all access policies for the user.\\n\",\n    \"3) Remove each policy found.\\n\",\n    \"4) Delete the IAM user by passing the user name.\\n\",\n    \"5) Check the caller identity of the current user.\\n\",\n    \"6) Send a success message for the IAM user deletion to the Slack channel.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c174d638-f107-450f-ab2d-d28cf097a722\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Delete login profile for IAM User\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Delete login profile for IAM User\"\n   },\n   \"source\": [\n    \"Here we will delete the login profile for an IAM user. 
This lego takes UserName as input, and deletes the login profile for the user.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 74,\n   \"id\": \"d5f3fcb4-a941-42b5-889a-7cfdbefae5ca\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"abe9fc82a53b80dc1dd4d5a89e31d22b0338e73e86d2ca859576f38cc6d19f48\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Filter AWS EC2 Instance by Tag\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-01T02:08:40.419Z\"\n    },\n    \"id\": 155,\n    \"index\": 155,\n    \"inputData\": [\n     {\n      \"UserName\": {\n       \"constant\": false,\n       \"value\": \"user_name\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"UserName\": {\n        \"default\": \"\",\n        \"description\": \"IAM User Name\",\n        \"title\": \"UserName\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"tag_key\",\n       \"tag_value\",\n       \"region\"\n      ],\n      \"title\": \"aws_filter_ec2_by_tags\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Delete Login Profile for IAM User\",\n    \"nouns\": [\n     \"aws\",\n     \"ec2\",\n     \"instances\",\n     \"tag\"\n    ],\n    \"orderProperties\": [\n     \"UserName\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"startcondition\": \"'User' in UserInfo\",\n    \"tags\": [\n     
\"aws_filter_ec2_by_tags\"\n    ],\n    \"title\": \"Delete Login Profile for IAM User\",\n    \"verbs\": [\n     \"filter\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_user_login_profile_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_user_login_profile(handle, UserName: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_create_user_login_profile Create login profile for IAM User.\\n\",\n    \"\\n\",\n    \"        :type UserName: string\\n\",\n    \"        :param UserName: Name of new IAM User.\\n\",\n    \"\\n\",\n    \" \\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the Profile Creation status info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    iamClient = handle.client(\\\"iam\\\")\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = iamClient.delete_login_profile(\\n\",\n    \"            UserName=UserName)\\n\",\n    \"\\n\",\n    \"        result = response\\n\",\n    \"    except ClientError as error:\\n\",\n    \"        if error.response['Error']['Code'] == 'EntityAlreadyDeleted':\\n\",\n    \"            result = error.response\\n\",\n    \"        else:\\n\",\n    \"            result = error.response\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"UserName\\\": \\\"user_name\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    
\"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_user_login_profile, lego_printer=aws_delete_user_login_profile_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"77ae7997-be76-48c7-afc5-eeae200e4eef\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"get all policies for user\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"get all policies for user\"\n   },\n   \"source\": [\n    \"This user may have multiple policies that we have to delete before we delete the user\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 78,\n   \"id\": \"a27344e5-b608-4fc5-8048-73a2af86f8e3\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"eb5306c9a3d782b9cdce8f2b3d9e57ca882dfce071f897f134b6f172b3e00bad\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS List Attached User Policies\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-01T02:12:16.467Z\"\n    },\n    \"id\": 232,\n    \"index\": 232,\n    \"inputData\": [\n     {\n      \"user_name\": {\n       \"constant\": false,\n       \"value\": \"user_name\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"user_name\": {\n        \"description\": \"IAM user whose policies need to fetched.\",\n        \"title\": \"User Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"user_name\"\n      ],\n      \"title\": 
\"aws_list_attached_user_policies\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS List Attached User Policies\",\n    \"nouns\": [\n     \"aws\",\n     \"user\",\n     \"policy\"\n    ],\n    \"orderProperties\": [\n     \"user_name\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"policies\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"aws_list_attached_user_policies\"\n    ],\n    \"title\": \"AWS List Attached User Policies\",\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_attached_user_policies_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_attached_user_policies(handle, user_name: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_list_attached_user_policies returns the list of policies attached to the user.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type user_name: string\\n\",\n    \"        :param user_name: IAM user whose policies need to fetched.\\n\",\n    \"\\n\",\n    \"        :rtype: List with with the attched policy names.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    iamClient = 
handle.client('iam')\\n\",\n    \"    try:\\n\",\n    \"        response = iamClient.list_attached_user_policies(UserName=user_name)\\n\",\n    \"        for i in response[\\\"AttachedPolicies\\\"]:\\n\",\n    \"            result.append(i['PolicyName'])\\n\",\n    \"\\n\",\n    \"    except ClientError as error:\\n\",\n    \"        result.append(error.response)\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"user_name\\\": \\\"user_name\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"policies\\\")\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_attached_user_policies, lego_printer=aws_list_attached_user_policies_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 77,\n   \"id\": \"82585fa1-89b1-4ccb-9e5f-a116baf21928\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"dee9134df84f6c675edab485389572795169495347e40abbdf81f24ec807a85c\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Remove Policies from User\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-01T02:10:40.533Z\"\n    },\n    \"id\": 196,\n    \"index\": 196,\n    \"inputData\": [\n     {\n      \"policy_name\": {\n       \"constant\": false,\n       \"value\": \"policies\"\n      },\n      \"user_name\": {\n       \"constant\": false,\n       \"value\": \"user_name\"\n      }\n     }\n    ],\n    
\"inputschema\": [\n     {\n      \"properties\": {\n       \"policy_name\": {\n        \"description\": \"Policy name to detach from the user.\",\n        \"title\": \"Policy Name\",\n        \"type\": \"string\"\n       },\n       \"user_name\": {\n        \"description\": \"IAM user whose policies need to be detached.\",\n        \"title\": \"User Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"user_name\",\n       \"policy_name\"\n      ],\n      \"title\": \"aws_attache_iam_policy\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS remove Policies from User\",\n    \"nouns\": [\n     \"aws\",\n     \"policy\",\n     \"IAM\",\n     \"user\"\n    ],\n    \"orderProperties\": [\n     \"user_name\",\n     \"policy_name\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"service_id_enabled\": false,\n    \"tags\": [\n     \"aws_attache_iam_policy\"\n    ],\n    \"title\": \"AWS remove Policies from User\",\n    \"verbs\": [\n     \"attach\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"def aws_detach_iam_policy(handle, user_name: str, policy_name: List) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_detach_iam_policy detaches the given policies from the user.\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"        :type user_name: string\\n\",\n    \"        :param user_name: IAM user to detach the policies from.\\n\",\n    \"   
     :type policy_name: list\\n\",\n    \"        :param policy_name: List of policy names to detach from the user.\\n\",\n    \"        :rtype: Dict with User policy information.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = {}\\n\",\n    \"    iamResource = handle.resource('iam')\\n\",\n    \"    for policy in policy_name:\\n\",\n    \"        try:\\n\",\n    \"            user = iamResource.User(user_name)\\n\",\n    \"            response = user.detach_policy(\\n\",\n    \"                PolicyArn='arn:aws:iam::aws:policy/'+policy\\n\",\n    \"                )\\n\",\n    \"            result = response\\n\",\n    \"        except ClientError as error:\\n\",\n    \"            result = error.response\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"policy_name\\\": \\\"policies\\\",\\n\",\n    \"    \\\"user_name\\\": \\\"user_name\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_detach_iam_policy, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"52a2105e-1ac2-4bae-95f1-c4b3675723d0\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Delete IAM User\",\n    \"orderProperties\": [],\n    
\"tags\": [],\n    \"title\": \"Delete IAM User\"\n   },\n   \"source\": [\n    \"Here we will use unSkript Delete IAM User Lego. This lego takes user_name as input. This input is used to delete the IAM user from AWS.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 79,\n   \"id\": \"b2abbbfe-05b6-473c-92b0-0de1bb9e43b6\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"afacde59-a401-4a8b-901d-46c4b3970b78\",\n    \"continueOnError\": false,\n    \"createTime\": \"2022-07-27T16:51:48Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"v0.0.0\",\n    \"description\": \"Delete IAM User\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-01T02:12:21.558Z\"\n    },\n    \"id\": 100001,\n    \"index\": 100001,\n    \"inputData\": [\n     {\n      \"user_name\": {\n       \"constant\": false,\n       \"value\": \"user_name\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"user_name\": {\n        \"default\": \"\",\n        \"description\": \"User Name\",\n        \"title\": \"user_name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"user_name\"\n      ],\n      \"title\": \"aws_delete_iam_user\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Delete IAM User\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"user_name\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"UserInfo\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [],\n    \"title\": \"Delete IAM User\",\n    \"verbs\": []\n   },\n   
\"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_iam_user_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_iam_user(handle, user_name: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_iam_user Deletes the given IAM User.\\n\",\n    \"\\n\",\n    \"        :type user_name: string\\n\",\n    \"        :param user_name: Name of the IAM User to delete.\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the delete-user response.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    iamClient = handle.client(\\\"iam\\\")\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = iamClient.delete_user(\\n\",\n    \"            UserName=user_name)\\n\",\n    \"        result = response\\n\",\n    \"    except ClientError as error:\\n\",\n    \"        result = error.response\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"user_name\\\": \\\"user_name\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"UserInfo\\\")\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_iam_user, 
lego_printer=aws_delete_iam_user_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"29511895-d1cc-4a01-9990-8928642b5006\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"STS Get Caller Identity\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"STS Get Caller Identity\"\n   },\n   \"source\": [\n    \"Here we will use unSkript STS Get Caller Identity Lego. This lego doesn't take any inputs. It shows the caller identity for the current user.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 80,\n   \"id\": \"dd1e1542-ddd7-4b86-86a2-17e999458fbd\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"afacde59-a401-4a8b-901d-46c4b3970b78\",\n    \"createTime\": \"2022-07-27T16:51:48Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"v0.0.0\",\n    \"description\": \"STS Get Caller Identity\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-01T02:12:26.299Z\"\n    },\n    \"id\": 100001,\n    \"index\": 100001,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"required\": [],\n      \"title\": \"aws_get_caller_identity\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"STS Get Caller Identity \",\n    \"nouns\": [],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"caller\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [],\n    \"title\": \"STS Get Caller Identity \",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   
\"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_caller_identity_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_caller_identity(handle) -> Dict:\\n\",\n    \"    stsClient = handle.client('sts')\\n\",\n    \"    response = stsClient.get_caller_identity()\\n\",\n    \"\\n\",\n    \"    return response\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(outputName=\\\"caller\\\")\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_caller_identity, lego_printer=aws_get_caller_identity_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"d1f05583-fa8c-4f8c-a357-3f6154df4620\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Post Slack Message\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Post Slack Message\"\n   },\n   \"source\": [\n    \"Here we will use unSkript Post Slack Message Lego. This lego takes channel name and message as input. 
These inputs are used to post the message that the user was deleted.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 81,\n   \"id\": \"8cacd129-1fed-4c9e-9f2f-70da41c43c88\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-01T02:12:28.404Z\"\n    },\n    \"id\": 62,\n    \"index\": 62,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"\\\"devrel_doug_test1\\\"\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"f'IAM user {user_name} deleted by {caller[\\\"Arn\\\"]}'\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of the Slack channel where the message is to be posted\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message to be sent\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [\n     \"slack\",\n     \"message\"\n    ],\n    \"orderProperties\": [\n     \"channel\",\n     
\"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"verbs\": [\n     \"post\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message_printer(output):\\n\",\n    \"    if output is not None:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"    else:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> str:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return f\\\"Successfully Sent Message on Channel: #{channel}\\\"\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        if e.response['error'] == 'channel_not_found':\\n\",\n    \"            raise Exception('Channel Not Found')\\n\",\n    \"        elif e.response['error'] == 'duplicate_channel_not_found':\\n\",\n    \"            raise Exception('Channel associated with the message_id not valid')\\n\",\n    \"        elif e.response['error'] == 'not_in_channel':\\n\",\n    \"            raise Exception('Cannot post message to channel user is not in')\\n\",\n    \"  
      elif e.response['error'] == 'is_archived':\\n\",\n    \"            raise Exception('Channel has been archived')\\n\",\n    \"        elif e.response['error'] == 'msg_too_long':\\n\",\n    \"            raise Exception('Message text is too long')\\n\",\n    \"        elif e.response['error'] == 'no_text':\\n\",\n    \"            raise Exception('Message text was not provided')\\n\",\n    \"        elif e.response['error'] == 'restricted_action':\\n\",\n    \"            raise Exception('Workspace preference prevents user from posting')\\n\",\n    \"        elif e.response['error'] == 'restricted_action_read_only_channel':\\n\",\n    \"            raise Exception('Cannot post message, read-only channel')\\n\",\n    \"        elif e.response['error'] == 'team_access_not_granted':\\n\",\n    \"            raise Exception('The token used is not granted access to the workspace')\\n\",\n    \"        elif e.response['error'] == 'not_authed':\\n\",\n    \"            raise Exception('No Authentication token provided')\\n\",\n    \"        elif e.response['error'] == 'invalid_auth':\\n\",\n    \"            raise Exception('Some aspect of Authentication cannot be validated. 
Request denied')\\n\",\n    \"        elif e.response['error'] == 'access_denied':\\n\",\n    \"            raise Exception('Access to a resource specified in the request denied')\\n\",\n    \"        elif e.response['error'] == 'account_inactive':\\n\",\n    \"            raise Exception('Authentication token is for a deleted user')\\n\",\n    \"        elif e.response['error'] == 'token_revoked':\\n\",\n    \"            raise Exception('Authentication token for a deleted user has been revoked')\\n\",\n    \"        elif e.response['error'] == 'no_permission':\\n\",\n    \"            raise Exception('The workspace token used does not have the necessary permission to send message')\\n\",\n    \"        elif e.response['error'] == 'ratelimited':\\n\",\n    \"            raise Exception('The request has been ratelimited. Retry sending message later')\\n\",\n    \"        elif e.response['error'] == 'service_unavailable':\\n\",\n    \"            raise Exception('The service is temporarily unavailable')\\n\",\n    \"        elif e.response['error'] == 'fatal_error':\\n\",\n    \"            raise Exception('The server encountered a catastrophic error while sending message')\\n\",\n    \"        elif e.response['error'] == 'internal_error':\\n\",\n    \"            raise Exception('The server could not complete the operation, likely due to a transient issue')\\n\",\n    \"        elif e.response['error'] == 'request_timeout':\\n\",\n    \"            raise Exception('Sending message error via POST: either message was missing or truncated')\\n\",\n    \"        else:\\n\",\n    \"            raise Exception(f'Failed Sending Message to slack channel {channel} Error: {e.response[\\\"error\\\"]}')\\n\",\n    \"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return f\\\"Unable to send 
message on {channel}\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"  \\n\",\n    \"    \\\"message\\\": \\\"f'IAM user {user_name} deleted by {caller[\\\\\\\\\\\"Arn\\\\\\\\\\\"]}'\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(slack_post_message, lego_printer=slack_post_message_printer, hdl=hdl, args=args)\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete IAM profile\",\n   \"parameters\": [\n    \"tag_key\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"user_name\": {\n     \"default\": \"FoobarAdmin\",\n     \"description\": \"UserName to be deleted\",\n     \"title\": \"user_name\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_IAM_User.json",
    "content": "{\n    \"name\": \"Delete IAM profile\",\n    \"description\": \"This runbook is the inverse of Create IAM user with profile - removes the profile, the login, and then the IAM user itself.\",\n    \"uuid\": \"3aa8d4f8869b00c3bd57b9676d6c267ec93c7291eeffa1e281d29aa689b73ff4\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Delete_Old_EBS_Snapshots.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"1da6be45\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong><em>Find and Delete Old EBS Snapshots</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-Old-EBS-Snapshots\\\"><u>Delete Old EBS Snapshots</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1.&nbsp;Find Old EBS Snapshots<br>2.&nbsp;Delete old EBS snapshots</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"id\": \"6e4cd8eb-4f75-49f6-8f43-1c7f8d56b279\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-19T09:13:20.286Z\"\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if snapshot_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the EBS Snapshots!\\\")\\n\",\n    \"if region == None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"47fa9334\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    
\"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-unused-NAT-Gateways\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find Old EBS Snapshots</h3>\\n\",\n    \"<p>Using unSkript's AWS Filter Old EBS Snapshots action, we will find old snapshots given a threshold number of days.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threshold_days</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>unused_snapshots</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"a3cd0833-ab78-452c-bf5f-790fefa28d20\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EBS\"\n    ],\n    \"actionDescription\": \"This action lists all snapshot details that are older than the threshold\",\n    \"actionEntryFunction\": \"aws_filter_old_ebs_snapshots\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"303d6481e8cfa508d9ba11f847906c7d46f30a1c70f9b6b0e04b12409e74f704\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Filter Old EBS Snapshots\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"9a74af3d2bb5a9aac60e5d30fb89b3ebf6867ce4782fc629cd9842bd5156a327\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    
\"description\": \"This action lists all snapshot details that are older than the threshold\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"threshold\": {\n       \"constant\": false,\n       \"value\": \"int(threshold_days)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"threshold\": {\n        \"default\": 30,\n        \"description\": \"(in days) Snapshots older than this threshold are returned.\",\n        \"title\": \"Threshold (in days)\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_filter_old_ebs_snapshots\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Filter Old EBS Snapshots\",\n    \"orderProperties\": [\n     \"region\",\n     \"threshold\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unused_snapshots\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not snapshot_ids\",\n    \"tags\": [\n     \"aws_filter_old_ebs_snapshots\"\n    ],\n    \"uuid\": \"9a74af3d2bb5a9aac60e5d30fb89b3ebf6867ce4782fc629cd9842bd5156a327\",\n    \"version\": \"1.0.0\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Optional, Tuple\\n\",\n    \"from 
unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"import pytz\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_old_ebs_snapshots_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_old_ebs_snapshots(handle, region: str=\\\"\\\", threshold: int = 30) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_old_ebs_snapshots Returns details of EBS snapshots older than the threshold.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type threshold: int\\n\",\n    \"        :param threshold: (in days) Snapshots older than this threshold are returned.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple of (status flag, list of old EBS snapshot details).\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            # Filtering the snapshots by region\\n\",\n    \"            current_time = datetime.now(pytz.UTC)\\n\",\n    \"            ec2Client = handle.resource('ec2', region_name=reg)\\n\",\n    \"            response = ec2Client.snapshots.filter(OwnerIds=['self'])\\n\",\n    \"            for snapshot in response:\\n\",\n    \"                snap_data = {}\\n\",\n    \"                running_time = current_time - snapshot.start_time\\n\",\n    \"                if running_time > timedelta(days=int(threshold)):\\n\",\n    \"                    snap_data[\\\"region\\\"] = reg\\n\",\n    \"                    snap_data[\\\"snapshot_id\\\"] = 
snapshot.id\\n\",\n    \"                    result.append(snap_data)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"    if len(result)!=0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"threshold\\\": \\\"int(threshold_days)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not snapshot_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"unused_snapshots\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_old_ebs_snapshots, lego_printer=aws_filter_old_ebs_snapshots_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"6b8b31be\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Unused-NAT-Gateways\\\">Create List of Old EBS Snapshots</h3>\\n\",\n    \"<p>This action filters regions that have no old EBS snapshots and creates a list of those that have them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_unused_snapshots</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n 
  \"execution_count\": null,\n   \"id\": \"aa209041-9097-4b16-be3c-3a30aff1eb1e\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Old EBS Snapshots\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Old EBS Snapshots\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_unused_snapshots = []\\n\",\n    \"dummy = []\\n\",\n    \"try:\\n\",\n    \"    if unused_snapshots[0] == False:\\n\",\n    \"        for snapshot in unused_snapshots[1]:\\n\",\n    \"            all_unused_snapshots.append(snapshot)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if snapshot_ids:\\n\",\n    \"        for snap in snapshot_ids:\\n\",\n    \"            data_dict = {}\\n\",\n    \"            data_dict[\\\"region\\\"] = region\\n\",\n    \"            data_dict[\\\"snapshot_id\\\"] = snap\\n\",\n    \"            all_unused_snapshots.append(data_dict)\\n\",\n    \"    else:\\n\",\n    \"         raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"3c3a62dd\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-unused-NAT-Gateways\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete old EBS Snapshots</h3>\\n\",\n    \"<p>This action deletes old EBS Snapshots found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>region, snapshot_id</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"a30bb183-cef8-43b5-a75d-ce3ab3db0dac\",\n   \"metadata\": {\n    \"accessType\": 
\"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionOutputType\": null,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"1bcf78d4587707b18b241fa00fd709e4ce3c3bc28ab24c9874e9b0966b08e43a\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Delete EBS Snapshot for an EC2 instance\",\n    \"id\": 2,\n    \"index\": 2,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      },\n      \"snapshot_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"snapshot_id\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"snapshot_id\": {\n        \"description\": \"EBS snapshot ID. 
Eg: \\\"snap-34bt4bfjed9d\\\"\",\n        \"title\": \"Snapshot ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"snapshot_id\"\n      ],\n      \"title\": \"aws_delete_ebs_snapshot\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"region\": \"region\",\n       \"snapshot_id\": \"snapshot_id\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_unused_snapshots\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"metadata\": {\n     \"action_bash_command\": false,\n     \"action_categories\": [\n      \"CATEGORY_TYPE_COST_OPT\",\n      \"CATEGORY_TYPE_SRE\",\n      \"CATEGORY_TYPE_AWS\",\n      \"CATEGORY_TYPE_EBS\"\n     ],\n     \"action_description\": \"Delete EBS Snapshot for an EC2 instance\",\n     \"action_entry_function\": \"aws_delete_ebs_snapshot\",\n     \"action_is_check\": false,\n     \"action_is_remediation\": false,\n     \"action_needs_credential\": true,\n     \"action_next_hop\": null,\n     \"action_next_hop_parameter_mapping\": null,\n     \"action_nouns\": null,\n     \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n     \"action_supports_iteration\": true,\n     \"action_supports_poll\": true,\n     \"action_title\": \"AWS Delete EBS Snapshot\",\n     \"action_type\": \"LEGO_TYPE_AWS\",\n     \"action_verbs\": null,\n     \"action_version\": \"1.0.0\"\n    },\n    \"name\": \"AWS Delete EBS Snapshot\",\n    \"orderProperties\": [\n     \"region\",\n     \"snapshot_id\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_unused_snapshots)!=0\",\n    \"tags\": [],\n    \"uuid\": \"1bcf78d4587707b18b241fa00fd709e4ce3c3bc28ab24c9874e9b0966b08e43a\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": 
[],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_ebs_snapshot_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_ebs_snapshot(handle, region: str, snapshot_id: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_ebs_snapshot Returns a dict of deleted snapshot details\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type snapshot_id: string\\n\",\n    \"        :param snapshot_id: EBS snapshot ID. Eg: 'snap-34bt4bfjed9d'\\n\",\n    \"\\n\",\n    \"        :rtype: Deleted snapshot details\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    try:\\n\",\n    \"        ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"        result = ec2Client.delete_snapshot(SnapshotId=snapshot_id)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise e\\n\",\n    \"    return  result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"snapshot_id\\\": \\\"iter.get(\\\\\\\\\\\"snapshot_id\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_unused_snapshots\\\",\\n\",\n    \"    \\\"iter_parameter\\\": 
[\\\"region\\\",\\\"snapshot_id\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_unused_snapshots)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_ebs_snapshot, lego_printer=aws_delete_ebs_snapshot_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"b6288138\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to filter old EBS Snapshots given a threshold number of days and delete them. 
To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete Old EBS Snapshots\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"region\": {\n     \"description\": \"AWS Regions to get the EBS Snapshots from. Eg: us-west-2. If nothing is given, all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"snapshot_ids\": {\n     \"description\": \"List of EBS Snapshot IDs. Eg: [\\\"snap-0kwre234dew3w\\\",...]\",\n     \"title\": \"snapshot_ids\",\n     \"type\": \"array\"\n    },\n    \"threshold_days\": {\n     \"default\": 30,\n     \"description\": \"The threshold number of days to check for old snapshots\",\n     \"title\": \"threshold_days\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_Old_EBS_Snapshots.json",
    "content": "{\n    \"name\": \"Delete Old EBS Snapshots\",\n    \"description\": \"Amazon Elastic Block Store (EBS) snapshots are created incrementally: an initial snapshot includes all the data on the disk, and subsequent snapshots store only the blocks on the volume that have changed since the prior snapshot. Unchanged data is not stored, but referenced from the previous snapshot. This runbook helps find old EBS snapshots and thereby lower storage costs.\",\n    \"uuid\": \"303d6481e8cfa508d9ba11f847906c7d46f30a1c70f9b6b0e04b12409e74f704\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }\n  "
  },
  {
    "path": "AWS/AWS_Delete_RDS_Instances_with_Low_CPU_Utilization.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find and Delete RDS (Relational Database Service) Instances with Low CPU Utilization</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-RDS-Instances-with-Low-CPU-Utilization\\\"><u>Delete RDS Instances with Low CPU Utilization</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find RDS Instances with Low CPU Utilization</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete RDS Instances</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e1f146c9-5180-4459-9c82-cf0e1da02785\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-02T17:14:41.488Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == 
None:\\n\",\n    \"    region = ''\\n\",\n    \"if db_identifiers and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the RDS Instance names!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Find-ECS-Clusters-with-Low-CPU-Utilization\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find RDS Instances with Low CPU Utilization</h3>\\n\",\n    \"<p>Using unSkript's Find RDS Instances with Low CPU Utilization action, we will find instances with a low CPU utilization given a threshold percentage using the&nbsp;<span style=\\\"color: rgb(53, 152, 219);\\\">CPUUtilization <span style=\\\"color: rgb(0, 0, 0);\\\">attribue found in the statistics list of&nbsp; <em>get_metric_statistics</em> API of Cloudwatch.</span></span></p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threshold, duration_minutes</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>low_cpu_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"ea17fd11-3ae9-4bdf-9ff8-27f656a5de48\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS_RDS\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionDescription\": \"This lego finds RDS instances are not utilizing their CPU resources to their full potential.\",\n    \"actionEntryFunction\": \"aws_find_rds_instances_with_low_cpu_utilization\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    
\"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"655835b762ba634f02074a48e4bae12f7a3e29bb8e6776eb8d657ddbfe181a59\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Find RDS Instances with low CPU Utilization\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"8d01f8abc8274090c2325ef32905b2649a6af779ce86f78b9e9712ad1d482165\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This lego finds RDS instances are not utilizing their CPU resources to their full potential.\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"duration_minutes\": {\n       \"constant\": false,\n       \"value\": \"int(duration_minutes)\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"utilization_threshold\": {\n       \"constant\": false,\n       \"value\": \"int(threshold)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"duration_minutes\": {\n        \"default\": 5,\n        \"description\": \"Value in minutes to get the start time of the metrics for CPU Utilization\",\n        \"title\": \"Duration of Start time\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region to get the RDS Instance\",\n        \"title\": \"AWS Region\",\n        \"type\": \"string\"\n       },\n       \"utilization_threshold\": {\n        \"default\": 10,\n        \"description\": \"The threshold percentage of CPU utilization for an RDS Instance.\",\n        \"title\": \"CPU Utilization Threshold\",\n        \"type\": 
\"integer\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_find_rds_instances_with_low_cpu_utilization\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Find RDS Instances with low CPU Utilization\",\n    \"orderProperties\": [\n     \"region\",\n     \"duration_minutes\",\n     \"utilization_threshold\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"low_cpu_instances\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not db_identifiers\",\n    \"tags\": [\n     \"aws_find_rds_instances_with_low_cpu_utilization\"\n    ],\n    \"uuid\": \"8d01f8abc8274090c2325ef32905b2649a6af779ce86f78b9e9712ad1d482165\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"from datetime import datetime,timedelta\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_rds_instances_with_low_cpu_utilization_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_rds_instances_with_low_cpu_utilization(handle, utilization_threshold:int=10, region: str = \\\"\\\", duration_minutes:int=5) -> Tuple:\\n\",\n    \"    
\\\"\\\"\\\"aws_find_rds_instances_with_low_cpu_utilization finds RDS instances that have a lower cpu utlization than the given threshold\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region of the RDS.\\n\",\n    \"\\n\",\n    \"        :type utilization_threshold: integer\\n\",\n    \"        :param utilization_threshold: The threshold percentage of CPU utilization for an RDS Instance.\\n\",\n    \"\\n\",\n    \"        :type duration_minutes: integer\\n\",\n    \"        :param duration_minutes: Value in minutes to get the start time of the metrics for CPU Utilization\\n\",\n    \"\\n\",\n    \"        :rtype: status, list of instances and their region.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            rdsClient = handle.client('rds', region_name=reg)\\n\",\n    \"            cloudwatchClient = handle.client('cloudwatch', region_name=reg)\\n\",\n    \"            all_instances = aws_get_paginator(rdsClient, \\\"describe_db_instances\\\", \\\"DBInstances\\\")\\n\",\n    \"            for db in all_instances:\\n\",\n    \"                response = cloudwatchClient.get_metric_data(\\n\",\n    \"                    MetricDataQueries=[\\n\",\n    \"                        {\\n\",\n    \"                            'Id': 'cpu',\\n\",\n    \"                            'MetricStat': {\\n\",\n    \"                                'Metric': {\\n\",\n    \"                                    'Namespace': 'AWS/RDS',\\n\",\n    \"                                    'MetricName': 'CPUUtilization',\\n\",\n    \"                                    'Dimensions': 
[\\n\",\n    \"                                        {\\n\",\n    \"                                            'Name': 'DBInstanceIdentifier',\\n\",\n    \"                                            'Value': db['DBInstanceIdentifier']\\n\",\n    \"                                        },\\n\",\n    \"                                    ]\\n\",\n    \"                                },\\n\",\n    \"                                'Period': 60,\\n\",\n    \"                                'Stat': 'Average',\\n\",\n    \"                            },\\n\",\n    \"                            'ReturnData': True,\\n\",\n    \"                        },\\n\",\n    \"                    ],\\n\",\n    \"                    StartTime=(datetime.now() - timedelta(minutes=duration_minutes)).isoformat(),\\n\",\n    \"                    EndTime=datetime.utcnow().isoformat(),\\n\",\n    \"                )\\n\",\n    \"                if 'Values' in response['MetricDataResults'][0]:\\n\",\n    \"                    cpu_utilization = response['MetricDataResults'][0]['Values'][0]\\n\",\n    \"                    if cpu_utilization < utilization_threshold:\\n\",\n    \"                        db_instance_dict = {}\\n\",\n    \"                        db_instance_dict[\\\"region\\\"] = reg\\n\",\n    \"                        db_instance_dict[\\\"instance\\\"] = db['DBInstanceIdentifier']\\n\",\n    \"                        result.append(db_instance_dict)\\n\",\n    \"        except Exception as error:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"duration_minutes\\\": \\\"int(duration_minutes)\\\",\\n\",\n    \"    \\\"utilization_threshold\\\": 
\\\"int(threshold)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not db_identifiers\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"low_cpu_instances\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_find_rds_instances_with_low_cpu_utilization, lego_printer=aws_find_rds_instances_with_low_cpu_utilization_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"199591ef-cb3a-49b7-b515-3c6998050320\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Low-CPU-Utilization-RDS-Instances\\\">Create List of Low CPU Utilization RDS Instances</h3>\\n\",\n    \"<p>This action filters regions that have no clusters with low CPU utilization and creates a list of those that have them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_low_cpu_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"6a10e980-9f17-4436-9166-90ea130aa316\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-02T17:15:25.139Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Low 
CPU Utilization RDS Instances\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Low CPU Utilization RDS Instances\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_low_cpu_instances = []\\n\",\n    \"dummy = []\\n\",\n    \"try:\\n\",\n    \"    if low_cpu_instances[0] == False:\\n\",\n    \"        if len(low_cpu_instances[1]) != 0:\\n\",\n    \"            all_low_cpu_instances = low_cpu_instances[1]\\n\",\n    \"except Exception:\\n\",\n    \"    for ins_identifier in db_identifiers:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"region\\\"] = region\\n\",\n    \"        data_dict[\\\"instance\\\"] = ins_identifier\\n\",\n    \"        all_low_cpu_instances.append(data_dict)\\n\",\n    \"print(all_low_cpu_instances)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"978d3b61-2fd9-461d-89bd-534d2dcf3b63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-RDS-Instance\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete RDS Instance</h3>\\n\",\n    \"<p>This action deletes instances found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>instance_id, region</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"31f99d97-b51b-45c1-b2ba-b0bdb10505ff\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_RDS\"\n    ],\n    \"actionDescription\": \"Delete AWS RDS Instance\",\n    \"actionEntryFunction\": \"aws_delete_rds_instance\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n 
   \"actionNeedsCredential\": false,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Delete RDS Instance\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"11b88b8c6290219912511a30bfb913bc67f7759a6a1298612ed0ac37e381c8f2\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Delete AWS RDS Instance\",\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"instance_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"instance\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_id\": {\n        \"description\": \"The DB instance identifier for the DB instance to be deleted. 
This parameter isn\\u2019t case-sensitive.\",\n        \"title\": \"RDS DB Identifier\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS region of instance identifier\",\n        \"title\": \"AWS Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_delete_rds_instance\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"instance_id\": \"instance\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_low_cpu_instances\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Delete RDS Instance\",\n    \"orderProperties\": [\n     \"instance_id\",\n     \"region\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_low_cpu_instances)!=0\",\n    \"tags\": [\n     \"aws_delete_rds_instance\"\n    ],\n    \"uuid\": \"11b88b8c6290219912511a30bfb913bc67f7759a6a1298612ed0ac37e381c8f2\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2023 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_rds_instance_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_rds_instance(handle, region: str, instance_id: str) -> Dict:\\n\",\n   
 \"    \\\"\\\"\\\"aws_delete_rds_instance dict of response.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type instance_id: string\\n\",\n    \"        :param instance_id: The DB instance identifier for the DB instance to be deleted. This parameter isn\\u2019t case-sensitive.\\n\",\n    \"\\n\",\n    \"        :rtype: dict of response of deleting an RDS instance\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        ec2Client = handle.client('rds', region_name=region)\\n\",\n    \"        response = ec2Client.delete_db_instance(DBInstanceIdentifier=instance_id)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=False)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"instance_id\\\": \\\"iter.get(\\\\\\\\\\\"instance\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_low_cpu_instances\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"region\\\",\\\"instance_id\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_low_cpu_instances)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_rds_instance, lego_printer=aws_delete_rds_instance_printer, hdl=hdl, 
args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we were able to filter low CPU utilization RDS Instance given threshold percentage and delete them. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete RDS Instances with Low CPU Utilization\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"db_identifiers\": {\n     \"description\": \"List of RDS Db identifiers.\",\n     \"title\": \"db_identifiers\",\n     \"type\": \"array\"\n    },\n    \"duration_minutes\": {\n     \"default\": 5,\n     \"description\": \"Start time value in minutes to get the start time of metrics collection\",\n     \"title\": \"duration_minutes\",\n     \"type\": \"number\"\n  
  },\n    \"region\": {\n     \"description\": \"AWS Region to get the RDS Instances from. Eg: \\\"us-west-2\\\". If nothing is given all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"threshold\": {\n     \"default\": 10,\n     \"description\": \"Threshold (in percent) to check for the CPU utilization of RDS Instances below the given threshold.\",\n     \"title\": \"threshold\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_RDS_Instances_with_Low_CPU_Utilization.json",
    "content": "{\n    \"name\": \"Delete RDS Instances with Low CPU Utilization\",\n    \"description\": \"Deleting RDS instances with low CPU utilization is a cost optimization strategy that involves identifying RDS instances with consistently low CPU usage and deleting them to save costs. This approach helps to eliminate unnecessary costs associated with running idle database instances that are not being fully utilized. This runbook helps us to find and delete such instances.\",\n    \"uuid\": \"655835b762ba634f02074a48e4bae12f7a3e29bb8e6776eb8d657ddbfe181a59\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Delete_Redshift_Clusters_with_Low_CPU_Utilization.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://unskript.com/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find and Delete Redshift Clusters with Low CPU Utilization</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-RDS-Instances-with-Low-CPU-Utilization\\\"><u>Delete Redshift Clusters with Low CPU Utilization</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find Redshift Clusters with Low CPU Utilization</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete Redshift Cluster</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"e1f146c9-5180-4459-9c82-cf0e1da02785\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T07:39:46.226Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == None:\\n\",\n    \"    region = ''\\n\",\n    \"if cluster_identifiers and not region:\\n\",\n    \"    
raise SystemExit(\\\"Please provide a region for the Redshift Clusters names!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Find-RDS-Instances-with-Low-CPU-Utilization\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find Redshift Clusters with Low CPU Utilization</h3>\\n\",\n    \"<p>Using unSkript's Find Redshift Clusters with Low CPU Utilization action, we will find instances with a low CPU utilization given a threshold percentage using the&nbsp;<span style=\\\"color: rgb(53, 152, 219);\\\">CPUUtilization <span style=\\\"color: rgb(0, 0, 0);\\\">attribue found in the statistics list of&nbsp; <em>get_metric_statistics</em> API of Cloudwatch.</span></span></p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threshold, duration_minutes</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>low_cpu_clusters</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"1424fc3d-ad1a-4614-ad08-bbb1d7151b9f\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_REDSHIFT\",\n     \"CATEGORY_TYPE_AWS_CLOUDWATCH\"\n    ],\n    \"actionDescription\": \"Find underutilized Redshift clusters in terms of CPU utilization.\",\n    \"actionEntryFunction\": \"aws_find_redshift_clusters_with_low_cpu_utilization\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    
\"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Find Redshift Clusters with low CPU Utilization\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"27f2812eb37ee235c60584748f430bde0f1df9f7744b91c6148fa647d270dac8\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Find underutilized Redshift clusters in terms of CPU utilization.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-04T11:22:35.582Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"duration_minutes\": {\n       \"constant\": false,\n       \"value\": \"int(duration_minutes)\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      },\n      \"utilization_threshold\": {\n       \"constant\": false,\n       \"value\": \"int(threshold)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"duration_minutes\": {\n        \"default\": 5,\n        \"description\": \"Value in minutes to determine the start time of the data points. 
\",\n        \"title\": \"Duration (in minutes)\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region to get the Redshift Cluster\",\n        \"title\": \"AWS Region\",\n        \"type\": \"string\"\n       },\n       \"utilization_threshold\": {\n        \"default\": 10,\n        \"description\": \"The threshold value in percent of CPU utilization of the Redshift cluster\",\n        \"title\": \"CPU utilization threshold(in %)\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_find_redshift_clusters_with_low_cpu_utilization\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"region\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Find Redshift Clusters with low CPU Utilization\",\n    \"orderProperties\": [\n     \"region\",\n     \"duration_minutes\",\n     \"utilization_threshold\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"low_cpu_clusters\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not cluster_identifiers\",\n    \"tags\": [\n     \"aws_find_redshift_clusters_with_low_cpu_utilization\"\n    ],\n    \"uuid\": \"27f2812eb37ee235c60584748f430bde0f1df9f7744b91c6148fa647d270dac8\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing 
import Optional, Tuple\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"from datetime import datetime,timedelta\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_redshift_clusters_with_low_cpu_utilization_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_redshift_clusters_with_low_cpu_utilization(handle, utilization_threshold:int=10, region: str = \\\"\\\", duration_minutes:int=5) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_find_redshift_clusters_with_low_cpu_utilization finds Redshift Clusters that have a lower cpu utlization than the given threshold\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region of the Cluster.\\n\",\n    \"\\n\",\n    \"        :type utilization_threshold: integer\\n\",\n    \"        :param utilization_threshold: The threshold percentage of CPU utilization for a Redshift Cluster.\\n\",\n    \"\\n\",\n    \"        :type duration_minutes: integer\\n\",\n    \"        :param duration_minutes: The threshold percentage of CPU utilization for a Redshift Cluster.\\n\",\n    \"\\n\",\n    \"        :rtype: status, list of clusters and their region.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            redshiftClient = handle.client('redshift', region_name=reg)\\n\",\n    \"            cloudwatchClient = handle.client('cloudwatch', 
region_name=reg)\\n\",\n    \"            for cluster in redshiftClient.describe_clusters()['Clusters']:\\n\",\n    \"                cluster_identifier = cluster['ClusterIdentifier']\\n\",\n    \"                response = cloudwatchClient.get_metric_statistics(\\n\",\n    \"                Namespace='AWS/Redshift',\\n\",\n    \"                MetricName='CPUUtilization',\\n\",\n    \"                Dimensions=[\\n\",\n    \"                    {\\n\",\n    \"                        'Name': 'ClusterIdentifier',\\n\",\n    \"                        'Value': cluster_identifier\\n\",\n    \"                    }\\n\",\n    \"                ],\\n\",\n    \"                StartTime=(datetime.utcnow() - timedelta(minutes=duration_minutes)).isoformat(),\\n\",\n    \"                EndTime=datetime.utcnow().isoformat(),\\n\",\n    \"                Period=60,\\n\",\n    \"                Statistics=['Average']\\n\",\n    \"                )\\n\",\n    \"                if len(response['Datapoints']) != 0:\\n\",\n    \"                    cpu_usage_percent = response['Datapoints'][-1]['Average']\\n\",\n    \"                    if cpu_usage_percent < utilization_threshold:\\n\",\n    \"                        cluster_dict = {}\\n\",\n    \"                        cluster_dict[\\\"region\\\"] = reg\\n\",\n    \"                        cluster_dict[\\\"cluster\\\"] = cluster_identifier\\n\",\n    \"                        result.append(cluster_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"duration_minutes\\\": \\\"int(duration_minutes)\\\",\\n\",\n    \"    
\\\"utilization_threshold\\\": \\\"int(threshold)\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"region\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not cluster_identifiers\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"low_cpu_clusters\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_find_redshift_clusters_with_low_cpu_utilization, lego_printer=aws_find_redshift_clusters_with_low_cpu_utilization_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"199591ef-cb3a-49b7-b515-3c6998050320\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Low-CPU-Utilization-RDS-Instances\\\">Create List of Low CPU Utilization Redshift Clusters</h3>\\n\",\n    \"<p>This action filters regions that have no clusters with low CPU utilization and creates a list of those that have them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_low_cpu_clusters</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": 
\"6a10e980-9f17-4436-9166-90ea130aa316\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-04T11:22:38.609Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Low CPU Utilization RDS Instances\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Low CPU Utilization RDS Instances\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_low_cpu_clusters = []\\n\",\n    \"try:\\n\",\n    \"    for res in low_cpu_clusters:\\n\",\n    \"        if type(res)==bool:\\n\",\n    \"            if res == False:\\n\",\n    \"                continue\\n\",\n    \"        elif type(res)==list:\\n\",\n    \"            if len(res)!=0:\\n\",\n    \"                all_low_cpu_clusters=res\\n\",\n    \"except Exception:\\n\",\n    \"    for ins_identifier in cluster_identifiers:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"region\\\"] = region\\n\",\n    \"        data_dict[\\\"cluster\\\"] = ins_identifier\\n\",\n    \"        all_low_cpu_clusters.append(data_dict)\\n\",\n    \"print(all_low_cpu_clusters)\\n\",\n    \"task.configure(outputName=\\\"all_low_cpu_instances\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"978d3b61-2fd9-461d-89bd-534d2dcf3b63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-RDS-Instance\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete Redshift Clusters</h3>\\n\",\n    \"<p>This action deletes instances found in Step 1.&nbsp; By default, the skip final cluster screenshot is set to <span style=\\\"color: rgb(224, 62, 45);\\\">False.&nbsp;<span style=\\\"color: 
rgb(0, 0, 0);\\\">This setting will not take a final screenshot of the cluster.</span></span></p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>cluster, region, skip_final_cluster_screenshot</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"267106f2-0625-4a8e-a9e6-4d4e35bcb474\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_REDSHIFT\"\n    ],\n    \"actionDescription\": \"Delete AWS Redshift Cluster\",\n    \"actionEntryFunction\": \"aws_delete_redshift_cluster\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Delete Redshift Cluster\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"6d9934159356d4290f164d36cdd42609f8916a87d4d68f6271bb8634f12485b4\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Delete AWS Redshift Cluster\",\n    \"id\": 7,\n    \"index\": 7,\n    \"inputData\": [\n     {\n      \"cluster_identifier\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"cluster\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      },\n      \"skip_final_cluster_snapshot\": {\n       \"constant\": true,\n       \"value\": false\n     
 }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"cluster_identifier\": {\n        \"description\": \"The identifier of the cluster to be deleted.\",\n        \"title\": \"Cluster Identifier\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"skip_final_cluster_snapshot\": {\n        \"default\": false,\n        \"description\": \"Determines whether a final snapshot of the cluster is created before Amazon Redshift deletes the cluster. If true, a final cluster snapshot is not created. If false, a final cluster snapshot is created before the cluster is deleted.\",\n        \"title\": \"Skip Final Cluster Snapshot\",\n        \"type\": \"boolean\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"cluster_identifier\"\n      ],\n      \"title\": \"aws_delete_redshift_cluster\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"cluster_identifier\": \"cluster\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_low_cpu_clusters\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Delete Redshift Cluster\",\n    \"orderProperties\": [\n     \"region\",\n     \"cluster_identifier\",\n     \"skip_final_cluster_snapshot\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_low_cpu_clusters)!=0\",\n    \"tags\": [\n     \"aws_delete_redshift_cluster\"\n    ],\n    \"uuid\": \"6d9934159356d4290f164d36cdd42609f8916a87d4d68f6271bb8634f12485b4\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    
\"##\\n\",\n    \"# Copyright (c) 2023 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_redshift_cluster_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_redshift_cluster(handle, region: str, cluster_identifier: str, skip_final_cluster_snapshot:bool=False) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_redshift_cluster dict response.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type cluster_identifier: string\\n\",\n    \"        :param cluster_identifier: The identifier of the cluster to be deleted.\\n\",\n    \"\\n\",\n    \"        :type skip_final_cluster_snapshot: boolean\\n\",\n    \"        :param skip_final_cluster_snapshot: Determines whether a final snapshot of the cluster is created before Amazon Redshift deletes the cluster. If true, a final cluster snapshot is not created. 
If false, a final cluster snapshot is created before the cluster is deleted.\\n\",\n    \"\\n\",\n    \"        :rtype: dict of response\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        redshiftClient = handle.client('redshift', region_name=region)\\n\",\n    \"        response = redshiftClient.delete_cluster(\\n\",\n    \"            ClusterIdentifier=cluster_identifier,\\n\",\n    \"            SkipFinalClusterSnapshot=skip_final_cluster_snapshot\\n\",\n    \"            )\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"skip_final_cluster_snapshot\\\": \\\"False\\\",\\n\",\n    \"    \\\"cluster_identifier\\\": \\\"iter.get(\\\\\\\\\\\"cluster\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_low_cpu_clusters\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"cluster_identifier\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_low_cpu_clusters)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_redshift_cluster, lego_printer=aws_delete_redshift_cluster_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": 
\"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to filter low CPU utilization Redshift Clusters given threshold percentage and delete them. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete Redshift Clusters with Low CPU Utilization\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"cluster_identifiers\": {\n     \"description\": \"List of Redshift Clusters identifiers.\",\n     \"title\": \"cluster_identifiers\",\n     \"type\": \"array\"\n    },\n    \"duration_minutes\": {\n     \"default\": 5,\n     \"description\": \"Start time value in minutes to get the start time of metrics collection\",\n     \"title\": \"duration_minutes\",\n     \"type\": \"number\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region to get the Redshift Clusters from. Eg: \\\"us-west-2\\\". 
If nothing is given all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"threshold\": {\n     \"default\": 10,\n     \"description\": \"Threshold (in percent) to check for the CPU utilization of Redshift Clusters below the given threshold.\",\n     \"title\": \"threshold\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_Redshift_Clusters_with_Low_CPU_Utilization.json",
    "content": "{\n    \"name\": \"Delete Redshift Clusters with Low CPU Utilization\",\n    \"description\": \"Redshift clusters are the basic units of compute and storage in Amazon Redshift, and they can be configured to meet specific performance and cost requirements. In order to optimize the cost and performance of Redshift clusters, it is important to regularly monitor their CPU utilization. If a cluster is consistently showing low CPU utilization over an extended period of time, it may be a good idea to delete the cluster to save costs. This runbook helps us find such clusters and delete them.\",\n    \"uuid\": \"2a51c98c5c99d132011e285546e365402351fd3d09214041aea7592367bd48bf\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Delete_Unattached_EBS_Volume.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"b526603d-f9fd-4074-adc3-f83dfee4ec85\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"-unSkript-Runbooks-\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong></h3>\\n\",\n    \"<strong>To delete unattached EBS volume using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Delete-Unattached-EBS-Volume\\\">Delete Unattached EBS Volume</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Filter AWS Unattached EBS Volume</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Create Snapshot Of EBS Volume</a><br>3)<a href=\\\"#3\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Delete EBS Volume</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"id\": \"24e9d68b-95e2-4038-b276-fb4a4bf3992f\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:19:56.852Z\"\n    },\n    \"name\": \"Input Verification \",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification \"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if ebs_volume and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide region for the EBS Volumes!\\\")\\n\",\n    \"if region == None:\\n\",\n    
\"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"99141ad8-5135-43c0-a4d7-8507b2d51570\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-Unattached-EBS-Volumes\\\">Filter Unattached EBS Volumes</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Filter AWS Unattached EBS Volume</strong> action. This action filters all the EBS volumes from the given region and returns a list of all the unattached EBS volumes. It will execute if the <code>ebs_volume</code> parameter is not passed.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>unattached_volumes</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"3209f960-b7ea-4858-8dba-27fd7165ff06\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\",\n     \"CATEGORY_TYPE_AWS_EBC\"\n    ],\n    \"actionDescription\": \"Filter AWS Unattached EBS Volume\",\n    \"actionEntryFunction\": \"aws_filter_ebs_unattached_volumes\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"da23633be34037f023e1c1f56220ec75eb2729d7d8eb2bca9badec15ed0fd2ca\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Filter AWS Unattached EBS Volume\",\n    \"actionType\": 
\"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"375a1a2a5100b3a99ab867f9fcd54d46e2128dafc69dbbc03bb2083d56668cf4\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Filter AWS Unattached EBS Volume\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:19:59.901Z\"\n    },\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_filter_ebs_unattached_volumes\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Filter AWS Unattached EBS Volume\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unattached_volumes\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not ebs_volume\",\n    \"tags\": [\n     \"aws_filter_ebs_unattached_volumes\"\n    ],\n    \"uuid\": \"375a1a2a5100b3a99ab867f9fcd54d46e2128dafc69dbbc03bb2083d56668cf4\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from 
unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_ebs_unattached_volumes_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_ebs_unattached_volumes(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_ebs_unattached_volumes Returns an array of ebs volumes.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Used to filter the volume for specific region.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple with status result and list of EBS Unattached Volume.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result=[]\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            # Filtering the volume by region\\n\",\n    \"            ec2Client = handle.resource('ec2', region_name=reg)\\n\",\n    \"            volumes = ec2Client.volumes.all()\\n\",\n    \"\\n\",\n    \"            # collecting the volumes which has zero attachments\\n\",\n    \"            for volume in volumes:\\n\",\n    \"                volume_dict = {}\\n\",\n    \"                if len(volume.attachments) == 0:\\n\",\n    \"                    volume_dict[\\\"region\\\"] = reg\\n\",\n    \"                    volume_dict[\\\"volume_id\\\"] = volume.id\\n\",\n    \"                    result.append(volume_dict)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    
\"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not ebs_volume\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"unattached_volumes\\\")\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_ebs_unattached_volumes, lego_printer=aws_filter_ebs_unattached_volumes_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"44f706b0-5e9e-4851-88fb-668cd57b8139\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Unattached-EBS-Volume-Output\\\">Modify Unattached EBS Volume Output</h3>\\n\",\n    \"<p>In this action, we modify the output from step 1 and return a list of dictionary items for the unattached EBS volume.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: ebs_list</p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"id\": \"85f04201-71c3-48cd-ad39-cdb78addcd44\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:32:51.626Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Step-1 Output\",\n    \"orderProperties\": [],\n  
  \"tags\": [],\n    \"title\": \"Modify Step-1 Output\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"ebs_list = []\\n\",\n    \"try:\\n\",\n    \"    if unattached_volumes[0] == False:\\n\",\n    \"        for volume in unattached_volumes[1]:\\n\",\n    \"            ebs_list.append(volume)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if ebs_volume:\\n\",\n    \"        for i in ebs_volume:\\n\",\n    \"            data_dict = {}\\n\",\n    \"            data_dict[\\\"region\\\"] = region\\n\",\n    \"            data_dict[\\\"volume_id\\\"] = i\\n\",\n    \"            ebs_list.append(data_dict)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"9bcb8839-c160-4d2b-9af3-0f133d45bcd7\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-Snapshot-Of-EBS-Volume\\\">Create a Snapshot Of EBS Volume</h3>\\n\",\n    \"<p>Here we will use the unSkript&nbsp;<strong>Create Snapshot Of EBS Volume</strong> action. 
In this action, we will back up the data stored in EBS volumes by passing the list of unattached EBS volumes from step 1 and creating a snapshot of the EBS volume of the EC2 instance.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>volume_id</code>, <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>snapshot_metadata</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 30,\n   \"id\": \"f2c931e1-b221-416c-8493-270e34511035\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"b2fa154276e80ccc52ca79ee65d784371889f5011175fa9313f5c052dd44c5cb\",\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Create a snapshot for EBS volume of the EC2 Instance for backing up the data stored in EBS\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-01-30T20:14:12.838Z\"\n    },\n    \"id\": 177,\n    \"index\": 177,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      },\n      \"volume_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"volume_id\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"volume_id\": {\n        \"description\": \"Volume ID to create snapshot for particular volume e.g. 
vol-01eb21cfce30a956c\",\n        \"title\": \"Volume ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"volume_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_create_volumes_snapshot\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"region\": \"region\",\n       \"volume_id\": \"volume_id\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"ebs_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Create Snapshot For Volume\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"volume_id\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"snapshot_metadata\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2022 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_volumes_snapshot_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_volumes_snapshot(handle, volume_id: str, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_create_volumes_snapshot Returns an list containing SnapshotId.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: used to filter the volume for a given region.\\n\",\n    \"\\n\",\n    \"        
:type volume_id: string\\n\",\n    \"        :param volume_id: Volume ID to create snapshot for particular volume.\\n\",\n    \"\\n\",\n    \"        :rtype: List containing SnapshotId.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.resource('ec2', region_name=region)\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.create_snapshot(VolumeId=volume_id)\\n\",\n    \"        result.append(response)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise e\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"volume_id\\\": \\\"iter.get(\\\\\\\\\\\"volume_id\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"ebs_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"volume_id\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"snapshot_metadata\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_create_volumes_snapshot, lego_printer=aws_create_volumes_snapshot_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"bafc0d66-295a-4a46-815b-6b2fbb2c5d75\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2 Extension\"\n   },\n   \"source\": [\n    \"<h3 
id=\\\"Modify-Listener-ARNs-Output\\\">Modify Snapshot Action Output</h3>\\n\",\n    \"<p>In this action, we modify the output from step 2 and return a list of dictionary items for the volumes whose snapshot has been created.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: snapshot_volumes</p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 32,\n   \"id\": \"3de76e1e-ef8a-4dc9-9300-abcf4efb78ad\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-01-30T20:14:53.327Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Step-2\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import json\\n\",\n    \"\\n\",\n    \"snapshot_volumes = []\\n\",\n    \"for k, v in snapshot_metadata.items():\\n\",\n    \"    try:\\n\",\n    \"        if v[0].id:\\n\",\n    \"            snap_dict = json.loads(k.replace(\\\"\\\\'\\\", \\\"\\\\\\\"\\\"))\\n\",\n    \"            snapshot_volumes.append(snap_dict)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        pass\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"1c18b419-768f-4479-bdbf-d64fef6792c3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-3\"\n   },\n   \"source\": [\n    \"<p><strong>Delete EBS Volume</strong></p>\\n\",\n    \"<p>In this action, we delete the unattached EBS volume we get after steps 1 and 2.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>volume_id</code>, <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output 
variable: <code>deletion_information</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"f85c99de-3ba6-4aae-a85d-1b790e7a00a2\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"e8cccc03e1af323982c0ab9f06c01127c0481ca81943eb7e82e46245140b1059\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Delete AWS Volume by Volume ID\",\n    \"id\": 273,\n    \"index\": 273,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      },\n      \"volume_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"volume_id\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"volume_id\": {\n        \"description\": \"Volume ID.\",\n        \"title\": \"Volume ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"volume_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_delete_volumes\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"region\": \"region\",\n       \"volume_id\": \"volume_id\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"snapshot_volumes\"\n      }\n     }\n    
],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Delete AWS EBS Volume by Volume ID\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"volume_id\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"deletion_information\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(snapshot_volumes) > 0\",\n    \"tags\": [\n     \"aws_delete_volumes\"\n    ],\n    \"title\": \"Delete AWS EBS Volume by Volume ID\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2022 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_volumes_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint({\\\"Output\\\": output})\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_volumes(handle, volume_id: str, region: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_ebs_unattached_volumes Returns an array of ebs volumes.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) 
method.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Used to filter the volume for specific region.\\n\",\n    \"\\n\",\n    \"        :type volume_id: string\\n\",\n    \"        :param volume_id: Volume ID needed to delete particular volume.\\n\",\n    \"\\n\",\n    \"        :rtype: Result of the API in the List form.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2',region_name=region)\\n\",\n    \"\\n\",\n    \"    # Adding logic for deletion criteria\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.delete_volume(VolumeId=volume_id,)\\n\",\n    \"        result.append(response)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        result.append(e)\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"volume_id\\\": \\\"iter.get(\\\\\\\\\\\"volume_id\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"snapshot_volumes\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"volume_id\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(snapshot_volumes) > 0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"deletion_information\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    
task.execute(aws_delete_volumes, lego_printer=aws_delete_volumes_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"943ceb40-c278-45a7-81a0-d16a686d1db8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS actions to filter unattached EBS volumes, create snapshots of them, and delete them. To view the full platform capabilities of unSkript, please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete Unattached AWS EBS Volumes\",\n   \"parameters\": [\n    \"ebs_volume\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"ebs_volume\": {\n     \"default\": \"[\\\"abc\\\"]\",\n     \"description\": \"Volume Id of the unattached volume.\",\n     \"title\": \"ebs_volume\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"default\": \"abc\",\n     \"description\": \"AWS region e.g. 
\\\"us-west-2\\\"\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_Unattached_EBS_Volume.json",
    "content": "{\n  \"name\": \"Delete Unattached AWS EBS Volumes\",\n  \"description\": \"This runbook can be used to delete all unattached EBS Volumes within an AWS region. You can delete an Amazon EBS volume that you no longer need. After deletion, its data is gone and the volume can't be attached to any instance. So before deletion, you can store a snapshot of the volume, which you can use to re-create the volume later.\",\n  \"uuid\": \"da23633be34037f023e1c1f56220ec75eb2729d7d8eb2bca9badec15ed0fd2ca\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/AWS_Delete_Unused_AWS_Secrets.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&para;\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks&para;\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective&para;\\\">Objective<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<br><strong><em>Find and Delete unused AWS Secrets</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-Unused-AWS-Secrets\\\"><u>Delete Unused AWS Secrets</u><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Delete-Unused-AWS-Secrets\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview&para;\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<p>1.&nbsp; Find unused secrets<br>2.&nbsp;&nbsp;Delete unused secrets</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   
\"execution_count\": 5,\n   \"id\": \"714462ea-93a8-4e2e-b1b8-50480b857662\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:55:36.475Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if secret_names and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the AWS Secret ID's!\\\")\\n\",\n    \"if region == None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-unused-Secrets\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Filter unused Secrets<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Filter-unused-Secrets\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Using unSkript's Filter AWS Unused Secrets action, we will find unused secrets given a threshold number of days from their last use date. 
By default, the threshold number of days is set to <strong><span style=\\\"text-decoration: underline;\\\">30</span></strong>.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threshold_days</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>unused_secrets</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"9854addd-0a54-40f2-a1f5-5ccf4630dd87\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_IAM\",\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_COST_OPT\"\n    ],\n    \"actionDescription\": \"This action lists all the unused secrets from AWS by comparing the last used date with the given threshold.\",\n    \"actionEntryFunction\": \"aws_list_unused_secrets\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"2a9101a1cf7be1cf70a30de2199dca5b302c3096\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS List Unused Secrets\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"97f5fa81fca213403df2f1b3c17e6f83024b7df66f313f537abaa2a00dab745b\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action lists all the unused secrets from AWS by comparing the last used date with the given threshold.\",\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"max_age_days\": {\n       \"constant\": false,\n       
\"value\": \"int(threshold_days)\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"max_age_days\": {\n        \"default\": 30,\n        \"description\": \"The threshold to check the last use of the secret.\",\n        \"title\": \"Max Age Day's\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_list_unused_secrets\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS List Unused Secrets\",\n    \"orderProperties\": [\n     \"region\",\n     \"max_age_days\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unused_secrets\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not secret_names\",\n    \"tags\": [\n     \"aws_list_unused_secrets\"\n    ],\n    \"uuid\": \"97f5fa81fca213403df2f1b3c17e6f83024b7df66f313f537abaa2a00dab745b\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pytz\\n\",\n    \"\\n\",\n    
\"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_unused_secrets_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_unused_secrets(handle, region: str = \\\"\\\", max_age_days: int = 30) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_list_unused_secrets Returns an array of unused secrets.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS region.\\n\",\n    \"\\n\",\n    \"        :type max_age_days: int\\n\",\n    \"        :param max_age_days: The threshold to check the last use of the secret.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple with status result and list of unused secrets.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            # Filtering the secrets by region\\n\",\n    \"            ec2Client = handle.client('secretsmanager', region_name=reg)\\n\",\n    \"            res = aws_get_paginator(ec2Client, \\\"list_secrets\\\", \\\"SecretList\\\")\\n\",\n    \"            for secret in res:\\n\",\n    \"                secret_dict = {}\\n\",\n    \"                secret_id = secret['Name']\\n\",\n    \"                last_accessed_date = ec2Client.describe_secret(SecretId=secret_id)\\n\",\n    \"                if 'LastAccessedDate' in last_accessed_date:\\n\",\n    \"                    if last_accessed_date[\\\"LastAccessedDate\\\"] < datetime.now(pytz.UTC) - timedelta(days=int(max_age_days)):\\n\",\n    \"                        secret_dict[\\\"secret_name\\\"] = secret_id\\n\",\n    \"                        secret_dict[\\\"region\\\"] = reg\\n\",\n    \"                       
 result.append(secret_dict)\\n\",\n    \"                else:\\n\",\n    \"                    if last_accessed_date[\\\"CreatedDate\\\"] < datetime.now(pytz.UTC) - timedelta(days=int(max_age_days)):\\n\",\n    \"                        secret_dict[\\\"secret_name\\\"] = secret_id\\n\",\n    \"                        secret_dict[\\\"region\\\"] = reg\\n\",\n    \"                        result.append(secret_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"max_age_days\\\": \\\"int(threshold_days)\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not secret_names\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"unused_secrets\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_unused_secrets, lego_printer=aws_list_unused_secrets_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"45c3142e-4eb4-4ae7-9522-08fff5207d1f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Unused-Secrets\\\">Create List of Unused Secrets</h3>\\n\",\n    \"<p>This action filters regions that have no unused secrets and creates a list of those that 
have them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_unused_secrets</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"id\": \"6a10e980-9f17-4436-9166-90ea130aa316\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-18T13:49:30.460Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Unused Secrets\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Unused Secrets\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_unused_secrets = []\\n\",\n    \"try:\\n\",\n    \"    if unused_secrets[0] == False:\\n\",\n    \"        for secret in unused_secrets[1]:\\n\",\n    \"            all_unused_secrets.append(secret)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if secret_names:\\n\",\n    \"        for name in secret_names:\\n\",\n    \"            data_dict = {}\\n\",\n    \"            data_dict[\\\"region\\\"] = region\\n\",\n    \"            data_dict[\\\"secret_name\\\"] = name\\n\",\n    \"            all_unused_secrets.append(data_dict)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"978d3b61-2fd9-461d-89bd-534d2dcf3b63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-unused-Secrets\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete unused 
Secrets<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Delete-unused-Secrets\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This action deletes unused secrets found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, secret_name</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"2def0b0d-772b-4bee-896e-98463a564477\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"e83805f8b044c82cabcec54003ce692f54ab8781b70d6fde24b9915cb2b166a7\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Delete Secret\",\n    \"id\": 242,\n    \"index\": 242,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      },\n      \"secret_name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"secret_name\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"secret_name\": {\n        \"description\": \"Name of the 
secret to be deleted.\",\n        \"title\": \"Secret Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"secret_name\",\n       \"region\"\n      ],\n      \"title\": \"aws_delete_secret\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"region\": \"region\",\n       \"secret_name\": \"secret_name\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_unused_secrets\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Delete Secret\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"secret_name\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"if len(all_unused_secrets)!=0\",\n    \"tags\": [\n     \"aws_delete_secret\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2023 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_secret_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_secret(handle, region: str, secret_name: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_secret Dict with secret details.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :type secret_name: string\\n\",\n    \"        
:param secret_name: Name of the secret to be deleted.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with secret details.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        secrets_client = handle.client('secretsmanager', region_name=region)\\n\",\n    \"        response = secrets_client.delete_secret(SecretId=secret_name)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"secret_name\\\": \\\"iter.get(\\\\\\\\\\\"secret_name\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_unused_secrets\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"region\\\",\\\"secret_name\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"if len(all_unused_secrets)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_secret, lego_printer=aws_delete_secret_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    
\"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to filter unused secrets and delete those keys. To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete Unused AWS Secrets\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"region\": {\n     \"description\": \"AWS Regions to get the secrets from. Eg: us-west-2. If nothing is given all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"secret_names\": {\n     \"description\": \"List of AWS Secret Names. 
Eg: [\\\"sbox-alex/mongodbsecret\\\",\\\"user1/importsecret\\\"]\",\n     \"title\": \"secret_names\",\n     \"type\": \"array\"\n    },\n    \"threshold_days\": {\n     \"default\": \"30\",\n     \"description\": \"The threshold number of days to check the last use of the secret.\",\n     \"title\": \"threshold_days\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_Unused_AWS_Secrets.json",
    "content": "{\n    \"name\": \"Delete Unused AWS Secrets\",\n    \"description\": \"This runbook can be used to delete unused secrets in AWS.\",\n    \"uuid\": \"2a9101a1cf7be1cf70a30de2199dca5b302c3096\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }\n  "
  },
  {
    "path": "AWS/AWS_Delete_Unused_Log_Streams.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&para;\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective&para;\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find and Delete unused AWS Cloudwatch Log Streams</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-Unused-AWS-Secrets\\\"><u>Delete Unused AWS Log Streams</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find unused log streams</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete unused log streams</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e1f146c9-5180-4459-9c82-cf0e1da02785\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T13:31:24.986Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == None:\\n\",\n    \"    region = ''\\n\",\n    \"if log_stream_name 
and log_group_name and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the Logs!\\\")\\n\",\n    \"elif not log_group_name and region and log_stream_name:\\n\",\n    \"    raise SystemExit(\\\"Provide a Log Group Name!\\\")\\n\",\n    \"elif not log_stream_name and region and log_group_name:\\n\",\n    \"    raise SystemExit(\\\"Provide a Log Stream Name!\\\")\\n\",\n    \"elif not log_stream_name and not region and log_group_name:\\n\",\n    \"    raise SystemExit(\\\"Provide a Log Stream Name and region!\\\")\\n\",\n    \"elif not log_group_name and not region and log_stream_name:\\n\",\n    \"    raise SystemExit(\\\"Provide a Log Group Name and region!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-unused-Secrets\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Filter unused log streams</h3>\\n\",\n    \"<p>Using unSkript's AWS Filter Unused Log Streams action, we will find unused log streams given a threshold number of days from their last use date.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threshold_days</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>unused_log_streams</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"ce3e22b1-4f4e-4f16-a0e4-c57b95d0bb9a\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_LOGS\"\n    ],\n    
\"actionDescription\": \"This action lists all log streams that are unused for all the log groups by the given threshold.\",\n    \"actionEntryFunction\": \"aws_filter_unused_log_streams\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"64b6e7809ddfb1094901da74924ca3386510a1cd\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Filter Unused Log Stream\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"53df09f034bd51da247c01b663d9e7c84d0ca615cfed4bfe2545547a5a4466be\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action lists all log streams that are unused for all the log groups by the given threshold.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T13:56:11.674Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"time_period_in_days\": {\n       \"constant\": false,\n       \"value\": \"int(threshold_days)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"time_period_in_days\": {\n        \"default\": 30,\n        \"description\": \"(in days)\\u00a0The threshold to filter the unused log strams.\",\n        \"title\": \"Threshold (in days)\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [],\n      \"title\": 
\"aws_filter_unused_log_streams\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Filter Unused Log Stream\",\n    \"orderProperties\": [\n     \"time_period_in_days\",\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unused_log_streams\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not log_stream_name and not log_group_name\",\n    \"tags\": [\n     \"aws_filter_unused_log_streams\"\n    ],\n    \"uuid\": \"53df09f034bd51da247c01b663d9e7c84d0ca615cfed4bfe2545547a5a4466be\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, List, Tuple\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"import botocore.config\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unused_log_streams_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unused_log_streams(handle, region: str = \\\"\\\", time_period_in_days: int = 30) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_unused_log_streams Returns an array of unused log strams for all log groups.\\n\",\n    \"\\n\",\n    \"        
:type region: string\\n\",\n    \"        :param region: Used to filter the volume for specific region.\\n\",\n    \"\\n\",\n    \"        :type time_period_in_days: int\\n\",\n    \"        :param time_period_in_days: (in days)\\u00a0The threshold to filter the unused log strams.\\n\",\n    \"\\n\",\n    \"        :rtype: Array of unused log strams for all log groups.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    now = datetime.utcnow()\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            start_time = now - timedelta(days=time_period_in_days)\\n\",\n    \"            config = botocore.config.Config(retries={'max_attempts': 10})\\n\",\n    \"            ec2Client = handle.client('logs', region_name=reg, config=config)\\n\",\n    \"            response = aws_get_paginator(ec2Client, \\\"describe_log_groups\\\", \\\"logGroups\\\")\\n\",\n    \"            for log_group in response:\\n\",\n    \"                log_group_name = log_group['logGroupName']\\n\",\n    \"                response1 = aws_get_paginator(ec2Client, \\\"describe_log_streams\\\", \\\"logStreams\\\",\\n\",\n    \"                                            logGroupName=log_group_name,\\n\",\n    \"                                            orderBy='LastEventTime',\\n\",\n    \"                                            descending=True)\\n\",\n    \"\\n\",\n    \"                for log_stream in response1:\\n\",\n    \"                    unused_log_streams = {}\\n\",\n    \"                    last_event_time = log_stream.get('lastEventTimestamp')\\n\",\n    \"                    if last_event_time is None:\\n\",\n    \"                        # The log stream has never logged an event\\n\",\n    \"                        unused_log_streams[\\\"log_group_name\\\"] = 
log_group_name\\n\",\n    \"                        unused_log_streams[\\\"log_stream_name\\\"] = log_stream['logStreamName']\\n\",\n    \"                        unused_log_streams[\\\"region\\\"] = reg\\n\",\n    \"                        result.append(unused_log_streams)\\n\",\n    \"                    elif datetime.fromtimestamp(last_event_time/1000.0) < start_time:\\n\",\n    \"                        # The log stream has not logged an event in the past given days\\n\",\n    \"                        unused_log_streams[\\\"log_group_name\\\"] = log_group_name\\n\",\n    \"                        unused_log_streams[\\\"log_stream_name\\\"] = log_stream['logStreamName']\\n\",\n    \"                        unused_log_streams[\\\"region\\\"] = reg\\n\",\n    \"                        result.append(unused_log_streams)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"time_period_in_days\\\": \\\"int(threshold_days)\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not log_stream_name and not log_group_name\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"unused_log_streams\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_unused_log_streams, lego_printer=aws_filter_unused_log_streams_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   
\"cell_type\": \"markdown\",\n   \"id\": \"199591ef-cb3a-49b7-b515-3c6998050320\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Unused-Log-Streams&para;\\\">Create List of Unused Log Streams</h3>\\n\",\n    \"<p>This action filters regions that have no unused log streams and creates a list of those that have them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_unused_log_streams</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"c153b29e-fc95-445a-9400-4a04c63315b3\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Unused Log Streams\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Unused Log Streams\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_unused_log_streams = []\\n\",\n    \"try:\\n\",\n    \"    if unused_log_streams[0] == False:\\n\",\n    \"        if len(unused_log_streams[1])!=0:\\n\",\n    \"            all_unused_log_streams=unused_log_streams[1]\\n\",\n    \"except Exception:\\n\",\n    \"    for log_s in log_stream_name:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"region\\\"] = region\\n\",\n    \"        data_dict[\\\"log_group_name\\\"] = log_group_name\\n\",\n    \"        data_dict[\\\"log_stream_name\\\"] = log_s\\n\",\n    \"        all_unused_log_streams.append(data_dict)\\n\",\n    \"print(all_unused_log_streams)\"\n   ]\n  },\n  {\n   
\"cell_type\": \"markdown\",\n   \"id\": \"cc3c4396-dcb7-482e-8835-bb918fca83fa\",\n   \"metadata\": {\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-unused-Secrets\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete unused log streams</h3>\\n\",\n    \"<p>This action deleted unused log streams found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>all_unused_log_streams</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b90d92fe-69d9-4370-bec3-7b9b68e70169\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionOutputType\": null,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"1fbb5c722fc8f70530e452566e341be44ecf4df4a62e4f2253508a1d47288745\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"AWS Delete Log Stream\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"log_group_name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"log_group_name\\\\\\\\\\\")\\\"\"\n      },\n      \"log_stream_name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"log_stream_name\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"log_group_name\": {\n        \"description\": \"Name of the log group.\",\n        \"title\": \"Log Group Name\",\n      
  \"type\": \"string\"\n       },\n       \"log_stream_name\": {\n        \"description\": \"Name of the log stream.\",\n        \"title\": \"Log Stream Name\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"log_group_name\",\n       \"log_stream_name\",\n       \"region\"\n      ],\n      \"title\": \"aws_delete_log_stream\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"log_group_name\": \"log_group_name\",\n       \"log_stream_name\": \"log_stream_name\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_unused_log_streams\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"metadata\": {\n     \"action_bash_command\": false,\n     \"action_categories\": [\n      \"CATEGORY_TYPE_DEVOPS\",\n      \"CATEGORY_TYPE_SRE\",\n      \"CATEGORY_TYPE_AWS\"\n     ],\n     \"action_description\": \"AWS Delete Log Stream\",\n     \"action_entry_function\": \"aws_delete_log_stream\",\n     \"action_is_check\": false,\n     \"action_is_remediation\": false,\n     \"action_needs_credential\": true,\n     \"action_next_hop\": null,\n     \"action_next_hop_parameter_mapping\": null,\n     \"action_nouns\": null,\n     \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n     \"action_supports_iteration\": true,\n     \"action_supports_poll\": true,\n     \"action_title\": \"AWS Delete Log Stream\",\n     \"action_type\": \"LEGO_TYPE_AWS\",\n     \"action_verbs\": null,\n     \"action_version\": \"1.0.0\"\n    },\n    \"name\": \"AWS Delete Log Stream\",\n    \"orderProperties\": [\n     \"log_group_name\",\n     
\"log_stream_name\",\n     \"region\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_unused_log_streams)==0\",\n    \"tags\": [\n     \"aws_delete_log_stream\"\n    ],\n    \"title\": \"AWS Delete Log Stream\",\n    \"uuid\": \"1fbb5c722fc8f70530e452566e341be44ecf4df4a62e4f2253508a1d47288745\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_log_stream_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_log_stream(handle, log_group_name: str, log_stream_name: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_log_stream Deletes a log stream.\\n\",\n    \"\\n\",\n    \"        :type log_group_name: string\\n\",\n    \"        :param log_group_name: Name of the log group.\\n\",\n    \"\\n\",\n    \"        :type log_stream_name: string\\n\",\n    \"        :param log_stream_name: Name of the log stream.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the deleted log stream info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        log_Client = handle.client('logs', region_name=region)\\n\",\n    \"        response = log_Client.delete_log_stream(\\n\",\n    \"            logGroupName=log_group_name,\\n\",\n    \"            logStreamName=log_stream_name)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    
\"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"log_group_name\\\": \\\"iter.get(\\\\\\\\\\\"log_group_name\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"log_stream_name\\\": \\\"iter.get(\\\\\\\\\\\"log_stream_name\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_unused_log_streams\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"log_group_name\\\",\\\"log_stream_name\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_unused_log_streams)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_log_stream, lego_printer=aws_delete_log_stream_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to filter unused log streams before a given threshold number of days and delete them. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete Unused AWS Log Streams\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"log_group_name\": {\n     \"description\": \"Log group name to get the log streams from.\",\n     \"title\": \"log_group_name\",\n     \"type\": \"string\"\n    },\n    \"log_stream_name\": {\n     \"description\": \"List of log streams to delete. Eg: [\\\"log_stream_1\\\", \\\"log_stream_2\\\"]\",\n     \"title\": \"log_stream_name\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region to get the log streams from. Eg: \\\"us-west-2\\\". 
If no region is given, all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"threshold_days\": {\n     \"default\": \"30\",\n     \"description\": \"The threshold number of days used to identify unused log streams.\",\n     \"title\": \"threshold_days\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_Unused_Log_Streams.json",
    "content": "{\n    \"name\": \"Delete Unused AWS Log Streams\",\n    \"description\": \"CloudWatch retains empty log streams after the data retention period expires. Those log streams should be deleted to save costs. This runbook finds log streams that have been unused for longer than a threshold number of days and helps you delete them.\",\n    \"uuid\": \"64b6e7809ddfb1094901da74924ca3386510a1cd\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Delete_Unused_NAT_Gateways.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"625dfbc1-d348-4423-97b8-df672384cdd1\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<strong>Delete unused NAT gateways using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Delete-Unused-NAT-Gateways\\\">Delete Unused NAT Gateways<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Delete-Unused-NAT-Gateways\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<p>1. AWS Find Unused NAT Gateways<br>2. 
AWS Delete NAT Gateway</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"12b4137f-26f9-47d9-8b2f-69b06c928fb3\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T18:23:47.311Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if nat_gateway_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Enter region for given NAT gateways!\\\")\\n\",\n    \"if region == None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"4c018554-a630-4b6d-a7c8-043f299f156f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-Unattached-EBS-Volumes\\\">AWS Find Unused NAT Gateways</h3>\\n\",\n    \"<p>Here we will use unSkript&nbsp;<strong>AWS Find Unused NAT Gateways</strong> action. This action filters all the NAT Gateways from the given region and returns a list of all the unused NAT Gateways. 
It will execute only if the nat_gateway_ids&nbsp;parameter is not passed.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>unused_nat_gateways</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"1839b310-58d6-4746-85e7-5f136f74e237\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_NAT_GATEWAY\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"This action gets all NAT gateways that have had zero traffic over the given number of days\",\n    \"actionEntryFunction\": \"aws_filter_unused_nat_gateway\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"f2b1eecf9b4f727ec80fc4d4f5c7915b788cafe969552af0a26f8db9747bbcd4\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Find Unused NAT Gateways\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"0f0c137beaf6a9246508393d1e868cea529d30a88631cd0f321799acbfbd47bb\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action gets all NAT gateways that have had zero traffic over the given number of days\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"number_of_days\": {\n       \"constant\": false,\n       \"value\": \"int(number_of_days)\"\n      },\n      \"region\": {\n       \"constant\": 
false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"number_of_days\": {\n        \"description\": \"Number of days to check the Datapoints.\",\n        \"title\": \"Number of Days\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_filter_unused_nat_gateway\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Find Unused NAT Gateways\",\n    \"orderProperties\": [\n     \"region\",\n     \"number_of_days\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unused_nat_gateways\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not nat_gateway_ids\",\n    \"tags\": [\n     \"aws_filter_unused_nat_gateway\"\n    ],\n    \"uuid\": \"0f0c137beaf6a9246508393d1e868cea529d30a88631cd0f321799acbfbd47bb\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unused_nat_gateway_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    
\"\\n\",\n    \"@beartype\\n\",\n    \"def is_nat_gateway_used(handle, nat_gateway, start_time, end_time,number_of_days):\\n\",\n    \"    datapoints = []\\n\",\n    \"    if nat_gateway['State'] != 'deleted':\\n\",\n    \"        # Get the metrics data for the specified NAT Gateway over the last 7 days\\n\",\n    \"        metrics_data = handle.get_metric_statistics(\\n\",\n    \"            Namespace='AWS/NATGateway',\\n\",\n    \"            MetricName='ActiveConnectionCount',\\n\",\n    \"            Dimensions=[\\n\",\n    \"                {\\n\",\n    \"                    'Name': 'NatGatewayId',\\n\",\n    \"                    'Value': nat_gateway['NatGatewayId']\\n\",\n    \"                },\\n\",\n    \"            ],\\n\",\n    \"            StartTime=start_time,\\n\",\n    \"            EndTime=end_time,\\n\",\n    \"            Period=86400*number_of_days,\\n\",\n    \"            Statistics=['Sum']\\n\",\n    \"        )\\n\",\n    \"        datapoints += metrics_data['Datapoints']\\n\",\n    \"    if len(datapoints) == 0 or metrics_data['Datapoints'][0]['Sum']==0:\\n\",\n    \"        return False\\n\",\n    \"    else:\\n\",\n    \"        return True\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unused_nat_gateway(handle, number_of_days: int = 7, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_natgateway_by_vpc Returns an array of NAT gateways.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region to filter NAT Gateways.\\n\",\n    \"\\n\",\n    \"        :type number_of_days: int\\n\",\n    \"        :param number_of_days: Number of days to check the Datapoints.\\n\",\n    \"\\n\",\n    \"        :rtype: Array of NAT gateways.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    end_time = datetime.utcnow()\\n\",\n    \"    start_time = end_time - timedelta(days=number_of_days)\\n\",\n    \"    all_regions = 
[region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            ec2Client = handle.client('ec2', region_name=reg)\\n\",\n    \"            cloudwatch = handle.client('cloudwatch', region_name=reg)\\n\",\n    \"            response = ec2Client.describe_nat_gateways()\\n\",\n    \"            for nat_gateway in response['NatGateways']:\\n\",\n    \"                nat_gateway_info = {}\\n\",\n    \"                if not is_nat_gateway_used(cloudwatch, nat_gateway, start_time, end_time,number_of_days):\\n\",\n    \"                    nat_gateway_info[\\\"nat_gateway_id\\\"] = nat_gateway['NatGatewayId']\\n\",\n    \"                    nat_gateway_info[\\\"region\\\"] = reg\\n\",\n    \"                    result.append(nat_gateway_info)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"number_of_days\\\": \\\"int(number_of_days)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not nat_gateway_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"unused_nat_gateways\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_unused_nat_gateway, lego_printer=aws_filter_unused_nat_gateway_printer, hdl=hdl, args=args)\"\n   ]\n  },\n 
 {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"c597d85b-9748-421b-a3fe-e6499fa167f4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Snapshot-Output\\\">Modify Unused NAT Gateways Output</h3>\\n\",\n    \"<p>In this action, we modify the output from Step 1 and return a list of unused NAT gateways.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: nat_gateways</p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 24,\n   \"id\": \"2fdc0c0f-ea85-498a-88c1-04352631c8f8\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-27T10:19:08.248Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Unused NAT Gateways Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Unused NAT Gateways Output\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"nat_gateways = []\\n\",\n    \"try:\\n\",\n    \"    if unused_nat_gateways[0] == False:\\n\",\n    \"        for nat in unused_nat_gateways[1]:\\n\",\n    \"            nat_gateways.append(nat)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if nat_gateway_ids:\\n\",\n    \"        for nat in nat_gateway_ids:\\n\",\n    \"            nat_ids = {}\\n\",\n    \"            nat_ids[\\\"nat_gateway_id\\\"] = nat\\n\",\n    \"            nat_ids[\\\"region\\\"] = region\\n\",\n    \"            nat_gateways.append(nat_ids)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": 
\"697f38a7-613c-4616-a37a-32b977f4faa0\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-EBS-Volume\\\">AWS Delete NAT Gateway</h3>\\n\",\n    \"<p>Here we will use the unSkript <strong>AWS Delete NAT Gateway</strong> action. In this action, we will pass the list of unused NAT Gateways from Step 1 and delete those NAT Gateways.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>nat_gateway_id</code>, <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>delete_status</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"24c2d14a-a543-4251-8243-d12c052f89b1\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"c24c20b1d1d8a9f31ddbf6f2adf96cbd37df3a0fcf99e4a9a85b1f8b897ad8d4\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Delete NAT Gateway\",\n    \"id\": 240,\n    \"index\": 240,\n    \"inputData\": [\n     {\n      \"nat_gateway_id\": {\n       \"constant\": false,\n       \"value\": 
\"\\\"iter.get(\\\\\\\\\\\"nat_gateway_id\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"nat_gateway_id\": {\n        \"description\": \"ID of the NAT Gateway.\",\n        \"title\": \"NAT Gateway ID\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"nat_gateway_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_delete_nat_gateway\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"nat_gateway_id\": \"nat_gateway_id\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"nat_gateways\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Delete NAT Gateway\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"nat_gateway_id\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"delete_status\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(nat_gateways) != 0\",\n    \"tags\": [\n     \"aws_delete_nat_gateway\"\n    ],\n    \"title\": \"AWS Delete NAT Gateway\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    
\"from typing import Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_nat_gateway_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_nat_gateway(handle, nat_gateway_id: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_nat_gateway Returns an dict of NAT gateways information.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type nat_gateway_id: string\\n\",\n    \"        :param nat_gateway_id: ID of the NAT Gateway.\\n\",\n    \"\\n\",\n    \"        :rtype: dict of NAT gateways information.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"        response = ec2Client.delete_nat_gateway(NatGatewayId=nat_gateway_id)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"nat_gateway_id\\\": \\\"iter.get(\\\\\\\\\\\"nat_gateway_id\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"nat_gateways\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"nat_gateway_id\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": 
\\\"len(nat_gateways) != 0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"delete_status\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_nat_gateway, lego_printer=aws_delete_nat_gateway_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"5954b863-9e8b-42f7-be29-5aa9afe3afd4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS actions to filter unused NAT Gateways and delete those. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete Unused NAT Gateways\",\n   \"parameters\": [\n    \"number_of_days\",\n    \"nat_gateway_ids\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"definitions\": null,\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   
\"definitions\": null,\n   \"properties\": {\n    \"nat_gateway_ids\": {\n     \"description\": \"NAT Gateways which needs to delete.\",\n     \"title\": \"nat_gateway_ids\",\n     \"type\": \"array\"\n    },\n    \"number_of_days\": {\n     \"default\": 7,\n     \"description\": \"A number of days to check NAT gateways are not used.\",\n     \"title\": \"number_of_days\",\n     \"type\": \"number\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_Unused_NAT_Gateways.json",
    "content": "{\n    \"name\": \"Delete Unused NAT Gateways\",\n    \"description\": \"This runbook searches for unused NAT gateways across all regions and deletes them.\",\n    \"uuid\": \"f2b1eecf9b4f727ec80fc4d4f5c7915b788cafe969552af0a26f8db9747bbcd4\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }\n  "
  },
  {
    "path": "AWS/AWS_Delete_Unused_Route53_Healthchecks.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"82eebdfd-c880-40df-bd6d-5b546c92164b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find unwanted/unused healthchecks in AWS Route53 and delete them.</em></strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Delete-Unused-AWS-Route53-Healthchecks\\\"><strong><u>Delete Unused AWS Route53 Healthchecks</u></strong></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Get unused Route53 healthchecks</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete the healthchecks</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"id\": \"2066a59f-e43f-4e77-8e61-05ae0745e335\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T12:38:45.888Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if hosted_zone_id is None:\\n\",\n    \"    hosted_zone_id = ''\\n\",\n    \"if health_check_ids and not hosted_zone_id:\\n\",\n    \"    raise 
SystemExit(\\\"Provide a Hosted Zone ID for the Health Check ID's!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2020e8d0-ba3b-4c71-84b2-10917465a27e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-TTLs-under-X-hours\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get unused Route53 Healthchecks</h3>\\n\",\n    \"<p>Using unSkript's Get Route53 Unused Healthchecks , we will find the healthcheck IDs that are not being used by any record set to monitor their health.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>hosted_zone_id(Optional)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>unused_health_checks</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"92ad5e21-d5ca-419a-b3ae-ba8d524da815\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_ROUTE53\"\n    ],\n    \"actionDescription\": \"AWS get Unused Route53 Health Checks\",\n    \"actionEntryFunction\": \"aws_get_unused_route53_health_checks\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS get Unused Route53 Health Checks\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": 
\"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"7bde6d48cf5e9b2b984335fb1434716a3dba113da0762bc70f57f4246b91df07\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"AWS get Unused Route53 Health Checks\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T12:39:36.142Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"hosted_zone_id\": {\n       \"constant\": false,\n       \"value\": \"hosted_zone_id\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"hosted_zone_id\": {\n        \"default\": \"\",\n        \"description\": \"Used to filter the health checks for a specific hosted zone.\",\n        \"title\": \"Hosted Zone ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_unused_route53_health_checks\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS get Unused Route53 Health Checks\",\n    \"orderProperties\": [\n     \"hosted_zone_id\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unused_health_checks\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not heath_check_ids\",\n    \"tags\": [\n     \"aws_get_unused_route53_health_checks\"\n    ],\n    \"uuid\": \"7bde6d48cf5e9b2b984335fb1434716a3dba113da0762bc70f57f4246b91df07\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from 
typing import Optional, Tuple\\n\",\n    \"import pprint\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_unused_route53_health_checks_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_unused_route53_health_checks(handle, hosted_zone_id: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_unused_route53_health_checks Returns a list of unused Route 53 health checks.\\n\",\n    \"\\n\",\n    \"        :type hosted_zone_id: string\\n\",\n    \"        :param hosted_zone_id: Optional. Used to filter the health checks for a specific hosted zone.\\n\",\n    \"\\n\",\n    \"        :rtype: A tuple containing a list of dicts with information about the unused health checks.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    try:\\n\",\n    \"        route_client = handle.client('route53')\\n\",\n    \"        health_checks = aws_get_paginator(route_client, \\\"list_health_checks\\\", \\\"HealthChecks\\\")\\n\",\n    \"        if hosted_zone_id:\\n\",\n    \"            hosted_zones = [{'Id': hosted_zone_id}]\\n\",\n    \"        else:\\n\",\n    \"            hosted_zones = aws_get_paginator(route_client, \\\"list_hosted_zones\\\", \\\"HostedZones\\\")\\n\",\n    \"        used_health_check_ids = set()\\n\",\n    \"        for zone in hosted_zones:\\n\",\n    \"            record_sets = aws_get_paginator(route_client, \\\"list_resource_record_sets\\\", \\\"ResourceRecordSets\\\", HostedZoneId=zone['Id'])\\n\",\n    \"            for record_set in record_sets:\\n\",\n    \"                if 'HealthCheckId' in record_set:\\n\",\n    \"                    used_health_check_ids.add(record_set['HealthCheckId'])\\n\",\n    \"        for hc in 
health_checks:\\n\",\n    \"            if hc['Id'] not in used_health_check_ids:\\n\",\n    \"                result.append(hc['Id'])\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise e\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(outputName=\\\"unused_health_checks\\\")\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"hosted_zone_id\\\": \\\"hosted_zone_id\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not heath_check_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_unused_route53_health_checks, lego_printer=aws_get_unused_route53_health_checks_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a311041f-620a-4b6b-914f-e52c6c3a71f4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Lower-TTL-records\\\">Create List of Unused Healthchecks</h3>\\n\",\n    \"<p>This action filters the output from Step 1 to get the non empty values</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_unused_health_checks</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   
\"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b85ce542-bdf0-44d2-9e75-213002d5c036\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T12:41:37.390Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Unused Healthchecks\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Unused Healthchecks\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_unused_health_checks = []\\n\",\n    \"try:\\n\",\n    \"    for res in unused_health_checks:\\n\",\n    \"        if type(res)==bool:\\n\",\n    \"            if res == False:\\n\",\n    \"                continue\\n\",\n    \"        elif type(res)==list:\\n\",\n    \"            if len(res)!=0:\\n\",\n    \"                all_unused_health_checks=res\\n\",\n    \"except Exception as e:\\n\",\n    \"    all_unused_health_checks = health_check_ids\\n\",\n    \"print(all_unused_health_checks)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9fb3704a-9b19-49c4-96ab-a982217bbcd3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Change-TTL-Value\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete Route53 Healthcheck</h3>\\n\",\n    \"<p>This action deletes the Route53 healthcheck found in Step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>health_check_id</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"c84fac99-c05b-4dee-9d8a-80bfdd7a3e60\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    
\"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionDescription\": \"AWS Delete Route 53 HealthCheck\",\n    \"actionEntryFunction\": \"aws_delete_route53_health_check\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Delete Route 53 HealthCheck\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"33e50f6c5813f3b01f4d63f7ec8d3eb363873c62f28d40d623acc9091c026270\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"AWS Delete Route 53 HealthCheck\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-21T12:40:14.522Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"health_check_id\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"health_check_id\": {\n        \"description\": \"The ID of the Health Check to delete.\",\n        \"title\": \"Health Check ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"health_check_id\"\n      ],\n      \"title\": \"aws_delete_route53_health_check\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"health_check_id\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"all_unused_health_checks\"\n      
}\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Delete Route 53 HealthCheck\",\n    \"orderProperties\": [\n     \"health_check_id\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_unused_health_checks)!=0\",\n    \"tags\": [\n     \"aws_delete_route53_health_check\"\n    ],\n    \"uuid\": \"33e50f6c5813f3b01f4d63f7ec8d3eb363873c62f28d40d623acc9091c026270\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_route53_health_check_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_route53_health_check(handle, health_check_id: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_route53_health_check Deletes a Route 53 Health Check.\\n\",\n    \"\\n\",\n    \"        :type health_check_id: string\\n\",\n    \"        :param health_check_id: The ID of the Health Check to delete.\\n\",\n    \"\\n\",\n    \"        :rtype: dict of health check information.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        route_client = handle.client('route53')\\n\",\n    \"        response = route_client.delete_health_check(HealthCheckId=health_check_id)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    
\"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"health_check_id\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_unused_health_checks\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"health_check_id\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_unused_health_checks)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_route53_health_check, lego_printer=aws_delete_route53_health_check_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9c7430c8-3660-45bd-90ef-9ceab77e3daa\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able delete unused healtcheck ID's which will help in saving your AWS costs. 
To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete Unused Route53 HealthChecks\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"health_check_ids\": {\n     \"description\": \"List of Route53 Health check IDs\",\n     \"title\": \"health_check_ids\",\n     \"type\": \"array\"\n    },\n    \"hosted_zone_id\": {\n     \"description\": \"The ID of the hosted zone that contains the resource record sets.\",\n     \"title\": \"hosted_zone_id\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Delete_Unused_Route53_Healthchecks.json",
    "content": "{\n    \"name\": \"Delete Unused Route53 HealthChecks\",\n    \"description\": \"When we associate healthchecks with an endpoint, Amazon Route53 sends health check requests to the endpoint IP address. These health checks validate that the endpoint IP addresses are operating as intended. There may be multiple reasons that healtchecks are lying usused for example- health check was mistakenly configured against your application by another customer, health check was configured from your account for testing purposes but wasn't deleted when testing was complete, health check was based on domain names and hence requests were sent due to DNS caching,  Elastic Load Balancing service updated its public IP addresses due to scaling, and the IP addresses were reassigned to your load balancer, and many more. This runbook finds such healthchecks and deletes them to save AWS costs.\",\n    \"uuid\": \"10a363abaf49098a0376eae46a6bfac421e606952369fc6ea02768ad319dd0be\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Detach_ec2_Instance_from_ASG.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"9a175295-d9f6-47f1-bab9-c4b9d6cdf375\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center>\\n\",\n    \"<h1 id=\\\"\\\"><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"></h1>\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong></h3>\\n\",\n    \"<strong>Detach EC2 Instance from Auto Scaling Group</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Detach-EC2-Instance-from-Auto-Scaling-Group\\\"><strong>Detach EC2 Instance from Auto Scaling Group</strong></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\"><sub>Steps Overview</sub></h1>\\n\",\n    \"<p>1. 
&nbsp;Get Unhealthy instances from ASG</p>\\n\",\n    \"<p>2.&nbsp; AWS Detach Instances From AutoScaling Group</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"d4246eb1-a222-4926-8d78-39ed59991674\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-18T09:12:04.823Z\"\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if instance_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the instance!\\\")\\n\",\n    \"if region is None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"3125e39b-1f1a-4927-b0ad-8589898dce2e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-AWS-AutoScaling-Group-Instances\\\">Get AWS AutoScaling Group Instances</h3>\\n\",\n    \"<p>Using unSkript's <strong>Get AWS AutoScaling Group Instances</strong> action, we list all the EC2 instances for a given region with Auto Scaling Group name. 
This action only executes if the instance_ids and region have been given as parameters.</p>\\n\",\n    \"<ul>\\n\",\n    \"<li><strong>Input parameters:</strong>&nbsp; <code>instance_ids, region</code></li>\\n\",\n    \"<li><strong>Output variable:</strong>&nbsp; <code>asg_instance</code></li>\\n\",\n    \"</ul>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"aef56afa-322d-47ba-8396-0a6f8f466562\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\",\n     \"CATEGORY_TYPE_AWS_ASG\"\n    ],\n    \"actionDescription\": \"Use This Action to Get AWS AutoScaling Group Instances\",\n    \"actionEntryFunction\": \"aws_get_auto_scaling_instances\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Get AWS AutoScaling Group Instances\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"4baa10996438c3e1acea659c68a4e383d0be4484f8ec6fe2a6d4b883fcb592c3\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Use This Action to Get AWS AutoScaling Group Instances\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-18T09:17:08.563Z\"\n    },\n    \"id\": 7,\n    \"index\": 7,\n    \"inputData\": [\n     {\n      \"instance_ids\": {\n       \"constant\": false,\n       \"value\": \"instance_ids\"\n      },\n      \"region\": {\n       
\"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_ids\": {\n        \"description\": \"List of instances.\",\n        \"items\": {},\n        \"title\": \"Instance IDs\",\n        \"type\": \"array\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the ECS service.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_ids\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_auto_scaling_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS AutoScaling Group Instances\",\n    \"orderProperties\": [\n     \"instance_ids\",\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"asg_instance\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(instance_ids)>0\",\n    \"tags\": [],\n    \"uuid\": \"4baa10996438c3e1acea659c68a4e383d0be4484f8ec6fe2a6d4b883fcb592c3\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_auto_scaling_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(tabulate(output, headers='keys'))\\n\",\n    \"\\n\",\n   
 \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_auto_scaling_instances(handle, instance_ids: list, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_get_auto_scaling_instances List of Dict with instanceId and attached groups.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type instance_ids: list\\n\",\n    \"        :param instance_ids: List of instances.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region to filter instances.\\n\",\n    \"\\n\",\n    \"        :rtype: List of Dict with instanceId and attached groups.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    ec2Client = handle.client('autoscaling', region_name=region)\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.describe_auto_scaling_instances(InstanceIds=instance_ids)\\n\",\n    \"        for group in response[\\\"AutoScalingInstances\\\"]:\\n\",\n    \"            group_dict = {}\\n\",\n    \"            group_dict[\\\"InstanceId\\\"] = group[\\\"InstanceId\\\"]\\n\",\n    \"            group_dict[\\\"AutoScalingGroupName\\\"] = group[\\\"AutoScalingGroupName\\\"]\\n\",\n    \"            group_dict[\\\"region\\\"] = region\\n\",\n    \"            result.append(group_dict)\\n\",\n    \"    except Exception as error:\\n\",\n    \"        err = {\\\"Error\\\":error}\\n\",\n    \"        result.append(err)\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_ids\\\": \\\"instance_ids\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(instance_ids)>0\\\",\\n\",\n    \"    \\\"condition_result\\\": 
true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"asg_instance\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_auto_scaling_instances, lego_printer=aws_get_auto_scaling_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"561775c0-545a-4ca2-9c79-11b919f7dac0\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 B\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 B\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-Unhealthy-instances-from-ASG\\\">Get Unhealthy instances from ASG</h3>\\n\",\n    \"<p>Here we will use unSkript&nbsp;<strong>Get Unhealthy instances from ASG</strong> action. This action filters all the unhealthy instances from the Auto Scaling Group. It will execute if the <code>instance_id</code> parameter is not given.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>unhealthy_instance</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"45ecf6d1-3a07-4e97-b8e7-a8b447e568a7\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_ASG\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"Get Unhealthy instances from Auto Scaling Group\",\n    \"actionEntryFunction\": \"aws_filter_unhealthy_instances_from_asg\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": 
[\n     \"680ad9d119afab5f647e1afe7826b88d89bf35304954c3328e65a2fcf470f930\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Get Unhealthy instances from ASG\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"5de92ab7221455580796b1ebe93c61e3fec51d5dac22e907f96b6e0d7564e0ad\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Get Unhealthy instances from Auto Scaling Group\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-18T09:30:18.292Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region of the ASG.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_filter_unhealthy_instances_from_asg\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get Unhealthy instances from ASG\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unhealthy_instance\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not instance_ids\",\n    \"tags\": [],\n    \"uuid\": 
\"5de92ab7221455580796b1ebe93c61e3fec51d5dac22e907f96b6e0d7564e0ad\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unhealthy_instances_from_asg_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unhealthy_instances_from_asg(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_unhealthy_instances_from_asg gives unhealthy instances from ASG\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS region.\\n\",\n    \"\\n\",\n    \"        :rtype: CheckOutput with status result and list of unhealthy instances from ASG.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            asg_client = handle.client('autoscaling', region_name=reg)\\n\",\n    \"            response = aws_get_paginator(asg_client, \\\"describe_auto_scaling_instances\\\", \\\"AutoScalingInstances\\\")\\n\",\n    \"\\n\",\n    \"            # filter instances to only include those that are in an \\\"unhealthy\\\" state\\n\",\n    \"            for instance in response:\\n\",\n    \"                data_dict = 
{}\\n\",\n    \"                if instance['HealthStatus'] == 'Unhealthy':\\n\",\n    \"                    data_dict[\\\"InstanceId\\\"] = instance[\\\"InstanceId\\\"]\\n\",\n    \"                    data_dict[\\\"AutoScalingGroupName\\\"] = instance[\\\"AutoScalingGroupName\\\"]\\n\",\n    \"                    data_dict[\\\"region\\\"] = reg\\n\",\n    \"                    result.append(data_dict)\\n\",\n    \"\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not instance_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"unhealthy_instance\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_unhealthy_instances_from_asg, lego_printer=aws_filter_unhealthy_instances_from_asg_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"32d0f938-ad56-453c-89be-52c139228017\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Output\\\">Modify Output</h3>\\n\",\n    \"<p>In this action, we modify the output from step 1 A and step 1 B to 
return a list of dictionary items for the unhealthy instances from ASG.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: detach_instance_list</p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 12,\n   \"id\": \"e47022b7-ec19-4149-a7a7-3e2ebde54f87\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T13:23:56.168Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Output\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"detach_instance_list = []\\n\",\n    \"try:\\n\",\n    \"    if unhealthy_instance:\\n\",\n    \"        if unhealthy_instance[0] == False:\\n\",\n    \"            for instance in unhealthy_instance[1]:\\n\",\n    \"                detach_instance_list.append(instance)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if instance_ids and asg_instance:\\n\",\n    \"        for instance in asg_instance:\\n\",\n    \"            detach_instance_list.append(instance)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"614ed424-9394-449e-9dc6-5547f765470a\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"AWS-Detach-Instances-From-AutoScaling-Group\\\">AWS Detach Instances From AutoScaling Group</h3>\\n\",\n    \"<p>In this action, we detach the AWS unhealthy instances from the Auto Scaling Group which we get from step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>instance_ids, group_name, 
region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>detach_output</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"id\": \"95603003-ac39-493a-af8a-f1910784a6f2\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"8e6e08f606d40e2f4481128d356cc67d30be72349074c513627b3f03a178cf6e\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Use This Action to AWS Detach Instances From AutoScaling Group\",\n    \"id\": 284,\n    \"index\": 284,\n    \"inputData\": [\n     {\n      \"group_name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"AutoScalingGroupName\\\\\\\\\\\")\\\"\"\n      },\n      \"instance_ids\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"InstanceId\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"group_name\": {\n        \"description\": \"Name of AutoScaling Group.\",\n        \"title\": \"Group Name\",\n        \"type\": \"string\"\n       },\n       \"instance_ids\": {\n        \"description\": \"List of instances.\",\n        \"title\": \"Instance IDs\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of autoscaling group.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n   
   \"required\": [\n       \"instance_ids\",\n       \"group_name\",\n       \"region\"\n      ],\n      \"title\": \"aws_detach_autoscaling_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"group_name\": \"AutoScalingGroupName\",\n       \"instance_ids\": \"InstanceId\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"detach_instance_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Detach Instances From AutoScaling Group\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"instance_ids\",\n     \"group_name\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"detach_output\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(detach_instance_list)>0\",\n    \"tags\": [\n     \"aws_detach_autoscaling_instances\"\n    ],\n    \"title\": \"AWS Detach Instances From AutoScaling Group\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_detach_autoscaling_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_detach_autoscaling_instances(\\n\",\n    \"    handle,\\n\",\n    \"    instance_ids: str,\\n\",\n    \"    group_name: str,\\n\",\n    \"    region: str\\n\",\n    \") 
-> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_detach_autoscaling_instances detaches an instance from an autoscaling group.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type instance_ids: string\\n\",\n    \"        :param instance_ids: ID of the instance to detach.\\n\",\n    \"\\n\",\n    \"        :type group_name: string\\n\",\n    \"        :param group_name: Name of AutoScaling Group.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region of autoscaling group.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the detach instance info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    asgClient = handle.client(\\\"autoscaling\\\", region_name=region)\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = asgClient.detach_instances(\\n\",\n    \"            InstanceIds=[instance_ids],\\n\",\n    \"            AutoScalingGroupName=group_name,\\n\",\n    \"            ShouldDecrementDesiredCapacity=True\\n\",\n    \"            )\\n\",\n    \"        result = response\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result[\\\"error\\\"] = error\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"group_name\\\": \\\"iter.get(\\\\\\\\\\\"AutoScalingGroupName\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"instance_ids\\\": \\\"iter.get(\\\\\\\\\\\"InstanceId\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"detach_instance_list\\\",\\n\",\n    \"   
 \\\"iter_parameter\\\": [\\\"instance_ids\\\",\\\"group_name\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(detach_instance_list)>0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"detach_output\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_detach_autoscaling_instances, lego_printer=aws_detach_autoscaling_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"346d8d07-6708-4663-bf8c-5d17c8b6506f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS actions. This runbook helps to detach the instances from the Auto Scaling Group. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Detach EC2 Instance from ASG\",\n   \"parameters\": [\n    \"region\",\n    \"asg_name\",\n    \"instance_id\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"asg_name\": {\n     \"description\": \"Auto Scaling Group Name. Note: if ASG name is given no need to give region.\",\n     \"title\": \"asg_name\",\n     \"type\": \"string\"\n    },\n    \"instance_ids\": {\n     \"description\": \"Instance Ids that are attached to Auto Scaling Group. Note: if instance id is given then the region is mandatory.\",\n     \"title\": \"instance_ids\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS region e.g.us-west-2\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Detach_ec2_Instance_from_ASG.json",
    "content": "{\n  \"name\": \"AWS Detach EC2 Instance from ASG\",\n  \"description\": \"This runbook can be used to detach an instance from Auto Scaling Group. You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move an instance out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\",\n  \"uuid\": \"680ad9d119afab5f647e1afe7826b88d89bf35304954c3328e65a2fcf470f930\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/AWS_EC2_Disk_Cleanup.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"bf364c84\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<strong>Archive large files to S3 to free up EC2 disk space.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"EC2-Disk-Cleanup\\\">EC2 Disk Cleanup<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#EC2-Disk-Cleanup\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>Find the IP address of the instance</li>\\n\",\n    \"<li>Find large files in the specified path</li>\\n\",\n    \"<li>Map remote file names to S3 object names</li>\\n\",\n    \"<li>Back up files to S3</li>\\n\",\n    \"<li>Delete files from the instance</li>\\n\",\n    \"<li>Send a message to Slack</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"683caff0\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   
\"source\": [\n    \"<h3 id=\\\"Get-AWS-Instance-Details\\\">Get AWS Instance Details: Find SSH IP</h3>\\n\",\n    \"<p>Here we will use unSkript&nbsp;<strong>Get AWS Instance Details</strong> action. This action is used to find out all the details of the EC2 instance.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>instance_id, region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>InstanceDetails</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"id\": \"637d4299-e731-47f1-8bef-f0ea061ea1c3\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"aa1e026ca8002b906315feba401e5c46889d459270adce3b65d480dc9530311f\",\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Use This Action to Get Details about an AWS EC2 Instance\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-17T18:53:16.952Z\"\n    },\n    \"id\": 103,\n    \"index\": 103,\n    \"inputData\": [\n     {\n      \"instance_id\": {\n       \"constant\": false,\n       \"value\": \"instance_id\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_id\": {\n        \"description\": \"ID of the instance.\",\n        \"title\": \"Instance Id\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instance.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_id\",\n       
\"region\"\n      ],\n      \"title\": \"aws_get_instance_details\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS Instance Details: Find SSH IP\",\n    \"nouns\": [\n     \"instance\",\n     \"details\"\n    ],\n    \"orderProperties\": [\n     \"instance_id\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"InstanceDetails\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_get_instance_details\"\n    ],\n    \"title\": \"Get AWS Instance Details: Find SSH IP\",\n    \"verbs\": [\n     \"get\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_instance_details(\\n\",\n    \"    handle,\\n\",\n    \"    instance_id: str,\\n\",\n    \"    region: str,\\n\",\n    \"):\\n\",\n    \"\\n\",\n    \"    ec2client = handle.client('ec2', region_name=region)\\n\",\n    \"    instances = []\\n\",\n    \"    response = ec2client.describe_instances(\\n\",\n    \"        Filters=[{\\\"Name\\\": \\\"instance-id\\\", \\\"Values\\\": [instance_id]}])\\n\",\n    \"    for reservation in response[\\\"Reservations\\\"]:\\n\",\n    \"        for instance in reservation[\\\"Instances\\\"]:\\n\",\n    \"            instances.append(instance)\\n\",\n    \"\\n\",\n    \"    return instances[0]\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_id\\\": \\\"instance_id\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"   
 }''')\\n\",\n    \"task.configure(outputName=\\\"InstanceDetails\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(aws_get_instance_details, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\\n\",\n    \"    ssh_ip = InstanceDetails[\\\"PrivateIpAddress\\\"]\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8a4c02ff\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"SSH:-Execute-Remote-Command\\\">SSH: Execute Remote Command: Locate large files with du</h3>\\n\",\n    \"<p>Here we will use unSkript&nbsp;<strong>SSH: Locate large files on host</strong> action. This action is used to scan the file system on a given host and returns a dict of large files. 
The command used to perform the scan is \\\"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\\\".</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>host, inspect_folder,&nbsp;threshold, sudo,&nbsp;count</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>FileLocation</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 30,\n   \"id\": \"ac0a3f1d-6177-4987-a506-af53d4b48cec\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"f3bb79ce49da7d739d31e66c86308c97e481f41275e2bcdaabfc694fa97f9d02\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"This action scans the file system on a given host and returns a dict of large files. 
The command used to perform the scan is \\\"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\\\"\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-17T19:17:01.447Z\"\n    },\n    \"id\": 60,\n    \"index\": 60,\n    \"inputData\": [\n     {\n      \"count\": {\n       \"constant\": false,\n       \"value\": \"10\"\n      },\n      \"host\": {\n       \"constant\": false,\n       \"value\": \"ssh_ip\"\n      },\n      \"inspect_folder\": {\n       \"constant\": false,\n       \"value\": \"dirs_to_anaylze\"\n      },\n      \"sudo\": {\n       \"constant\": true,\n       \"value\": false\n      },\n      \"threshold\": {\n       \"constant\": false,\n       \"value\": \"int(Threshold)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"count\": {\n        \"default\": 10,\n        \"description\": \"Number of files to report from the scan. Default is 10\",\n        \"title\": \"Count\",\n        \"type\": \"integer\"\n       },\n       \"host\": {\n        \"description\": \"Host to connect to. Eg 10.10.10.10\",\n        \"title\": \"Host\",\n        \"type\": \"string\"\n       },\n       \"inspect_folder\": {\n        \"description\": \"Folder to inspect on the remote host. Folders are scanned using \\\"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\\\"\",\n        \"title\": \"Inspect Folder\",\n        \"type\": \"string\"\n       },\n       \"sudo\": {\n        \"default\": false,\n        \"description\": \"Run the scan with sudo.\",\n        \"title\": \"Run with sudo\",\n        \"type\": \"boolean\"\n       },\n       \"threshold\": {\n        \"default\": 100,\n        \"description\": \"Threshold the files on given size. Specified in Mb. 
Default is 100Mb\",\n        \"title\": \"Size Threshold\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [\n       \"host\",\n       \"inspect_folder\"\n      ],\n      \"title\": \"ssh_find_large_files\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"SSH: Execute Remote Command: Locate large files with du\",\n    \"nouns\": [\n     \"ssh\",\n     \"files\"\n    ],\n    \"orderProperties\": [\n     \"host\",\n     \"inspect_folder\",\n     \"threshold\",\n     \"count\",\n     \"sudo\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"FileLocation\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"ssh_find_large_files\"\n    ],\n    \"title\": \"SSH: Execute Remote Command: Locate large files with du\",\n    \"verbs\": [\n     \"find\",\n     \"locate\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import json\\n\",\n    \"import tempfile\\n\",\n    \"import os\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from pssh.clients import ParallelSSHClient\\n\",\n    \"from typing import List, Optional\\n\",\n    \"from unskript.connectors import ssh\\n\",\n    \"\\n\",\n    \"from unskript.fwk.cellparams import CellParams\\n\",\n    \"from unskript import connectors\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_find_large_files(\\n\",\n    \"    sshClient,\\n\",\n    \"    host: str,\\n\",\n    \"    inspect_folder: str,\\n\",\n    \"    threshold: int = 0,\\n\",\n    \"    sudo: bool = False,\\n\",\n    \"    count: int = 10) -> dict:\\n\",\n    \"    print(sshClient)\\n\",\n    \"\\n\",\n    \"    client = 
sshClient([host], None)\\n\",\n    \"\\n\",\n    \"    # find file sizes in MB (du -sm reports megabytes)\\n\",\n    \"    command = \\\"find \\\" + inspect_folder + \\\\\\n\",\n    \"        \\\" -type f -exec du -sm '{}' + | sort -rh | head -n \\\" + str(count)\\n\",\n    \"    runCommandOutput = client.run_command(command=command, sudo=sudo)\\n\",\n    \"    client.join()\\n\",\n    \"    res = {}\\n\",\n    \"\\n\",\n    \"    for host_output in runCommandOutput:\\n\",\n    \"        for line in host_output.stdout:\\n\",\n    \"            # line is of the form {size} {fullfilename}\\n\",\n    \"            (size, filename) = line.split()\\n\",\n    \"            if int(size) > threshold:\\n\",\n    \"                res[filename] = int(size)\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"count\\\": \\\"10\\\",\\n\",\n    \"    \\\"host\\\": \\\"ssh_ip\\\",\\n\",\n    \"    \\\"inspect_folder\\\": \\\"dirs_to_anaylze\\\",\\n\",\n    \"    \\\"sudo\\\": \\\"False\\\",\\n\",\n    \"    \\\"threshold\\\": \\\"int(Threshold)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"FileLocation\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(ssh_find_large_files, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        
print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c646e375-e064-48d1-b101-ecd74bec93e1\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-local-filenames-from-remote-filenames\\\">Custom Step: Create local filenames from remote filenames</h3>\\n\",\n    \"<p>This action takes data from step 2 and sorts the output to get the remote files and local files.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"id\": \"492dfae5-dfe1-47b9-be70-51ff64029166\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"credentialsJson\": {},\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Custom Step: Create local filenames from remote filenames\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Custom Step: Create local filenames from remote filenames\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"remote_files = [x for x in FileLocation.keys()]\\n\",\n    \"if len(remote_files) == 0:\\n\",\n    \"    print(\\\"No files to process, exiting\\\")\\n\",\n    \"    if hasattr(Workflow(), \\\"Done\\\"):\\n\",\n    \"        Workflow().Done()\\n\",\n    \"\\n\",\n    \"local_files = [ \\\"/tmp/\\\" + x.lstrip(\\\"/\\\").replace(\\\"/\\\", \\\"_\\\") for x in remote_files ]\\n\",\n    \"mapping = []\\n\",\n    \"for i in range(len(remote_files)):\\n\",\n    \"    mapping.append( {'remote': remote_files[i], 'local': local_files[i]} )\\n\",\n    \"print(json.dumps(mapping, indent=2))\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"3349f56f\",\n   
\"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"SCP:-Remote-file-transfer-over-SSH\\\">SCP: Remote file transfer over SSH</h3>\\n\",\n    \"<p>Here we will use unSkript&nbsp;<strong>SCP: Remote file transfer over SSH</strong> action. This action is used to Copy files from or to the remote host. Files are copied over SCP.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>host, remote_file, local_file, direction</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>transfer_files</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"2a5a1b76-4385-49c1-b558-95216c34ccc4\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"a3b8cad557699dfa01f15274d81941252f965f7a2a409ac89b844db74f44e4c5\",\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Copy files from or to remote host. Files are copied over SCP. 
\",\n    \"id\": 59,\n    \"index\": 59,\n    \"inputData\": [\n     {\n      \"direction\": {\n       \"constant\": true,\n       \"value\": true\n      },\n      \"host\": {\n       \"constant\": false,\n       \"value\": \"ssh_ip\"\n      },\n      \"local_file\": {\n       \"constant\": false,\n       \"value\": \"iter.get(\\\"local\\\")\"\n      },\n      \"remote_file\": {\n       \"constant\": false,\n       \"value\": \"iter.get(\\\"remote\\\")\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"direction\": {\n        \"default\": true,\n        \"description\": \"Direction of the copy operation. Default is receive-from-remote-server\",\n        \"title\": \"Receive\",\n        \"type\": \"boolean\"\n       },\n       \"host\": {\n        \"description\": \"Hosts to connect to. For eg. \\\"10.10.10.10\\\"\",\n        \"title\": \"Host\",\n        \"type\": \"string\"\n       },\n       \"local_file\": {\n        \"description\": \"Filename on the unSkript proxy. Eg /tmp/my_local_file\",\n        \"title\": \"Local File\",\n        \"type\": \"string\"\n       },\n       \"remote_file\": {\n        \"description\": \"Filename on the remote server. 
Eg /home/ec2-user/my_remote_file\",\n        \"title\": \"Remote File\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"host\",\n       \"remote_file\",\n       \"local_file\"\n      ],\n      \"title\": \"ssh_scp\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"SCP: Remote file transfer over SSH\",\n    \"nouns\": [\n     \"ssh\",\n     \"file\"\n    ],\n    \"orderProperties\": [\n     \"host\",\n     \"remote_file\",\n     \"local_file\",\n     \"direction\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"transfer_files\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"ssh_scp\"\n    ],\n    \"title\": \"SCP: Remote file transfer over SSH\",\n    \"verbs\": [\n     \"copy\",\n     \"transfer\",\n     \"scp\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from gevent import joinall\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_scp(\\n\",\n    \"        sshClient,\\n\",\n    \"        host: str,\\n\",\n    \"        remote_file: str,\\n\",\n    \"        local_file: str,\\n\",\n    \"        direction: bool = True) -> bool:\\n\",\n    \"\\n\",\n    \"    client = sshClient([host], None)\\n\",\n    \"    copy_args = [{'local_file': local_file, 'remote_file': remote_file}]\\n\",\n    \"\\n\",\n    \"    if direction is True:\\n\",\n    \"        cmds = client.copy_remote_file(remote_file=remote_file, local_file=local_file,\\n\",\n    \"                                       recurse=False,\\n\",\n    \"                                       
suffix_separator=\\\"\\\", copy_args=copy_args,\\n\",\n    \"                                       encoding='utf-8')\\n\",\n    \"\\n\",\n    \"    else:\\n\",\n    \"        cmds = client.copy_file(local_file=local_file, remote_file=remote_file,\\n\",\n    \"                                recurse=False, copy_args=None)\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        joinall(cmds, raise_error=True)\\n\",\n    \"        if direction is True:\\n\",\n    \"            print(f\\\"Successfully copied file {host}://{remote_file} to {local_file}\\\")\\n\",\n    \"        else:\\n\",\n    \"            print(f\\\"Successfully copied file {local_file} to {host}://{remote_file}\\\")\\n\",\n    \"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(f\\\"Error encountered while copying files {e}\\\")\\n\",\n    \"        return False\\n\",\n    \"\\n\",\n    \"    return True\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"direction\\\": \\\"True\\\",\\n\",\n    \"    \\\"host\\\": \\\"ssh_ip\\\",\\n\",\n    \"    \\\"local_file\\\": \\\"iter.get(\\\\\\\\\\\"local\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"remote_file\\\": \\\"iter.get(\\\\\\\\\\\"remote\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"transfer_files\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(ssh_scp, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        
for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"029a4c00\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-4\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-4\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Upload-file-to-S3\\\">Upload file to S3</h3>\\n\",\n    \"<p>Here we will use the unSkript <strong>Upload file to S3</strong> action. This action is used to Upload a local file to an S3 bucket.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>bucketName, file, prefix</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>upload_output</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"443c623b-e0df-4868-a013-af4d028f3f2c\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"c7ddfac1e75c2ec65ec4f1bc6d38c4cecc2ad08b19169da94466b49f04ced368\",\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Upload a local file to S3\",\n    \"id\": 126,\n    \"index\": 126,\n    \"inputData\": [\n     {\n      \"bucketName\": {\n       \"constant\": false,\n       \"value\": \"Bucket\"\n      },\n      \"file\": {\n       \"constant\": false,\n       \"value\": \"iter.get(\\\"local\\\")\"\n      },\n      \"prefix\": {\n       \"constant\": false,\n       \"value\": \"prefix or 
f\\\"{instance_id}/{str(datetime.date.today())}/\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"bucketName\": {\n        \"description\": \"Name of the bucket to upload into.\",\n        \"title\": \"Bucket\",\n        \"type\": \"string\"\n       },\n       \"file\": {\n        \"description\": \"Name of the local file to upload into bucket. Eg /tmp/file-to-upload\",\n        \"title\": \"File\",\n        \"type\": \"string\"\n       },\n       \"prefix\": {\n        \"default\": \"\",\n        \"description\": \"Prefix to attach to get the final object name to be used in the bucket.\",\n        \"title\": \"Prefix\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"bucketName\",\n       \"file\"\n      ],\n      \"title\": \"aws_upload_file_to_s3\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"mapping\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Upload file to S3\",\n    \"nouns\": [\n     \"aws\",\n     \"bucket\",\n     \"file\"\n    ],\n    \"orderProperties\": [\n     \"bucketName\",\n     \"file\",\n     \"prefix\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"upload_output\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_upload_file_to_s3\"\n    ],\n    \"title\": \"Upload file to S3\",\n    \"verbs\": [\n     \"put\",\n     \"upload\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, 
Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_upload_file_to_s3(handle, bucketName: str, file: str, prefix: str = \\\"\\\"):\\n\",\n    \"\\n\",\n    \"    s3 = handle.client('s3')\\n\",\n    \"    objName = prefix + file.split(\\\"/\\\")[-1]\\n\",\n    \"    try:\\n\",\n    \"        with open(file, \\\"rb\\\") as f:\\n\",\n    \"            s3.upload_fileobj(f, bucketName, objName)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(f\\\"Error: {e}\\\")\\n\",\n    \"        raise e\\n\",\n    \"\\n\",\n    \"    print(f\\\"Successfully copied {file} to bucket:{bucketName} object:{objName}\\\")\\n\",\n    \"    return None\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"bucketName\\\": \\\"Bucket\\\",\\n\",\n    \"    \\\"file\\\": \\\"iter.get(\\\\\\\\\\\"local\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"prefix\\\": \\\"prefix or f\\\\\\\\\\\"{instance_id}/{str(datetime.date.today())}/\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"mapping\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"upload_output\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(aws_upload_file_to_s3, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            
print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f8431944\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-5\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-5\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"SSH-Execute-Remote-Command\\\">SSH Execute Remote Command: Remove Files</h3>\\n\",\n    \"<p>Here we will use unSkript&nbsp;<strong>SSH Execute Remote Command</strong> action. This action is used to SSH Execute Remote Commands to remove files.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>hosts, command, sudo</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>remove_output</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"eed84b79-c7db-4950-a5e2-5ec66eb72cea\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5279b2046bb2eb4a691ba748086f4af9e580a849faae557694bb12a8c2b7b379\",\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"SSH Execute Remote Command\",\n    \"id\": 58,\n    \"index\": 58,\n    \"inputData\": [\n     {\n      \"command\": {\n       \"constant\": false,\n       \"value\": \"\\\"rm -v \\\" + \\\" \\\".join(remote_files)\"\n    
  },\n      \"hosts\": {\n       \"constant\": false,\n       \"value\": \"[ ssh_ip ]\"\n      },\n      \"sudo\": {\n       \"constant\": true,\n       \"value\": false\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"command\": {\n        \"description\": \"Command to be executed on the remote server.\",\n        \"title\": \"Command\",\n        \"type\": \"string\"\n       },\n       \"hosts\": {\n        \"description\": \"List of hosts to connect to. For eg. [\\\"host1\\\", \\\"host2\\\"].\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Hosts\",\n        \"type\": \"array\"\n       },\n       \"sudo\": {\n        \"default\": false,\n        \"description\": \"Run the command with sudo.\",\n        \"title\": \"Run with sudo\",\n        \"type\": \"boolean\"\n       }\n      },\n      \"required\": [\n       \"hosts\",\n       \"command\"\n      ],\n      \"title\": \"ssh_execute_remote_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"SSH Execute Remote Command: Remove Files\",\n    \"nouns\": [\n     \"ssh\",\n     \"command\"\n    ],\n    \"orderProperties\": [\n     \"hosts\",\n     \"command\",\n     \"sudo\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"remove_output\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"ssh_execute_remote_command\"\n    ],\n    \"title\": \"SSH Execute Remote Command: Remove Files\",\n    \"verbs\": [\n     \"execute\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import json\\n\",\n    \"import tempfile\\n\",\n    \"import os\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    
\"from pssh.clients import ParallelSSHClient\\n\",\n    \"from typing import List, Optional\\n\",\n    \"from unskript.connectors import ssh\\n\",\n    \"\\n\",\n    \"from unskript.legos.cellparams import CellParams\\n\",\n    \"from unskript import connectors\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command(sshClient, hosts: List[str], command: str, sudo: bool = False):\\n\",\n    \"\\n\",\n    \"    client = sshClient(hosts, None)\\n\",\n    \"    runCommandOutput = client.run_command(command=command, sudo=sudo)\\n\",\n    \"    client.join()\\n\",\n    \"    res = {}\\n\",\n    \"\\n\",\n    \"    for host_output in runCommandOutput:\\n\",\n    \"        hostname = host_output.host\\n\",\n    \"        output = []\\n\",\n    \"        for line in host_output.stdout:\\n\",\n    \"            output.append(line)\\n\",\n    \"        res[hostname] = output\\n\",\n    \"\\n\",\n    \"        o = \\\"\\\\n\\\".join(output)\\n\",\n    \"        print(f\\\"Output from host {hostname}\\\\n{o}\\\\n\\\")\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"command\\\": \\\"\\\\\\\\\\\"rm -v \\\\\\\\\\\" + \\\\\\\\\\\" \\\\\\\\\\\".join(remote_files)\\\",\\n\",\n    \"    \\\"hosts\\\": \\\"[ ssh_ip ]\\\",\\n\",\n    \"    \\\"sudo\\\": \\\"False\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"remove_output\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(ssh_execute_remote_command, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if 
isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ea669eec-12ae-4097-aaf6-22280d7d2f8b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-5 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-5 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Clean-up-local-files\\\">Clean up local files</h3>\\n\",\n    \"<p>This action is an extension of Step 5 where we will clean up the files locally.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"id\": \"cfdd871c-1713-49fd-973e-ca852993354b\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"credentialsJson\": {},\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Clean up local files\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Clean up local files\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from subprocess import PIPE, run\\n\",\n    \"\\n\",\n    \"o = run(f\\\"rm -fv {' '.join(local_files)}\\\", stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)\\n\",\n    \"print(o.stdout)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"785e11cd\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-6\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-6\"\n   },\n   \"source\": [\n    \"<h3 
id=\\\"Post-Slack-Message\\\">Post Slack Message</h3>\\n\",\n    \"<p>Here we will use unSkript&nbsp;<strong>Post Slack Message</strong> action. This action is used to post the message to the slack channel.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>channel, message</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong>&nbsp;<code>slack_status</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b3f91610-73fc-4f57-93db-203fe91aa4cb\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"id\": 44,\n    \"index\": 44,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"channel\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"f\\\"Deleted {len(remote_files)} files from host {ssh_ip}\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of the slack channel where the message to be posted\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message to be sent\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": 
\"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [\n     \"slack\",\n     \"message\"\n    ],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"slack_status\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"title\": \"Post Slack Message\",\n    \"verbs\": [\n     \"post\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"def legoPrinter(func):\\n\",\n    \"    def Printer(*args, **kwargs):\\n\",\n    \"        output = func(*args, **kwargs)\\n\",\n    \"        if output:\\n\",\n    \"            channel = kwargs[\\\"channel\\\"]\\n\",\n    \"            pp.pprint(print(f\\\"Message sent to Slack channel {channel}\\\"))\\n\",\n    \"        return output\\n\",\n    \"    return Printer\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@legoPrinter\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> bool:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return True\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"   
     print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        return False\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return False\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"channel\\\",\\n\",\n    \"    \\\"message\\\": \\\"f\\\\\\\\\\\"Deleted {len(remote_files)} files from host {ssh_ip}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"slack_status\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(slack_post_message, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"1006351c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS and SSH Actions. The Runbook locates large files in a given path inside an EC2 instance, backs them up to a given S3 bucket, deletes the backed-up files, and sends a notification message on Slack. 
To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS EC2 Disk Cleanup\",\n   \"parameters\": [\n    \"instance_id\",\n    \"prefix\",\n    \"region\",\n    \"Bucket\",\n    \"Threshold\",\n    \"dirs_to_anaylze\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1105)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"Bucket\": {\n     \"description\": \"S3 Bucket for archiving\",\n     \"title\": \"Bucket\",\n     \"type\": \"string\"\n    },\n    \"Threshold\": {\n     \"default\": 100,\n     \"description\": \"Threshold on file size (in Mb)\",\n     \"title\": \"Threshold\",\n     \"type\": \"number\"\n    },\n    \"channel\": {\n     \"description\": \"Slack channel to send messages.\",\n     \"title\": \"channel\",\n     \"type\": \"string\"\n    },\n    \"dirs_to_anaylze\": {\n     \"default\": \"/home\",\n     \"description\": \"Root for directories to be analyzed for large files\",\n     \"title\": \"dirs_to_anaylze\",\n     \"type\": \"string\"\n    },\n    \"instance_id\": {\n     \"description\": \"EC2 Instance\",\n     \"title\": \"instance_id\",\n     \"type\": \"string\"\n    },\n    \"prefix\": {\n     \"default\": \"test/\",\n     \"description\": \"Prefix to use while uploading to S3 (default: <instance>/<date>)\",\n     \"title\": \"prefix\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"description\": \"AWS 
Region\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_EC2_Disk_Cleanup.json",
    "content": "{\n  \"name\": \"AWS EC2 Disk Cleanup\",\n  \"description\": \"This runbook locates large files in an EC2 instance and backs them up into a given S3 bucket. Afterwards, it deletes the backed-up files and sends a message on a specified Slack channel. It uses SSH and Linux commands to perform the functions it needs.\",\n  \"uuid\": \"f16e204e8b4c9a59e52e8d71feda07cfa066fa57d7c427772d715b4221c8f634\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/AWS_Enforce_HTTP_Redirection_across_AWS_ALB.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"d06ee036-9b31-4b61-89d9-87510fa416a3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong></h3>\\n\",\n    \"<strong> This runbook demonstrates how to enforce HTTP redirection across AWS ALB using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Enforce-HTTP-Redirection-Across-AWS-ALB\\\">Enforce HTTP Redirection Across AWS ALB</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1.&nbsp;Get AWS ALB Listeners Without HTTP Redirection.<br>2.&nbsp;AWS Modify ALB Listeners HTTP Redirection.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"a90e59f3-d2cb-43dc-8695-fddf2b515fe4\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if alb_listener_arns and not region:\\n\",\n    \"    raise SystemExit(\\\"Enter region for given ALB Listener's!\\\")\\n\",\n    \"if region == None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": 
\"markdown\",\n   \"id\": \"c9c8eab9-731f-4b03-b59a-4e2bc95289f7\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-AWS-ALB-Listeners-Without-HTTP-Redirection\\\">Get AWS ALB Listeners Without HTTP Redirection</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Get AWS ALB Listeners Without HTTP Redirection</strong> action. In this action, we will check for listener configuration for HTTP redirection and return a list of listener ARNs that don't have HTTP redirection.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>listener_arns</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"9d6c0f6e-13e1-4269-9205-87e87f891432\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_IAM\",\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_ELB\"\n    ],\n    \"actionDescription\": \"Get AWS ALB Listeners Without HTTP Redirection\",\n    \"actionEntryFunction\": \"aws_get_alb_listeners_without_http_redirect\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"7d87da036fb983f7909a22a01529790dddc5179ebbb8f95517a66314d236555c\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Get AWS ALB Listeners Without HTTP Redirection\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    
\"action_modified\": false,\n    \"action_uuid\": \"e84fa689b445924888abced31fe69f0edfcad2ea9135f175ce1897d86f04e6cd\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"description\": \"Get AWS ALB Listeners Without HTTP Redirection\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region of the ALB listeners.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_alb_listeners_without_http_redirect\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS ALB Listeners Without HTTP Redirection\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"listener_arns\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not alb_listener_arns\",\n    \"tags\": [\n     \"aws_get_alb_listeners_without_http_redirect\"\n    ],\n    \"uuid\": \"e84fa689b445924888abced31fe69f0edfcad2ea9135f175ce1897d86f04e6cd\",\n    \"version\": \"1.0.0\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from 
unskript.legos.aws.aws_list_application_loadbalancers.aws_list_application_loadbalancers import aws_list_application_loadbalancers\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_alb_listeners_without_http_redirect_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_alb_listeners_without_http_redirect(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_alb_listeners_without_http_redirect List of ALB listeners without HTTP redirection.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region to filter ALB listeners.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple of status result and list of ALB listeners without HTTP redirection.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    alb_list = []\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            alb_dict = {}\\n\",\n    \"            loadbalancer_arn = aws_list_application_loadbalancers(handle, reg)\\n\",\n    \"            alb_dict[\\\"region\\\"] = reg\\n\",\n    \"            alb_dict[\\\"alb_arn\\\"] = loadbalancer_arn\\n\",\n    \"            alb_list.append(alb_dict)\\n\",\n    \"        except Exception as error:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    for alb in alb_list:\\n\",\n    \"        try:\\n\",\n    \"            ec2Client = handle.client('elbv2', region_name=alb[\\\"region\\\"])\\n\",\n    \"            for load in 
alb[\\\"alb_arn\\\"]:\\n\",\n    \"                response = aws_get_paginator(ec2Client, \\\"describe_listeners\\\", \\\"Listeners\\\",\\n\",\n    \"                                             LoadBalancerArn=load)\\n\",\n    \"                for listner in response:\\n\",\n    \"                    if 'SslPolicy' not in listner:\\n\",\n    \"                        resp = aws_get_paginator(ec2Client, \\\"describe_rules\\\", \\\"Rules\\\",\\n\",\n    \"                                             ListenerArn=listner['ListenerArn'])\\n\",\n    \"                        for rule in resp:\\n\",\n    \"                            for action in rule['Actions']:\\n\",\n    \"                                listener_dict = {}\\n\",\n    \"                                if action['Type'] != 'redirect':\\n\",\n    \"                                    listener_dict[\\\"region\\\"] = alb[\\\"region\\\"]\\n\",\n    \"                                    listener_dict[\\\"listener_arn\\\"] = listner['ListenerArn']\\n\",\n    \"                                    result.append(listener_dict)\\n\",\n    \"        except Exception as error:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not alb_listener_arns\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"listener_arns\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is 
None:\\n\",\n    \"    task.execute(aws_get_alb_listeners_without_http_redirect, lego_printer=aws_get_alb_listeners_without_http_redirect_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"75375134-5683-43a5-b814-b37326b2daab\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Listener-ARNs-Output\\\">Modify Listener ARNs Output</h3>\\n\",\n    \"<p>In this action, we modify the output from step 2 and return a list of dictionary items for the Listener's ARNs.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>arn_list</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 22,\n   \"id\": \"e8d7cf3f-c738-4ed2-b735-08464a6cb712\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-01-30T15:00:54.938Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Listeners ARNs Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Listeners ARNs Output\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import re \\n\",\n    \"import json\\n\",\n    \"from unskript.legos.utils import parseARN\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"arns_list = []\\n\",\n    \"try:\\n\",\n    \"    if listener_arns[0] == False:\\n\",\n    \"        for listener in listener_arns[1]:\\n\",\n    \"            arns_list.append(listener)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if alb_listener_arns:\\n\",\n    \"        for i in alb_listener_arns:\\n\",\n    \"            arn_dict = {}\\n\",\n    \"            parsedArn = 
parseARN(i)\\n\",\n    \"            arn_dict[\\\"region\\\"] = parsedArn[\\\"region\\\"]\\n\",\n    \"            arn_dict[\\\"listener_arn\\\"] = i\\n\",\n    \"            arns_list.append(arn_dict)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"03516737-23d3-45cc-bcff-927d74635c82\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"AWS-Modify-ALB-Listeners-HTTP-Redirection\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>AWS Modify ALB Listeners HTTP Redirection</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>AWS Modify ALB Listeners HTTP Redirection</strong> action. In this action, we will modify a listener's configuration for HTTP redirection to the listener, which we get from step 2. 
This action only executes when len(arns_list)&gt;0.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>listener_arn</code>, <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>modified_output</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"e6461e22-733d-4665-8e51-5e6d755c0c82\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"f0e5d5038aad3efc10cd1cc79b27571c08d672b6b8c5cdd57e8bd5b78c23b001\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Modify ALB Listeners HTTP Redirection\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-16T19:37:19.193Z\"\n    },\n    \"id\": 149,\n    \"index\": 149,\n    \"inputData\": [\n     {\n      \"listener_arn\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"listener_arn\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"listener_arn\": {\n        \"description\": \"Listener ARNs.\",\n        \"title\": \"ListenerArn\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the ALB listeners.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"listener_arn\",\n       \"region\"\n      ],\n      \"title\": 
\"aws_modify_listener_for_http_redirection\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"listener_arn\": \"listener_arn\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"arns_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Modify ALB Listeners HTTP Redirection\",\n    \"nouns\": [\n     \"listeners\",\n     \"loadbalancers\"\n    ],\n    \"orderProperties\": [\n     \"listener_arn\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"modified_output\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(arns_list)>0\",\n    \"tags\": [\n     \"aws_modify_listener_for_http_redirection\"\n    ],\n    \"verbs\": [\n     \"modify\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_modify_listener_for_http_redirection_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_modify_listener_for_http_redirection(handle, listener_arn: str, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_modify_listener_for_http_redirection List of Dict with modified listener info.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    
\"\\n\",\n    \"        :type listener_arn: string\\n\",\n    \"        :param listener_arn: List of LoadBalancerArn.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region to filter ALB listeners.\\n\",\n    \"\\n\",\n    \"        :rtype: List of Dict with modified ALB listeners info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    listner_config = [{\\n\",\n    \"                        \\\"Type\\\": \\\"redirect\\\",\\n\",\n    \"                        \\\"Order\\\": 1,\\n\",\n    \"                        \\\"RedirectConfig\\\": {\\n\",\n    \"                            \\\"Protocol\\\": \\\"HTTPS\\\",\\n\",\n    \"                            \\\"Host\\\": \\\"#{host}\\\",\\n\",\n    \"                            \\\"Query\\\": \\\"#{query}\\\",\\n\",\n    \"                            \\\"Path\\\": \\\"/#{path}\\\",\\n\",\n    \"                            \\\"Port\\\": \\\"443\\\",\\n\",\n    \"                            \\\"StatusCode\\\": \\\"HTTP_302\\\"}}]\\n\",\n    \"    result = []\\n\",\n    \"    try:\\n\",\n    \"        #if ALB_Name in listener_arn:\\n\",\n    \"        ec2Client = handle.client('elbv2', region_name=region)\\n\",\n    \"        response = ec2Client.modify_listener(ListenerArn=listener_arn,\\n\",\n    \"                                                 DefaultActions=listner_config)\\n\",\n    \"        result.append(response)\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result.append(error)\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"listener_arn\\\": \\\"iter.get(\\\\\\\\\\\"listener_arn\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    
\"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"arns_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"listener_arn\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(arns_list)>0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"modified_output\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_modify_listener_for_http_redirection, lego_printer=aws_modify_listener_for_http_redirection_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"ddfe6833-aaf9-42b5-aa00-d759b2921ed0\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS actions and this runbook find out all the Application Load Balancer listeners without HTTP redirection and modify them for HTTP redirection. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Enforce HTTP Redirection across all AWS ALB instances\",\n   \"parameters\": [\n    \"alb_listener_arns\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"alb_listener_arns\": {\n     \"description\": \"Listeners ARNs where HTTP redirection needs to be added.\",\n     \"title\": \"alb_listener_arns\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS region e.g. us-west-2\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Enforce_HTTP_Redirection_across_AWS_ALB.json",
    "content": "{\n  \"name\": \"Enforce HTTP Redirection across all AWS ALB instances\",\n  \"description\": \"This runbook can be used to enforce HTTP redirection across all AWS ALBs. Web encryption protocols like SSL and TLS have been around for nearly three decades. By securing web data in transit, these security measures ensure that third parties can’t simply intercept unencrypted data and cause harm. HTTPS uses the underlying SSL/TLS technology and is the standard way to communicate web data in an encrypted and authenticated manner, instead of the insecure HTTP protocol. In this runbook, we implement the industry best practice of redirecting all unencrypted HTTP data to the secure HTTPS protocol.\",\n  \"uuid\": \"7d87da036fb983f7909a22a01529790dddc5179ebbb8f95517a66314d236555c\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SECOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/AWS_Ensure_Redshift_Clusters_have_Paused_Resume_Enabled.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"e2fffe48-5eb4-4177-95ec-7955cc381ad8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://unskript.com/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\"><strong>To ensure the Redshift cluster has pause resume enabled in AWS using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Ensure-Redshift-Clusters-have-Paused-Resume-Enabled\\\">Ensure Redshift Clusters have Paused Resume Enabled<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Ensure-Redshift-Clusters-have-Paused-Resume-Enabled\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>AWS Find Redshift Cluster without Pause Resume Enabled</li>\\n\",\n    \"<li>AWS Schedule Redshift Cluster Pause Resume Enabled</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 46,\n   \"id\": \"cbd771e6-6e0a-4ea0-a653-00f65120e145\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T04:41:41.094Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input 
Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if redshift_clusters and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide region for redshift_clusters!\\\")\\n\",\n    \"if region == None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"dbbf73ef-3c3e-49b7-8c4b-301e02614d84\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"AWS-Find-Redshift-Cluster-without-Pause-Resume-Enabled\\\">AWS Find Redshift Cluster without Pause Resume Enabled<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#AWS-Find-Redshift-Cluster-without-Pause-Resume-Enabled\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Here we will use unSkript <strong>AWS Find Redshift Cluster without Pause Resume Enabled</strong> action. 
This action filters all the Redshift clusters in the given region and returns a list of clusters that don't have pause/resume enabled.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable:&nbsp;<code>clusters</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"3287a7ff-59c3-41e4-85e6-cc79a6969396\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\",\n     \"CATEGORY_TYPE_DB\"\n    ],\n    \"actionDescription\": \"Use this action to find AWS Redshift clusters for which pause/resume is not enabled\",\n    \"actionEntryFunction\": \"aws_find_redshift_cluster_without_pause_resume_enabled\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Find Redshift Cluster without Pause Resume Enabled\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"433eede3d0f6e49e242c1c0f624617df7212a210e1fd5cde8cec0202d2b972aa\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Use this action to find AWS Redshift clusters for which pause/resume is not enabled\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T04:42:11.658Z\"\n    },\n    \"id\": 11,\n    \"index\": 11,\n    \"inputData\": [\n     
{\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_find_redshift_cluster_without_pause_resume_enabled\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Find Redshift Cluster without Pause Resume Enabled\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"clusters\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not redshift_clusters\",\n    \"tags\": [],\n    \"title\": \"AWS Find Redshift Cluster without Pause Resume Enabled\",\n    \"uuid\": \"433eede3d0f6e49e242c1c0f624617df7212a210e1fd5cde8cec0202d2b972aa\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_redshift_cluster_without_pause_resume_enabled_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    
\"\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_redshift_cluster_without_pause_resume_enabled(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_find_redshift_cluster_without_pause_resume_enabled Gets all redshift cluster which don't have pause and resume not enabled.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple with the status result and a list of all redshift clusters that don't have pause and resume enabled.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            redshift_Client = handle.client('redshift', region_name=reg)\\n\",\n    \"            response = aws_get_paginator(redshift_Client, \\\"describe_clusters\\\", \\\"Clusters\\\")\\n\",\n    \"            for cluster in response:\\n\",\n    \"                cluster_dict = {}\\n\",\n    \"                cluster_name = cluster[\\\"ClusterIdentifier\\\"]\\n\",\n    \"                schedule_actions = aws_get_paginator(redshift_Client, \\\"describe_scheduled_actions\\\", \\\"ScheduledActions\\\",Filters=[{'Name': 'cluster-identifier', 'Values': [cluster_name]}])\\n\",\n    \"\\n\",\n    \"                if schedule_actions:\\n\",\n    \"                    for actions in schedule_actions:\\n\",\n    \"                        if \\\"ResumeCluster\\\" in actions[\\\"TargetAction\\\"].keys() or \\\"PauseCluster\\\" in actions[\\\"TargetAction\\\"].keys():\\n\",\n    \"                            pass\\n\",\n    \"                        else:\\n\",\n    \"                            cluster_dict[\\\"cluster_name\\\"] = 
cluster_name\\n\",\n    \"                            cluster_dict[\\\"region\\\"] = reg\\n\",\n    \"                            result.append(cluster_dict)\\n\",\n    \"                else:\\n\",\n    \"                    cluster_dict[\\\"cluster_name\\\"] = cluster_name\\n\",\n    \"                    cluster_dict[\\\"region\\\"] = reg\\n\",\n    \"                    result.append(cluster_dict)\\n\",\n    \"        except Exception as error:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not redshift_clusters\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"clusters\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_find_redshift_cluster_without_pause_resume_enabled, lego_printer=aws_find_redshift_cluster_without_pause_resume_enabled_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"0f79562c-f105-49d3-beb0-0b5456c3c805\",\n   \"metadata\": {\n    \"name\": \"Gathering Information\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Gathering Information\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-IAM-Role\\\">Get IAM Role<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"Ensure_Redshift_Clusters_have_Paused_Resume_Enabled.ipynb#Get-IAM-Role\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    
\"<p>In this action, we use&nbsp;<strong>Run Command via AWS CLI</strong>&nbsp;action to get IAM role ARN for schedule pause resume.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable:&nbsp;<code>iam_role_arn</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"f8a5cc43-4e9f-4011-bdcd-6cbd0d3a6596\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_CLI\"\n    ],\n    \"actionDescription\": \"Execute command using AWS CLI\",\n    \"actionEntryFunction\": \"aws_execute_cli_command\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_STR\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Run Command via AWS CLI\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"1db371aff42291641eb6ba83d7acc3fe28e2468d83be1552e8258dc878c0f70d\",\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Execute command using AWS CLI\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T04:52:30.867Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"aws_command\": {\n       \"constant\": false,\n       \"value\": \"\\\"aws iam get-role --role-name scheduler.redshift.amazonaws.com --query 'Role.Arn' --output text\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"aws_command\": {\n        \"description\": \"AWS Command eg \\\"aws ec2 
describe-instances\\\"\",\n        \"title\": \"AWS Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"aws_command\"\n      ],\n      \"title\": \"aws_execute_cli_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Run Command via AWS CLI: Get IAM Role\",\n    \"orderProperties\": [\n     \"aws_command\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"iam_role_arn\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"title\": \"Run Command via AWS CLI: Get IAM Role\",\n    \"uuid\": \"1db371aff42291641eb6ba83d7acc3fe28e2468d83be1552e8258dc878c0f70d\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command(handle, aws_command: str) -> str:\\n\",\n    \"\\n\",\n    \"    result = handle.aws_cli_command(aws_command)\\n\",\n    \"    if result is None or result.returncode != 0:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({aws_command}): {result}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    
\"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"aws_command\\\": \\\"\\\\\\\\\\\"aws iam get-role --role-name scheduler.redshift.amazonaws.com --query 'Role.Arn' --output text\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"iam_role_arn\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_execute_cli_command, lego_printer=aws_execute_cli_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"b3b728cf-a318-4303-8be9-750fd811cdd7\",\n   \"metadata\": {\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Step-1-Output&para;\\\">Modify Step-1 Output</h3>\\n\",\n    \"<p>In this action, we modify the output from step 1 and return a list of dictionaries for schedule pause resume in the redshift cluster.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable:&nbsp;<code>schedule_cluster_details</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 54,\n   \"id\": \"5cbcb4b2-149f-43f7-b723-e2f3766c9980\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T04:54:58.091Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Step-1 Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Step-1 Output\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"schedule_cluster_details = []\\n\",\n    \"try:\\n\",\n    \"    if clusters[0] == False:\\n\",\n    \"        for instance in clusters[1]:\\n\",\n    \"            
instance[\\\"iam_role\\\"] = iam_role_arn\\n\",\n    \"            instance[\\\"pause_schedule_expression\\\"] = pause_schedule_expression\\n\",\n    \"            instance[\\\"resume_schedule_expression\\\"] = resume_schedule_expression\\n\",\n    \"            schedule_cluster_details.append(instance)\\n\",\n    \"except Exception as e:\\n\",\n    \"    for i in redshift_clusters:\\n\",\n    \"        instance = {}\\n\",\n    \"        instance[\\\"cluster_name\\\"] = i\\n\",\n    \"        instance[\\\"region\\\"] = region\\n\",\n    \"        instance[\\\"iam_role\\\"] = iam_role_arn\\n\",\n    \"        instance[\\\"pause_schedule_expression\\\"] = pause_schedule_expression\\n\",\n    \"        instance[\\\"resume_schedule_expression\\\"] = resume_schedule_expression\\n\",\n    \"        schedule_cluster_details.append(instance)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"d1f1a3bf-e7d4-4243-8a99-6e1b66abef29\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<p><strong>AWS Schedule Redshift Cluster Pause Resume Enabled</strong></p>\\n\",\n    \"<p>In this action, we pass all details collected from the step1 and schedule pause resume redshift cluster.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>iam_role_arn, cluster_name, region, pause_schedule_expression, resume_schedule_expression</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>schedule_info</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"de3a0d4c-4a05-4546-98ab-f0abea87594a\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     
\"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\",\n     \"CATEGORY_TYPE_DB\"\n    ],\n    \"actionDescription\": \"Use This Action to AWS find redshift cluster for which paused resume are not Enabled\",\n    \"actionEntryFunction\": \"aws_find_redshift_cluster_without_pause_resume_enabled\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Find Redshift Cluster without Pause Resume Enabled\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": true,\n    \"action_uuid\": \"433eede3d0f6e49e242c1c0f624617df7212a210e1fd5cde8cec0202d2b972aa\",\n    \"collapsed\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Use This Action to AWS find redshift cluster for which paused resume are not Enabled\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T04:30:55.971Z\"\n    },\n    \"id\": 12,\n    \"index\": 12,\n    \"inputData\": [\n     {\n      \"cluster_name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"cluster_name\\\\\\\\\\\")\\\"\"\n      },\n      \"iam_role_arn\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"iam_role\\\\\\\\\\\")\\\"\"\n      },\n      \"pause_schedule_expression\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"pause_schedule_expression\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      },\n      \"resume_schedule_expression\": 
{\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"resume_schedule_expression\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"cluster_name\": {\n        \"default\": \"\",\n        \"description\": \"Name of the redshift cluster\",\n        \"title\": \"cluster_name\",\n        \"type\": \"string\"\n       },\n       \"iam_role_arn\": {\n        \"default\": \"\",\n        \"description\": \"IAM role ARN for schedule redshift pause resume\",\n        \"title\": \"iam_role_arn\",\n        \"type\": \"string\"\n       },\n       \"pause_schedule_expression\": {\n        \"default\": \"\",\n        \"description\": \"The cron expression for the pause schedule.\",\n        \"title\": \"pause_schedule_expression\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"resume_schedule_expression\": {\n        \"default\": \"\",\n        \"description\": \"The cron expression for the resume schedule.\",\n        \"title\": \"resume_schedule_expression\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"iam_role_arn\",\n       \"cluster_name\",\n       \"pause_schedule_expression\",\n       \"resume_schedule_expression\"\n      ],\n      \"title\": \"aws_find_redshift_cluster_without_pause_resume_enabled\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"cluster_name\": \"cluster_name\",\n       \"iam_role_arn\": \"iam_role\",\n       \"pause_schedule_expression\": \"pause_schedule_expression\",\n       \"region\": \"region\",\n       \"resume_schedule_expression\": \"resume_schedule_expression\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": 
true,\n       \"value\": \"schedule_cluster_details\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Schedule Redshift Cluster Pause Resume Enabled\",\n    \"orderProperties\": [\n     \"region\",\n     \"iam_role_arn\",\n     \"cluster_name\",\n     \"pause_schedule_expression\",\n     \"resume_schedule_expression\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"schedule_info\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_find_redshift_cluster_without_pause_resume_enabled\"\n    ],\n    \"title\": \"AWS Schedule Redshift Cluster Pause Resume Enabled\",\n    \"uuid\": \"433eede3d0f6e49e242c1c0f624617df7212a210e1fd5cde8cec0202d2b972aa\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_schedule_pause_resume_enabled_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_schedule_pause_resume_enabled(handle,\\n\",\n    \"                                      iam_role_arn: str,\\n\",\n    \"                                      cluster_name: str,\\n\",\n    \"                                      region: str,\\n\",\n    \"                                      pause_schedule_expression: str,\\n\",\n    \"                                      resume_schedule_expression: str) -> 
List:\\n\",\n    \"    \\\"\\\"\\\"aws_schedule_pause_resume_enabled schedule pause and resume enabled.\\n\",\n    \"\\n\",\n    \"    :type iam_role_arn: str\\n\",\n    \"    :param iam_role_arn: The ARN of the IAM role.\\n\",\n    \"\\n\",\n    \"    :type cluster_name: str\\n\",\n    \"    :param cluster_name: The name of the Redshift cluster.\\n\",\n    \"\\n\",\n    \"    :type region: str\\n\",\n    \"    :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"    :type pause_schedule_expression: str\\n\",\n    \"    :param pause_schedule_expression: The cron expression for the pause schedule.\\n\",\n    \"\\n\",\n    \"    :type resume_schedule_expression: str\\n\",\n    \"    :param resume_schedule_expression: The cron expression for the resume schedule.\\n\",\n    \"\\n\",\n    \"    :rtype: List\\n\",\n    \"    :return: A list of pause and resume enabled status.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    pause_action_name = f\\\"{cluster_name}-scheduled-pause\\\"\\n\",\n    \"    resume_action_name = f\\\"{cluster_name}-scheduled-resume\\\"\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        redshift_client = handle.client('redshift', region_name=region)\\n\",\n    \"        # Schedule pause action\\n\",\n    \"        response_pause = redshift_client.create_scheduled_action(\\n\",\n    \"            ScheduledActionName=pause_action_name,\\n\",\n    \"            TargetAction={\\n\",\n    \"                'PauseCluster': {'ClusterIdentifier': cluster_name}\\n\",\n    \"            },\\n\",\n    \"            Schedule=pause_schedule_expression,\\n\",\n    \"            IamRole=iam_role_arn,\\n\",\n    \"            Enable=True\\n\",\n    \"        )\\n\",\n    \"        result.append(response_pause)\\n\",\n    \"        # Schedule resume action\\n\",\n    \"        response_resume = redshift_client.create_scheduled_action(\\n\",\n    \"            ScheduledActionName=resume_action_name,\\n\",\n    \"            
TargetAction={\\n\",\n    \"                'ResumeCluster': {'ClusterIdentifier': cluster_name}\\n\",\n    \"            },\\n\",\n    \"            Schedule=resume_schedule_expression,\\n\",\n    \"            IamRole=iam_role_arn,\\n\",\n    \"            Enable=True\\n\",\n    \"        )\\n\",\n    \"        result.append(response_resume)\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        print(error)\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"iam_role_arn\\\": \\\"iter.get(\\\\\\\\\\\"iam_role\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"cluster_name\\\": \\\"iter.get(\\\\\\\\\\\"cluster_name\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"pause_schedule_expression\\\": \\\"iter.get(\\\\\\\\\\\"pause_schedule_expression\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"resume_schedule_expression\\\": \\\"iter.get(\\\\\\\\\\\"resume_schedule_expression\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"schedule_cluster_details\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"region\\\",\\\"iam_role_arn\\\",\\\"cluster_name\\\",\\\"pause_schedule_expression\\\",\\\"resume_schedule_expression\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(schedule_cluster_details) > 0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"schedule_info\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = 
task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_schedule_pause_resume_enabled, lego_printer=aws_schedule_pause_resume_enabled_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"37022260-01cb-4cb7-9ed1-aeb30ac4ad64\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we demonstrated using unSkript's AWS actions to enable Redshift clusters that don't have pause resume enabled and enable the pause resume to those clusters. To view the full platform capabunscriptedof unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Ensure Redshift Clusters have Paused Resume Enabled\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"pause_schedule_expression\": {\n     \"default\": \"cron(0 0 ? * 7,1 *)\",\n     \"description\": \"The cron expression for the pause schedule.e.g. cron(0 0 * * 6-7\\n)\\nIn these expressions:\\n\\n0 0 represents 12:00 AM (midnight).\\n? 
is used for the day of the month field.\\n* means all possible values for the month field.\\n7,1 specifies Saturday (7) and Sunday (1) for the pause schedule.\",\n     \"title\": \"pause_schedule_expression\",\n     \"type\": \"string\"\n    },\n    \"redshift_clusters\": {\n     \"description\": \"List of Redshift clusters where pause resume needs to be implemented.\",\n     \"title\": \"redshift_clusters\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"resume_schedule_expression\": {\n     \"default\": \"cron(0 0 ? * 2 *)\",\n     \"description\": \"The cron expression for the resume schedule.e.g. cron(0 0 ? * 2 *)\\n\\n\\nIn these expressions:\\n\\n0 0 represents 12:00 AM (midnight).\\n? is used for the day of the month field.\\n* means all possible values for the month field.\\n2 represents Monday for the resume schedule.\",\n     \"title\": \"resume_schedule_expression\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"show_action_drag_hint_done\": {\n   \"environment_id\": \"1499f27c-6406-4fbd-bd1b-c6f92800018f\",\n   \"environment_name\": \"Staging\",\n   \"execution_id\": \"\",\n   \"inputs_for_searched_lego\": \"\",\n   \"notebook_id\": \"d4159cb3-6c83-4ba5-a2f7-d23c0777076b.ipynb\",\n   \"parameters\": null,\n   \"runbook_name\": \"gcp\",\n   \"search_string\": \"\",\n   \"show_tool_tip\": true,\n   \"tenant_id\": \"982dba5f-d9df-48ae-a5bf-ec1fc94d4882\",\n   \"tenant_url\": \"https://tenant-staging.alpha.unskript.io\",\n   \"user_email_id\": \"support+staging@unskript.com\",\n   \"workflow_id\": \"f8ead207-81c0-414a-a15b-76fcdefafe8d\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Ensure_Redshift_Clusters_have_Paused_Resume_Enabled.json",
    "content": "{\n    \"name\": \"AWS Ensure Redshift Clusters have Paused Resume Enabled\",\n    \"description\": \"This runbook finds Redshift clusters that don't have pause/resume enabled and schedules pause/resume for those clusters.\",\n    \"uuid\": \"8b9c4eadb5f2fb817be0952f3ecb28c8e490ece6281286a74a95d5fe25019400\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Get_Elb_Unhealthy_Instances.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"c2072425\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong></h3>\\n\",\n    \"<strong>To get AWS ELB unhealthy instances using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Get-AWS-ELB-Unhealthy-Instances\\\">Get AWS ELB Unhealthy Instances</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1. Get Unhealthy instances from ELB<br>2. 
Post Slack Message<code>\\n\",\n    \"</code></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 19,\n   \"id\": \"d5ec83b7-d75a-4e1b-a455-78b983a7fe50\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T11:34:46.665Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if elb_name and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide region for the ELB instances!\\\")\\n\",\n    \"if region == None:\\n\",\n    \"    region = \\\"\\\"\\n\",\n    \"if elb_name == None:\\n\",\n    \"    elb_name = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"205bd131\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 B\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 B\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-Unhealthy-instances-from-ELB\\\">Get Unhealthy instances from ELB</h3>\\n\",\n    \"<p>Here we will use unSkript&nbsp;<strong>Get Unhealthy instances from ELB</strong> action. 
This action is used to get all unhealthy instances from ELB, the instances which are out of service are considered unhealthy instances.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>elb_name</code>, <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable</strong>: <code>unhealthy_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"19d75911-e82d-4712-b0ad-d4e5ebb0da1d\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_ELB\"\n    ],\n    \"actionDescription\": \"Get Unhealthy instances from Elastic Load Balancer\",\n    \"actionEntryFunction\": \"aws_get_unhealthy_instances_from_elb\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"94707558cebedbcb77aabaec5d6d2d1bf3f4664db6e9e905d6d905a11a3ef8bc\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Get Unhealthy instances from ELB\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"6d2964252c14fd1439bdefd224d147ac75fc7fe06036c6d0956081fa45505139\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Get Unhealthy instances from Elastic Load Balancer\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T11:35:19.835Z\"\n    },\n    \"id\": 7,\n    \"index\": 7,\n    
\"inputData\": [\n     {\n      \"elb_name\": {\n       \"constant\": false,\n       \"value\": \"elb_name\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"elb_name\": {\n        \"default\": \"\",\n        \"description\": \"Name of the elastic load balancer.\",\n        \"title\": \"ELB Name\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region of the ELB.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_unhealthy_instances_from_elb\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get Unhealthy instances from ELB\",\n    \"orderProperties\": [\n     \"elb_name\",\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unhealthy_instances\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not elb_names\",\n    \"tags\": [\n     \"aws_get_unhealthy_instances_from_elb\"\n    ],\n    \"uuid\": \"6d2964252c14fd1439bdefd224d147ac75fc7fe06036c6d0956081fa45505139\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import 
beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_unhealthy_instances_from_elb_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_unhealthy_instances_from_elb(handle, elb_name: str = \\\"\\\", region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_unhealthy_instances_from_elb gives unhealthy instances from ELB\\n\",\n    \"\\n\",\n    \"        :type elb_name: string\\n\",\n    \"        :param elb_name: Name of the elastic load balancer.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS region.\\n\",\n    \"\\n\",\n    \"        :rtype: A tuple with execution results and a list of unhealthy instances from ELB\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    elb_list = []\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    if not elb_name:\\n\",\n    \"        for reg in all_regions:\\n\",\n    \"            try:\\n\",\n    \"                asg_client = handle.client('elb', region_name=reg)\\n\",\n    \"                response = aws_get_paginator(\\n\",\n    \"                    asg_client,\\n\",\n    \"                    \\\"describe_load_balancers\\\",\\n\",\n    \"                    \\\"LoadBalancerDescriptions\\\"\\n\",\n    \"                    )\\n\",\n    \"                for i in response:\\n\",\n    \"                    elb_dict = {}\\n\",\n    \"                    elb_dict[\\\"load_balancer_name\\\"] = i[\\\"LoadBalancerName\\\"]\\n\",\n    \"                    elb_dict[\\\"region\\\"] = reg\\n\",\n    \"                    elb_list.append(elb_dict)\\n\",\n    \"            except Exception:\\n\",\n    \"                pass\\n\",\n    \"\\n\",\n    \"    if elb_name 
and not region:\\n\",\n    \"        for reg in all_regions:\\n\",\n    \"            try:\\n\",\n    \"                asg_client = handle.client('elb', region_name=reg)\\n\",\n    \"                response = aws_get_paginator(\\n\",\n    \"                    asg_client,\\n\",\n    \"                    \\\"describe_load_balancers\\\",\\n\",\n    \"                    \\\"LoadBalancerDescriptions\\\"\\n\",\n    \"                    )\\n\",\n    \"                for i in response:\\n\",\n    \"                    if elb_name in i[\\\"LoadBalancerName\\\"]:\\n\",\n    \"                        elb_dict = {}\\n\",\n    \"                        elb_dict[\\\"load_balancer_name\\\"] = i[\\\"LoadBalancerName\\\"]\\n\",\n    \"                        elb_dict[\\\"region\\\"] = reg\\n\",\n    \"                        elb_list.append(elb_dict)\\n\",\n    \"            except Exception:\\n\",\n    \"                pass\\n\",\n    \"\\n\",\n    \"    if elb_name and region:\\n\",\n    \"        try:\\n\",\n    \"            elbClient = handle.client('elb', region_name=region)\\n\",\n    \"            res = elbClient.describe_instance_health(LoadBalancerName=elb_name)\\n\",\n    \"            for instance in res['InstanceStates']:\\n\",\n    \"                data_dict = {}\\n\",\n    \"                if instance['State'] == \\\"OutOfService\\\":\\n\",\n    \"                    data_dict[\\\"instance_id\\\"] = instance[\\\"InstanceId\\\"]\\n\",\n    \"                    data_dict[\\\"region\\\"] = reg\\n\",\n    \"                    data_dict[\\\"load_balancer_name\\\"] = i[\\\"LoadBalancerName\\\"]\\n\",\n    \"                    result.append(data_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    for elb in elb_list:\\n\",\n    \"        try:\\n\",\n    \"            elbClient = handle.client('elb', region_name=elb[\\\"region\\\"])\\n\",\n    \"            res = 
elbClient.describe_instance_health(LoadBalancerName=elb[\\\"load_balancer_name\\\"])\\n\",\n    \"            for instance in res['InstanceStates']:\\n\",\n    \"                data_dict = {}\\n\",\n    \"                if instance['State'] == \\\"OutOfService\\\":\\n\",\n    \"                    data_dict[\\\"instance_id\\\"] = instance[\\\"InstanceId\\\"]\\n\",\n    \"                    data_dict[\\\"region\\\"] = elb[\\\"region\\\"]\\n\",\n    \"                    data_dict[\\\"load_balancer_name\\\"] = elb[\\\"load_balancer_name\\\"]\\n\",\n    \"                    result.append(data_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"elb_name\\\": \\\"elb_name\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not elb_names\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"unhealthy_instances\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_unhealthy_instances_from_elb, lego_printer=aws_get_unhealthy_instances_from_elb_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"8fc2968d-700c-4264-84ab-9dbbeae25d3c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Output\\\">Modify Output</h3>\\n\",\n    \"<p>In this action, we modify the output from step 1A and step 1B and return a list of dictionaries of the unhealthy instances from the ELB.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> elb_instance_list</p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 21,\n   \"id\": \"983ce208-f598-4c1e-ab9a-282e90ba5592\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T11:35:22.550Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Output\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"elb_instance_list = []\\n\",\n    \"try:\\n\",\n    \"    if unhealthy_instances:\\n\",\n    \"        if unhealthy_instances[0] == False:\\n\",\n    \"            for instance in unhealthy_instances[1]:\\n\",\n    \"                elb_instance_list.append(instance)\\n\",\n    \"except Exception as e:\\n\",\n    \"    raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"061cdd14\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Post-Slack-Message\\\">Post Slack Message</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Post Slack Message</strong> action. This action takes channel: str and message: str as input. 
This input is used to post the message to the slack channel.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>message</code>, <code>channel</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable</strong>: <code>message_status</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 14,\n   \"id\": \"80e6665a-2c9a-4a33-89f8-ad221be338ec\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-09T09:45:24.587Z\"\n    },\n    \"id\": 44,\n    \"index\": 44,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"channel\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"f\\\"Unhealthy instances for elb:{elb_instance_list}\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of the slack channel where the message to be posted\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message to be sent\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    
\"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [\n     \"slack\",\n     \"message\"\n    ],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"message_status\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"channel\",\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"title\": \"Post Slack Message\",\n    \"verbs\": [\n     \"post\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"def legoPrinter(func):\\n\",\n    \"    def Printer(*args, **kwargs):\\n\",\n    \"        output = func(*args, **kwargs)\\n\",\n    \"        if output:\\n\",\n    \"            channel = kwargs[\\\"channel\\\"]\\n\",\n    \"            pp.pprint(print(f\\\"Message sent to Slack channel {channel}\\\"))\\n\",\n    \"        return output\\n\",\n    \"    return Printer\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@legoPrinter\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> bool:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return True\\n\",\n    \"    except SlackApiError as e:\\n\",\n 
   \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        return False\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return False\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"channel\\\",\\n\",\n    \"    \\\"message\\\": \\\"f\\\\\\\\\\\"Unhealthy instances for elb:{elb_instance_list}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"channel\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"message_status\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(slack_post_message, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"2fbfd774\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS and slack legos to perform AWS action and this runbook fetches the unHealthy EC2 instances for Classic ELB and posts to a slack 
channel. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Get unhealthy EC2 instances from ELB\",\n   \"parameters\": [\n    \"channel\",\n    \"elb_name\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"channel\": {\n     \"default\": \"\",\n     \"description\": \"Slack channel to post to\",\n     \"title\": \"channel\",\n     \"type\": \"string\"\n    },\n    \"elb_name\": {\n     \"description\": \"ELB Name\",\n     \"title\": \"elb_name\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"description\": \"Region for the ELB instances\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Get_Elb_Unhealthy_Instances.json",
    "content": "{\n  \"name\": \"AWS Get unhealthy EC2 instances from ELB\",\n  \"description\": \"This runbook can be used to list unhealthy EC2 instances from an ELB. Sometimes it is difficult to determine why Amazon EC2 Auto Scaling didn't terminate an unhealthy instance from Activity History alone. You can find further details about an unhealthy instance's state, and how to terminate that instance, by checking a few extra things.\",\n  \"uuid\": \"94707558cebedbcb77aabaec5d6d2d1bf3f4664db6e9e905d6d905a11a3ef8bc\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/AWS_Get_Redshift_Daily_Product_Costs.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"e54624c7-4d3e-431a-adda-d2e0e736ed65\",\n   \"metadata\": {\n    \"orderProperties\": [],\n    \"tags\": []\n   },\n   \"source\": [\n    \"<h2 id=\\\"Introduction\\\">Introduction<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Introduction\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"<p>This RunBook takes data from your AWS Cost and Usage Report, and generates a chart of daily usage for the month for each AWS service.</p>\\n\",\n    \"<p>It can also be configured to send alerts to slack if a day-over-day change in cost is over the defined threshold.</p>\\n\",\n    \"<p>Read more in our blog posts:</p>\\n\",\n    \"<p><a href=\\\"https://unskript.com/blog/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting/\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://unskript.com/blog/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting/</a></p>\\n\",\n    \"<p><a href=\\\"https://unskript.com/blog/cloud-costs-charting-daily-ec2-usage-and-cost/\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://unskript.com/blog/cloud-costs-charting-daily-ec2-usage-and-cost/</a></p>\\n\",\n    \"<h2 id=\\\"Prerequisites\\\">Prerequisites<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Prerequisites\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"<p>This RunBook requires a Cost and Usage report in RedShift (here's a link to the <a href=\\\"https://docs.aws.amazon.com/cur/latest/userguide/cur-create.html\\\">AWS docs</a>).</p>\\n\",\n    \"<p>To Update the Redshift table daily - take a look at the Update Redshift database from S3 RunBook.&nbsp; This will ensure that the data in the Redshift table is up to date.</p>\\n\",\n    \"<h2 id=\\\"Steps\\\">Steps<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"<ol>\\n\",\n    \"<li>Get the ARN of the AWS Secret that can 
access your RedShift cluster.</li>\\n\",\n    \"<li>Create the SQL Query.&nbsp; This query is built to get the sum of daily cost for each service in AWS.&nbsp; It automatically creates the tablename to match the month/year.</li>\\n\",\n    \"<li>RedShift Query - Kicks off the Query.&nbsp;&nbsp;</li>\\n\",\n    \"<li>RedShift Query Details - This tells us the status of the query.&nbsp; We are looking for the status to be equal to \\\"finished.\\\"&nbsp; TODO- add polling to check for this automatically.</li>\\n\",\n    \"<li>Get RedShift Result: Once the query has been completed - this Action pulls the data from Redshift.</li>\\n\",\n    \"<li>Chart the data: The data is pulled into a dataframe, and several charts are made - for the month, and the last 7 days.</li>\\n\",\n    \"<li>Build Alert - takes the last 2 days and compares the $$ spend.&nbsp; If the change is upwards, and it exceeds the threshold defined - run the last Action.</li>\\n\",\n    \"<li>Post image to Slack: If the alert is tripped - we'll send an alert to Slack with the chart from the last 7 days.&nbsp; Note: It also automatically sends every Monday as well.</li>\\n\",\n    \"</ol>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 14,\n   \"id\": \"b3644c49-9166-4715-a097-2f27d5c81532\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"1ce9f756a4f1503df353fd5e8df7ea32ebe801a93c607251fea1a5367861da89\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    
\"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Given a Secret Name - this Action returns the Secret ARN\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:27:42.487Z\"\n    },\n    \"id\": 189,\n    \"index\": 189,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"secret_name\": {\n       \"constant\": false,\n       \"value\": \"secret_name\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"secret_name\": {\n        \"description\": \"AWS Secret Name\",\n        \"title\": \"secret_name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"secret_name\"\n      ],\n      \"title\": \"aws_get_secrets_manager_secretARN\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Secrets Manager SecretARN\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"secret_name\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"secretArn\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"from __future__ import annotations\\n\",\n    \"\\n\",\n    \"from typing import Optional\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    
\"def aws_get_secrets_manager_secretARN_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint({\\\"secret\\\": output})\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"@beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_secrets_manager_secretARN(handle, region: str, secret_name:str) -> str:\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    # Create a Secrets Manager client\\n\",\n    \"\\n\",\n    \"    client = handle.client(\\n\",\n    \"        service_name='secretsmanager',\\n\",\n    \"        region_name=region\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    get_secret_value_response = client.get_secret_value(\\n\",\n    \"        SecretId=secret_name\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"    #print(get_secret_value_response)\\n\",\n    \"    # Decrypts secret using the associated KMS key.\\n\",\n    \"    secretArn = get_secret_value_response['ARN']\\n\",\n    \"    return secretArn\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"secret_name\\\": \\\"secret_name\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"secretArn\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_secrets_manager_secretARN, lego_printer=aws_get_secrets_manager_secretARN_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 15,\n   \"id\": \"6db09689-1a22-4cac-81be-cb1e3d6e7ef0\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:27:47.517Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    
\"name\": \"Create SQL Query\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create SQL Query\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import datetime\\n\",\n    \"\\n\",\n    \"today = datetime.datetime.now()\\n\",\n    \"\\n\",\n    \"yearmonth = today.strftime('%Y%m')\\n\",\n    \"tableName = 'awsbilling'+ yearmonth\\n\",\n    \"todayDay = int(today.strftime('%d'))\\n\",\n    \"yesterDay = 0\\n\",\n    \"if todayDay >1:\\n\",\n    \"    yesterDay = todayDay - 1\\n\",\n    \"\\n\",\n    \"sqlQuery = f\\\"select lineitem_productcode, date_part(day, cast(lineitem_usagestartdate as date)) as day, SUM((lineitem_unblendedcost)::numeric(37,4)) as cost from {tableName} group by lineitem_productcode, day order by cost desc;\\\"\\n\",\n    \"\\n\",\n    \"print(sqlQuery)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"99b78f89-b8e0-4aba-86b1-60ad14274207\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_REDSHIFT\"\n    ],\n    \"actionDescription\": \"Make a SQL Query to the given AWS Redshift database\",\n    \"actionEntryFunction\": \"aws_create_redshift_query\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_STR\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Redshift Query\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"edacb40b6b085473676c85af90fd36de2b23e8fd763ee25c787e8fd629c45773\",\n    \"continueOnError\": 
false,\n    \"credentialsJson\": {},\n    \"description\": \"Make a SQL Query to the given AWS Redshift database\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"cluster\": {\n       \"constant\": false,\n       \"value\": \"cluster\"\n      },\n      \"database\": {\n       \"constant\": false,\n       \"value\": \"database\"\n      },\n      \"query\": {\n       \"constant\": false,\n       \"value\": \"sqlQuery\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"secretArn\": {\n       \"constant\": false,\n       \"value\": \"secretArn\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"cluster\": {\n        \"description\": \"Name of Redshift Cluster\",\n        \"title\": \"cluster\",\n        \"type\": \"string\"\n       },\n       \"database\": {\n        \"description\": \"Name of your Redshift database\",\n        \"title\": \"database\",\n        \"type\": \"string\"\n       },\n       \"query\": {\n        \"description\": \"sql query to run\",\n        \"title\": \"query\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"secretArn\": {\n        \"description\": \"Value of your Secrets Manager ARN\",\n        \"title\": \"secretArn\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"query\",\n       \"cluster\",\n       \"database\",\n       \"secretArn\"\n      ],\n      \"title\": \"aws_create_redshift_query\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Redshift Query\",\n    \"orderProperties\": [\n     \"region\",\n     \"query\",\n     \"cluster\",\n     \"database\",\n     
\"secretArn\"\n    ],\n    \"printOutput\": true,\n    \"tags\": [],\n    \"uuid\": \"edacb40b6b085473676c85af90fd36de2b23e8fd763ee25c787e8fd629c45773\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from __future__ import annotations\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_redshift_query(\\n\",\n    \"    handle,\\n\",\n    \"    region: str,\\n\",\n    \"    cluster:str,\\n\",\n    \"    database:str,\\n\",\n    \"    secretArn: str,\\n\",\n    \"    query:str\\n\",\n    \"    ) -> str:\\n\",\n    \"\\n\",\n    \"    # Input param validation.\\n\",\n    \"    #major change\\n\",\n    \"    client = handle.client('redshift-data', region_name=region)\\n\",\n    \"    # execute the query\\n\",\n    \"    response = client.execute_statement(\\n\",\n    \"        ClusterIdentifier=cluster,\\n\",\n    \"        Database=database,\\n\",\n    \"        SecretArn=secretArn,\\n\",\n    \"        Sql=query\\n\",\n    \"    )\\n\",\n    \"    resultId = response['Id']\\n\",\n    \"    print(response)\\n\",\n    \"    print(\\\"resultId\\\",resultId)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    return resultId\\n\",\n    \"\\n\",\n    \"#make a change\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        
print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"query\\\": \\\"sqlQuery\\\",\\n\",\n    \"    \\\"cluster\\\": \\\"cluster\\\",\\n\",\n    \"    \\\"database\\\": \\\"database\\\",\\n\",\n    \"    \\\"secretArn\\\": \\\"secretArn\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_create_redshift_query, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 17,\n   \"id\": \"b285b379-5226-4896-89db-b5209e19662f\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"26435cb53d995eccf75fd1e0692e611fcdb1b7e09511bbfe365f0e9a5abc416f\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Given an QueryId, this Action will give you the status of the Query, along with other data like  the number of lines/\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:27:52.719Z\"\n    },\n    \"id\": 204,\n    \"index\": 204,\n    \"inputData\": [\n     {\n      \"queryId\": {\n       \"constant\": false,\n       \"value\": \"queryId\"\n      },\n      \"region\": {\n       \"constant\": 
false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"queryId\": {\n        \"description\": \"Id of Redshift Query\",\n        \"title\": \"queryId\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"queryId\"\n      ],\n      \"title\": \"aws_get_redshift_query_details\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Redshift Query Details\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"queryId\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from __future__ import annotations\\n\",\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from typing import Optional\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_redshift_query_details(handle, region: str, queryId:str) -> Dict:\\n\",\n    \"\\n\",\n    \"    client = handle.client('redshift-data', region_name=region)\\n\",\n    \"    response = client.describe_statement(\\n\",\n    \"    Id=queryId\\n\",\n    \"    )\\n\",\n    \"    resultReady = 
response['HasResultSet']\\n\",\n    \"    queryTimeNs = response['Duration']\\n\",\n    \"    ResultRows = response['ResultRows']\\n\",\n    \"    details = {\\\"Status\\\": response['Status'],\\n\",\n    \"                \\\"resultReady\\\": resultReady, \\n\",\n    \"               \\\"queryTimeNs\\\":queryTimeNs,\\n\",\n    \"               \\\"ResultRows\\\":ResultRows\\n\",\n    \"              }\\n\",\n    \"\\n\",\n    \"    #return resultReady\\n\",\n    \"    return details\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"\\n\",\n    \"    pp = pprint.PrettyPrinter(indent=4)\\n\",\n    \"    pp.pprint(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"queryId\\\": \\\"queryId\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_redshift_query_details, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 18,\n   \"id\": \"eae5bad1-0dfd-46f8-8efe-10ffe3b9c40d\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"95e51ea5a6230444928042f7932d680fcbc575d053dfa8ed6b60bc7e9b50adcc\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    
\"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Given a QueryId, Get the Query Result, and format into a List\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:28:05.326Z\"\n    },\n    \"id\": 218,\n    \"index\": 218,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"resultId\": {\n       \"constant\": false,\n       \"value\": \"queryId\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region\",\n        \"title\": \"region\",\n        \"type\": \"string\"\n       },\n       \"resultId\": {\n        \"description\": \"Redshift Query Result\",\n        \"title\": \"resultId\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"resultId\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_redshift_result\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Redshift Result\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"resultId\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"redshiftresult\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": false,\n    \"tags\": [],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from __future__ import annotations\\n\",\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"from beartype import 
beartype\\n\",\n    \"import time\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_redshift_result(handle, region:str, resultId: str) -> List:\\n\",\n    \"\\n\",\n    \"    time.sleep(10)\\n\",\n    \"    client = handle.client('redshift-data', region_name=region)\\n\",\n    \"    result = client.get_statement_result(\\n\",\n    \"        Id=resultId\\n\",\n    \"    )\\n\",\n    \"    #result has the Dictionary, but it is not easily queried\\n\",\n    \"    #get all the columns into an array\\n\",\n    \"    columnNames = []\\n\",\n    \"    for column in result['ColumnMetadata']:\\n\",\n    \"        columnNames.append(column['label'])\\n\",\n    \"    #print(columnNames)\\n\",\n    \"\\n\",\n    \"    #now let's make the output into a dict\\n\",\n    \"    listResult = []\\n\",\n    \"    for record in result['Records']:\\n\",\n    \"\\n\",\n    \"        for key, value in record[0].items():\\n\",\n    \"            rowId = value\\n\",\n    \"        entryCounter = 0\\n\",\n    \"        entryDict = {}\\n\",\n    \"        for entry in record:\\n\",\n    \"\\n\",\n    \"            for key, value in entry.items():\\n\",\n    \"                entryDict[columnNames[entryCounter]] = value\\n\",\n    \"            entryCounter +=1\\n\",\n    \"        #print(\\\"entryDict\\\",entryDict)\\n\",\n    \"        listResult.append(entryDict)\\n\",\n    \"\\n\",\n    \"    #print(listResult)\\n\",\n    \"    return listResult\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"resultId\\\": \\\"queryId\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"redshiftresult\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=False)\\n\",\n    \"(err, hdl, args) = 
task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_redshift_result, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"id\": \"b42d2d45-0a95-4f16-8b44-0cced11ee848\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T01:06:24.931Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Build Chart\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Build Chart\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import matplotlib as mpl\\n\",\n    \"mpl.use('agg')\\n\",\n    \"from matplotlib.figure import Figure\\n\",\n    \"import panel\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"import pandas as pd\\n\",\n    \"import pprint\\n\",\n    \"import io, base64, urllib\\n\",\n    \"from PIL import Image\\n\",\n    \"\\n\",\n    \"df = pd.DataFrame.from_dict(redshiftresult)\\n\",\n    \"df['cost']=df['cost'].astype(float)\\n\",\n    \"df['day']=df['day'].astype(int)\\n\",\n    \"\\n\",\n    \"%matplotlib inline\\n\",\n    \"\\n\",\n    \"font = {'size' : 22}\\n\",\n    \"dfpivot = df.pivot(index='day', columns='lineitem_productcode', values='cost')\\n\",\n    \"dfpivot.plot(linewidth=5,ylabel=\\\"daily cost in $\\\", figsize=(16, 9) )\\n\",\n    \"\\n\",\n    \"plt.rc('font', **font)\\n\",\n    \"plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))\\n\",\n    \"plt.xticks(fontsize=22)\\n\",\n    \"plt.yticks(fontsize=22)\\n\",\n    \"\\n\",\n    \"plt.show()\\n\",\n    \"\\n\",\n    \"dfpivot.plot(linewidth=5,ylabel=\\\"daily cost in $\\\", figsize=(16, 9) )\\n\",\n    \"plt.ylim((0,10))\\n\",\n    \"plt.rc('font', **font)\\n\",\n    \"plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))\\n\",\n    \"plt.show()\\n\",\n    \"\\n\",\n    
\"dfpivot.plot(linewidth=5,ylabel=\\\"daily cost in $\\\", figsize=(16, 9) )\\n\",\n    \"plt.xlim((todayDay-7,todayDay))\\n\",\n    \"plt.rc('font', **font)\\n\",\n    \"plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))\\n\",\n    \"fig = plt.gcf()\\n\",\n    \"plt.show()\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"fig.savefig('awsProducts7Day.jpg')\\n\",\n    \"im  = Image.open('awsProducts7Day.jpg')\\n\",\n    \"display(im)\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 19,\n   \"id\": \"5f476f1b-a7b0-4927-9c7d-6335e9d3e7da\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:34:22.658Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"build alert\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"build alert\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from datetime import date \\n\",\n    \"\\n\",\n    \"\\n\",\n    \"today = todayDay -1\\n\",\n    \"yesterday =yesterDay -1\\n\",\n    \"\\n\",\n    \"print(today)\\n\",\n    \"bigchange = {}\\n\",\n    \"listChange = []\\n\",\n    \"alert = False\\n\",\n    \"alertText = ''\\n\",\n    \"if yesterday >0:\\n\",\n    \"    for instance in dfpivot.columns:\\n\",\n    \"        todayCost = dfpivot.at[today, instance]\\n\",\n    \"        yesterdayCost = dfpivot.at[yesterday, instance]\\n\",\n    \"\\n\",\n    \"        delta =(todayCost-yesterdayCost)/yesterdayCost\\n\",\n    \"        if abs(todayCost-yesterdayCost) >1: \\n\",\n    \"            if delta >.05:\\n\",\n    \"                #print( instance, delta,dfpivot.at[today, instance], dfpivot.at[yesterday, instance])\\n\",\n    \"                bigchange[instance] = {\\\"delta\\\":delta, \\\"todayCost\\\":todayCost,\\\"yesterdayCost\\\":yesterdayCost}\\n\",\n    \"                listChange.append([instance, yesterdayCost, 
todayCost])\\n\",\n    \"                alertText = '@here There has been a large change in AWS Costs'\\n\",\n    \"                alert = True\\n\",\n    \"            if date.today().weekday() == 0:\\n\",\n    \"                alertText = 'Today is Monday, Here is the last week of AWS Costs'\\n\",\n    \"                alert = True\\n\",\n    \"    print(listChange)\\n\",\n    \"    print(\\\"bigchange\\\", bigchange)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 20,\n   \"id\": \"e0091066-452a-4c06-81fc-3704ee90168c\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"customCell\": true,\n    \"description\": \"Post Slack Message\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:34:29.971Z\"\n    },\n    \"id\": 82,\n    \"index\": 82,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"\\\"devrel_doug_test1\\\"\"\n      },\n      \"comment\": {\n       \"constant\": false,\n       \"value\": \"alertText\"\n      },\n      \"image\": {\n       \"constant\": false,\n       \"value\": \"'awsProducts7Day.jpg'\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"default\": \"\",\n        
\"description\": \"Name of slack channel.\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"comment\": {\n        \"default\": \"\",\n        \"description\": \"Comment to add with image\",\n        \"required\": false,\n        \"title\": \"comment\",\n        \"type\": \"string\"\n       },\n       \"image\": {\n        \"default\": \"\",\n        \"description\": \"image to upload\",\n        \"title\": \"image\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"image\"\n      ],\n      \"title\": \"slack_post_image\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Image\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"channel\",\n     \"image\",\n     \"comment\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"alert\",\n    \"tags\": [],\n    \"title\": \"Post Slack Image\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_image_printer(output):\\n\",\n    \"    if output is not None:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"    else:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_image(\\n\",\n    \"   
     handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        comment: str,\\n\",\n    \"        image: str) -> str:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        result = handle.files_upload(\\n\",\n    \"            channels=channel,\\n\",\n    \"            initial_comment=comment,\\n\",\n    \"            file=image\\n\",\n    \"        )\\n\",\n    \"        return f\\\"Successfully sent message on channel: #{channel}\\\"\\n\",\n    \"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return f\\\"Unable to send message on {channel}\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"\\\\\\\\\\\"devrel_doug_test1\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"image\\\": \\\"'awsProducts7Day.jpg'\\\",\\n\",\n    \"    \\\"comment\\\": \\\"alertText\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"alert\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(slack_post_image, lego_printer=slack_post_image_printer, hdl=hdl, args=args)\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Redshift Get Daily Costs from AWS Products\",\n   \"parameters\": [\n    \"cluster\",\n    \"database\",\n    \"region\",\n    \"secret_name\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n 
   \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"cluster\": {\n     \"description\": \"The Redshift Cluster to be queried\",\n     \"title\": \"cluster\",\n     \"type\": \"string\"\n    },\n    \"database\": {\n     \"description\": \"the Redshift Database in our query\",\n     \"title\": \"database\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"secret_name\": {\n     \"description\": \"AWS Secret Name to retrieve ARN for\",\n     \"title\": \"secret_name\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Get_Redshift_Daily_Product_Costs.json",
    "content": "{\n    \"name\": \"AWS Redshift Get Daily Costs from AWS Products\",\n    \"description\": \"This runbook can be used to create charts and alerts around Your AWS product usage. It requires a Cost and USage report to be live in RedShift.\",  \n    \"uuid\": \"a79201f221993367e23dd9603ed7ef5123324353d717c566f902f7ca6e471f5c\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_CLOUDOPS\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Get_Redshift_EC2_Daily_Costs.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f46958a9-6580-475a-b845-72aacface2dc\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Introduction\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Introduction\"\n   },\n   \"source\": [\n    \"<h2 id=\\\"Introduction\\\">Introduction<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Introduction\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"<p>This RunBook takes data from your AWS Cost and Usage Report, and generates a chart of daily usage for the month for each AWS service.</p>\\n\",\n    \"<p>It can also be configured to send alerts to slack if a day-over-day change in cost is over the defined threshold.</p>\\n\",\n    \"<p>Read more in our blog posts:</p>\\n\",\n    \"<p><a href=\\\"https://unskript.com/blog/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting/\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://unskript.com/blog/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting/</a></p>\\n\",\n    \"<p><a href=\\\"https://unskript.com/blog/cloud-costs-charting-daily-ec2-usage-and-cost/\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://unskript.com/blog/cloud-costs-charting-daily-ec2-usage-and-cost/</a></p>\\n\",\n    \"<h2 id=\\\"Prerequisites\\\">Prerequisites<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Prerequisites\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"<p>This RunBook requires a Cost and Usage report in RedShift (here's a link to the <a href=\\\"https://docs.aws.amazon.com/cur/latest/userguide/cur-create.html\\\">AWS docs</a>).</p>\\n\",\n    \"<p>To Update the Redshift table daily - take a look at the Update Redshift database from S3 RunBook.&nbsp; This will ensure that the data in the Redshift table is up to date.</p>\\n\",\n    \"<h2 id=\\\"Steps\\\">Steps<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps\\\" 
target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"<ol>\\n\",\n    \"<li>Get the ARN of the AWS Secret that can access your RedShift cluster.</li>\\n\",\n    \"<li>Create the SQL Query.&nbsp; This query is built to get the sum of daily cost for each EC2 instance type in your AWS region.&nbsp; It automatically creates the tablename to match the month/year.</li>\\n\",\n    \"<li>RedShift Query - Kicks off the Query.&nbsp;&nbsp;</li>\\n\",\n    \"<li>RedShiftQuery Details - This tells us the status of the query.&nbsp; We are looking for the status to be equal to \\\"finished.\\\"&nbsp; TODO- add polling to check for this automatically.</li>\\n\",\n    \"<li>Get RedShift Result: Once the query has been completed - this Action pulls the data from Redshift</li>\\n\",\n    \"<li>Chart the data: The data is pulledinto a dataframe, and several charts are made - for the month, and the last 7 days.</li>\\n\",\n    \"<li>Bulid Alert - takes the last 2 days and compares the $$ spend.&nbsp; If the change is upwards, and it exceeds the threshold defined - run the last Action</li>\\n\",\n    \"<li>Post image to Slack: If the alert is tripped - we'll send an alert to Slack with the chart from the last 7 days.&nbsp; Note: It also automatically sends every Monday as well.</li>\\n\",\n    \"</ol>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"78914f28-2fd7-477a-8b43-080c736515e8\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_SECRET_MANAGER\"\n    ],\n    \"actionDescription\": \"Given a Secret Name - this Action returns the Secret ARN\",\n    \"actionEntryFunction\": \"aws_get_secrets_manager_secretARN\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    
\"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_STR\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Get Secrets Manager SecretARN\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"1ce9f756a4f1503df353fd5e8df7ea32ebe801a93c607251fea1a5367861da89\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Given a Secret Name - this Action returns the Secret ARN\",\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"secret_name\": {\n       \"constant\": false,\n       \"value\": \"secret_name\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"secret_name\": {\n        \"description\": \"AWS Secret Name\",\n        \"title\": \"secret_name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"secret_name\"\n      ],\n      \"title\": \"aws_get_secrets_manager_secretARN\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Secrets Manager SecretARN\",\n    \"orderProperties\": [\n     \"region\",\n     \"secret_name\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"secretArn\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n  
  \"tags\": [\n     \"aws_get_secrets_manager_secretARN\"\n    ],\n    \"uuid\": \"1ce9f756a4f1503df353fd5e8df7ea32ebe801a93c607251fea1a5367861da89\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from __future__ import annotations\\n\",\n    \"import pprint\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from beartype import beartype\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_secrets_manager_secretARN_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint({\\\"secret\\\": output})\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_secrets_manager_secretARN(handle, region: str, secret_name: str) -> str:\\n\",\n    \"    # Create a Secrets Manager client\\n\",\n    \"    client = handle.client(\\n\",\n    \"        service_name='secretsmanager',\\n\",\n    \"        region_name=region\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        get_secret_value_response = client.get_secret_value(\\n\",\n    \"            SecretId=secret_name\\n\",\n    \"        )\\n\",\n    \"    except ClientError as e:\\n\",\n    \"        # For a list of exceptions thrown, see\\n\",\n    \"        # https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html\\n\",\n    \"        raise e\\n\",\n    \"    # Return the ARN of the secret.\\n\",\n    \"    secretArn = get_secret_value_response['ARN']\\n\",\n    \"    return secretArn\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    
\\\"secret_name\\\": \\\"secret_name\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"secretArn\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_secrets_manager_secretARN, lego_printer=aws_get_secrets_manager_secretARN_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"6db09689-1a22-4cac-81be-cb1e3d6e7ef0\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:39:01.145Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Create SQL Query\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create SQL Query\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import datetime\\n\",\n    \"\\n\",\n    \"today = datetime.datetime.now()\\n\",\n    \"\\n\",\n    \"yearmonth = today.strftime('%Y%m')\\n\",\n    \"tableName = 'awsbilling'+ yearmonth\\n\",\n    \"todayDay = int(today.strftime('%d'))\\n\",\n    \"yesterDay = 0\\n\",\n    \"if todayDay >1:\\n\",\n    \"    yesterDay = todayDay - 1\\n\",\n    \"\\n\",\n    \"sqlQuery = f\\\"SELECT date_part(day, cast(lineitem_usagestartdate as date)) as day, product_instancetype,SUM(lineitem_usageamount)::numeric(37, 4) AS usage_hours, SUM((lineitem_unblendedcost)::numeric(37,4)) AS usage_cost FROM {tableName} WHERE length(lineitem_usagestartdate)>8 AND product_productfamily = 'Compute Instance' AND pricing_unit IN ('Hours', 'Hrs') GROUP BY  day, product_instancetype ORDER BY 1 DESC, 3 DESC, 2 \\\"\\n\",\n    \"print(sqlQuery)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"57562d3e-4026-4f85-995d-d912318a259a\",\n   \"metadata\": {\n    \"accessType\": 
\"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"edacb40b6b085473676c85af90fd36de2b23e8fd763ee25c787e8fd629c45773\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Make a SQL Query to the given AWS Redshift database\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:39:06.945Z\"\n    },\n    \"id\": 241,\n    \"index\": 241,\n    \"inputData\": [\n     {\n      \"cluster\": {\n       \"constant\": false,\n       \"value\": \"cluster\"\n      },\n      \"database\": {\n       \"constant\": false,\n       \"value\": \"database\"\n      },\n      \"query\": {\n       \"constant\": false,\n       \"value\": \"sqlQuery\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"secretArn\": {\n       \"constant\": false,\n       \"value\": \"secretArn\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"cluster\": {\n        \"description\": \"Name of Redshift Cluster\",\n        \"title\": \"cluster\",\n        \"type\": \"string\"\n       },\n       \"database\": {\n        \"description\": \"Name of your Redshift database\",\n        \"title\": \"database\",\n        \"type\": \"string\"\n       },\n       \"query\": {\n        \"description\": \"sql query to run\",\n        \"title\": \"query\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": 
\"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"secretArn\": {\n        \"description\": \"Value of your Secrets Manager ARN\",\n        \"title\": \"secretArn\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"query\",\n       \"cluster\",\n       \"database\",\n       \"secretArn\"\n      ],\n      \"title\": \"aws_create_redshift_query\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Redshift Query\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"query\",\n     \"cluster\",\n     \"database\",\n     \"secretArn\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"queryId\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_create_redshift_query\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from __future__ import annotations\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_redshift_query(handle, region: str,cluster:str, database:str, secretArn: str, query:str) -> str:\\n\",\n    \"\\n\",\n    \"    # Input param validation.\\n\",\n    \"    #major change\\n\",\n    \"    client = handle.client('redshift-data', region_name=region)\\n\",\n    \"    # define your query\\n\",\n    \"    query = query\\n\",\n    \"    # execute the query\\n\",\n    \" 
   response = client.execute_statement(\\n\",\n    \"        ClusterIdentifier=cluster,\\n\",\n    \"        Database=database,\\n\",\n    \"        SecretArn=secretArn,\\n\",\n    \"        Sql=query\\n\",\n    \"    )\\n\",\n    \"    resultId = response['Id']\\n\",\n    \"    print(response)\\n\",\n    \"    print(\\\"resultId\\\",resultId)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    return resultId\\n\",\n    \"\\n\",\n    \"#make a change\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"cluster\\\": \\\"cluster\\\",\\n\",\n    \"    \\\"database\\\": \\\"database\\\",\\n\",\n    \"    \\\"query\\\": \\\"sqlQuery\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"secretArn\\\": \\\"secretArn\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"queryId\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_create_redshift_query, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"b285b379-5226-4896-89db-b5209e19662f\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    
\"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"26435cb53d995eccf75fd1e0692e611fcdb1b7e09511bbfe365f0e9a5abc416f\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Given a QueryId, this Action will give you the status of the Query, along with other data like the number of lines.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:39:12.638Z\"\n    },\n    \"id\": 204,\n    \"index\": 204,\n    \"inputData\": [\n     {\n      \"queryId\": {\n       \"constant\": false,\n       \"value\": \"queryId\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"queryId\": {\n        \"description\": \"Id of Redshift Query\",\n        \"title\": \"queryId\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"queryId\"\n      ],\n      \"title\": \"aws_get_redshift_query_details\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Redshift Query Details\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"queryId\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_get_redshift_query_details\"\n    ],\n    
\"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from __future__ import annotations\\n\",\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from typing import Optional\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_redshift_query_details(handle, region: str, queryId:str) -> Dict:\\n\",\n    \"\\n\",\n    \"    client = handle.client('redshift-data', region_name=region)\\n\",\n    \"    response = client.describe_statement(\\n\",\n    \"    Id=queryId\\n\",\n    \"    )\\n\",\n    \"    resultReady = response['HasResultSet']\\n\",\n    \"    queryTimeNs = response['Duration']\\n\",\n    \"    ResultRows = response['ResultRows']\\n\",\n    \"    details = {\\\"Status\\\": response['Status'],\\n\",\n    \"                \\\"resultReady\\\": resultReady, \\n\",\n    \"               \\\"queryTimeNs\\\":queryTimeNs,\\n\",\n    \"               \\\"ResultRows\\\":ResultRows\\n\",\n    \"              }\\n\",\n    \"\\n\",\n    \"    #return resultReady\\n\",\n    \"    return details\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"\\n\",\n    \"    pp = pprint.PrettyPrinter(indent=4)\\n\",\n    \"    pp.pprint(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"queryId\\\": \\\"queryId\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, 
args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_redshift_query_details, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"id\": \"eae5bad1-0dfd-46f8-8efe-10ffe3b9c40d\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"95e51ea5a6230444928042f7932d680fcbc575d053dfa8ed6b60bc7e9b50adcc\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Given a QueryId, Get the Query Result, and format into a List\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:39:29.257Z\"\n    },\n    \"id\": 218,\n    \"index\": 218,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"resultId\": {\n       \"constant\": false,\n       \"value\": \"queryId\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region\",\n        \"title\": \"region\",\n        \"type\": \"string\"\n       },\n       \"resultId\": {\n        \"description\": \"Redshift Query Result\",\n        \"title\": \"resultId\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"resultId\",\n       \"region\"\n      ],\n      \"title\": 
\"aws_get_redshift_result\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Redshift Result\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"resultId\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"redshiftresult\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": false,\n    \"tags\": [\n     \"aws_get_redshift_result\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from __future__ import annotations\\n\",\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"import time\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_redshift_result(handle, region:str, resultId: str) -> List:\\n\",\n    \"\\n\",\n    \"    time.sleep(10)\\n\",\n    \"    client = handle.client('redshift-data', region_name=region)\\n\",\n    \"    result = client.get_statement_result(\\n\",\n    \"        Id=resultId\\n\",\n    \"    )\\n\",\n    \"    #result has the Dictionary, but it is not easily queried\\n\",\n    \"    #get all the columns into an array\\n\",\n    \"    columnNames = []\\n\",\n    \"    for column in result['ColumnMetadata']:\\n\",\n    \"        columnNames.append(column['label'])\\n\",\n    \"    #print(columnNames)\\n\",\n    \"\\n\",\n    \"    #now let's make the output into a dict\\n\",\n    \"    listResult = []\\n\",\n    \"    for record in result['Records']:\\n\",\n    \"\\n\",\n    \"   
     for key, value in record[0].items():\\n\",\n    \"            rowId = value\\n\",\n    \"        entryCounter = 0\\n\",\n    \"        entryDict = {}\\n\",\n    \"        for entry in record:\\n\",\n    \"\\n\",\n    \"            for key, value in entry.items():\\n\",\n    \"                entryDict[columnNames[entryCounter]] = value\\n\",\n    \"            entryCounter +=1\\n\",\n    \"        #print(\\\"entryDict\\\",entryDict)\\n\",\n    \"        listResult.append(entryDict)\\n\",\n    \"\\n\",\n    \"    #print(listResult)\\n\",\n    \"    return listResult\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"resultId\\\": \\\"queryId\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"redshiftresult\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=False)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_redshift_result, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"id\": \"b42d2d45-0a95-4f16-8b44-0cced11ee848\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:39:32.068Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Build Chart\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Build Chart\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import matplotlib as mpl\\n\",\n    \"mpl.use('agg')\\n\",\n    \"from matplotlib.figure import Figure\\n\",\n    \"import panel\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"import pandas as pd\\n\",\n    \"import 
pprint\\n\",\n    \"import io, base64, urllib\\n\",\n    \"from PIL import Image\\n\",\n    \"\\n\",\n    \"df = pd.DataFrame.from_dict(redshiftresult)\\n\",\n    \"df['day']=df['day'].astype(int)\\n\",\n    \"df['usage_hours']=df['usage_hours'].astype(float)\\n\",\n    \"df['usage_cost']=df['usage_cost'].astype(float)\\n\",\n    \"\\n\",\n    \"%matplotlib inline\\n\",\n    \"\\n\",\n    \"font = {'size' : 22}\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"font = {'size'   : 16}\\n\",\n    \"plt.rc('font', **font)\\n\",\n    \"dfpivot = df.pivot(index='day', columns='product_instancetype', values='usage_cost')\\n\",\n    \"dfpivot.plot(linewidth=5, ylabel=\\\"daily cost in $\\\", figsize=(16, 9), )\\n\",\n    \"plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))\\n\",\n    \"plt.show()\\n\",\n    \"\\n\",\n    \"dfpivot = df.pivot(index='day', columns='product_instancetype', values='usage_cost')\\n\",\n    \"dfpivot.plot(linewidth=5, ylabel=\\\"daily cost in $\\\", figsize=(16, 9), )\\n\",\n    \"plt.ylim((0,10))\\n\",\n    \"plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))\\n\",\n    \"plt.show()\\n\",\n    \"\\n\",\n    \"dfpivot = df.pivot(index='day', columns='product_instancetype', values='usage_cost')\\n\",\n    \"dfpivot.plot(linewidth=5,ylabel=\\\"daily cost in $\\\", figsize=(16, 9) )\\n\",\n    \"plt.xlim((todayDay-7,todayDay))\\n\",\n    \"plt.rc('font', **font)\\n\",\n    \"plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))\\n\",\n    \"fig = plt.gcf()\\n\",\n    \"plt.show()\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"fig.savefig('awsProducts7Day.jpg')\\n\",\n    \"im  = Image.open('awsProducts7Day.jpg')\\n\",\n    \"display(im)\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"id\": \"5f476f1b-a7b0-4927-9c7d-6335e9d3e7da\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": 
\"2023-04-07T16:39:35.956Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"build alert\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"build alert\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from datetime import date \\n\",\n    \"\\n\",\n    \"\\n\",\n    \"today = todayDay -1\\n\",\n    \"yesterday =yesterDay -1\\n\",\n    \"\\n\",\n    \"print(today)\\n\",\n    \"bigchange = {}\\n\",\n    \"listChange = []\\n\",\n    \"alert = False\\n\",\n    \"alertText = ''\\n\",\n    \"if yesterday >0:\\n\",\n    \"    for instance in dfpivot.columns:\\n\",\n    \"        todayCost = dfpivot.at[today, instance]\\n\",\n    \"        yesterdayCost = dfpivot.at[yesterday, instance]\\n\",\n    \"\\n\",\n    \"        delta =(todayCost-yesterdayCost)/yesterdayCost\\n\",\n    \"        if abs(todayCost-yesterdayCost) >1: \\n\",\n    \"            if delta >.05:\\n\",\n    \"                #print( instance, delta,dfpivot.at[today, instance], dfpivot.at[yesterday, instance])\\n\",\n    \"                bigchange[instance] = {\\\"delta\\\":delta, \\\"todayCost\\\":todayCost,\\\"yesterdayCost\\\":yesterdayCost}\\n\",\n    \"                listChange.append([instance, yesterdayCost, todayCost])\\n\",\n    \"                alertText = '@here There has been a large change in EC2 Costs'\\n\",\n    \"                alert = True\\n\",\n    \"        if date.today().weekday() == 0:\\n\",\n    \"            alertText = 'Today is Monday, Here is the last week of EC2 Costs'\\n\",\n    \"            alert = True\\n\",\n    \"    print(date.today().weekday())\\n\",\n    \"    print(\\\"bigchange\\\", bigchange)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 11,\n   \"id\": \"e0091066-452a-4c06-81fc-3704ee90168c\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    
\"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"customCell\": true,\n    \"description\": \"Post Slack Message\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T16:39:47.279Z\"\n    },\n    \"id\": 82,\n    \"index\": 82,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"\\\"devrel_doug_test1\\\"\"\n      },\n      \"comment\": {\n       \"constant\": false,\n       \"value\": \"alertText\"\n      },\n      \"image\": {\n       \"constant\": false,\n       \"value\": \"'awsProducts7Day.jpg'\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"default\": \"\",\n        \"description\": \"Name of slack channel.\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"comment\": {\n        \"default\": \"\",\n        \"description\": \"Comment to add with image\",\n        \"required\": false,\n        \"title\": \"comment\",\n        \"type\": \"string\"\n       },\n       \"image\": {\n        \"default\": \"\",\n        \"description\": \"image to uplaod\",\n        \"title\": \"image\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_image\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     
\"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Image\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"channel\",\n     \"image\",\n     \"comment\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"alert\",\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"title\": \"Post Slack Image\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_image_printer(output):\\n\",\n    \"    if output is not None:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"    else:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_image(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        comment: str,\\n\",\n    \"        image: str) -> str:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        result = handle.files_upload(\\n\",\n    \"            channels = channel,\\n\",\n    \"            initial_comment=comment,\\n\",\n    \"            file=image\\n\",\n    \"    )\\n\",\n    \"        return f\\\"Successfuly Sent Message on Channel: #{channel}\\\"\\n\",\n    \"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: 
{e.__str__()}\\\")\\n\",\n    \"        return f\\\"Unable to send message on {channel}\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"\\\\\\\\\\\"devrel_doug_test1\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"image\\\": \\\"'awsProducts7Day.jpg'\\\",\\n\",\n    \"    \\\"comment\\\": \\\"alertText\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"alert\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(slack_post_image, lego_printer=slack_post_image_printer, hdl=hdl, args=args)\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Redshift Get Daily Costs from EC2 Usage\",\n   \"parameters\": [\n    \"cluster\",\n    \"database\",\n    \"region\",\n    \"secret_name\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"cluster\": {\n     \"description\": \"The Redshift Cluster to be queried\",\n     \"title\": \"cluster\",\n     \"type\": \"string\"\n    },\n    \"database\": {\n     \"description\": \"the Redshift Database in our query\",\n     \"title\": \"database\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region\",\n 
    \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"secret_name\": {\n     \"description\": \"AWS Secret Name to retrieve ARN for\",\n     \"title\": \"secret_name\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Get_Redshift_EC2_Daily_Costs.json",
    "content": "{\n    \"name\": \"AWS Redshift Get Daily Costs from EC2 Usage\",\n    \"description\": \"This runbook can be used to create charts and alerts around AWS EC2 usage. It requires a Cost and Usage Report to be live in Redshift.\",\n    \"uuid\": \"a79201f221993867e23dd9603ed7ef5123324353d717c566f902f7ca6e471f5c\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_CLOUDOPS\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Lowering_AWS_CloudTrail_Costs_by_Removing_Redundant_Trails.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"1a1d80a5-6559-47b4-954f-8c301c581d8f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Finding Redundant Trails in AWS\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Finding Redundant Trails in AWS\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"-unSkript-Runbooks-\\\">unSkript Runbooks&nbsp;</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\"><strong> This runbook demonstrates how to find redundant trails in AWS using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Finding-Redundant-Trails-in-AWS\\\">Finding Redundant Trails in AWS</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1. Finding Redundant Trails in AWS</p>\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"4465838e-f101-4ff9-ae4a-875f3816bbfb\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Finding-Redundant-Trails-in-AWS\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Finding Redundant Trails in AWS</h3>\\n\",\n    \"<p>Here we will use unSkript Finding Redundant Trails in AWS action. The AWS CloudTrail service allows developers to enable policies managing compliance, governance, and auditing of their AWS accounts. In addition, AWS CloudTrail offers logging, monitoring, and storage of any activity around actions related to your AWS structures. 
The service activates from the moment you set up your AWS account, and while it provides real-time activity visibility, it also means higher AWS costs. This action is used to find Redundant Trails in AWS.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output parameters:</strong>&nbsp;<code>redundant_trails</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e41b2aa2-2313-4fbe-a320-745afa0983ae\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_CLOUDTRAIL\"\n    ],\n    \"actionDescription\": \"This action will find a redundant cloud trail if the attribute IncludeGlobalServiceEvents is true, and then we need to find multiple duplications.\",\n    \"actionEntryFunction\": \"aws_finding_redundant_trails\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"c4d55f5dd5bb964460f4ad7335daa8bb094792b0d64149dbddca019513f05598\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Finding Redundant Trails in AWS\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"964f2a773fdbd64ec9e9f7e846943824d46fef497b574a088766c63811e61581\",\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"description\": \"This action will find a redundant cloud trail if the attribute IncludeGlobalServiceEvents is true, and then we need to find multiple duplications.\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      
\"title\": \"aws_finding_redundant_trails\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Finding Redundant Trails in AWS\",\n    \"orderProperties\": [],\n    \"outputParams\": {\n     \"output_name\": \"redundant_trails\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"uuid\": \"964f2a773fdbd64ec9e9f7e846943824d46fef497b574a088766c63811e61581\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Tuple\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_finding_redundant_trails_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_finding_redundant_trails(handle) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_finding_redundant_trails Returns an array of redundant trails in AWS\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) 
method.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple with check status and list of redundant trails\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            ec2Client = handle.client('cloudtrail', region_name=reg)\\n\",\n    \"            response = ec2Client.describe_trails()\\n\",\n    \"            for glob_service in response[\\\"trailList\\\"]:\\n\",\n    \"                if glob_service[\\\"IncludeGlobalServiceEvents\\\"] is True:\\n\",\n    \"                    for i in result:\\n\",\n    \"                        if i[\\\"trail_name\\\"] == glob_service[\\\"Name\\\"]:\\n\",\n    \"                            i[\\\"regions\\\"].append(reg)\\n\",\n    \"                    else:\\n\",\n    \"                        if not any(i[\\\"trail_name\\\"] == glob_service[\\\"Name\\\"] for i in result):\\n\",\n    \"                            trail_dict = {}\\n\",\n    \"                            trail_dict[\\\"trail_name\\\"] = glob_service[\\\"Name\\\"]\\n\",\n    \"                            trail_dict[\\\"regions\\\"] = [reg]\\n\",\n    \"                            result.append(trail_dict)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(outputName=\\\"redundant_trails\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_finding_redundant_trails, lego_printer=aws_finding_redundant_trails_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   
\"id\": \"e49d7606-52ce-4a2b-bc06-22e5470d1aeb\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Output\\\">Modify Output</h3>\\n\",\n    \"<p>In this action, we sort the commands on the basis of parameters given by the user as follows,</p>\\n\",\n    \"<ol>\\n\",\n    \"<li><code>stop_multiregion_trail = true </code>To turn off multi-region tracking of cloud trail.\\n\",\n    \"<ol>\\n\",\n    \"<li><code>aws cloudtrail update-trail&nbsp;<span class=\\\"hljs-attr\\\">--region</span> us-west-<span class=\\\"hljs-number\\\">2</span> <span class=\\\"hljs-attr\\\">--name</span> cc-test-trail --no-<span class=\\\"hljs-keyword\\\">is</span>-multi-region-trail</code></li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"<li><code>global_event_tractiong = true </code>To turn off the global service events tracking the issue of the cloud trail.\\n\",\n    \"<ol>\\n\",\n    \"<li><code>aws cloudtrail update-trail&nbsp;<span class=\\\"hljs-attr\\\">--region</span> us-west-<span class=\\\"hljs-number\\\">2</span> <span class=\\\"hljs-attr\\\">--name</span> cc-test-trail --no-<span class=\\\"hljs-keyword\\\">include</span>-<span class=\\\"hljs-keyword\\\">global</span>-service-events</code></li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"<li>For <code>stop_multiregion_trail = true and global_event_tractiong = true</code>&nbsp;we use both commands to update the redundant trails.</li>\\n\",\n    \"</ol>\\n\",\n    \"<p><strong>Output parameters:</strong>&nbsp;<code>command_list</code></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 41,\n   \"id\": \"d2fe73b3-eba6-4e47-94d9-551e6533119e\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     
\"last_date_success_run_cell\": \"2023-02-21T10:49:20.023Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Output\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"command_list = []\\n\",\n    \"if stop_multiregion_trail and not global_event_tractiong:\\n\",\n    \"    if not redundant_trails[0]:\\n\",\n    \"        for i in redundant_trails[1]:\\n\",\n    \"            command = \\\"aws cloudtrail update-trail --region \\\" + region + \\\" --name \\\" + i[\\\"trail_name\\\"] + \\\" --no-is-multi-region-trail\\\"\\n\",\n    \"            command_list.append(command)\\n\",\n    \"elif not stop_multiregion_trail and global_event_tractiong:\\n\",\n    \"    if not redundant_trails[0]:\\n\",\n    \"        for i in redundant_trails[1]:\\n\",\n    \"            for region_1 in i[\\\"regions\\\"]:\\n\",\n    \"                command = \\\"aws cloudtrail update-trail --region \\\" + region_1 + \\\" --name \\\" + i[\\\"trail_name\\\"] + \\\" --no-include-global-service-events\\\"\\n\",\n    \"                command_list.append(command)\\n\",\n    \"elif stop_multiregion_trail and global_event_tractiong:\\n\",\n    \"    if not redundant_trails[0]:\\n\",\n    \"        for i in redundant_trails[1]:\\n\",\n    \"            command_1 = \\\"aws cloudtrail update-trail --region \\\" + region + \\\" --name \\\" + i[\\\"trail_name\\\"] + \\\" --no-include-global-service-events\\\"\\n\",\n    \"            command_2 = \\\"aws cloudtrail update-trail --region \\\" + region + \\\" --name \\\" + i[\\\"trail_name\\\"] + \\\" --no-is-multi-region-trail\\\"\\n\",\n    \"            command_list.append(command_1)\\n\",\n    \"            command_list.append(command_2)\\n\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"0d382061-c58d-4528-b2d4-eb9f1f4549de\",\n   \"metadata\": {\n  
  \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Run-Command-via-AWS-CLI\\\">Run Command via AWS CLI</h3>\\n\",\n    \"<p>In this action, we execute the commands from the above actions to update the redundant cloud trails.</p>\\n\",\n    \"<p><strong>&nbsp; &nbsp; &nbsp; &nbsp;Input parameters:</strong>&nbsp;<code>aws_command</code></p>\\n\",\n    \"<p><strong>&nbsp; &nbsp; &nbsp; &nbsp;Output parameters:</strong>&nbsp;<code>updated_output</code></p>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 42,\n   \"id\": \"a991f490-7bc2-43a5-80ec-cab51729d591\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"1db371aff42291641eb6ba83d7acc3fe28e2468d83be1552e8258dc878c0f70d\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute command using AWS CLI\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-21T10:49:24.829Z\"\n    },\n    \"id\": 199,\n    \"index\": 199,\n    \"inputData\": [\n     {\n      \"aws_command\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"aws_command\": {\n        \"description\": \"AWS Command eg \\\"aws ec2 describe-instances\\\"\",\n        \"title\": \"AWS Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"aws_command\"\n      ],\n      \"title\": 
\"aws_execute_cli_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"aws_command\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"command_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Run Command via AWS CLI: Update Redundant Cloud Trails\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"aws_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"updated_output\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(command_list)>0\",\n    \"tags\": [\n     \"aws_execute_cli_command\"\n    ],\n    \"title\": \"Run Command via AWS CLI: Update Redundant Cloud Trails\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command(handle, aws_command: str) -> str:\\n\",\n    \"\\n\",\n    \"    result = handle.aws_cli_command(aws_command)\\n\",\n    \"    if result is None or result.returncode != 0:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({aws_command}): {result}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    
\"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"aws_command\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"command_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"aws_command\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(command_list)>0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"updated_output\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_execute_cli_command, lego_printer=aws_execute_cli_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"c4e37a77-7c92-43ab-80de-bb98d15d0a3a\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>This Runbook demonstrates the use of unSkript's AWS actions to find redundant trails in AWS and update the cloud trails. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io/\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Lowering CloudTrail Costs by Removing Redundant Trails\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"global_event_tractiong\": {\n     \"default\": false,\n     \"description\": \"To turn off the global service events tracking the issue of the cloud trail.\",\n     \"title\": \"global_event_tractiong\",\n     \"type\": \"boolean\"\n    },\n    \"region\": {\n     \"description\": \"To update the cloud trail multi-region tracking to a single region.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"stop_multiregion_trail\": {\n     \"default\": false,\n     \"description\": \"To turn off multi-region tracking of cloud trail.\",\n     \"title\": \"stop_multiregion_trail\",\n     \"type\": \"boolean\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Lowering_AWS_CloudTrail_Costs_by_Removing_Redundant_Trails.json",
    "content": "{\n  \"name\": \"AWS Lowering CloudTrail Costs by Removing Redundant Trails\",\n  \"description\": \"The AWS CloudTrail service allows developers to enable policies managing compliance, governance, and auditing of their AWS account. In addition, AWS CloudTrail offers logging, monitoring, and storage of any activity around actions related to your AWS structures. The service activates from the moment you set up your AWS account and while it provides real-time activity visibility, it also means higher AWS costs. Here Finding Redundant Trails in AWS\",\n  \"uuid\": \"c4d55f5dd5bb964460f4ad7335daa8bb094792b0d64149dbddca019513f05598\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/AWS_Notify_About_Unused_Keypairs.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5360a41f-ee95-482d-8523-4c5f608eca12\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Send a Slack notification for Unused Keypairs</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Notify-unused-keypairs\\\"><u>Notify unused keypairs</u><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Notify-unused-keypairs\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find unused Keypairs</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Send message to Slack</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"78a320c1-9152-46bd-b58b-dc46b7ac7ed5\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-18T16:24:19.808Z\"\n    },\n    
\"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == None:\\n\",\n    \"    region = ''\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b35da62e-6d0d-4779-8820-cbae0e915530\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-unused-Keypairs\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Filter unused Keypairs</h3>\\n\",\n    \"<p>Using unSkript's Filter AWS Unused Keypairs action, we will fetch all the available keypairs and compare them to the ones that are used by the AWS instances. If a match is not found, the keypair is deduced to be unused.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>unused_key_pairs</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"5a871fd8-ba3a-4eb3-97f2-a083aac7e925\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"Filter AWS Unused Keypairs\",\n    \"actionEntryFunction\": \"aws_filter_unused_keypairs\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"a28edafac5f3bac3ca34d677d9b01a4bc6f74893e50bc103e5cefb00e0f48746\"\n    ],\n    
\"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Filter AWS Unused Keypairs\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"adb9d5bea27bf94e9537edccd8683accde12b7afa786ce6e8d89b34079846a44\",\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Filter AWS Unused Keypairs\",\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"Name of the AWS Region\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_filter_unused_keypairs\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Filter AWS Unused Keypairs\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unused_keypairs\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_filter_unused_keypairs\"\n    ],\n    \"uuid\": \"adb9d5bea27bf94e9537edccd8683accde12b7afa786ce6e8d89b34079846a44\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, 
Tuple,Optional\\n\",\n    \"from unskript.legos.utils import CheckOutput, CheckOutputStatus\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unused_keypairs_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    if isinstance(output, CheckOutput):\\n\",\n    \"        print(output.json())\\n\",\n    \"    else:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unused_keypairs(handle, region: str = None) -> CheckOutput:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_unused_keypairs Returns an array of KeyPair.\\n\",\n    \"\\n\",\n    \"        :type region: object\\n\",\n    \"        :param region: Object containing global params for the notebook.\\n\",\n    \"\\n\",\n    \"        :rtype: Object with status, result of unused key pairs, and error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    all_keys_dict = {}\\n\",\n    \"    used_keys_dict = {}\\n\",\n    \"    key_pairs_all = []\\n\",\n    \"    used_key_pairs = []\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if region is None or len(region)==0:\\n\",\n    \"        all_regions = aws_list_all_regions(handle=handle)\\n\",\n    \"    for r in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            ec2Client = handle.client('ec2', region_name=r)\\n\",\n    \"            key_pairs_all = list(map(lambda i: i['KeyName'], ec2Client.describe_key_pairs()['KeyPairs']))\\n\",\n    \"            res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"            for reservation in res:\\n\",\n    \"                for keypair in reservation['Instances']:\\n\",\n    \"    
                if 'KeyName'in keypair and keypair['KeyName'] not in used_key_pairs:\\n\",\n    \"                        used_key_pairs.append(keypair['KeyName'])\\n\",\n    \"            used_keys_dict[\\\"region\\\"]=r\\n\",\n    \"            used_keys_dict[\\\"key_name\\\"]=used_key_pairs\\n\",\n    \"            all_keys_dict[\\\"region\\\"]=r\\n\",\n    \"            all_keys_dict[\\\"key_name\\\"]=key_pairs_all\\n\",\n    \"            final_dict = {}\\n\",\n    \"            final_list=[]\\n\",\n    \"            for k,v in all_keys_dict.items():\\n\",\n    \"                if v!=[]:\\n\",\n    \"                    if k==\\\"key_name\\\":\\n\",\n    \"                        for each in v:\\n\",\n    \"                            if each not in used_keys_dict[\\\"key_name\\\"]:\\n\",\n    \"                                final_list.append(each)\\n\",\n    \"                if len(final_list)!=0:\\n\",\n    \"                    final_dict[\\\"region\\\"]=r\\n\",\n    \"                    final_dict[\\\"unused_keys\\\"]=final_list\\n\",\n    \"            if len(final_dict)!=0:\\n\",\n    \"                result.append(final_dict)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"unused_keypairs\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_unused_keypairs, lego_printer=aws_filter_unused_keypairs_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": 
\"732807f2-94cc-4741-b14e-92bbf46b4724\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Unused-Keypairs\\\">Create List of Unused Keypairs</h3>\\n\",\n    \"<p>This action filters regions that have no unused keypairs and creates a list of those that have them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_unused_key_pairs</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"379c28b0-407d-4d04-9319-d57bb5ee48e6\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-02T16:26:29.300Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Unused Keypairs\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Unused Keypairs\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_unused_key_pairs = []\\n\",\n    \"if unused_keypairs[0] == False:\\n\",\n    \"    if len(unused_keypairs[1])!=0:\\n\",\n    \"        all_unused_key_pairs=unused_keypairs[1]\\n\",\n    \"print(all_unused_key_pairs)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"bdb9d8ef-d374-4225-9f60-a72acab538d3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Send-message-to-Slack\\\"><a id=\\\"2\\\" target=\\\"_self\\\" 
rel=\\\"nofollow\\\"></a>Send message to Slack</h3>\\n\",\n    \"<p>This action sends a message containing the region and unused keypairs list to the given channel.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"a4e3e317-bb03-4378-9ef0-7fe61fd6f6a8\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"id\": 78,\n    \"index\": 78,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"channel_name\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"\\\"Unused Keypairs- {}\\\".format(all_unused_key_pairs)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of slack channel.\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message for slack channel.\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n   
 \"name\": \"Post Slack Message\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(channel_name)!=0\",\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message_printer(output):\\n\",\n    \"    if output is not None:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"    else:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> str:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return f\\\"Successfuly Sent Message on Channel: #{channel}\\\"\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        if e.response['error'] == 'channel_not_found':\\n\",\n    \"            raise Exception('Channel Not Found')\\n\",\n    \"        elif e.response['error'] == 'duplicate_channel_not_found':\\n\",\n    \"            raise 
Exception('Channel associated with the message_id not valid')\\n\",\n    \"        elif e.response['error'] == 'not_in_channel':\\n\",\n    \"            raise Exception('Cannot post message to channel user is not in')\\n\",\n    \"        elif e.response['error'] == 'is_archived':\\n\",\n    \"            raise Exception('Channel has been archived')\\n\",\n    \"        elif e.response['error'] == 'msg_too_long':\\n\",\n    \"            raise Exception('Message text is too long')\\n\",\n    \"        elif e.response['error'] == 'no_text':\\n\",\n    \"            raise Exception('Message text was not provided')\\n\",\n    \"        elif e.response['error'] == 'restricted_action':\\n\",\n    \"            raise Exception('Workspace preference prevents user from posting')\\n\",\n    \"        elif e.response['error'] == 'restricted_action_read_only_channel':\\n\",\n    \"            raise Exception('Cannot Post message, read-only channel')\\n\",\n    \"        elif e.response['error'] == 'team_access_not_granted':\\n\",\n    \"            raise Exception('The token used is not granted access to the workspace')\\n\",\n    \"        elif e.response['error'] == 'not_authed':\\n\",\n    \"            raise Exception('No Authentication token provided')\\n\",\n    \"        elif e.response['error'] == 'invalid_auth':\\n\",\n    \"            raise Exception('Some aspect of Authentication cannot be validated. 
Request denied')\\n\",\n    \"        elif e.response['error'] == 'access_denied':\\n\",\n    \"            raise Exception('Access to a resource specified in the request denied')\\n\",\n    \"        elif e.response['error'] == 'account_inactive':\\n\",\n    \"            raise Exception('Authentication token is for a deleted user')\\n\",\n    \"        elif e.response['error'] == 'token_revoked':\\n\",\n    \"            raise Exception('Authentication token for a deleted user has been revoked')\\n\",\n    \"        elif e.response['error'] == 'no_permission':\\n\",\n    \"            raise Exception('The workspace toekn used does not have necessary permission to send message')\\n\",\n    \"        elif e.response['error'] == 'ratelimited':\\n\",\n    \"            raise Exception('The request has been ratelimited. Retry sending message later')\\n\",\n    \"        elif e.response['error'] == 'service_unavailable':\\n\",\n    \"            raise Exception('The service is temporarily unavailable')\\n\",\n    \"        elif e.response['error'] == 'fatal_error':\\n\",\n    \"            raise Exception('The server encountered catostrophic error while sending message')\\n\",\n    \"        elif e.response['error'] == 'internal_error':\\n\",\n    \"            raise Exception('The server could not complete operation, likely due to transietn issue')\\n\",\n    \"        elif e.response['error'] == 'request_timeout':\\n\",\n    \"            raise Exception('Sending message error via POST: either message was missing or truncated')\\n\",\n    \"        else:\\n\",\n    \"            raise Exception(f'Failed Sending Message to slack channel {channel} Error: {e.response[\\\"error\\\"]}')\\n\",\n    \"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return f\\\"Unable to send 
message on {channel}\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"channel_name\\\",\\n\",\n    \"    \\\"message\\\": \\\"\\\\\\\\\\\"Unused Keypairs- {}\\\\\\\\\\\".format(all_unused_key_pairs)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(channel_name)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(slack_post_message, lego_printer=slack_post_message_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"4cb76d21-9731-4e77-ad80-8ac4033c79b3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to filter unused keypairs and notify that list via slack message to the given channel. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"List unused Amazon EC2 key pairs\",\n   \"parameters\": [\n    \"channel_name\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"channel\": {\n     \"description\": \"Slack channel to send the notification. Eg: dummy, general\",\n     \"title\": \"channel\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region to search for unused keys. Eg: \\\"us-west-2\\\". If left blank, all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Notify_About_Unused_Keypairs.json",
    "content": "{\n    \"name\": \"List unused Amazon EC2 key pairs\",\n    \"description\": \"This runbook finds all EC2 key pairs that are not used by an EC2 instance and notifies a slack channel about them. Optionally it can delete the key pairs based on user configuration.\",\n    \"uuid\": \"a28edafac5f3bac3ca34d677d9b01a4bc6f74893e50bc103e5cefb00e0f48746\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Purchase_Reserved_Cache_Nodes_For_Long_Running_ElastiCache_Clusters.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Enusre Long Running AWS ElastiCache Clusters have Reserved Cache Nodes purchased for them.</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Purchase-Reserved-Cache-Nodes-For-Long-Running-AWS-ElastiCache-Clusters\\\"><u>Purchase Reserved Cache Nodes For Long Running AWS ElastiCache Clusters</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find Long Running AWS ElastiCache Clusters without Reserved Cache Nodes</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Purchase Reserved Cache Nodes</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e1f146c9-5180-4459-9c82-cf0e1da02785\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T15:58:56.432Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    
\"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == None:\\n\",\n    \"    region = ''\\n\",\n    \"threshold_int = int(threshold)\\n\",\n    \"if reserved_cache_node_offering_id and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the Reserved Cache Node Offering ID!\\\")\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Find-ECS-Clusters-with-Low-CPU-Utilization\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find Long Running AWS ElastiCache Clusters without Reserved Nodes</h3>\\n\",\n    \"<p>Using unSkript's Find Long Running AWS ElastiCache Clusters without Reserved Nodes action, we will find clusters that have been running for longer than a specified threshold and do not have reserved cache nodes purchased for them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threshold</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>clusters_without_reserved_nodes</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"acc43420-0189-440d-9bac-a431b014d69c\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_ELASTICACHE\"\n    ],\n    \"actionDescription\": \"This action gets information about long running ElastiCache clusters and their status, and checks if they have any reserved nodes associated with 
them.\",\n    \"actionEntryFunction\": \"aws_get_long_running_elasticcache_clusters_without_reserved_nodes\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Get Long Running ElastiCache clusters Without Reserved Nodes\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"48dd783f3952172c7cf417df55341c1abd4458ad085181ad9367b677b646e86f\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action gets information about long running ElastiCache clusters and their status, and checks if they have any reserved nodes associated with them.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T15:59:38.022Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"threshold\": {\n       \"constant\": false,\n       \"value\": \"threshold_int\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region to get the ElasticCache Cluster\",\n        \"title\": \"AWS Region\",\n        \"type\": \"string\"\n       },\n       \"threshold\": {\n        \"default\": 10,\n        \"description\": \"Threshold(in days) to find long running ElasticCache clusters. 
Eg: 30, This will find all the clusters that have been created a month ago.\",\n        \"title\": \"Threshold(in days)\",\n        \"type\": \"number\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_long_running_elasticcache_clusters_without_reserved_nodes\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Long Running ElastiCache clusters Without Reserved Nodes\",\n    \"orderProperties\": [\n     \"region\",\n     \"threshold\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"clusters_without_reserved_nodes\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_get_long_running_elasticcache_clusters_without_reserved_nodes\"\n    ],\n    \"uuid\": \"48dd783f3952172c7cf417df55341c1abd4458ad085181ad9367b677b646e86f\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"from datetime import datetime,timedelta, timezone\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from typing import Optional\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_long_running_elasticcache_clusters_without_reserved_nodes_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    
pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_long_running_elasticcache_clusters_without_reserved_nodes(handle, region: str = \\\"\\\", threshold:int = 10) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_long_running_elasticcache_clusters_without_reserved_nodes finds ElasticCache Clusters that are long running and have no reserved nodes\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region of the Cluster.\\n\",\n    \"\\n\",\n    \"        :type threshold: integer\\n\",\n    \"        :param threshold: Threshold(in days) to find long running ElasticCache clusters. Eg: 30, This will find all the clusters that have been created a month ago.\\n\",\n    \"\\n\",\n    \"        :rtype: status, list of clusters, nodetype and their region.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    reservedNodesPerRegion = {}\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    # Get the list of reserved node per region per type. We just need to maintain\\n\",\n    \"    # what type of reserved nodes are present per region. 
So, reservedNodesPerRegion\\n\",\n    \"    # would be like:\\n\",\n    \"    # <region>:{<nodeType>:True/False}\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            elasticacheClient = handle.client('elasticache', region_name=reg)\\n\",\n    \"            response = elasticacheClient.describe_reserved_cache_nodes()\\n\",\n    \"            reservedNodesPerType = {}\\n\",\n    \"            if response['ReservedCacheNodes']:\\n\",\n    \"                for node in response['ReservedCacheNodes']:\\n\",\n    \"                    reservedNodesPerType[node['CacheNodeType']] = True\\n\",\n    \"            else:\\n\",\n    \"                continue\\n\",\n    \"            reservedNodesPerRegion[reg] = reservedNodesPerType\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            elasticacheClient = handle.client('elasticache', region_name=reg)\\n\",\n    \"            for cluster in elasticacheClient.describe_cache_clusters()['CacheClusters']:\\n\",\n    \"                cluster_age = datetime.now(timezone.utc) - cluster['CacheClusterCreateTime']\\n\",\n    \"                if cluster_age > timedelta(days=threshold):\\n\",\n    \"                    # Check if the cluster node type is present in the reservedNodesPerRegion map.\\n\",\n    \"                    reservedNodes = reservedNodesPerRegion.get(reg)\\n\",\n    \"                    if reservedNodes != None:\\n\",\n    \"                        if reservedNodes.get(cluster['CacheNodeType']) == True:\\n\",\n    \"                            continue\\n\",\n    \"                    cluster_dict = {}\\n\",\n    \"                    cluster_dict[\\\"region\\\"] = reg\\n\",\n    \"                    cluster_dict[\\\"cluster\\\"] = cluster['CacheClusterId']\\n\",\n    \"                    cluster_dict[\\\"node_type\\\"] = cluster['CacheNodeType']\\n\",\n    
\"                    result.append(cluster_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"threshold\\\": \\\"int(threshold_int)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"clusters_without_reserved_nodes\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_long_running_elasticcache_clusters_without_reserved_nodes, lego_printer=aws_get_long_running_elasticcache_clusters_without_reserved_nodes_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"199591ef-cb3a-49b7-b515-3c6998050320\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Clusters-without-reserved-nodes\\\">Create List of Clusters without reserved nodes</h3>\\n\",\n    \"<p>This action filters regions that have no clusters and creates a list of those that have them (without reserved cache nodes).</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_clusters_without_reserved_nodes</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": 
\"6a10e980-9f17-4436-9166-90ea130aa316\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T15:59:48.109Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Clusters without reserved nodes\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Clusters without reserved nodes\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_clusters_without_reserved_nodes = []\\n\",\n    \"dummy = []\\n\",\n    \"for res in clusters_without_reserved_nodes:\\n\",\n    \"    if type(res)==bool:\\n\",\n    \"        if res == False:\\n\",\n    \"            continue\\n\",\n    \"    elif type(res)==list:\\n\",\n    \"        if len(res)!=0:\\n\",\n    \"            all_clusters_without_reserved_nodes=res\\n\",\n    \"print(all_clusters_without_reserved_nodes)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"978d3b61-2fd9-461d-89bd-534d2dcf3b63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Purchase-Reserved-Cache-Node\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Purchase Reserved Cache Node</h3>\\n\",\n    \"<p>This action Purchases Reserved Cache Nodes for the clusters found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>no_of_nodes, region, reserved_node_offering_id</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"08a796e9-73bd-4969-97a7-214f062058e6\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     
\"CATEGORY_TYPE_AWS_ELASTICACHE\"\n    ],\n    \"actionDescription\": \"This action purchases a reserved cache node offering.\",\n    \"actionEntryFunction\": \"aws_purchase_elasticcache_reserved_node\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": false,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Purchase ElastiCache Reserved Nodes\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"b3a50ef59c3ac1727671ecde28e9194c00857bd8c8b26546ea70606ddf8e6914\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action purchases a reserved cache node offering.\",\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"no_of_nodes\": {\n       \"constant\": false,\n       \"value\": \"no_of_nodes\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"reserved_node_offering_id\": {\n       \"constant\": false,\n       \"value\": \"reserved_cache_node_offering_id\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"no_of_nodes\": {\n        \"default\": 1,\n        \"description\": \"The number of reserved cache nodes that you want to purchase.\",\n        \"title\": \"No of nodes to purchase\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"reserved_node_offering_id\": {\n        \"description\": \"The unique identifier of the reserved cache node offering you want to purchase.\",\n 
       \"title\": \"Reserved Cache Node Offering ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"reserved_node_offering_id\"\n      ],\n      \"title\": \"aws_purchase_elasticcache_reserved_node\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Purchase ElastiCache Reserved Nodes\",\n    \"orderProperties\": [\n     \"region\",\n     \"reserved_node_offering_id\",\n     \"no_of_nodes\"\n    ],\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_purchase_elasticcache_reserved_node\"\n    ],\n    \"uuid\": \"b3a50ef59c3ac1727671ecde28e9194c00857bd8c8b26546ea70606ddf8e6914\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2023 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_purchase_elasticcache_reserved_node_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_purchase_elasticcache_reserved_node(handle, region: str, reserved_node_offering_id: str, no_of_nodes:int=1) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_purchase_elasticcache_reserved_node returns dict of response.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type reserved_node_offering_id: string\\n\",\n    \"        :param reserved_node_offering_id: The unique identifier of the reserved node offering you want to purchase. 
Example: '438012d3-4052-4cc7-b2e3-8d3372e0e706'\\n\",\n    \"\\n\",\n    \"        :type no_of_nodes: int\\n\",\n    \"        :param no_of_nodes: The number of reserved nodes that you want to purchase.\\n\",\n    \"\\n\",\n    \"        :rtype: dict of response metadata of purchasing a reserved node\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        elasticClient = handle.client('elasticache', region_name=region)\\n\",\n    \"        params = {\\n\",\n    \"            'ReservedCacheNodesOfferingId': reserved_node_offering_id,\\n\",\n    \"            'CacheNodeCount': no_of_nodes\\n\",\n    \"            }\\n\",\n    \"        response = elasticClient.purchase_reserved_cache_nodes_offering(**params)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"reserved_node_offering_id\\\": \\\"reserved_cache_node_offering_id\\\",\\n\",\n    \"    \\\"no_of_nodes\\\": \\\"int(no_of_nodes)\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_purchase_elasticcache_reserved_node, lego_printer=aws_purchase_elasticcache_reserved_node_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to filter long running ElastiCache 
clusters that had no reserved nodes, given a threshold number of days since creation, and purchase reserved nodes for them. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Purchase Reserved Nodes For Long Running AWS ElastiCache Clusters\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"no_of_nodes\": {\n     \"default\": 1,\n     \"description\": \"The number of cache node instances to reserve. The default value is 1 (node).\",\n     \"title\": \"no_of_nodes\",\n     \"type\": \"number\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region to get the ElastiCache clusters from. Eg: \\\"us-west-2\\\".\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"reserved_cache_node_offering_id\": {\n     \"description\": \"The ID of the reserved cache node offering to purchase. Example: 438012d3-4052-4cc7-b2e3-8d3372e0e706\",\n     \"title\": \"reserved_cache_node_offering_id\",\n     \"type\": \"string\"\n    },\n    \"threshold\": {\n     \"default\": 10,\n     \"description\": \"Threshold (in days) to find long running ElastiCache clusters. 
Eg: 30 , this will get all the clusters that have been running for more than 30 days. The default value is 10 days.\",\n     \"title\": \"threshold\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [\n    \"reserved_cache_node_offering_id\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Purchase_Reserved_Cache_Nodes_For_Long_Running_ElastiCache_Clusters.json",
    "content": "{\n    \"name\": \"Purchase Reserved Nodes For Long Running AWS ElastiCache Clusters\",\n    \"description\": \"Ensuring that long-running AWS ElastiCache clusters have Reserved Nodes purchased for them is an effective cost optimization strategy for AWS users. By committing to a specific capacity of ElastiCache nodes for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for ElastiCache clusters that are expected to run for an extended period and have predictable usage patterns. This runbook helps us optimize costs by ensuring that Reserved Nodes are purchased for these ElastiCache clusters.\",  \n    \"uuid\": \"51a0b15d932dddeea9b1991fb6299577756408ff7c47acc5dec3eb114e33562b\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_CLOUDOPS\"],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Purchase_Reserved_Instances_For_Long_Running_RDS_Instances.ipynb",
"content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Ensure Long Running AWS RDS Instances have Reserved Instances purchased for them.</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-RDS-Instances-with-Low-CPU-Utilization\\\"><u>Purchase Reserved Instances For Long Running AWS RDS Instances</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find Long Running AWS RDS Instances without Reserved Instances</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Purchase Reserved Instance</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e1f146c9-5180-4459-9c82-cf0e1da02785\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T16:32:49.906Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   \"outputs\": 
[],\n   \"source\": [\n    \"if region is None:\\n\",\n    \"    region = ''\\n\",\n    \"threshold_int = int(threshold)\\n\",\n    \"if reserved_instance_offering_id and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the Reserved Instance Offering ID!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Find-ECS-Clusters-with-Low-CPU-Utilization\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find Long Running AWS RDS Instances without Reserved Instances</h3>\\n\",\n    \"<p>Using unSkript's Find Long Running AWS RDS Instances without Reserved Instances action, we will find RDS DB Instances that have been running for longer than a specified threshold and do not have reserved instances purchased for them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threshold</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>clusters_without_reserved_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"a1efdab1-97ed-4d4d-bcab-5edd1eee6ffb\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_RDS\"\n    ],\n    \"actionDescription\": \"This action gets information about long running instances and their status, and checks if they have any reserved nodes associated with them.\",\n    \"actionEntryFunction\": \"aws_get_long_running_rds_instances_without_reserved_instances\",\n    
\"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Get Long Running RDS Instances Without Reserved Instances\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"77d61931741da6d2be410571e205c93962815430843b1fbaf8e575e6384598ae\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action gets information about long running instances and their status, and checks if they have any reserved nodes associated with them.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T16:34:16.408Z\"\n    },\n    \"id\": 15,\n    \"index\": 15,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"threshold\": {\n       \"constant\": false,\n       \"value\": \"threshold_int\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"AWS Region\",\n        \"type\": \"string\"\n       },\n       \"threshold\": {\n        \"default\": 10,\n        \"description\": \"Threshold(in days) to find long running RDS instances. 
Eg: 30, This will find all the instances that have been created a month ago.\",\n        \"title\": \"Threshold(in days)\",\n        \"type\": \"number\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_long_running_rds_instances_without_reserved_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Long Running RDS Instances Without Reserved Instances\",\n    \"orderProperties\": [\n     \"region\",\n     \"threshold\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"clusters_without_reserved_instances\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_get_long_running_rds_instances_without_reserved_instances\"\n    ],\n    \"uuid\": \"77d61931741da6d2be410571e205c93962815430843b1fbaf8e575e6384598ae\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from datetime import datetime,timedelta, timezone\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_long_running_rds_instances_without_reserved_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def 
aws_get_long_running_rds_instances_without_reserved_instances(handle, region: str = \\\"\\\", threshold:int=10) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_long_running_rds_instances_without_reserved_instances finds RDS DB instances that are long running and have no reserved instances.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type threshold: int\\n\",\n    \"        :param threshold: Threshold(in days) to find long running RDS instances. Eg: 30, This will find all the instances that have been created a month ago.\\n\",\n    \"\\n\",\n    \"        :rtype: A tuple with a Status, and list of DB instances that don't have reserved instances\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    reservedInstancesPerRegion = {}\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            rdsClient = handle.client('rds', region_name=reg)\\n\",\n    \"            response = rdsClient.describe_reserved_db_instances()\\n\",\n    \"            reservedInstancesPerType = {}\\n\",\n    \"            if response['ReservedDBInstances']:\\n\",\n    \"                for ins in response['ReservedDBInstances']:\\n\",\n    \"                    reservedInstancesPerType[ins['DBInstanceClass']] = True\\n\",\n    \"            else:\\n\",\n    \"                continue\\n\",\n    \"            reservedInstancesPerRegion[reg] = reservedInstancesPerType\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            rdsClient = handle.client('rds', region_name=reg)\\n\",\n    \"            response = 
aws_get_paginator(rdsClient, \\\"describe_db_instances\\\", \\\"DBInstances\\\")\\n\",\n    \"            for instance in response:\\n\",\n    \"                if instance['DBInstanceStatus'] == 'available':\\n\",\n    \"                        uptime = datetime.now(timezone.utc) - instance['InstanceCreateTime']\\n\",\n    \"                        if uptime > timedelta(days=threshold):\\n\",\n    \"                            # Check if the cluster node type is present in the reservedInstancesPerRegion map.\\n\",\n    \"                            reservedInstances = reservedInstancesPerRegion.get(reg)\\n\",\n    \"                            if reservedInstances != None:\\n\",\n    \"                                if reservedInstances.get(instance['DBInstanceClass']) == True:\\n\",\n    \"                                    continue\\n\",\n    \"                            db_instance_dict = {}\\n\",\n    \"                            db_instance_dict[\\\"region\\\"] = reg\\n\",\n    \"                            db_instance_dict[\\\"instance_type\\\"] = instance['DBInstanceClass']\\n\",\n    \"                            db_instance_dict[\\\"instance\\\"] = instance['DBInstanceIdentifier']\\n\",\n    \"                            result.append(db_instance_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"threshold\\\": \\\"int(threshold_int)\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"clusters_without_reserved_instances\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = 
task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_long_running_rds_instances_without_reserved_instances, lego_printer=aws_get_long_running_rds_instances_without_reserved_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"3f369bc9-53d0-44c8-af50-80ba7885c657\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Clusters-without-reserved-instances\\\">Create List of Clusters without reserved instances</h3>\\n\",\n    \"<p>This action filters regions that have no clusters and creates a list of those that have them (without reserved instances).</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_clusters_without_reserved_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"6a10e980-9f17-4436-9166-90ea130aa316\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T16:34:22.299Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Clusters without reserved instances\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Clusters without reserved instances\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_clusters_without_reserved_instances = []\\n\",\n    \"dummy = []\\n\",\n    \"for res in clusters_without_reserved_instances:\\n\",\n    \"    if type(res)==bool:\\n\",\n    \"        if res == False:\\n\",\n    \"            continue\\n\",\n    \"    elif type(res)==list:\\n\",\n    \"        if len(res)!=0:\\n\",\n    \"            all_clusters_without_reserved_instances=res\\n\",\n    \"print(all_clusters_without_reserved_instances)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"978d3b61-2fd9-461d-89bd-534d2dcf3b63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-RDS-Instance\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Purchase Reserved Instance</h3>\\n\",\n    \"<p>This action purchases Reserved Instances for the instances found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>db_instance_count, region, reserved_instance_offering_id</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b1a73789-b8a6-4f04-97b8-09d784a8a916\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_RDS\"\n    ],\n    \"actionDescription\": \"This action purchases a reserved DB instance offering.\",\n    \"actionEntryFunction\": \"aws_purchase_rds_reserved_instance\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": false,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Purchase RDS Reserved Instances\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": 
\"e38b3b31c357018f66d779266a5f1692dda78556eb22eb02e3acaf9ad2d69b3d\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action purchases a reserved DB instance offering.\",\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"db_instance_count\": {\n       \"constant\": false,\n       \"value\": \"db_instance_count\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"reserved_instance_offering_id\": {\n       \"constant\": false,\n       \"value\": \"reserved_instance_offering_id\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"db_instance_count\": {\n        \"default\": 1,\n        \"description\": \"The number of instances to reserve.\",\n        \"title\": \"Instance Count\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"reserved_instance_offering_id\": {\n        \"description\": \"The ID of the Reserved DB instance offering to purchase. 
Example: 438012d3-4052-4cc7-b2e3-8d3372e0e706\",\n        \"title\": \"Reserved Instance Offering ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"reserved_instance_offering_id\"\n      ],\n      \"title\": \"aws_purchase_rds_reserved_instance\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Purchase RDS Reserved Instances\",\n    \"orderProperties\": [\n     \"region\",\n     \"reserved_instance_offering_id\",\n     \"db_instance_count\"\n    ],\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_purchase_rds_reserved_instance\"\n    ],\n    \"uuid\": \"e38b3b31c357018f66d779266a5f1692dda78556eb22eb02e3acaf9ad2d69b3d\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2023 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_purchase_rds_reserved_instance_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_purchase_rds_reserved_instance(handle, region: str, reserved_instance_offering_id: str, db_instance_count:int=1) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_purchase_rds_reserved_instance returns dict of response.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type reserved_instance_offering_id: string\\n\",\n    \"        :param reserved_instance_offering_id: The unique identifier of 
the reserved instance offering you want to purchase.\\n\",\n    \"\\n\",\n    \"        :type db_instance_count: int\\n\",\n    \"        :param db_instance_count: The number of reserved instances that you want to purchase.\\n\",\n    \"\\n\",\n    \"        :rtype: dict of response metadata of purchasing a reserved instance\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        rdsClient = handle.client('rds', region_name=region)\\n\",\n    \"        params = {\\n\",\n    \"            'ReservedDBInstancesOfferingId': reserved_instance_offering_id,\\n\",\n    \"            'DBInstanceCount': db_instance_count\\n\",\n    \"            }\\n\",\n    \"        response = rdsClient.purchase_reserved_db_instances_offering(**params)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"reserved_instance_offering_id\\\": \\\"reserved_instance_offering_id\\\",\\n\",\n    \"    \\\"db_instance_count\\\": \\\"int(db_instance_count)\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_purchase_rds_reserved_instance, lego_printer=aws_purchase_rds_reserved_instance_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to filter long running 
RDS instances that have no reserved instances purchased, given a threshold number of days since creation, and purchase reserved instances for them. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Purchase Reserved Instances For Long Running AWS RDS Instances\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"db_instance_count\": {\n     \"default\": 1,\n     \"description\": \"Number of reserved instances to create. The default value is 1.\",\n     \"title\": \"db_instance_count\",\n     \"type\": \"number\"\n    },\n    \"region\": {\n     \"description\": \"AWS region. Eg: \\\"us-west-2\\\"\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"reserved_instance_offering_id\": {\n     \"description\": \"The ID of the reserved instance offering to purchase. Example: 438012d3-4052-4cc7-b2e3-8d3372e0e706\",\n     \"title\": \"reserved_instance_offering_id\",\n     \"type\": \"string\"\n    },\n    \"threshold\": {\n     \"default\": 10,\n     \"description\": \"Threshold (in days) to find long running RDS Instances. Eg: 30, this will get all the instances that have been running for more than 30 days. 
The default value is 10 days.\",\n     \"title\": \"threshold\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [\n    \"reserved_instance_offering_id\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Purchase_Reserved_Instances_For_Long_Running_RDS_Instances.json",
    "content": "{\n    \"name\": \"Purchase Reserved Instances For Long Running AWS RDS Instances\",\n    \"description\": \"Ensuring that long-running AWS RDS instances have Reserved Instances purchased for them is an important cost optimization strategy for AWS users. By committing to a specific capacity of RDS instances for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for RDS instances that are expected to run for an extended period and have predictable usage patterns. This runbook helps us to optimize costs by ensuring that Reserved Instances are purchased for these RDS instances.\",  \n    \"uuid\": \"e0ff270a41b65b1804da257ffec5fbdec7dd51bdb3da925cced7fa3391bfe70b\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_CLOUDOPS\"],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Purchase_Reserved_Nodes_For_Long_Running_Redshift_Clusters.ipynb",
"content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5424264e-6195-4cf9-906b-24b02d5a83f3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Ensure Long Running AWS Redshift Clusters have Reserved Nodes purchased for them.</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Purchase-Reserved-Nodes-For-Long-Running-AWS-Redshift-Clusters\\\"><u>Purchase Reserved Nodes For Long Running AWS Redshift Clusters</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find Long Running AWS Redshift Clusters without Reserved Nodes</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Purchase Reserved Node</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e1f146c9-5180-4459-9c82-cf0e1da02785\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T16:53:03.648Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   
\"outputs\": [],\n   \"source\": [\n    \"if region is None:\\n\",\n    \"    region = ''\\n\",\n    \"threshold_int = int(threshold)\\n\",\n    \"if reserved_node_offering_id and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the Reserved Node Offering ID!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"908f4dcb-8483-44fc-8f81-ce2502e03093\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Find-ECS-Clusters-with-Low-CPU-Utilization\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find Long Running AWS Redshift Clusters without Reserved Nodes</h3>\\n\",\n    \"<p>Using unSkript's Find Long Running AWS Redshift Clusters without Reserved Nodes action, we will find clusters that have been running for longer than a specified threshold and do not have reserved nodes purchased for them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threshold</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>clusters_without_reserved_nodes</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"c2b68fa5-a047-4e34-afa7-b016cb5843b7\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_REDSHIFT\"\n    ],\n    \"actionDescription\": \"This action gets information about running clusters and their status, and checks if they have any reserved nodes associated with them.\",\n    \"actionEntryFunction\": \"aws_get_long_running_redshift_clusters_without_reserved_nodes\",\n    
\"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Get Long Running Redshift Clusters Without Reserved Nodes\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"04cd063254d5417f558b574e5ae0e90f5a576397b2ce63a53fbb3125b2f99791\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action gets information about running clusters and their status, and checks if they have any reserved nodes associated with them.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T16:53:09.999Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"threshold\": {\n       \"constant\": false,\n       \"value\": \"threshold_int\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region to get the Redshift Cluster\",\n        \"title\": \"AWS Region\",\n        \"type\": \"string\"\n       },\n       \"threshold\": {\n        \"default\": 10,\n        \"description\": \"Threshold(in days) to find long running redshift clusters. 
Eg: 30; this will find all clusters created more than 30 days ago.\",\n        \"title\": \"Threshold(in days)\",\n        \"type\": \"number\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_long_running_redshift_clusters_without_reserved_nodes\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Long Running Redshift Clusters Without Reserved Nodes\",\n    \"orderProperties\": [\n     \"region\",\n     \"threshold\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"clusters_without_reserved_nodes\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_get_long_running_redshift_clusters_without_reserved_nodes\"\n    ],\n    \"uuid\": \"04cd063254d5417f558b574e5ae0e90f5a576397b2ce63a53fbb3125b2f99791\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"from datetime import datetime, timedelta, timezone\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_long_running_redshift_clusters_without_reserved_nodes_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_long_running_redshift_clusters_without_reserved_nodes(handle, region: str = \\\"\\\", 
threshold: int = 10) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_long_running_redshift_clusters_without_reserved_nodes finds Redshift Clusters that are long running and have no reserved nodes\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region of the Cluster.\\n\",\n    \"\\n\",\n    \"        :type threshold: integer\\n\",\n    \"        :param threshold: Threshold(in days) to find long running redshift clusters. Eg: 30; this will find all clusters created more than 30 days ago.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple of (status, list of clusters with their node type and region).\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    reservedNodesPerRegion = {}\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            redshiftClient = handle.client('redshift', region_name=reg)\\n\",\n    \"            response = redshiftClient.describe_reserved_nodes()\\n\",\n    \"            reservedNodesPerType = {}\\n\",\n    \"            if response['ReservedNodes']:\\n\",\n    \"                for node in response['ReservedNodes']:\\n\",\n    \"                    reservedNodesPerType[node['NodeType']] = True\\n\",\n    \"            else:\\n\",\n    \"                continue\\n\",\n    \"            reservedNodesPerRegion[reg] = reservedNodesPerType\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            redshiftClient = handle.client('redshift', region_name=reg)\\n\",\n    \"            for cluster in redshiftClient.describe_clusters()['Clusters']:\\n\",\n    \"                cluster_age = 
datetime.now(timezone.utc) - cluster['ClusterCreateTime']\\n\",\n    \"                if cluster['ClusterStatus'] == 'available' and cluster_age > timedelta(days=threshold):\\n\",\n    \"                    # Check if the cluster node type is present in the reservedNodesPerRegion map.\\n\",\n    \"                    reservedNodes = reservedNodesPerRegion.get(reg)\\n\",\n    \"                    if reservedNodes is not None:\\n\",\n    \"                        if reservedNodes.get(cluster['NodeType']):\\n\",\n    \"                            continue\\n\",\n    \"                    cluster_dict = {}\\n\",\n    \"                    cluster_dict[\\\"region\\\"] = reg\\n\",\n    \"                    cluster_dict[\\\"cluster\\\"] = cluster['ClusterIdentifier']\\n\",\n    \"                    cluster_dict[\\\"node_type\\\"] = cluster['NodeType']\\n\",\n    \"                    result.append(cluster_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"threshold\\\": \\\"int(threshold_int)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"clusters_without_reserved_nodes\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_long_running_redshift_clusters_without_reserved_nodes, lego_printer=aws_get_long_running_redshift_clusters_without_reserved_nodes_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"199591ef-cb3a-49b7-b515-3c6998050320\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    
\"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Clusters-without-reserved-nodes\\\">Create List of Clusters without reserved nodes</h3>\\n\",\n    \"<p>This action filters regions that have no clusters and creates a list of those that have them (without reserved nodes).</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_clusters_without_reserved_nodes</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"6a10e980-9f17-4436-9166-90ea130aa316\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T16:53:13.534Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Clusters without reserved nodes\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Clusters without reserved nodes\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_clusters_without_reserved_nodes = []\\n\",\n    \"dummy = []\\n\",\n    \"for res in clusters_without_reserved_nodes:\\n\",\n    \"    if type(res)==bool:\\n\",\n    \"        if res == False:\\n\",\n    \"            continue\\n\",\n    \"    elif type(res)==list:\\n\",\n    \"        if len(res)!=0:\\n\",\n    \"            all_clusters_without_reserved_nodes=res\\n\",\n    \"print(all_clusters_without_reserved_nodes)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"978d3b61-2fd9-461d-89bd-534d2dcf3b63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    
\"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-RDS-Instance\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Purchase Reserved Node</h3>\\n\",\n    \"<p>This action Purchases Reserved Nodes for the clusters found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>no_of_nodes, region, reserved_node_offering_id</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"5528b411-1a01-4230-af26-014ad7e951e2\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_REDSHIFT\"\n    ],\n    \"actionDescription\": \"This action purchases reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. You can purchase one or more of the offerings.\",\n    \"actionEntryFunction\": \"aws_purchase_redshift_reserved_node\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Purchase Redshift Reserved Nodes\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"86e0a0ac26eb1973118755e8dded5fa2ee4af6a9a501f7eeeda2917933d7a9f1\",\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action purchases reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. 
You can purchase one or more of the offerings.\",\n    \"id\": 17,\n    \"index\": 17,\n    \"inputData\": [\n     {\n      \"no_of_nodes\": {\n       \"constant\": false,\n       \"value\": \"no_of_nodes\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"reserved_node_offering_id\": {\n       \"constant\": false,\n       \"value\": \"reserved_node_offering_id\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"no_of_nodes\": {\n        \"default\": 1,\n        \"description\": \"The number of reserved nodes that you want to purchase.\",\n        \"title\": \"No of Nodes to reserve\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"reserved_node_offering_id\": {\n        \"description\": \"The unique identifier of the reserved node offering you want to purchase.\",\n        \"title\": \"Reserved Node Offering ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"reserved_node_offering_id\"\n      ],\n      \"title\": \"aws_purchase_redshift_reserved_node\",\n      \"type\": \"object\"\n     }\n    ],\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Purchase Redshift Reserved Nodes\",\n    \"orderProperties\": [\n     \"region\",\n     \"reserved_node_offering_id\",\n     \"no_of_nodes\"\n    ],\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_purchase_redshift_reserved_node\"\n    ],\n    \"uuid\": \"86e0a0ac26eb1973118755e8dded5fa2ee4af6a9a501f7eeeda2917933d7a9f1\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2023 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing 
import Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_purchase_redshift_reserved_node_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_purchase_redshift_reserved_node(handle, region: str, reserved_node_offering_id: str, no_of_nodes:int=1) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_purchase_redshift_reserved_node returns dict of response.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :type reserved_node_offering_id: string\\n\",\n    \"        :param reserved_node_offering_id: The unique identifier of the reserved node offering you want to purchase.\\n\",\n    \"\\n\",\n    \"        :type no_of_nodes: int\\n\",\n    \"        :param no_of_nodes: The number of reserved nodes that you want to purchase.\\n\",\n    \"\\n\",\n    \"        :rtype: dict of response metadata of purchasing a reserved node\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        redshiftClient = handle.client('redshift', region_name=region)\\n\",\n    \"        params = {\\n\",\n    \"            'ReservedNodeOfferingId': reserved_node_offering_id,\\n\",\n    \"            'NodeCount': no_of_nodes\\n\",\n    \"            }\\n\",\n    \"        response = redshiftClient.purchase_reserved_node_offering(**params)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    
\\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"reserved_node_offering_id\\\": \\\"reserved_node_offering_id\\\",\\n\",\n    \"    \\\"no_of_nodes\\\": \\\"int(no_of_nodes)\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_purchase_redshift_reserved_node, lego_printer=aws_purchase_redshift_reserved_node_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"44a6cf05-385b-445d-a503-ad4aa607a568\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to filter long running Redshift clusters that have no reserved nodes purchased, given a threshold number of days since creation, and purchase reserved nodes for them. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Purchase Reserved Nodes For Long Running AWS Redshift Clusters\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"no_of_nodes\": {\n     \"default\": 1,\n     \"description\": \"The number of Redshift nodes to reserve. The default value is 1.\",\n     \"title\": \"no_of_nodes\",\n     \"type\": \"number\"\n    },\n    \"region\": {\n     \"description\": \"AWS region. Eg: 'us-west-2'\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"reserved_node_offering_id\": {\n     \"description\": \"The ID of the reserved node offering to purchase. Example: 438012d3-4052-4cc7-b2e3-8d3372e0e706\",\n     \"title\": \"reserved_node_offering_id\",\n     \"type\": \"string\"\n    },\n    \"threshold\": {\n     \"default\": 10,\n     \"description\": \"Threshold (in days) to find long running Redshift clusters. Eg: 30, this will get all the clusters that have been running for more than 30 days. 
The default value is 10 days.\",\n     \"title\": \"threshold\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [\n    \"reserved_node_offering_id\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Purchase_Reserved_Nodes_For_Long_Running_Redshift_Clusters.json",
"content": "{\n    \"name\": \"Purchase Reserved Nodes For Long Running AWS Redshift Clusters\",\n    \"description\": \"Ensuring that long-running AWS Redshift Clusters have Reserved Nodes purchased for them is a critical cost optimization strategy. By committing to a specific capacity of Redshift nodes for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for Redshift Clusters that are expected to run for an extended period and have predictable usage patterns. This runbook helps us to ensure that Reserved Nodes are purchased for these clusters so that users can effectively plan ahead, reduce their AWS bill, and optimize their costs over time.\",  \n    \"uuid\": \"08d3033e428c5fa241be26cfc8787fb16c05c6aa31830075e730fefd5aaf744f\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_CLOUDOPS\"],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Release_Unattached_Elastic_IPs.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"82eebdfd-c880-40df-bd6d-5b546c92164b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&para;\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective&para;\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find and Delete Unattached AWS Elastic IPs</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Release-Unattached-AWS-Elastic-IPs\\\"><strong><u>Release Unattached AWS Elastic IPs</u></strong></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find unattached Elastic IPs</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete unattached Elastic IPs</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 25,\n   \"id\": \"1290c59b-9107-46c0-8f0b-8dce39e91ef9\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-20T10:15:19.472Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == None:\\n\",\n    \"    region = ''\\n\",\n    
\"if allocation_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the Allocation ID's!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2020e8d0-ba3b-4c71-84b2-10917465a27e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Find-unattached-Elastic-IPs\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Find unattached Elastic IPs</h3>\\n\",\n    \"<p>Using unSkript's Find unattached Elastic IPs action, we will find unattahched Elastic IPs which don't have any instances associated to them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>unused_ips</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"36acabd0-68b0-4fe8-adf5-39db2cf00962\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"This action lists Elastic IP address and check if it is associated with an instance or network interface.\",\n    \"actionEntryFunction\": \"aws_list_unattached_elastic_ips\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"a9d7ea5f3d31745f1de9fb8616ab6fbc20ff11e665808bdde6a9ba9b8b32e28a\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    
\"actionTitle\": \"AWS List Unattached Elastic IPs\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"9f378662591138c29993d482db1c391aa2d154ffc7142b27824dc2766a5e2a69\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"This action lists Elastic IP address and check if it is associated with an instance or network interface.\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_list_unattached_elastic_ips\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS List Unattached Elastic IPs\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unused_ips\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not allocation_ips\",\n    \"tags\": [\n     \"aws_list_unattached_elastic_ips\"\n    ],\n    \"uuid\": \"9f378662591138c29993d482db1c391aa2d154ffc7142b27824dc2766a5e2a69\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional, 
Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_unattached_elastic_ips_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_unattached_elastic_ips(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_list_unattached_elastic_ips Returns an array of unattached elastic IPs.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple with status result and list of unattached elastic IPs.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            # Filtering the public_ip by region\\n\",\n    \"            ec2Client = handle.client('ec2', region_name=reg)\\n\",\n    \"            all_eips = ec2Client.describe_addresses()\\n\",\n    \"            for eip in all_eips[\\\"Addresses\\\"]:\\n\",\n    \"                vpc_data = {}\\n\",\n    \"                if 'AssociationId' not in eip:\\n\",\n    \"                    vpc_data[\\\"public_ip\\\"] = eip['PublicIp']\\n\",\n    \"                    vpc_data[\\\"allocation_id\\\"] = eip['AllocationId']\\n\",\n    \"                    vpc_data[\\\"region\\\"] = reg\\n\",\n    \"                    result.append(vpc_data)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, 
result)\\n\",\n    \"    return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not allocation_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"unused_ips\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_unattached_elastic_ips, lego_printer=aws_list_unattached_elastic_ips_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a311041f-620a-4b6b-914f-e52c6c3a71f4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-unattached-Elastic-IPs\\\">Create List of unattached Elastic IPs</h3>\\n\",\n    \"<p>This action builds the list of unattached Elastic IPs to release, either from the output of Step 1 or from the given allocation_ids parameter.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_unused_ips</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 28,\n   \"id\": \"b85ce542-bdf0-44d2-9e75-213002d5c036\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-20T10:16:03.026Z\"\n    },\n    
\"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Unallocated Elastic IPs\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Unallocated Elastic IPs\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_unused_ips = []\\n\",\n    \"try:\\n\",\n    \"    if unused_ips[0] == False:\\n\",\n    \"        if len(unused_ips[1])!=0:\\n\",\n    \"            all_unused_ips=unused_ips[1]\\n\",\n    \"except Exception:\\n\",\n    \"    for ids in allocation_ids:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"region\\\"] = region\\n\",\n    \"        data_dict[\\\"allocation_id\\\"] = ids\\n\",\n    \"        all_unused_ips.append(data_dict)\\n\",\n    \"print(all_unused_ips)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9fb3704a-9b19-49c4-96ab-a982217bbcd3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-unattached-Elastic-IPs\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete unattached Elastic IPs</h3>\\n\",\n    \"<p>This action deleted unattached Elastic IPs found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>region, elastic_ip</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"4ca7a324-cd13-41d6-888f-643709c35d21\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionDescription\": \"AWS Release Elastic IP for both VPC and Standard\",\n    \"actionEntryFunction\": \"aws_release_elastic_ip\",\n    \"actionIsCheck\": false,\n    
\"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Release Elastic IP\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"20a5f7f3c28da1a98b78fdbc2ca582dd30c1b5a3f57bcfc9da691a3182a332c3\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"AWS Release Elastic IP for both VPC and Standard\",\n    \"id\": 2,\n    \"index\": 2,\n    \"inputData\": [\n     {\n      \"allocation_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"allocation_id\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"allocation_id\": {\n        \"description\": \"Allocation ID of the Elastic IP to release.\",\n        \"title\": \"Allocation ID\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"allocation_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_release_elastic_ip\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"allocation_id\": \"allocation_id\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_unused_ips\"\n      }\n 
    }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Release Elastic IP\",\n    \"orderProperties\": [\n     \"allocation_id\",\n     \"region\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_unused_ips)!=0\",\n    \"tags\": [\n     \"aws_release_elastic_ip\"\n    ],\n    \"uuid\": \"20a5f7f3c28da1a98b78fdbc2ca582dd30c1b5a3f57bcfc9da691a3182a332c3\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2023 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_release_elastic_ip_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_release_elastic_ip(handle, region: str, allocation_id: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_release_elastic_ip release elastic ip.\\n\",\n    \"\\n\",\n    \"        :type allocation_id: string\\n\",\n    \"        :param allocation_id: Allocation ID of the Elastic IP to release.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the release elastic ip info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        ec2_Client = handle.client('ec2', region_name=region)\\n\",\n    \"        response = ec2_Client.release_address(AllocationId=allocation_id)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = 
Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"allocation_id\\\": \\\"iter.get(\\\\\\\\\\\"allocation_id\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_unused_ips\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"allocation_id\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_unused_ips)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_release_elastic_ip, lego_printer=aws_release_elastic_ip_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9c7430c8-3660-45bd-90ef-9ceab77e3daa\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we checked your AWS account for unattached Elastic IP (EIP) addresses and released (removed) them to lower the cost of your monthly AWS bill. 
To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Release Unattached AWS Elastic IPs\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"allocation_ids\": {\n     \"description\": \"List of IDs that AWS assigns to represent the allocation of the Elastic IP address for use with instances in a VPC.\",\n     \"title\": \"allocation_ids\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region to search for unattached Elastic IPs. Eg: \\\"us-west-2\\\". If left blank, all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Release_Unattached_Elastic_IPs.json",
    "content": "{\n    \"name\": \"Release Unattached AWS Elastic IPs\",\n    \"description\": \"A disassociated Elastic IP address remains allocated to your account until you explicitly release it. AWS imposes a small hourly charge for Elastic IP addresses that are not associated with a running instance. This runbook can be used to deleted those unattached AWS Elastic IP addresses.\",\n    \"uuid\": \"a9d7ea5f3d31745f1de9fb8616ab6fbc20ff11e665808bdde6a9ba9b8b32e28a\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Remediate_unencrypted_S3_buckets.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"cbabc8b5-57b4-45b8-890c-370bb1ed6f02\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<strong>This runbook demonstrates How to Remediate unencrypted S3 buckets.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Remediate-unencrypted-S3-buckets\\\">Remediate unencrypted S3 buckets<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Remediate-unencrypted-S3-buckets\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>Filter all the S3 buckets which are unencrypted.</li>\\n\",\n    \"<li>Apply encryption on unencrypted S3 buckets.</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"904610fd-51a8-40f8-9850-a288f4cd1ca5\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:40:06.556Z\"\n    },\n    
\"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification \",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification \"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if bucket_name and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide region for the S3 Bucket!\\\")\\n\",\n    \"if region == None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"38f0ef87-76cb-4505-b012-5681855c9920\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-AWS-Unencrypted-S3-Buckets\\\">Filter AWS Unencrypted S3 Buckets<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Filter-Unattached-EBS-Volumes\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Filter Unencrypted S3 Buckets</strong> action. This action filters all the S3 buckets from the given region and returns a list of those S3 buckets without encryption. 
It will execute if the bucket_names parameter is not given.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>unencrypted_buckets</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"1bd5211a-2ef5-4796-bdf6-231080e966d8\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_S3\"\n    ],\n    \"actionDescription\": \"Filter AWS Unencrypted S3 Buckets\",\n    \"actionEntryFunction\": \"aws_filter_unencrypted_s3_buckets\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"50d9c6abd7dce3ff9183d4135353e82859bc5a9639455b35bd229331be6048df\"\n    ],\n    \"actionNextHopParameterMapping\": {\n     \"bucket_name\": \".[].bucket\",\n     \"region\": \".[0].region\"\n    },\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Filter AWS Unencrypted S3 Buckets\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"2fa5c0d3a9ed5951fbf2a1390610941af8e145521c244fa07b597d6ca6665a43\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Filter AWS Unencrypted S3 Buckets\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:39:37.314Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": 
\"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_filter_unencrypted_s3_buckets\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Filter AWS Unencrypted S3 Buckets\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unencrypted_buckets\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not bucket_names\",\n    \"tags\": [],\n    \"uuid\": \"2fa5c0d3a9ed5951fbf2a1390610941af8e145521c244fa07b597d6ca6665a43\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unencrypted_s3_buckets_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unencrypted_s3_buckets(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_unencrypted_s3_buckets 
Returns a list of unencrypted S3 bucket names.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region to filter S3 buckets. If empty, all regions are checked.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple with status result and list of unencrypted S3 bucket names.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            s3Client = handle.client('s3', region_name=reg)\\n\",\n    \"            response = s3Client.list_buckets()\\n\",\n    \"            # List unencrypted S3 buckets\\n\",\n    \"            for bucket in response['Buckets']:\\n\",\n    \"                try:\\n\",\n    \"                    response = s3Client.get_bucket_encryption(Bucket=bucket['Name'])\\n\",\n    \"                    encRules = response['ServerSideEncryptionConfiguration']['Rules']\\n\",\n    \"                except ClientError as e:\\n\",\n    \"                    bucket_dict = {}\\n\",\n    \"                    bucket_dict[\\\"region\\\"] = reg\\n\",\n    \"                    bucket_dict[\\\"bucket\\\"] = bucket['Name']\\n\",\n    \"                    result.append(bucket_dict)\\n\",\n    \"        except Exception as error:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    
\\\"condition_cfg\\\": \\\"not bucket_names\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"unencrypted_buckets\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_unencrypted_s3_buckets, lego_printer=aws_filter_unencrypted_s3_buckets_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"f2ed3b50-50f4-4983-b409-690aecf27b1c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Unencrypted-S3-Buckets-Output\\\">Modify Unencrypted S3 Buckets Output</h3>\\n\",\n    \"<p>In this action, we modify the output from step 1 and return a list of dictionary items for the Unencrypted S3 Buckets</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: ebs_list</p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"47117b25-2533-4021-b4f3-329b7fee165e\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-10T10:31:04.455Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Step-1 Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Step-1 Output\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"bucket_list = []\\n\",\n    \"\\n\",\n    \"try:\\n\",\n    \"    if unencrypted_buckets[0] == False:\\n\",\n    \"        for bucket in unencrypted_buckets[1]:\\n\",\n    \"            
bucket_list.append(bucket)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if bucket_names:\\n\",\n    \"        for i in bucket_names:\\n\",\n    \"            data_dict = {}\\n\",\n    \"            data_dict[\\\"region\\\"] = region\\n\",\n    \"            data_dict[\\\"bucket\\\"] = i\\n\",\n    \"            bucket_list.append(data_dict)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"0a1ba685-0340-4af8-9bc7-32e9beff2837\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Apply-AWS-Default-Encryption-for-S3-Bucket\\\">Apply AWS Default Encryption for S3 Bucket</h3>\\n\",\n    \"<p>Here we will use the unSkript <strong>Apply AWS Default Encryption for the S3 Buckets</strong> action. In this action, we will apply the default encryption configuration to the unencrypted S3 buckets by passing the list of unencrypted S3 buckets from step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>name</code>, <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>apply_output</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"id\": \"80b2e9a4-023a-4235-99ba-dce06988eb6e\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"eb57da3b21aec38d005bf0355a48ba53937c7ac62f98e9c968c9501412d72008\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    
\"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Apply a New AWS Policy for S3 Bucket\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-08-26T20:00:28.237Z\"\n    },\n    \"id\": 135,\n    \"index\": 135,\n    \"inputData\": [\n     {\n      \"name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"bucket\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"name\": {\n        \"default\": \"\",\n        \"description\": \"Name of the bucket.\",\n        \"title\": \"Bucket name\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS region of the bucket.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"name\",\n       \"policy\",\n       \"region\"\n      ],\n      \"title\": \"aws_put_bucket_policy\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"name\": \"bucket\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"bucket_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Apply AWS Default Encryption for S3 Bucket\",\n    \"nouns\": [\n     \"aws\",\n     \"policy\",\n     \"bucket\"\n    ],\n    \"orderProperties\": [\n     \"name\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"apply_output\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": 
"len(bucket_list) > 0\",\n    \"tags\": [\n     \"aws_put_bucket_policy\"\n    ],\n    \"title\": \"Apply AWS Default Encryption for S3 Bucket\",\n    \"verbs\": [\n     \"apply\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"import json\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_put_bucket_encryption_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_put_bucket_encryption(handle, name: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_put_bucket_encryption Puts default encryption configuration for bucket.\\n\",\n    \"\\n\",\n    \"          :type name: string\\n\",\n    \"          :param name: Name of the S3 bucket.\\n\",\n    \"\\n\",\n    \"          :type region: string\\n\",\n    \"          :param region: AWS region of the bucket\\n\",\n    \"\\n\",\n    \"          :rtype: Dict with the response info.\\n\",\n    \"      \\\"\\\"\\\"\\n\",\n    \"    s3Client = handle.client('s3',\\n\",\n    \"                             region_name=region)\\n\",\n    \"\\n\",\n    \"    # Setup default encryption configuration \\n\",\n    \"    response = s3Client.put_bucket_encryption(\\n\",\n    \"        Bucket=name,\\n\",\n    \"        ServerSideEncryptionConfiguration={\\n\",\n    \"            \\\"Rules\\\": [\\n\",\n    \"                {\\\"ApplyServerSideEncryptionByDefault\\\": {\\\"SSEAlgorithm\\\": \\\"AES256\\\"}}\\n\",\n    \"            ]},\\n\",\n    \"        )\\n\",\n    \"    return response\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    
\"task.configure(printOutput=True)\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"name\\\": \\\"iter.get(\\\\\\\\\\\"bucket\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"bucket_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"name\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(bucket_list) > 0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"apply_output\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_put_bucket_encryption, lego_printer=aws_put_bucket_encryption_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"dea3003f-03e9-4dff-86fb-b4073ee4ef79\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS legos to filter all unencrypted S3 buckets and apply default encryption configuration to the buckets. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Remediate unencrypted S3 buckets\",\n   \"parameters\": [\n    \"bucket_name\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"bucket_names\": {\n     \"description\": \"list of S3 bucket Names\",\n     \"title\": \"bucket_names\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region e.g. us-west-2\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"5e269198fab4eb2ea6fe7c886c38b87b334869f0501ab924e1d16d60aeba5d23\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Remediate_unencrypted_S3_buckets.json",
    "content": "{\n  \"name\": \"Remediate unencrypted S3 buckets\",\n  \"description\": \"This runbook can be used to filter all the S3 buckets which are unencrypted and apply encryption on unencrypted S3 buckets.\",\n  \"uuid\": \"50d9c6abd7dce3ff9183d4135353e82859bc5a9639455b35bd229331be6048df\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\",\"CATEGORY_TYPE_SECOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Renew_SSL_Certificate.ipynb",
"content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b18495bb-19ba-4b43-9824-8739dd304b90\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"-unSkript-Runbooks-\\\">unSkript Runbooks <a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#-unSkript-Runbooks-\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"-Objective\\\">Objective<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#-Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Renew expiring AWS Certificate Manager (ACM) issued SSL Certificates</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Renew-SSL-Certificate\\\"><u>Renew SSL Certificate</u><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Renew-SSL-Certificate\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> List expiring ACM certificates</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Renew expiring ACM certificates</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"d0b54d56-ac3c-4bf8-bff3-8f8e9c997630\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": 
true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == None:\\n\",\n    \"    region = ''\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c52f7b11-cca5-4bde-8641-995f5c9e2f43\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-expiring-ACM-certificates\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>List expiring ACM certificates</h3>\\n\",\n    \"<p>Using unSkript's List expiring ACM certificates action, we will fetch all the expiring certificates given a specific number of threshold days.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>threshold_days</code>, <code>region(Optional)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>expiring_certificates</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"4087f95e-aca3-4eb9-95c0-acf50a778c5a\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_ACM\"\n    ],\n    \"actionDescription\": \"List All Expiring ACM Certificates\",\n    \"actionEntryFunction\": \"aws_list_expiring_acm_certificates\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"76681732b20a69913f0d9248272271bf2f4ab6459498ec6d0ab055870e0db0bb\"\n    ],\n    \"actionNextHopParameterMapping\": 
{},\n    \"actionNouns\": [\n     \"expiring\",\n     \"certificates\",\n     \"aws\"\n    ],\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"List Expiring ACM Certificates\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": [\n     \"list\"\n    ],\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"c1ee1c3b5cb0e07f0b52ca4d853aba6b3e597882e785ea054f95d69c03d83973\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"List All Expiring ACM Certificates\",\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"threshold_days\": {\n       \"constant\": false,\n       \"value\": \"int(threshold_days)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"Name of the AWS Region\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"threshold_days\": {\n        \"description\": \"Threshold number(in days) to check for expiry. 
Eg: 30 - lists all certificates which are expiring within 30 days\",\n        \"title\": \"Threshold Days\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [\n       \"threshold_days\"\n      ],\n      \"title\": \"aws_list_expiring_acm_certificates\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"List Expiring ACM Certificates\",\n    \"orderProperties\": [\n     \"threshold_days\",\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"expiring_certificates\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_list_expiring_acm_certificates\"\n    ],\n    \"uuid\": \"c1ee1c3b5cb0e07f0b52ca4d853aba6b3e597882e785ea054f95d69c03d83973\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional,Tuple\\n\",\n    \"import datetime\\n\",\n    \"import dateutil\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_expiring_acm_certificates_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_expiring_acm_certificates(handle, threshold_days: int = 90, region: str=None)-> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_list_expiring_acm_certificates returns all the ACM issued certificates which\\n\",\n    \"       are about to expire given a threshold number of days\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :type threshold_days: int\\n\",\n    \"        :param threshold_days: Threshold number of days to check for expiry.\\n\",\n    \"        Eg: 30 - lists all certificates which are expiring within 30 days\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region name of the AWS account\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple containing status, expiring certificates, and error\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    arn_list=[]\\n\",\n    \"    domain_list = []\\n\",\n    \"    expiring_certificates_list= []\\n\",\n    \"    expiring_certificates_dict={}\\n\",\n    \"    result_list=[]\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if region is None or len(region)==0:\\n\",\n    \"        all_regions = aws_list_all_regions(handle=handle)\\n\",\n    \"    for r in all_regions:\\n\",\n    \"        acmClient = handle.client('acm', region_name=r)\\n\",\n    \"        try:\\n\",\n    \"            # Reset the per-region accumulators so certificates found in one\\n\",\n    \"            # region are not carried over into the results of the next region\\n\",\n    \"            arn_list=[]\\n\",\n    \"            domain_list = []\\n\",\n    \"            expiring_certificates_list= []\\n\",\n    \"            expiring_certificates_dict={}\\n\",\n    \"            certificates_list = acmClient.list_certificates(CertificateStatuses=['ISSUED'])\\n\",\n    \"            for each_arn in certificates_list['CertificateSummaryList']:\\n\",\n    \"                arn_list.append(each_arn['CertificateArn'])\\n\",\n    \"                domain_list.append(each_arn['DomainName'])\\n\",\n    \"            for cert_arn in arn_list:\\n\",\n    \"                details = acmClient.describe_certificate(CertificateArn=cert_arn)\\n\",\n    \"                for key,value in details['Certificate'].items():\\n\",\n    \"                    if key == \\\"NotAfter\\\":\\n\",\n    \"                        expiry_date = value\\n\",\n    \"                        right_now = datetime.datetime.now(dateutil.tz.tzlocal())\\n\",\n    \"                        diff = expiry_date-right_now\\n\",\n    \"                        days_remaining = diff.days\\n\",\n    \"                        if 0 < days_remaining < threshold_days:\\n\",\n    \"                            expiring_certificates_list.append(cert_arn)\\n\",\n    \"            expiring_certificates_dict[\\\"region\\\"]= r\\n\",\n    \"            expiring_certificates_dict[\\\"certificate\\\"]= expiring_certificates_list\\n\",\n    \"            if len(expiring_certificates_list)!=0:\\n\",\n    \"                result_list.append(expiring_certificates_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"    if len(result_list)!=0:\\n\",\n    \"        return (False, result_list)\\n\",\n    \"    return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"threshold_days\\\": \\\"int(threshold_days)\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"expiring_certificates\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_expiring_acm_certificates, lego_printer=aws_list_expiring_acm_certificates_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"33e6d07d-2168-44d1-99fe-32539f26758f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Expiring-Certificates\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Create List of Expiring Certificates</h3>\\n\",\n    \"<p>This action filters regions that have no certificates and creates a list of certificates that need to be renewed</p>\\n\",\n  
  \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_expiring_certificates</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"999b1c0b-701f-4207-b80f-2a5a1ce7578d\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-02T16:16:02.763Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of  Expiring Certificates\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of  Expiring Certificates\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_expiring_certificates = []\\n\",\n    \"try:\\n\",\n    \"    if expiring_certificates[0] == False:\\n\",\n    \"        if len(expiring_certificates[1])!=0:\\n\",\n    \"            all_expiring_certificates=expiring_certificates[1]\\n\",\n    \"except Exception:\\n\",\n    \"    data_dict = {}\\n\",\n    \"    data_dict[\\\"region\\\"] = region\\n\",\n    \"    data_dict[\\\"certificate\\\"] = certificate_arns\\n\",\n    \"    all_expiring_certificates.append(data_dict)\\n\",\n    \"print(all_expiring_certificates)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"45f6a4b4-f896-4e37-9fb6-3c6db915495e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Renew-expiring-ACM-certificates\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Renew expiring ACM certificates</h3>\\n\",\n    \"<p>This action renews <strong>eligible</strong> SSL certificates 
that are available on ACM. Only exported private certificates can be renewed with this operation. In order to renew your AWS Private CA certificates with ACM, you must first grant the <a href=\\\"https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">ACM service principal permission</a> to do so.<br><br><em><strong>A certificate is eligible for automatic renewal subject to the following considerations:</strong></em></p>\\n\",\n    \"<p>1)<span style=\\\"color: green;\\\"> ELIGIBLE</span> if associated with another AWS service, such as Elastic Load Balancing or CloudFront.<br>2)<span style=\\\"color: green;\\\"> ELIGIBLE</span> if exported since being issued or last renewed.<br>3)<span style=\\\"color: green;\\\"> ELIGIBLE</span> if it is a private certificate issued by calling the ACM RequestCertificate API and then exported or associated with another AWS service.<br>4)<span style=\\\"color: green;\\\"> ELIGIBLE</span> if it is a private certificate issued through the management console and then exported or associated with another AWS service.<br>5)<span style=\\\"color: red;\\\"> NOT ELIGIBLE</span> if it is a private certificate issued by calling the AWS Private CA IssueCertificate API.<br>6)<span style=\\\"color: red;\\\"> NOT ELIGIBLE</span> if imported or already expired.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>aws_certificate_arn</code>, <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"dd7da102-0ea1-4d13-a87b-c4e7af382228\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     
\"CATEGORY_TYPE_AWS_ACM\"\n    ],\n    \"actionDescription\": \"Renew Expiring ACM Certificates\",\n    \"actionEntryFunction\": \"aws_renew_expiring_acm_certificates\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": true,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": [\n     \"certificates\",\n     \"acm\",\n     \"aws\"\n    ],\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Renew Expiring ACM Certificates\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": [\n     \"renew\"\n    ],\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"89773c9cb2201505fbf5dbac0cc34a4056ba1a45a315addffec9af7a4b9b7390\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Renew Expiring ACM Certificates\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"aws_certificate_arn\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"certificate\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"aws_certificate_arn\": {\n        \"description\": \"ARN of the Certificate\",\n        \"items\": {},\n        \"title\": \"Certificate ARN\",\n        \"type\": \"array\"\n       },\n       \"region\": {\n        \"description\": \"Name of the AWS Region\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"aws_certificate_arn\",\n       \"region\"\n      ],\n      \"title\": \"aws_renew_expiring_acm_certificates\",\n      \"type\": \"object\"\n 
    }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"aws_certificate_arn\": \"certificate\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_expiring_certificates\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Renew Expiring ACM Certificates\",\n    \"orderProperties\": [\n     \"aws_certificate_arn\",\n     \"region\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_expiring_certificates)!=0\",\n    \"tags\": [\n     \"aws_renew_expiring_acm_certificates\"\n    ],\n    \"uuid\": \"89773c9cb2201505fbf5dbac0cc34a4056ba1a45a315addffec9af7a4b9b7390\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict, List\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_renew_expiring_acm_certificates_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_renew_expiring_acm_certificates(handle, aws_certificate_arn: List, region: str='') -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_renew_expiring_acm_certificates returns all the ACM issued certificates\\n\",\n    \"       which are about to expire given a threshold number of days\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :type aws_certificate_arn: List\\n\",\n    \"        :param 
aws_certificate_arn: ARN of the Certificate\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region name of the AWS account\\n\",\n    \"\\n\",\n    \"        :rtype: Result Dictionary of result\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        acmClient = handle.client('acm', region_name=region)\\n\",\n    \"        for arn in aws_certificate_arn:\\n\",\n    \"            acmClient.renew_certificate(CertificateArn=arn)\\n\",\n    \"            result[arn] = \\\"Successfully renewed\\\"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        result[\\\"error\\\"] = e\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"aws_certificate_arn\\\": \\\"iter.get(\\\\\\\\\\\"certificate\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_expiring_certificates\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"aws_certificate_arn\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_expiring_certificates)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_renew_expiring_acm_certificates, lego_printer=aws_renew_expiring_acm_certificates_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": 
\"markdown\",\n   \"id\": \"14ce7477-5f71-4127-8477-43b76473590b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS actions to list all expiring ACM SSL Certificates and subsequently renew them. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Renew AWS SSL Certificates that are close to expiration\",\n   \"parameters\": [\n    \"region\",\n    \"threshold_days\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"certificate_arns\": {\n     \"description\": \"List of AWS ACM Certificates\",\n     \"title\": \"certificate_arns\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS region which has Certificates. Eg: \\\"us-west-2\\\"\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"threshold\": {\n     \"default\": 90,\n     \"description\": \"Threshold number of days to check if a certificate is nearing its expiry. Eg: 45\",\n     \"title\": \"threshold\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Renew_SSL_Certificate.json",
    "content": "{\n    \"name\": \"Renew AWS SSL Certificates that are close to expiration\",\n    \"description\": \"This runbook can be used to list all AWS SSL (ACM) Certificates that need to be renewed within a given threshold number of days. Optionally it can renew the certificate using AWS ACM service.\",\n    \"uuid\": \"76681732b20a69913f0d9248272271bf2f4ab6459498ec6d0ab055870e0db0bb\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SECOPS\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Restart_Unhealthy_Services_Target_Group.ipynb",
"content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"60338620-97a9-4b89-9897-f6ff0b25a8a2\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<hr><center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&para;&para;&para;\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective&para;&para;&para;\\\">Objective</h3>\\n\",\n    \"<br><strong><em>Restart Unhealthy Services in a Target Group</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Restart-Unhealthy-Services-in-Target-Group&para;&para;&para;\\\"><u>Restart Unhealthy Services in Target Group</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;&para;&para;\\\">Steps Overview</h1>\\n\",\n    \"<p>1. List Unhealthy Instances in a Target Group<br>2. Restart EC2 instances</p>\\n\",\n    \"<hr>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 21,\n   \"id\": \"94297ac5-ac59-4e6c-9c9d-d669cf61c92d\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-18T14:21:36.668Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if instance_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide region for the instance!\\\")\\n\",\n    \"if region == None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"5f9ba125-c580-42cb-b7d7-941cdc145e9b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-Unhealthy-Instances-in-a-Target-Group&para;\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>List Unhealthy Instances in a Target Group</h3>\\n\",\n    \"<p>Here we will fetch all the unhealthy instances in the target group.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region(Optional)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>unheathy_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"683e6a7b-a04c-4298-987b-0e304b994906\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     
\"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_TROUBLESHOOTING\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_ELB\"\n    ],\n    \"actionDescription\": \"List Unhealthy Instances in a target group\",\n    \"actionEntryFunction\": \"aws_list_unhealthy_instances_in_target_group\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"7a5cf9629c56eb979a01977330c3d2df656e965a78323be4fa49fdc3b527c9d7\"\n    ],\n    \"actionNextHopParameterMapping\": {\n     \"region\": \".[].region\"\n    },\n    \"actionNouns\": [\n     \"unhealthy\",\n     \"instances\",\n     \"target\",\n     \"group\",\n     \"aws\"\n    ],\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS List Unhealthy Instances in a Target Group\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": [\n     \"list\"\n    ],\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"6f7558325461f2ef5ec668dbf6356f199b20b606eba684e74764e1a16e46cd0d\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"List Unhealthy Instances in a target group\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-18T14:17:21.320Z\"\n    },\n    \"id\": 13,\n    \"index\": 13,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"Name of the AWS Region\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_list_unhealthy_instances_in_target_group\",\n   
   \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS List Unhealthy Instances in a Target Group\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unhealthy_instances\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not instance_ids\",\n    \"tags\": [],\n    \"uuid\": \"6f7558325461f2ef5ec668dbf6356f199b20b606eba684e74764e1a16e46cd0d\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import pprint\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from unskript.legos.utils import parseARN\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_unhealthy_instances_in_target_group_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def get_all_target_groups(handle, r):\\n\",\n    \"    target_arns_list = []\\n\",\n    \"    elbv2Client = handle.client('elbv2', region_name=r)\\n\",\n    \"    try:\\n\",\n    \"        tbs = aws_get_paginator(elbv2Client, \\\"describe_target_groups\\\", \\\"TargetGroups\\\")\\n\",\n    \"        for index, tb in enumerate(tbs):\\n\",\n    \"            target_arns_list.append(tb.get('TargetGroupArn'))\\n\",\n    \"    except Exception:\\n\",\n    \"        pass\\n\",\n    \"    return target_arns_list\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def 
 aws_list_unhealthy_instances_in_target_group(handle, region: str=None) -> Tuple:\\n\",\n    \"    result = []\\n\",\n    \"    unhealthy_instances_list = []\\n\",\n    \"    all_target_groups = []\\n\",\n    \"    unhealthy_instances_dict = {}\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if region is None or len(region)==0:\\n\",\n    \"        all_regions = aws_list_all_regions(handle=handle)\\n\",\n    \"    for r in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            output = get_all_target_groups(handle,r)\\n\",\n    \"            if len(output)!=0:\\n\",\n    \"                all_target_groups.append(output)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"    for target_group in all_target_groups:\\n\",\n    \"        for o in target_group:\\n\",\n    \"            parsedArn = parseARN(o)\\n\",\n    \"            region_name = parsedArn['region']\\n\",\n    \"            elbv2Client = handle.client('elbv2', region_name=region_name)\\n\",\n    \"            try:\\n\",\n    \"                targetHealthResponse = elbv2Client.describe_target_health(TargetGroupArn=o)\\n\",\n    \"            except Exception as e:\\n\",\n    \"                raise e\\n\",\n    \"            for ins in targetHealthResponse[\\\"TargetHealthDescriptions\\\"]:\\n\",\n    \"                if ins['TargetHealth']['State'] in ['unhealthy']:\\n\",\n    \"                    unhealthy_instances_list.append(ins['Target']['Id'])\\n\",\n    \"    if len(unhealthy_instances_list)!=0:\\n\",\n    \"        unhealthy_instances_dict['instance'] = unhealthy_instances_list\\n\",\n    \"        unhealthy_instances_dict['region'] = region_name\\n\",\n    \"        result.append(unhealthy_instances_dict)\\n\",\n    \"    if len(result)!=0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = 
Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not instance_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"unhealthy_instances\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_unhealthy_instances_in_target_group, lego_printer=aws_list_unhealthy_instances_in_target_group_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"0e938725-b388-4c57-87b1-fd2e4719f0e1\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-untagged-instances&para;\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Create List of unhealthy instances</h3>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_unhealthy_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 28,\n   \"id\": \"ad13b804-ad7f-433e-8910-d01d679a262a\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-18T16:16:12.444Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of unhealthy instances\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of unhealthy instances\"\n   },\n   \"outputs\": [],\n  
 \"source\": [\n    \"all_unhealthy_instances = []\\n\",\n    \"try:\\n\",\n    \"    if unhealthy_instances[0] == False:\\n\",\n    \"        for each_instance in unhealthy_instances[1]:\\n\",\n    \"            all_unhealthy_instances.append(each_instance)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if instance_ids:\\n\",\n    \"        for instance in instance_ids:\\n\",\n    \"            instance_dict = {}\\n\",\n    \"            instance_dict[\\\"instance\\\"] = instance\\n\",\n    \"            instance_dict[\\\"region\\\"] = region\\n\",\n    \"            all_unhealthy_instances.append(instance_dict)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"464a91c4-371f-426e-a6d6-32c2266d42e4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-Unhealthy-Instances-in-a-Target-Group&para;\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Restart EC2 instances</h3>\\n\",\n    \"<p>Here we will restart all the unhealthy EC2 instances.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, instance_ids</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"23059dc8-f854-4301-a557-c62683a0d045\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n   
 \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"e7d021a8e955291cf31e811e64a86baa2a902ea2185cb76e7121ebbab261c320\",\n    \"checkEnabled\": false,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Restart AWS EC2 Instances\",\n    \"id\": 250,\n    \"index\": 250,\n    \"inputData\": [\n     {\n      \"instance_ids\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"instance\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_ids\": {\n        \"description\": \"List of instance IDs. For eg. [\\\"i-foo\\\", \\\"i-bar\\\"]\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Instance IDs\",\n        \"type\": \"array\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instances.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_ids\",\n       \"region\"\n      ],\n      \"title\": \"aws_restart_ec2_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"instance_ids\": \"instance\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_unhealthy_instances\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Restart AWS EC2 Instances\",\n    \"nouns\": [],\n    
\"orderProperties\": [\n     \"instance_ids\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_restart_ec2_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_restart_ec2_instances(handle, instance_ids: List, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_restart_instances Restarts instances.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) 
method.\\n\",\n    \"\\n\",\n    \"        :type instance_ids: list\\n\",\n    \"        :param instance_ids: List of instance ids.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region for instance.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the restarted instances info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    res = ec2Client.reboot_instances(InstanceIds=instance_ids)\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_ids\\\": \\\"iter.get(\\\\\\\\\\\"instance\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_unhealthy_instances\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"instance_ids\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_restart_ec2_instances, lego_printer=aws_restart_ec2_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"ae582460-4ae2-4d66-8328-1fb1deb238c3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;&para;\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to restart all unhealthy EC2 instances in a target group using 
unSkript's AWS actions. To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Restart unhealthy services in a Target Group\",\n   \"parameters\": [\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"instance_ids\": {\n     \"description\": \"List of AWS EC2 instance.\",\n     \"title\": \"instance_ids\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS region(s) to get the target groups from. Eg: us-west-2\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Restart_Unhealthy_Services_Target_Group.json",
    "content": "{\n  \"name\": \"AWS Restart unhealthy services in a Target Group\",\n  \"description\": \"This runbook finds unhealthy instances in a target group and restarts them using the EC2 reboot API.\",\n  \"uuid\": \"7a5cf9629c56eb979a01977330c3d2df656e965a78323be4fa49fdc3b527c9d7\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\", \"CATEGORY_TYPE_TROUBLESHOOTING\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/AWS_Restrict_S3_Buckets_with_READ_WRITE_Permissions.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"c92fbc7c-b9b3-4fd9-8f55-9811f3580311\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&para;\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective&para;\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em><strong>Restrict S3 Buckets with READ/WRITE Permissions for all authenticated users.</strong></em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Restrict-S3-Buckets-with-READ/WRITE-Permissions&para;\\\"><u>Restrict S3 Buckets with READ/WRITE Permissions</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;\\\">Steps Overview</h1>\\n\",\n    \"<p>1)&nbsp;<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Filter Public S3 buckets with ACL Permissions</a><br>2)&nbsp;<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Change the permissions to private</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"id\": \"95180c6c-d28d-487f-9d7b-bfeefe0357e8\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T07:05:08.327Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": 
\"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region is None:\\n\",\n    \"    region = ''\\n\",\n    \"if bucket_names and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the S3 bucket names!\\\")\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"f6cfb169-e57e-4e88-8cf2-e85e828b6a2c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-expiring-ACM-certificates\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Filter S3 buckets with ACL Permissions<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#List-expiring-ACM-certificates\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This action will fetch all public S3 buckets with the chosen permissions - <em>\\\"READ\\\",\\\"READ_ACP\\\",\\\"WRITE\\\",\\\"WRITE_ACP\\\", and \\\"FULL_CONTROL\\\"</em>. If no permissions are given, the action will execute for <span style=\\\"color: blue;\\\"> READ</span> and <span style=\\\"color: blue;\\\"> WRITE</span>.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>bucket_permission(Optional)</code>, <code>region(Optional)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>public_buckets</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"6b5c887b-254a-4790-9eaf-9e320615bd75\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_S3\"\n    ],\n    \"actionDescription\": \"Get AWS public S3 Buckets using ACL\",\n    \"actionEntryFunction\": 
\"aws_filter_public_s3_buckets_by_acl\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"305fe6a6f0512eb2d91b71c508b3a192e5b7021bf8196f4deeec5397f2b85e84\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": [\n     \"aws\",\n     \"s3\",\n     \"public\",\n     \"buckets\",\n     \"by\",\n     \"acl\"\n    ],\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Get AWS public S3 Buckets using ACL\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": [\n     \"filter\"\n    ],\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"b13d82d445e9064eb3cb88ca6247696ee3e7bfceb02b617833992f8552bf48fb\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Get AWS public S3 Buckets using ACL\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T07:05:35.678Z\"\n    },\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"permission\": {\n       \"constant\": true,\n       \"value\": \"bucket_permission\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"us-west-2\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"definitions\": {\n       \"BucketACLPermissions\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"READ\",\n         \"WRITE\",\n         \"READ_ACP\",\n         \"WRITE_ACP\",\n         \"FULL_CONTROL\"\n        ],\n        \"title\": \"BucketACLPermissions\",\n        \"type\": \"string\"\n       }\n      },\n      \"properties\": {\n       \"permission\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/BucketACLPermissions\"\n         }\n        ],\n        \"default\": 
\"READ\",\n        \"description\": \"Set of permissions that AWS S3 supports in an ACL for buckets and objects\",\n        \"title\": \"S3 Bucket's ACL Permission\",\n        \"type\": \"enum\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"Name of the AWS Region\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_filter_public_s3_buckets_by_acl\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS public S3 Buckets using ACL\",\n    \"orderProperties\": [\n     \"region\",\n     \"permission\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"public_buckets\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not bucket_names\",\n    \"tags\": [\n     \"aws_filter_public_s3_buckets_by_acl\"\n    ],\n    \"uuid\": \"b13d82d445e9064eb3cb88ca6247696ee3e7bfceb02b617833992f8552bf48fb\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Optional, Tuple\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from unskript.legos.aws.aws_get_s3_buckets.aws_get_s3_buckets import aws_get_s3_buckets\\n\",\n    \"from unskript.enums.aws_acl_permissions_enums import BucketACLPermissions\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_public_s3_buckets_by_acl_printer(output):\\n\",\n    \"    if 
output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def check_publicly_accessible_buckets(s3Client,b,all_permissions):\\n\",\n    \"    public_check = [\\\"http://acs.amazonaws.com/groups/global/AuthenticatedUsers\\\",\\n\",\n    \"                   \\\"http://acs.amazonaws.com/groups/global/AllUsers\\\"]\\n\",\n    \"    public_buckets = False\\n\",\n    \"    try:\\n\",\n    \"        res = s3Client.get_bucket_acl(Bucket=b)\\n\",\n    \"        for perm in all_permissions:\\n\",\n    \"            for grant in res[\\\"Grants\\\"]:\\n\",\n    \"                if 'Permission' in grant.keys() and perm == grant[\\\"Permission\\\"]:\\n\",\n    \"                    if 'URI' in grant[\\\"Grantee\\\"] and grant[\\\"Grantee\\\"][\\\"URI\\\"] in public_check:\\n\",\n    \"                        public_buckets = True\\n\",\n    \"    except Exception as e:\\n\",\n    \"        pass\\n\",\n    \"    return public_buckets\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_public_s3_buckets_by_acl(handle, permission:BucketACLPermissions=BucketACLPermissions.READ, region: str=None) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_public_s3_buckets_by_acl get list of public buckets.\\n\",\n    \"\\n\",\n    \"        Note- By default(if no permissions are given) READ and WRITE ACL Permissioned S3 buckets are checked for public access. 
Other ACL Permissions are - \\\"READ_ACP\\\"|\\\"WRITE_ACP\\\"|\\\"FULL_CONTROL\\\"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...)\\n\",\n    \"\\n\",\n    \"        :type permission: Enum\\n\",\n    \"        :param permission: Set of permissions that AWS S3 supports in an ACL for buckets and objects.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: location of the bucket.\\n\",\n    \"\\n\",\n    \"        :rtype: Object with status, list of public S3 buckets with READ/WRITE ACL Permissions, and errors\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    all_permissions = [permission]\\n\",\n    \"    if permission is None or len(permission)==0:\\n\",\n    \"        all_permissions = [\\\"READ\\\",\\\"WRITE\\\"]\\n\",\n    \"    result = []\\n\",\n    \"    all_buckets = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if region is None or len(region)==0:\\n\",\n    \"        all_regions = aws_list_all_regions(handle=handle)\\n\",\n    \"    try:\\n\",\n    \"        for r in all_regions:\\n\",\n    \"            s3Client = handle.client('s3',region_name=r)\\n\",\n    \"            output = aws_get_s3_buckets(handle=handle, region=r)\\n\",\n    \"            if len(output)!= 0:\\n\",\n    \"                for o in output:\\n\",\n    \"                    all_buckets_dict = {}\\n\",\n    \"                    all_buckets_dict[\\\"region\\\"]=r\\n\",\n    \"                    all_buckets_dict[\\\"bucket\\\"]=o\\n\",\n    \"                    all_buckets.append(all_buckets_dict)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise e\\n\",\n    \"\\n\",\n    \"    for bucket in all_buckets:\\n\",\n    \"        s3Client = handle.client('s3',region_name= bucket['region'])\\n\",\n    \"        flag = check_publicly_accessible_buckets(s3Client,bucket['bucket'], all_permissions)\\n\",\n    \"        if flag:\\n\",\n    \"            
result.append(bucket)\\n\",\n    \"    if len(result)!=0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"permission\\\": \\\"bucket_permission\\\",\\n\",\n    \"    \\\"region\\\": \\\"\\\\\\\\\\\"us-west-2\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not bucket_names\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"public_buckets\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_public_s3_buckets_by_acl, lego_printer=aws_filter_public_s3_buckets_by_acl_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"24c71589-028b-4d3b-908f-ce867b462f7a\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Expiring-Certificates\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Create List of public S3 Buckets<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Create-List-of-Expiring-Certificates\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This action filters regions that have no public buckets and creates a list of public buckets that are to be made private.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This 
action captures the following output: <code>all_public_buckets</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 25,\n   \"id\": \"fa0655b5-e142-445c-9a39-312b4ee9f3f6\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-02T15:56:00.421Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of public S3 buckets\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of public S3 buckets\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_public_buckets = []\\n\",\n    \"try:\\n\",\n    \"    if public_buckets[0] is False:\\n\",\n    \"        if len(public_buckets[1])!=0:\\n\",\n    \"            all_public_buckets=public_buckets[1]\\n\",\n    \"except Exception:\\n\",\n    \"    for buck in bucket_names:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"region\\\"] = region\\n\",\n    \"        data_dict[\\\"bucket\\\"] = buck\\n\",\n    \"        all_public_buckets.append(data_dict)\\n\",\n    \"print(all_public_buckets)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"b49c03e9-2951-4fab-b5f5-5338b8a955f9\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-expiring-ACM-certificates\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Change permission to private<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#List-expiring-ACM-certificates\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Using unSkript's AWS Change ACL Permission of public S3 Bucket action, we will change the permissions of the bucket to <em>private, public-read, public-read-write, 
authenticated-read.&nbsp;</em>If no canned_acl_permission is selected, <span style=\\\"color: blue;\\\"> private</span> will be set by default.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>bucket_name</code>, <code>region,canned_acl_permission</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"a0896792-2764-4e3e-ab44-82f234e1c5f7\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_S3\"\n    ],\n    \"actionDescription\": \"AWS Change ACL Permission public S3 Bucket\",\n    \"actionEntryFunction\": \"aws_change_acl_permissions_of_buckets\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": true,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Change ACL Permission of public S3 Bucket\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"305fe6a6f0512eb2d91b71c508b3a192e5b7021bf8196f4deeec5397f2b85e84\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"AWS Change ACL Permission public S3 Bucket\",\n    \"id\": 3,\n    \"index\": 3,\n    \"inputData\": [\n     {\n      \"acl\": {\n       \"constant\": true,\n       \"value\": \"acl_permission\"\n      },\n      \"bucket_name\": {\n       \"constant\": false,\n       \"value\": 
\"\\\"iter.get(\\\\\\\\\\\"bucket_name\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"definitions\": {\n       \"CannedACLPermissions\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"Private\",\n         \"PublicRead\",\n         \"PublicReadWrite\",\n         \"AuthenticatedRead\"\n        ],\n        \"title\": \"CannedACLPermissions\",\n        \"type\": \"string\"\n       }\n      },\n      \"properties\": {\n       \"acl\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/CannedACLPermissions\"\n         }\n        ],\n        \"description\": \"Canned ACL Permission type - 'private'|'public-read'|'public-read-write'|'authenticated-read'.\",\n        \"title\": \"Canned ACL Permission\",\n        \"type\": \"enum\"\n       },\n       \"bucket_name\": {\n        \"description\": \"AWS S3 Bucket Name.\",\n        \"title\": \"Bucket Name\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"bucket_name\"\n      ],\n      \"title\": \"aws_change_acl_permissions_of_buckets\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"bucket_name\": \"bucket_name\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_public_buckets\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Change ACL Permission of public S3 Bucket\",\n    \"orderProperties\": [\n    
 \"region\",\n     \"bucket_name\",\n     \"acl\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_public_buckets)!=0\",\n    \"tags\": [\n     \"aws_change_acl_permissions_of_buckets\"\n    ],\n    \"uuid\": \"305fe6a6f0512eb2d91b71c508b3a192e5b7021bf8196f4deeec5397f2b85e84\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.enums.aws_canned_acl_enums import CannedACLPermissions\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_change_acl_permissions_of_buckets_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_change_acl_permissions_of_buckets(\\n\",\n    \"    handle,\\n\",\n    \"    bucket_name: str,\\n\",\n    \"    acl: CannedACLPermissions=CannedACLPermissions.Private,\\n\",\n    \"    region: str = None\\n\",\n    \"    ) -> Dict:\\n\",\n    \"    \\\"\\\"\\\" aws_put_bucket_acl get Dict of buckets ACL change info.\\n\",\n    \"\\n\",\n    \"            :type handle: Session\\n\",\n    \"            :param handle: Object returned by the task.validate(...) 
method\\n\",\n    \"\\n\",\n    \"            :type bucket_name: string\\n\",\n    \"            :param bucket_name: S3 bucket name where to set ACL on.\\n\",\n    \"\\n\",\n    \"            :type acl: CannedACLPermissions\\n\",\n    \"            :param acl: Canned ACL Permission type - 'private'|'public-read'|'public-read-write\\n\",\n    \"            '|'authenticated-read'.\\n\",\n    \"\\n\",\n    \"            :type region: string\\n\",\n    \"            :param region: location of the bucket.\\n\",\n    \"\\n\",\n    \"            :rtype: Dict of buckets ACL change info\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    # connect to the S3 using client\\n\",\n    \"    all_permissions = acl\\n\",\n    \"    if acl is None or len(acl)==0:\\n\",\n    \"        all_permissions = \\\"private\\\"\\n\",\n    \"    s3Client = handle.client('s3',\\n\",\n    \"                             region_name=region)\\n\",\n    \"\\n\",\n    \"    # Put bucket ACL for the permissions grant\\n\",\n    \"    response = s3Client.put_bucket_acl(\\n\",\n    \"                    Bucket=bucket_name,\\n\",\n    \"                    ACL=all_permissions )\\n\",\n    \"\\n\",\n    \"    return response\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"\\\\\\\\\\\"iter.get(\\\\\\\\\\\\\\\\\\\\\\\\\\\"region\\\\\\\\\\\\\\\\\\\\\\\\\\\")\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"bucket_name\\\": \\\"\\\\\\\\\\\"iter.get(\\\\\\\\\\\\\\\\\\\\\\\\\\\"bucket_name\\\\\\\\\\\\\\\\\\\\\\\\\\\")\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"acl\\\": \\\"acl_permission\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_public_buckets\\\",\\n\",\n    \"    \\\"iter_parameter\\\": 
[\\\"region\\\",\\\"bucket_name\\\"]\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_public_buckets)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_change_acl_permissions_of_buckets, lego_printer=aws_change_acl_permissions_of_buckets_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"eada3017-32cf-46e2-b02c-4eb60256a3a9\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we were able to restrict S3 buckets having read and write permissions to private. 
To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Restrict S3 Buckets with READ/WRITE Permissions to all Authenticated Users\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"acl_permission\": {\n     \"default\": \"private\",\n     \"description\": \"Canned ACL Permission type - Eg: 'private'|'public-read'|'public-read-write'|'authenticated-read'\",\n     \"enum\": [\n      \"private\",\n      \"public-read\",\n      \"public-read-write\"\n     ],\n     \"enumNames\": [\n      \"private\",\n      \"public-read\",\n      \"public-read-write\"\n     ],\n     \"title\": \"acl_permission\",\n     \"type\": \"string\"\n    },\n    \"bucket_names\": {\n     \"description\": \"List of S3 bucket names.\",\n     \"title\": \"bucket_names\",\n     \"type\": \"array\"\n    },\n    \"bucket_permission\": {\n     \"default\": \"READ\",\n     \"description\": \"Set of permissions that AWS S3 supports in an ACL for buckets and objects. 
Eg:\\\"READ\\\",\\\"WRITE_ACP\\\",\\\"FULL_CONTROL\\\"\",\n     \"enum\": [\n      \"READ\",\n      \"WRITE\",\n      \"READ_ACP\"\n     ],\n     \"enumNames\": [\n      \"READ\",\n      \"WRITE\",\n      \"READ_ACP\"\n     ],\n     \"title\": \"bucket_permission\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region to get the buckets from. Eg: us-west-2\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Restrict_S3_Buckets_with_READ_WRITE_Permissions.json",
    "content": "{\n    \"name\": \"Restrict S3 Buckets with READ/WRITE Permissions to all Authenticated Users\",\n    \"description\": \"This runbook lists all S3 buckets, filters the buckets whose ACLs grant public READ/WRITE permissions, and changes those ACLs to private in the given region.\",\n    \"uuid\": \"750987144b20d7b5984a37e58c2e17b69fd33f799a1f027f0ff7532cee5913c6\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Secure_Publicly_Accessible_RDS_Instances.ipynb",
    "content": "{\n    \"cells\": [\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"c0e8284f-f6a8-4b7f-971c-8fb037002354\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Runbook Overview\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Runbook Overview\"\n            },\n            \"source\": [\n                \"<center><img src=\\\"https://unskript.com/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n                \"<h1 id=\\\"unSkript-Runbooks&nbsp;\\\">unSkript Runbooks&nbsp;<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n                \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n                \"<h3 id=\\\"Objective\\\"><strong>Objective</strong><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n                \"<strong>Get publicly accessible AWS RDS DB instances and change them to private.</strong></div>\\n\",\n                \"</center><center>\\n\",\n                \"<h2 id=\\\"Publicly-Accessible-Amazon-RDS-Instances\\\">Secure Publicly Accessible Amazon RDS Instances<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Publicly-Accessible-Amazon-RDS-Instances\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n                \"</center>\\n\",\n                \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n                \"<ol>\\n\",\n                \"<li>AWS Get Publicly Accessible RDS Instances</li>\\n\",\n                \"<li>Change public access to private</li>\\n\",\n                \"</ol>\"\n            ]\n        },\n        {\n            \"cell_type\": 
\"code\",\n            \"execution_count\": null,\n            \"id\": \"29c6d54f-6a4e-4058-ab1f-5354a79cfd66\",\n            \"metadata\": {\n                \"customAction\": true,\n                \"execution_data\": {\n                    \"last_date_success_run_cell\": \"2023-08-16T09:07:26.188Z\"\n                },\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"name\": \"Input verification\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Input verification\"\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"if rds_instances and not region:\\n\",\n                \"    raise SystemExit(\\\"Provide a region for the RDS Instances!\\\")\\n\",\n                \"if region is None:\\n\",\n                \"    region = \\\"\\\"\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"5d6e0429-7d5d-4a6e-aae2-165235fdeb49\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Step 1\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Step 1\"\n            },\n            \"source\": [\n                \"<h3 id=\\\"AWS-Get-Publicly-Accessible-RDS-Instances\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>AWS Get Publicly Accessible RDS Instances</h3>\\n\",\n                \"<p>Using unSkript's <strong>AWS Get Publicly Accessible RDS Instances</strong>&nbsp;action, we will get all the publicly accessible RDS instances.</p>\\n\",\n                \"<blockquote>\\n\",\n                \"<p>Input parameters: <code>region</code></p>\\n\",\n                \"</blockquote>\\n\",\n                \"<blockquote>\\n\",\n                \"<p>Output variable: 
<code>rds_instances</code></p>\\n\",\n                \"</blockquote>\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": null,\n            \"id\": \"7f6a6416-23f4-42d0-8d3c-dad850450f9e\",\n            \"metadata\": {\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_IAM\",\n                    \"CATEGORY_TYPE_SECOPS\",\n                    \"CATEGORY_TYPE_AWS\",\n                    \"CATEGORY_TYPE_AWS_RDS\"\n                ],\n                \"actionDescription\": \"AWS Get Publicly Accessible RDS Instances\",\n                \"actionEntryFunction\": \"aws_get_publicly_accessible_db_instances\",\n                \"actionIsCheck\": true,\n                \"actionIsRemediation\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHop\": [\n                    \"dda26fd556dd6b59e2fac9c9ed6e81fc19e5374746049d494237bcdc6a17fae4\"\n                ],\n                \"actionNextHopParameterMapping\": {\n                    \"dda26fd556dd6b59e2fac9c9ed6e81fc19e5374746049d494237bcdc6a17fae4\": {\n                        \"name\": \"Publicly Accessible Amazon RDS Instances\",\n                        \"region\": \".[0].region\"\n                    }\n                },\n                \"actionNouns\": null,\n                \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"actionTitle\": \"AWS Get Publicly Accessible RDS Instances\",\n                \"actionType\": \"LEGO_TYPE_AWS\",\n                \"actionVerbs\": null,\n                \"actionVersion\": \"1.0.0\",\n                \"action_modified\": false,\n                \"action_uuid\": \"97bfc082be1cffdf5c795b3119bfa90b36946934b37cf213d762e0ee3ee881f8\",\n                \"condition_enabled\": true,\n                
\"credentialsJson\": {},\n                \"description\": \"AWS Get Publicly Accessible RDS Instances\",\n                \"execution_count\": {},\n                \"execution_data\": {},\n                \"id\": 3,\n                \"index\": 3,\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"region\": {\n                                \"default\": \"\",\n                                \"description\": \"Region of the RDS.\",\n                                \"title\": \"Region for RDS\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"title\": \"aws_get_publicly_accessible_db_instances\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"language\": \"python\",\n                \"legotype\": \"LEGO_TYPE_AWS\",\n                \"name\": \"AWS Get Publicly Accessible RDS Instances\",\n                \"orderProperties\": [\n                    \"region\"\n                ],\n                \"outputParams\": {\n                    \"output_name\": \"public_rds_instances\",\n                    \"output_name_enabled\": true,\n                    \"output_runbook_enabled\": false,\n                    \"output_runbook_name\": \"\"\n                },\n                \"printOutput\": true,\n                \"startcondition\": \"not rds_instances\",\n                \"tags\": [\n                    \"aws_get_publicly_accessible_db_instances\"\n                ],\n                \"uuid\": \"97bfc082be1cffdf5c795b3119bfa90b36946934b37cf213d762e0ee3ee881f8\",\n                \"version\": \"1.0.0\"\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"##\\n\",\n                \"##  Copyright (c) 2021 
unSkript, Inc\\n\",\n                \"##  All rights reserved.\\n\",\n                \"##\\n\",\n                \"import pprint\\n\",\n                \"from typing import Optional, Tuple\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from unskript.legos.utils import CheckOutput\\n\",\n                \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n                \"from unskript.connectors.aws import aws_get_paginator\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def aws_get_publicly_accessible_db_instances_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"\\n\",\n                \"    if isinstance(output, CheckOutput):\\n\",\n                \"        print(output.json())\\n\",\n                \"    else:\\n\",\n                \"        pprint.pprint(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def aws_get_publicly_accessible_db_instances(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n                \"    \\\"\\\"\\\"aws_get_publicly_accessible_db_instances Gets all publicly accessible DB instances\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from task.validate(...).\\n\",\n                \"\\n\",\n                \"        :type region: string\\n\",\n                \"        :param region: Region of the RDS.\\n\",\n                \"\\n\",\n                \"        :rtype: CheckOutput with status result and list of publicly accessible RDS instances.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    result = []\\n\",\n                \"    all_regions = 
[region]\\n\",\n                \"    if not region:\\n\",\n                \"        all_regions = aws_list_all_regions(handle)\\n\",\n                \"    for reg in all_regions:\\n\",\n                \"        try:\\n\",\n                \"            ec2Client = handle.client('rds', region_name=reg)\\n\",\n                \"            response = aws_get_paginator(ec2Client, \\\"describe_db_instances\\\", \\\"DBInstances\\\")\\n\",\n                \"            for db in response:\\n\",\n                \"                db_instance_dict = {}\\n\",\n                \"                if db['PubliclyAccessible']:\\n\",\n                \"                    db_instance_dict[\\\"region\\\"] = reg\\n\",\n                \"                    db_instance_dict[\\\"instance\\\"] = db['DBInstanceIdentifier']\\n\",\n                \"                    result.append(db_instance_dict)\\n\",\n                \"        except Exception:\\n\",\n                \"            pass\\n\",\n                \"\\n\",\n                \"    if len(result) != 0:\\n\",\n                \"        return (False, result)\\n\",\n                \"    return (True, None)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(conditionsJson='''{\\n\",\n                \"    \\\"condition_enabled\\\": true,\\n\",\n                \"    \\\"condition_cfg\\\": \\\"not rds_instances\\\",\\n\",\n                \"    \\\"condition_result\\\": true\\n\",\n                \"    }''')\\n\",\n                \"task.configure(credentialsJson='''{\\\\\\\"credential_type\\\\\\\": \\\\\\\"CONNECTOR_TYPE_AWS\\\\\\\"}''')\\n\",\n                \"\\n\",\n                \"task.configure(outputName=\\\"public_rds_instances\\\")\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is 
None:\\n\",\n                \"    task.execute(aws_get_publicly_accessible_db_instances, lego_printer=aws_get_publicly_accessible_db_instances_printer, hdl=hdl, args=args)\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"d56e5ae8-9277-4615-a3a9-dda4f55955bf\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Step 1A\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Step 1A\"\n            },\n            \"source\": [\n                \"<h3 id=\\\"Modify-Output\\\">Create List of Public RDS Instances</h3>\\n\",\n                \"<p>In this action, we modify the output from step 1 and return a list of dictionary items for the publicly accessible RDS instances.</p>\\n\",\n                \"<blockquote>\\n\",\n                \"<p><strong>Output variable:</strong> all_public_instances</p>\\n\",\n                \"</blockquote>\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": null,\n            \"id\": \"be5f0054-e0f8-40e7-b797-993033a3fe04\",\n            \"metadata\": {\n                \"collapsed\": true,\n                \"credentialsJson\": {},\n                \"customAction\": true,\n                \"execution_data\": {\n                    \"last_date_success_run_cell\": \"2023-08-16T09:07:53.288Z\"\n                },\n                \"jupyter\": {\n                    \"outputs_hidden\": true,\n                    \"source_hidden\": true\n                },\n                \"name\": \"Create List of Public RDS Instances\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Create List of Public RDS Instances\"\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"all_public_instances = 
[]\\n\",\n                \"try:\\n\",\n                \"    if public_rds_instances[0] == False:\\n\",\n                \"        for instance in public_rds_instances[1]:\\n\",\n                \"            all_public_instances.append(instance)\\n\",\n                \"except Exception as e:\\n\",\n                \"    if rds_instances:\\n\",\n                \"        for ins in rds_instances:\\n\",\n                \"            data_dict = {}\\n\",\n                \"            data_dict[\\\"region\\\"] = region\\n\",\n                \"            data_dict[\\\"instance\\\"] = ins\\n\",\n                \"            all_public_instances.append(data_dict)\\n\",\n                \"    else:\\n\",\n                \"        raise Exception(e)\\n\",\n                \"print(all_public_instances)\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"a518b936-dc13-4a56-962d-1595f7c74b71\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Step 2\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Step 2\"\n            },\n            \"source\": [\n                \"<h3 id=\\\"Change-the-public-access-to-private\\\">Change the public access to private</h3>\\n\",\n                \"<p>Using unSkript's Modify Publicly Accessible RDS Instances action we will modify the access to all the publicly accessible instances from the <em>public</em> to <em>private</em>.</p>\\n\",\n                \"<blockquote>\\n\",\n                \"<p>This action takes the following parameters: <code>region</code>, <code>db_instance_identifier</code></p>\\n\",\n                \"</blockquote>\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": null,\n            \"id\": \"1b4ad0cc-6140-4f6f-a06e-c894b583cb99\",\n        
    \"metadata\": {\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_AWS\",\n                    \"CATEGORY_TYPE_AWS_RDS\"\n                ],\n                \"actionDescription\": \"Change public accessibility of RDS Instances to False.\",\n                \"actionEntryFunction\": \"aws_make_rds_instance_not_publicly_accessible\",\n                \"actionIsCheck\": false,\n                \"actionIsRemediation\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHop\": null,\n                \"actionNextHopParameterMapping\": null,\n                \"actionNouns\": null,\n                \"actionOutputType\": \"ACTION_OUTPUT_TYPE_STR\",\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"actionTitle\": \"Disallow AWS RDS Instance public accessibility\",\n                \"actionType\": \"LEGO_TYPE_AWS\",\n                \"actionVerbs\": null,\n                \"actionVersion\": \"1.0.0\",\n                \"action_modified\": false,\n                \"action_uuid\": \"15d2e1417496ecb13e7bb88d7429f74dabbb6f8b9bc7d9df275647eae402e4dd\",\n                \"condition_enabled\": true,\n                \"continueOnError\": true,\n                \"credentialsJson\": {},\n                \"description\": \"Change public accessibility of RDS Instances to False.\",\n                \"execution_count\": {},\n                \"execution_data\": {},\n                \"id\": 7,\n                \"index\": 7,\n                \"inputData\": [\n                    {\n                        \"db_instance_identifier\": {\n                            \"constant\": false,\n                            \"value\": \"\\\"iter.get(\\\\\\\\\\\"instance\\\\\\\\\\\")\\\"\"\n                        },\n                        \"region\": {\n                        
    \"constant\": false,\n                            \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"db_instance_identifier\": {\n                                \"description\": \"The DB instance identifier for the DB instance to be deleted. This parameter isn’t case-sensitive.\",\n                                \"title\": \"RDS Instance Identifier\",\n                                \"type\": \"string\"\n                            },\n                            \"region\": {\n                                \"description\": \"AWS region of instance identifier\",\n                                \"title\": \"AWS Region\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"db_instance_identifier\",\n                            \"region\"\n                        ],\n                        \"title\": \"aws_make_rds_instance_not_publicly_accessible\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"iterData\": [\n                    {\n                        \"iter_enabled\": true,\n                        \"iter_item\": {\n                            \"db_instance_identifier\": \"instance\",\n                            \"region\": \"region\"\n                        },\n                        \"iter_list\": {\n                            \"constant\": false,\n                            \"objectItems\": true,\n                            \"value\": \"all_public_instances\"\n                        }\n                    }\n                ],\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                
\"language\": \"python\",\n                \"legotype\": \"LEGO_TYPE_AWS\",\n                \"name\": \"Disallow AWS RDS Instance public accessibility\",\n                \"orderProperties\": [\n                    \"db_instance_identifier\",\n                    \"region\"\n                ],\n                \"printOutput\": true,\n                \"startcondition\": \"len(all_public_instances)!=0\",\n                \"tags\": [\n                    \"aws_make_rds_instance_not_publicly_accessible\"\n                ],\n                \"uuid\": \"15d2e1417496ecb13e7bb88d7429f74dabbb6f8b9bc7d9df275647eae402e4dd\",\n                \"version\": \"1.0.0\"\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"##\\n\",\n                \"# Copyright (c) 2023 unSkript, Inc\\n\",\n                \"# All rights reserved.\\n\",\n                \"##\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def aws_make_rds_instance_not_publicly_accessible_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def aws_make_rds_instance_not_publicly_accessible(handle, db_instance_identifier: str, region: str) -> str:\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    aws_make_rds_instance_not_publicly_accessible makes the specified RDS instance not publicly accessible.\\n\",\n                \"\\n\",\n                \"    :type handle: object\\n\",\n                \"    :param handle: Object returned from task.validate(...).\\n\",\n                \"\\n\",\n                \"    :type db_instance_identifier: string\\n\",\n                \"    
:param db_instance_identifier: Identifier of the RDS instance.\\n\",\n                \"\\n\",\n                \"    :type region: string\\n\",\n                \"    :param region: Region of the RDS instance.\\n\",\n                \"\\n\",\n                \"    :rtype: Response of the operation.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    try:\\n\",\n                \"        rdsClient = handle.client('rds', region_name=region)\\n\",\n                \"        rdsClient.modify_db_instance(\\n\",\n                \"            DBInstanceIdentifier=db_instance_identifier,\\n\",\n                \"            PubliclyAccessible=False\\n\",\n                \"        )\\n\",\n                \"    except Exception as e:\\n\",\n                \"        raise e\\n\",\n                \"    return \\\"Public accessibility is being changed to False...\\\"\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(continueOnError=True)\\n\",\n                \"task.configure(credentialsJson='''{\\\\\\\"credential_type\\\\\\\": \\\\\\\"CONNECTOR_TYPE_AWS\\\\\\\"}''')\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"db_instance_identifier\\\": \\\"iter.get(\\\\\\\\\\\"instance\\\\\\\\\\\")\\\",\\n\",\n                \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n                \"    }''')\\n\",\n                \"task.configure(iterJson='''{\\n\",\n                \"    \\\"iter_enabled\\\": true,\\n\",\n                \"    \\\"iter_list_is_const\\\": false,\\n\",\n                \"    \\\"iter_list\\\": \\\"all_public_instances\\\",\\n\",\n                \"    \\\"iter_parameter\\\": [\\\"db_instance_identifier\\\",\\\"region\\\"]\\n\",\n                \"    }''')\\n\",\n                
\"task.configure(conditionsJson='''{\\n\",\n                \"    \\\"condition_enabled\\\": true,\\n\",\n                \"    \\\"condition_cfg\\\": \\\"len(all_public_instances)!=0\\\",\\n\",\n                \"    \\\"condition_result\\\": true\\n\",\n                \"    }''')\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(aws_make_rds_instance_not_publicly_accessible, lego_printer=aws_make_rds_instance_not_publicly_accessible_printer, hdl=hdl, args=args)\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"276822d0-0d5d-4023-83c1-3f8b12e50568\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Conclusion\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Conclusion\"\n            },\n            \"source\": [\n                \"<h3 id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n                \"<p>In this Runbook, we demonstrated the use of unSkript's AWS actions. This runbook help to find publicly accessible RDS instances and change it to private. 
To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n            ]\n        }\n    ],\n    \"metadata\": {\n        \"execution_data\": {\n            \"parameters\": [\n                \"channel\",\n                \"region\"\n            ],\n            \"runbook_name\": \"Secure Publicly Accessible Amazon RDS Instances\"\n        },\n        \"kernelspec\": {\n            \"display_name\": \"Python 3.10.6 64-bit\",\n            \"language\": \"python\",\n            \"name\": \"python3\"\n        },\n        \"language_info\": {\n            \"codemirror_mode\": {\n                \"name\": \"ipython\",\n                \"version\": 3\n            },\n            \"file_extension\": \".py\",\n            \"mimetype\": \"text/x-python\",\n            \"name\": \"python\",\n            \"nbconvert_exporter\": \"python\",\n            \"pygments_lexer\": \"ipython3\",\n            \"version\": \"3.10.6\"\n        },\n        \"parameterSchema\": {\n            \"properties\": {\n                \"rds_instances\": {\n                    \"description\": \"List of RDS instance DB Identifiers(names).\",\n                    \"title\": \"rds_instances\",\n                    \"type\": \"array\"\n                },\n                \"region\": {\n                    \"description\": \"RDS instance region\",\n                    \"title\": \"region\",\n                    \"type\": \"string\"\n                }\n            },\n            \"required\": [],\n            \"title\": \"Schema\",\n            \"type\": \"object\"\n        },\n        \"parameterValues\": {},\n        \"vscode\": {\n            \"interpreter\": {\n                \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n            }\n        }\n    },\n    \"nbformat\": 4,\n    \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "AWS/AWS_Secure_Publicly_Accessible_RDS_Instances.json",
    "content": "{\n  \"name\": \"Secure Publicly Accessible Amazon RDS Instances\",\n  \"description\": \"This runbook can be used to find the publicly accessible RDS instances for the given AWS region and change them to private.\",\n  \"uuid\": \"dda26fd556dd6b59e2fac9c9ed6e81fc19e5374746049d494237bcdc6a17fae4\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\",\"CATEGORY_TYPE_SECOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/AWS_Secure_Publicly_accessible_Amazon_RDS_Snapshot.ipynb",
"content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"9bdb4ffc-b726-49e9-95b8-063371b3fa61\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong><em>Change publicly accessible RDS DB Snapshots to private</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Secure-Publicly-accessible-Amazon-RDS-Snapshot\\\"><u>Secure Publicly accessible Amazon RDS Snapshot</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p><br>1.&nbsp;Get publicly accessible DB snapshots<br>2.&nbsp;Change the public access to private</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"cd58d0b6-ced5-4efb-b1fe-267082c51ce5\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T17:54:04.130Z\"\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if public_snapshot_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the Snapshots!\\\")\\n\",\n    \"if region is None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   
\"id\": \"af87220a-b782-4b7e-b581-95677550cbc9\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-publicly-accessible-DB-snapshots\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get publicly accessible DB snapshots</h3>\\n\",\n    \"<p>Using unSkript's Get Publicly Accessible DB Snapshots in RDS action we will fetch all the publicly accessible snapshots from the list of manual DB snapshots.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region(Optional)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_snapshots</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"6625eaae-2435-4542-a589-8456221c7e88\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_SECOPS\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_RDS\"\n    ],\n    \"actionDescription\": \"AWS Get Publicly Accessible DB Snapshots in RDS\",\n    \"actionEntryFunction\": \"aws_get_publicly_accessible_db_snapshots\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"7c0d143556a33b81d3fb1ff08dfdd59cebe5d58b00b55e8ae660df2e42f71bfe\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": [\n     \"aws\",\n     \"database\",\n     \"snapshots\",\n     \"public\",\n     \"accessible\"\n    ],\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Get Publicly Accessible DB Snapshots in RDS\",\n    
\"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": [\n     \"get\"\n    ],\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"e665224418391a4deafae48140c5b83c8af7b881dd281acbd79ed9ceb52aad4f\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"AWS Get Publicly Accessible DB Snapshots in RDS\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T17:53:59.863Z\"\n    },\n    \"id\": 5,\n    \"index\": 5,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"Region of the RDS\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_publicly_accessible_db_snapshots\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Publicly Accessible DB Snapshots in RDS\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"publicly_accessible_snapshots\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not public_snapshot_ids\",\n    \"tags\": [],\n    \"uuid\": \"e665224418391a4deafae48140c5b83c8af7b881dd281acbd79ed9ceb52aad4f\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional, 
Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.legos.utils import CheckOutput\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from unskript.legos.aws.aws_filter_all_manual_database_snapshots.aws_filter_all_manual_database_snapshots import aws_filter_all_manual_database_snapshots\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_publicly_accessible_db_snapshots_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_publicly_accessible_db_snapshots(handle, region: str=None) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_publicly_accessible_db_snapshots lists of publicly accessible\\n\",\n    \"       db_snapshot_identifier.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region of the RDS.\\n\",\n    \"\\n\",\n    \"        :rtype: Object with status, result having publicly accessible Snapshots\\n\",\n    \"        Identifier in RDS, error\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    manual_snapshots_list = []\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if region is None or not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle=handle)\\n\",\n    \"    try:\\n\",\n    \"        for r in all_regions:\\n\",\n    \"            snapshots_dict = {}\\n\",\n    \"            output = aws_filter_all_manual_database_snapshots(handle=handle, region=r)\\n\",\n    \"            snapshots_dict[\\\"region\\\"] = r\\n\",\n    \"            snapshots_dict[\\\"snapshot\\\"] = output\\n\",\n    \"            
manual_snapshots_list.append(snapshots_dict)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise e\\n\",\n    \"\\n\",\n    \"    for all_snapshots in manual_snapshots_list:\\n\",\n    \"        try:\\n\",\n    \"            ec2Client = handle.client('rds', region_name=all_snapshots['region'])\\n\",\n    \"            for each_snapshot in all_snapshots['snapshot']:\\n\",\n    \"                response = ec2Client.describe_db_snapshot_attributes(\\n\",\n    \"                    DBSnapshotIdentifier=each_snapshot\\n\",\n    \"                    )\\n\",\n    \"                db_attribute = response[\\\"DBSnapshotAttributesResult\\\"]\\n\",\n    \"                for value in db_attribute['DBSnapshotAttributes']:\\n\",\n    \"                    p_dict={}\\n\",\n    \"                    if \\\"all\\\" in value[\\\"AttributeValues\\\"]:\\n\",\n    \"                        p_dict[\\\"region\\\"] = all_snapshots['region']\\n\",\n    \"                        p_dict[\\\"open_snapshot\\\"] = db_attribute['DBSnapshotIdentifier']\\n\",\n    \"                        result.append(p_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"    if len(result)!=0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not public_snapshot_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"publicly_accessible_snapshots\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    
task.execute(aws_get_publicly_accessible_db_snapshots, lego_printer=aws_get_publicly_accessible_db_snapshots_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"0867d634-3d7c-473e-b5fe-06f042452c63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Public-DB-Snapshots\\\">Create List of Public DB Snapshots</h3>\\n\",\n    \"<p>This action filters out regions that have no manual DB snapshots and creates a list of those that have.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output:&nbsp;<code>all_public_snapshots</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"87b57cf2-3eeb-45e6-9eb5-e7106692ea61\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-02T16:13:22.904Z\"\n    },\n    \"name\": \"Create List of Public DB Snapshots\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Public DB Snapshots\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_public_snapshots = []\\n\",\n    \"try:\\n\",\n    \"    if publicly_accessible_snapshots[0] is False:\\n\",\n    \"        for snapshot in publicly_accessible_snapshots[1]:\\n\",\n    \"            all_public_snapshots.append(snapshot)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if public_snapshot_ids:\\n\",\n    \"        for snap in public_snapshot_ids:\\n\",\n    \"            data_dict = {}\\n\",\n    \"            data_dict[\\\"region\\\"] = region\\n\",\n    \"            data_dict[\\\"open_snapshot\\\"] = snap\\n\",\n    \"            all_public_snapshots.append(data_dict)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"93579475-9902-4be4-b9de-fd6fadbc2710\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Change-the-public-access-to-private\\\">Change the public access to private</h3>\\n\",\n    \"<p>Using unSkript's Modify Publicly Accessible RDS Snapshots action, we will change access for all the publicly accessible snapshots from <em>public</em> to <em>private</em>.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region</code>, <code>db_snapshot_identifier</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"2e58c74d-fd35-429c-b787-0be39f56d0b5\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"1a325ba527fbf504000b5d5961f4ef6366daed4a50951e657bfff87eedad52df\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Modify Publicly Accessible RDS Snapshots\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-01-30T18:04:38.167Z\"\n    },\n    \"id\": 239,\n    \"index\": 239,\n    \"inputData\": [\n     {\n      \"db_snapshot_identifier\": {\n       \"constant\": false,\n       \"value\": 
\"\\\"iter.get(\\\\\\\\\\\"open_snapshot\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"db_snapshot_identifier\": {\n        \"description\": \"DB Snapshot Idntifier of RDS.\",\n        \"title\": \"DB Snapshot Idntifier\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"Region of the RDS.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"db_snapshot_identifier\",\n       \"region\"\n      ],\n      \"title\": \"aws_modify_public_db_snapshots\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"db_snapshot_identifier\": \"open_snapshot\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_public_snapshots\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Modify Publicly Accessible RDS Snapshots\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"db_snapshot_identifier\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_public_snapshots) != 0\",\n    \"tags\": [\n     \"aws_modify_public_db_snapshots\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def 
aws_modify_public_db_snapshots_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_modify_public_db_snapshots(handle, db_snapshot_identifier: str, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_modify_public_db_snapshots lists of publicly accessible DB Snapshot Idntifier Info.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type db_snapshot_identifier: string\\n\",\n    \"        :param db_snapshot_identifier: DB Snapshot Idntifier of RDS.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region of the RDS.\\n\",\n    \"\\n\",\n    \"        :rtype: List with Dict of DB Snapshot Idntifier Info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('rds', region_name=region)\\n\",\n    \"    result = []\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.modify_db_snapshot_attribute(\\n\",\n    \"            DBSnapshotIdentifier=db_snapshot_identifier,\\n\",\n    \"            AttributeName='restore',\\n\",\n    \"            ValuesToRemove=['all'])\\n\",\n    \"\\n\",\n    \"        result.append(response)\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result.append(error)\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"db_snapshot_identifier\\\": \\\"iter.get(\\\\\\\\\\\"open_snapshot\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": 
true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_public_snapshots\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"region\\\",\\\"db_snapshot_identifier\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_public_snapshots) != 0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_modify_public_db_snapshots, lego_printer=aws_modify_public_db_snapshots_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"faee311b-d041-46f6-8734-396ccba4e664\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to secure all the publicly accessible AWS RDS DB Snapshots by using unSkript's AWS actions. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Secure Publicly accessible Amazon RDS Snapshot\",\n   \"parameters\": [\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"public_snapshot_ids\": {\n     \"description\": \"List of publicly accessible snapshot ids.\",\n     \"title\": \"public_snapshot_ids\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region(s) to get publicly accessible RDS Db Snapshots. Eg: us-west-2.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Secure_Publicly_accessible_Amazon_RDS_Snapshot.json",
    "content": "{\n  \"name\": \"Secure Publicly accessible Amazon RDS Snapshot\",\n  \"description\": \"This runbook lists all the manual database snapshots in the given region, finds the publicly accessible DB snapshots in RDS, and modifies them to private.\",\n  \"uuid\": \"7c0d143556a33b81d3fb1ff08dfdd59cebe5d58b00b55e8ae660df2e42f71bfe\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\",\"CATEGORY_TYPE_SECOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "AWS/AWS_Stop_Idle_EC2_Instances.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"0bc2da9b-06db-4411-b7a1-60bf674c3cd4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"-unSkript-Runbooks-\\\">unSkript Runbooks <a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#-unSkript-Runbooks-\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<strong>To stop idle EC2 instances using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Delete-Unattached-EBS-Volume\\\">Stop Idle EC2 Instances</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<p>1. AWS Find Idle Instances<br>2. 
Stop AWS Instances</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"id\": \"f2e094c0-a7ca-4a26-b2c6-b5c8d669f300\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:26:11.186Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if instance_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Enter AWS Region for given instances!\\\")\\n\",\n    \"if region is None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"7e446ecd-a076-4cc2-8745-b8843474e82c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"AWS-Find-Idle-Instances\\\">AWS Find Idle Instances</h3>\\n\",\n    \"<p>Here we will use the unSkript&nbsp;<strong>AWS Find Idle Instances</strong> action. This action filters all the idle instances in the given region, based on idle_cpu_threshold and idle_duration, and returns a list of the idle instances.&nbsp;
It will execute if the <code>Instance_Ids</code>&nbsp;parameter is not passed.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>region, idle_cpu_threshold, idle_duration</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>idle_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"933eb1d6-32e2-4dd2-87cf-b27fbb51c2d0\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"Find Idle EC2 instances\",\n    \"actionEntryFunction\": \"aws_find_idle_instances\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"c03babff32b83949e6ca20a49901d42a5a74ed3036de4609096390c9f6d0851a\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Find Idle Instances\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"c25a662a49587285082c36455564eed5664cc852926fcc2cec374300492df09d\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Find Idle EC2 instances\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:28:20.633Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"idle_cpu_threshold\": {\n       \"constant\": false,\n       \"value\": \"int(idle_cpu_threshold)\"\n      },\n      \"idle_duration\": {\n   
    \"constant\": false,\n       \"value\": \"int(idle_duration)\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"idle_cpu_threshold\": {\n        \"default\": 5,\n        \"description\": \"Idle CPU threshold (in percent)\",\n        \"title\": \"Idle CPU Threshold\",\n        \"type\": \"integer\"\n       },\n       \"idle_duration\": {\n        \"default\": 6,\n        \"description\": \"Idle duration (in hours)\",\n        \"title\": \"Idle Duration\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region to get the instances from. Eg: \\\"us-west-2\\\"\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_find_idle_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Find Idle Instances\",\n    \"orderProperties\": [\n     \"idle_cpu_threshold\",\n     \"idle_duration\",\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"idle_instances\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not instance_ids\",\n    \"tags\": [],\n    \"uuid\": \"c25a662a49587285082c36455564eed5664cc852926fcc2cec374300492df09d\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2023 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import 
aws_list_all_regions\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"import pprint\\n\",\n    \"import datetime\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_idle_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def is_instance_idle(instance_id , idle_cpu_threshold, idle_duration, cloudwatchclient):\\n\",\n    \"    try:\\n\",\n    \"        now = datetime.datetime.utcnow()\\n\",\n    \"        start_time = now - datetime.timedelta(hours=idle_duration)\\n\",\n    \"        cpu_utilization_stats = cloudwatchclient.get_metric_statistics(\\n\",\n    \"            Namespace=\\\"AWS/EC2\\\",\\n\",\n    \"            MetricName=\\\"CPUUtilization\\\",\\n\",\n    \"            Dimensions=[{\\\"Name\\\": \\\"InstanceId\\\", \\\"Value\\\": instance_id}],\\n\",\n    \"            StartTime=start_time.isoformat(),\\n\",\n    \"            EndTime=now.isoformat(),\\n\",\n    \"            Period=3600,\\n\",\n    \"            Statistics=[\\\"Average\\\"],\\n\",\n    \"        )\\n\",\n    \"        if not cpu_utilization_stats[\\\"Datapoints\\\"]:\\n\",\n    \"            return False\\n\",\n    \"        average_cpu = sum([datapoint[\\\"Average\\\"] for datapoint in cpu_utilization_stats[\\\"Datapoints\\\"]]) / len(cpu_utilization_stats[\\\"Datapoints\\\"])\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise e\\n\",\n    \"    return average_cpu < idle_cpu_threshold\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_find_idle_instances(handle, idle_cpu_threshold:int = 5, idle_duration:int = 6, region:str='') -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_find_idle_instances finds idle EC2 instances\\n\",\n    \"\\n\",\n    \"    :type region: string\\n\",\n    \"    :param region: AWS Region to get the instances from. 
Eg: \\\"us-west-2\\\"\\n\",\n    \"\\n\",\n    \"    :type idle_cpu_threshold: int\\n\",\n    \"    :param idle_cpu_threshold: (in percent) Idle CPU threshold (in percent)\\n\",\n    \"\\n\",\n    \"    :type idle_duration: int\\n\",\n    \"    :param idle_duration: (in hours) Idle CPU threshold (in hours)\\n\",\n    \"\\n\",\n    \"    :rtype: Tuple with status result and list of Idle Instances.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            ec2client = handle.client('ec2', region_name=reg)\\n\",\n    \"            cloudwatchclient = handle.client(\\\"cloudwatch\\\", region_name=reg)\\n\",\n    \"            all_instances = ec2client.describe_instances()\\n\",\n    \"            for instance in all_instances['Reservations']:\\n\",\n    \"                for i in instance['Instances']:\\n\",\n    \"                    if i['State'][\\\"Name\\\"] == \\\"running\\\" and is_instance_idle(i['InstanceId'], reg, idle_cpu_threshold,idle_duration, cloudwatchclient ):\\n\",\n    \"                        idle_instances = {}\\n\",\n    \"                        idle_instances[\\\"instance\\\"] = i['InstanceId']\\n\",\n    \"                        idle_instances[\\\"region\\\"] = reg\\n\",\n    \"                        result.append(idle_instances)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"idle_cpu_threshold\\\": \\\"int(idle_cpu_threshold)\\\",\\n\",\n    \"    \\\"idle_duration\\\": 
\\\"int(idle_duration)\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not instance_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"idle_instances\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_find_idle_instances, lego_printer=aws_find_idle_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"81e38fc8-6cde-4287-a728-5aa6c2caa07b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Idle-Instances-Output\\\">Modify Idle Instances Output</h3>\\n\",\n    \"<p>In this action, we will pass the list of idle instances from Step 1 and sort the output as per Step 2.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>idle_instances_list</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 22,\n   \"id\": \"135f2a41-a19c-4477-815a-911bb8fd5620\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-17T16:33:14.480Z\"\n    },\n    \"name\": \"Modify Idle Instances Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Idle Instances Output\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"idle_instances_list = []\\n\",\n    \"try:\\n\",\n    \"    if idle_instances[0] == False:\\n\",\n    \"        for instance in 
idle_instances[1]:\\n\",\n    \"            idle_instances_list.append(instance)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if instance_ids:\\n\",\n    \"        for instance in instance_ids:\\n\",\n    \"            instance_dict = {}\\n\",\n    \"            instance_dict[\\\"instance\\\"] = instance\\n\",\n    \"            instance_dict[\\\"region\\\"] = region\\n\",\n    \"            idle_instances_list.append(instance_dict)\\n\",\n    \"    else:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"86252292-3008-4943-869e-c9b581ef4306\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Stop-AWS-Instances\\\">Stop AWS Instances</h3>\\n\",\n    \"<p>Here we will use unSkript&nbsp;<strong>Stop AWS Instances</strong> action. 
In this action, we will pass the list of idle instances from step 1 and stop those instances.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>instance_id</code>, <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>stop_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"ab0bab01-d02a-44d2-aa4f-82652a585f93\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"Stop an AWS Instance\",\n    \"actionEntryFunction\": \"aws_stop_instances\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": true,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": [\n     \"aws\",\n     \"instances\"\n    ],\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Stop AWS Instances\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": [\n     \"stop\"\n    ],\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"ef6e03e0bb46f1d9eb58405e5eed4b962c4ae9eeaaf64877c1c4e820c2854c6e\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Stop an AWS Instance\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-25T14:38:41.240Z\"\n    },\n    \"id\": 2,\n    \"index\": 2,\n    \"inputData\": [\n     {\n      \"instance_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"instance\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": 
\"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_id\": {\n        \"description\": \"ID of the instance to be stopped.\",\n        \"title\": \"Instance Id\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instance.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_stop_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"instance_id\": \"instance\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"idle_instances_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Stop AWS Instances\",\n    \"orderProperties\": [\n     \"instance_id\",\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"stop_instances\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(idle_instances_list) != 0\",\n    \"tags\": [],\n    \"title\": \"Stop AWS Instances\",\n    \"uuid\": \"ef6e03e0bb46f1d9eb58405e5eed4b962c4ae9eeaaf64877c1c4e820c2854c6e\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n 
   \"def aws_stop_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_stop_instances(handle, instance_id: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_stop_instances Stops instances.\\n\",\n    \"\\n\",\n    \"        :type instance_id: string\\n\",\n    \"        :param instance_id: String containing the name of AWS EC2 instance\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS region for instance\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the stopped instances state info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    output = {}\\n\",\n    \"    res = ec2Client.stop_instances(InstanceIds=[instance_id])\\n\",\n    \"    for instances in res['StoppingInstances']:\\n\",\n    \"        output[instances['InstanceId']] = instances['CurrentState']\\n\",\n    \"\\n\",\n    \"    return output\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_id\\\": \\\"iter.get(\\\\\\\\\\\"instance\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"idle_instances_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"instance_id\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(idle_instances_list) != 0\\\",\\n\",\n    \"    \\\"condition_result\\\": 
true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"stop_instances\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_stop_instances, lego_printer=aws_stop_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"086512e7-14b2-4894-bd36-0e8f63e5a8e7\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS actions to filter idle instances and stop those. To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Stop Idle EC2 Instances\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1166)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"idle_cpu_threshold\": {\n     \"default\": 5,\n     \"description\": \"The CPU utilization threshold below which an instance is considered idle (e.g., 10).\",\n     \"title\": \"idle_cpu_threshold\",\n     \"type\": \"number\"\n    },\n    \"idle_duration\": {\n     \"default\": 6,\n     \"description\": \"The duration of time (in 
hours) for which an instance must have CPU utilization below the threshold to be considered idle (e.g., 24 for 1 day).\",\n     \"title\": \"idle_duration\",\n     \"type\": \"number\"\n    },\n    \"instance_ids\": {\n     \"description\": \"\\nList of idle instance ids.\",\n     \"title\": \"instance_ids\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region e.g. \\\"us-west-2\\\"\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"show_action_drag_hint_done\": {\n   \"environment_id\": \"1499f27c-6406-4fbd-bd1b-c6f92800018f\",\n   \"environment_name\": \"Staging\",\n   \"execution_id\": \"\",\n   \"inputs_for_searched_lego\": \"\",\n   \"notebook_id\": \"d4159cb3-6c83-4ba5-a2f7-d23c0777076b.ipynb\",\n   \"parameters\": null,\n   \"runbook_name\": \"Stop Idle EC2 Instances\",\n   \"search_string\": \"\",\n   \"show_tool_tip\": true,\n   \"tenant_id\": \"982dba5f-d9df-48ae-a5bf-ec1fc94d4882\",\n   \"tenant_url\": \"https://tenant-staging.alpha.unskript.io\",\n   \"user_email_id\": \"support+staging@unskript.com\",\n   \"workflow_id\": \"f8ead207-81c0-414a-a15b-76fcdefafe8d\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Stop_Idle_EC2_Instances.json",
    "content": "{\n  \"name\": \"Stop Idle EC2 Instances\",\n  \"description\": \"This runbook can be used to Stop all EC2 Instances that are idle using given cpu threshold and duration.\",\n  \"uuid\": \"c03babff32b83949e6ca20a49901d42a5a74ed3036de4609096390c9f6d0851a\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/AWS_Stop_Untagged_EC2_Instances.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"cadbcf65-5c79-4496-81ef-c9e1e18ee932\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<hr><center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&para;\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective&para;\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Stop untagged EC2 Instances</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Stop-Untagged-EC2-Instances&para;\\\"><u>Stop Untagged EC2 Instances</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;\\\">Steps Overview</h1>\\n\",\n    \"<p>1)&nbsp;<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Get all untagged EC2 instance</a><br>2)&nbsp;<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Stop Untagged EC2 instances</a></p>\\n\",\n    \"<hr>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"35ab47d1-a42d-4130-aca4-495956725ea0\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == None:\\n\",\n    \"    region = ''\\n\",\n    \"if instance_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the EC2 
Instance IDs!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"7a7ce858-86e0-44a5-a8a7-68af0664fa27\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-publicly-accessible-DB-snapshots\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get all Untagged EC2 Instances</h3>\\n\",\n    \"<p>Here we will fetch all the untagged&nbsp; EC2 instances.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region(Optional)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>untagged_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"44455214-e204-4278-818f-47734b8194c4\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"Filter AWS Untagged EC2 Instances\",\n    \"actionEntryFunction\": \"aws_filter_untagged_ec2_instances\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     \"a16703da15d9e9e2d8a56b146e730b5e4c1496721ff1dc8606a5021d521ed9e3\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": [\n     \"aws\",\n     \"instances\",\n     \"untagged\"\n    ],\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Filter AWS Untagged EC2 Instances\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": [\n     \"filter\"\n    ],\n    
\"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"0ebc91f11a150d8933a8ebf4cf8824f0ca8cd9e64383b30dd9fad4e7b9b26ac9\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Filter AWS Untagged EC2 Instances\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"Name of the AWS Region\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_filter_untagged_ec2_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Filter AWS Untagged EC2 Instances\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"untagged_instances\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not instance_ids\",\n    \"tags\": [\n     \"aws_filter_untagged_ec2_instances\"\n    ],\n    \"uuid\": \"0ebc91f11a150d8933a8ebf4cf8824f0ca8cd9e64383b30dd9fad4e7b9b26ac9\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Tuple, Optional\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from 
unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_untagged_ec2_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def check_untagged_instance(res, r):\\n\",\n    \"    instance_list = []\\n\",\n    \"    for reservation in res:\\n\",\n    \"        for instance in reservation['Instances']:\\n\",\n    \"            instances_dict = {}\\n\",\n    \"            tags = instance.get('Tags', None)\\n\",\n    \"            if tags is None:\\n\",\n    \"                instances_dict['region']= r\\n\",\n    \"                instances_dict['instanceID']= instance['InstanceId']\\n\",\n    \"                instance_list.append(instances_dict)\\n\",\n    \"    return instance_list\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_untagged_ec2_instances(handle, region: str= None) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_untagged_ec2_instances Returns an array of instances which has no tags.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter instances.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple of status, and list of untagged EC2 Instances\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_instances = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if region is None or len(region)==0:\\n\",\n    \"        all_regions = aws_list_all_regions(handle=handle)\\n\",\n    \"    for r in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            ec2Client = handle.client('ec2', region_name=r)\\n\",\n  
  \"            res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"            untagged_instances = check_untagged_instance(res, r)\\n\",\n    \"            if len(untagged_instances)!=0:\\n\",\n    \"                all_instances.append(untagged_instances)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"    try:\\n\",\n    \"        result = all_instances[0]\\n\",\n    \"    except Exception as e:\\n\",\n    \"        pass\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not instance_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"untagged_instances\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_untagged_ec2_instances, lego_printer=aws_filter_untagged_ec2_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"4bc1ab78-471e-4f0a-9933-d84abb36dada\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-untagged-instances\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Create List of untagged instances<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Create-List-of-untagged-instances\\\" 
target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_untagged_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"087ae782-c90b-46ba-8ed0-76bf9992f51d\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-14T17:26:37.448Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of untagged instances\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of untagged instances\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_untagged_instances = []\\n\",\n    \"try:\\n\",\n    \"    if untagged_instances[0] == False:\\n\",\n    \"        if len(untagged_instances[1])!=0:\\n\",\n    \"            all_untagged_instances=untagged_instances[1]\\n\",\n    \"except Exception:\\n\",\n    \"    for ids in instance_ids:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"region\\\"] = region\\n\",\n    \"        data_dict[\\\"instanceID\\\"] = ids\\n\",\n    \"        all_untagged_instances.append(data_dict)\\n\",\n    \"print(all_untagged_instances)\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2e75ed26-5dfd-4a64-a6af-1aa336ac9455\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-publicly-accessible-DB-snapshots\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Stop untagged EC2 Instances</h3>\\n\",\n    \"<p>Using unSkript's Stop EC2 instances action we will stop all untagged EC2 instances found in Step 1.\\n\",\n    \"\\n\",\n    
\"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region(Optional)</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"1432974e-5c85-48f7-9b17-c3ef3be94152\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"Stop an AWS Instance\",\n    \"actionEntryFunction\": \"aws_stop_instances\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": true,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": [\n     \"aws\",\n     \"instances\"\n    ],\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Stop AWS Instances\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": [\n     \"stop\"\n    ],\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"ef6e03e0bb46f1d9eb58405e5eed4b962c4ae9eeaaf64877c1c4e820c2854c6e\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Stop an AWS Instance\",\n    \"id\": 3,\n    \"index\": 3,\n    \"inputData\": [\n     {\n      \"instance_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"instanceID\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_id\": {\n        \"description\": \"ID of the instance to be stopped.\",\n        \"title\": \"Instance Id\",\n        \"type\": 
\"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instance.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_stop_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"instance_id\": \"instanceID\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_untagged_instances\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Stop AWS Instances\",\n    \"orderProperties\": [\n     \"instance_id\",\n     \"region\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_untagged_instances)!=0\",\n    \"tags\": [\n     \"aws_stop_instances\"\n    ],\n    \"uuid\": \"ef6e03e0bb46f1d9eb58405e5eed4b962c4ae9eeaaf64877c1c4e820c2854c6e\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_stop_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_stop_instances(handle, instance_id: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_stop_instances Stops instances.\\n\",\n    \"\\n\",\n    \"        :type instance_id: string\\n\",\n    \"        :param 
instance_id: String containing the ID of the AWS EC2 instance\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS region for instance\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the stopped instances state info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    output = {}\\n\",\n    \"    res = ec2Client.stop_instances(InstanceIds=[instance_id])\\n\",\n    \"    for instances in res['StoppingInstances']:\\n\",\n    \"        output[instances['InstanceId']] = instances['CurrentState']\\n\",\n    \"\\n\",\n    \"    return output\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=False)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_id\\\": \\\"iter.get(\\\\\\\\\\\"instanceID\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_untagged_instances\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"instance_id\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_untagged_instances)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_stop_instances, lego_printer=aws_stop_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"4df3773a-43ff-43f8-9693-505c04936438\",\n   \"metadata\": {\n    
\"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we were able to find all untagged EC2 instances and stop them using unSkript's AWS actions. To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Stop all Untagged AWS EC2 Instances\",\n   \"parameters\": [\n    \"region\",\n    \"execution_flag\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"instance_ids\": {\n     \"description\": \"List of EC2 instance IDs\",\n     \"title\": \"instance_ids\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS region to check for untagged EC2 instances. Eg: \\\"us-west-2\\\". 
If left empty, all regions will be considered.\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Stop_Untagged_EC2_Instances.json",
    "content": "{\n    \"name\": \"Stop all Untagged AWS EC2 Instances\",\n    \"description\": \"This runbook can be used to Stop all EC2 Instances that are Untagged\",\n    \"uuid\": \"a16703da15d9e9e2d8a56b146e730b5e4c1496721ff1dc8606a5021d521ed9e3\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n}  "
  },
  {
    "path": "AWS/AWS_Terminate_EC2_Instances_Without_Valid_Lifetime_Tag.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"280d4d6f-a47c-4fa2-8d55-a4e19899d46c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"-unSkript-Runbooks-\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"-Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Filter and Terminate EC2 Instances Without Valid Lifetime Tag</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Terminate-EC2-Instances-Without-Valid-Lifetime-Tag\\\"><u>Terminate EC2 Instances Without Valid Lifetime Tag</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Filter AWS EC2 Instances Without Lifetime Tag</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Terminate AWS Instance</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"bf6692b0-f2e2-40c2-bff3-72f2ce14be4c\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if region == None:\\n\",\n    \"    region = ''\\n\",\n    \"if instance_ids and not region:\\n\",\n    \"    raise 
SystemExit(\\\"Provide a region for the EC2 Instance IDs!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"74abf372-b918-4c9e-acf5-e4213d747d5f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-AWS-EC2-Instances-Without-Lifetime-Tag\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Filter AWS EC2 Instances Without Lifetime Tag<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Filter-AWS-EC2-Instances-Without-Lifetime-Tag\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Here we will use unSkript's Filter AWS EC2 Instances Without Lifetime Tag action to get all the EC2 instances which don't have a lifetime tag.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>lifetime_tag</code>, <code>region, termination_date_tag (all Optional)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>untagged_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"5fa6542a-0d95-4200-8fe9-d502c31d59c7\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_EC2\"\n    ],\n    \"actionDescription\": \"Filter AWS EC2 Instances Without Termination and Lifetime Tag and check if they are valid\",\n    \"actionEntryFunction\": \"aws_filter_instances_without_termination_and_lifetime_tag\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [\n     
\"29ce1935204c64d816fd1f01f4fe41e8d8bd47725b899535c6acee703a7bcf0d\"\n    ],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": [\n     \"aws\",\n     \"instances\",\n     \"without\",\n     \"termination\",\n     \"lifetime\",\n     \"tag\"\n    ],\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Filter AWS EC2 Instances Without Termination and Lifetime Tag\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": [\n     \"filter\"\n    ],\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"6cc8a1355937c21df3ace495375225012fa8915f4125ad143367e0feb34486c5\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Filter AWS EC2 Instances Without Termination and Lifetime Tag and check if they are valid\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"lifetime_tag_name\": {\n       \"constant\": false,\n       \"value\": \"lifetime_tag\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"termination_tag_name\": {\n       \"constant\": false,\n       \"value\": \"termination_date_tag\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"lifetime_tag_name\": {\n        \"default\": \"\\\"lifetimeTag\\\"\",\n        \"description\": \"Name of the Lifetime Date Tag given to an EC2 instance. 
By default \\\"lifetimeTag\\\" is considered \",\n        \"title\": \"Lifetime Tag Name\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"Name of the AWS Region\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"termination_tag_name\": {\n        \"default\": \"\\\"terminationDateTag\\\"\",\n        \"description\": \"Name of the Termination Date Tag given to an EC2 instance. By default \\\"terminationDateTag\\\" is considered \",\n        \"title\": \"Termination Date Tag Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_filter_instances_without_termination_and_lifetime_tag\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Filter AWS EC2 Instances Without Termination and Lifetime Tag\",\n    \"orderProperties\": [\n     \"region\",\n     \"termination_tag_name\",\n     \"lifetime_tag_name\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"untagged_ec2_instances\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not instance_ids\",\n    \"tags\": [\n     \"aws_filter_instances_without_termination_and_lifetime_tag\"\n    ],\n    \"uuid\": \"6cc8a1355937c21df3ace495375225012fa8915f4125ad143367e0feb34486c5\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Tuple, Optional\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from 
unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"from datetime import datetime, date\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_instances_without_termination_and_lifetime_tag_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def fetch_instances_from_valid_region(res,r, termination_tag_name, lifetime_tag_name):\\n\",\n    \"    result=[]\\n\",\n    \"    instances_dict={}\\n\",\n    \"    for reservation in res:\\n\",\n    \"            for instance in reservation['Instances']:\\n\",\n    \"                try:\\n\",\n    \"                    tagged_instance = instance['Tags']\\n\",\n    \"                    tag_keys = [tags['Key'] for tags in tagged_instance]\\n\",\n    \"                    if termination_tag_name not in tag_keys or lifetime_tag_name not in tag_keys:\\n\",\n    \"                        result.append(instance['InstanceId'])\\n\",\n    \"                    elif termination_tag_name not in tag_keys and lifetime_tag_name not in tag_keys:\\n\",\n    \"                        result.append(instance['InstanceId'])\\n\",\n    \"                    if termination_tag_name in tag_keys:\\n\",\n    \"                        for x in instance['Tags']:\\n\",\n    \"                            if x['Key'] == termination_tag_name:\\n\",\n    \"                                right_now = date.today()\\n\",\n    \"                                date_object = datetime.strptime(x['Value'], '%d-%m-%Y').date()\\n\",\n    \"                                if date_object < right_now:\\n\",\n    \"                                    result.append(instance['InstanceId'])\\n\",\n    \"                            elif x['Key'] == lifetime_tag_name:\\n\",\n    \"                                
launch_time = instance['LaunchTime']\\n\",\n    \"                                convert_to_datetime = launch_time.strftime(\\\"%d-%m-%Y\\\")\\n\",\n    \"                                launch_date = datetime.strptime(convert_to_datetime,'%d-%m-%Y').date()\\n\",\n    \"                                if x['Value'] != 'INDEFINITE':\\n\",\n    \"                                    if launch_date < right_now:\\n\",\n    \"                                        result.append(instance['InstanceId'])\\n\",\n    \"                except Exception as e:\\n\",\n    \"                        if len(instance['InstanceId'])!=0:\\n\",\n    \"                            result.append(instance['InstanceId'])\\n\",\n    \"    if len(result)!=0:\\n\",\n    \"        instances_dict['region']= r\\n\",\n    \"        instances_dict['instances']= result\\n\",\n    \"    return instances_dict\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_instances_without_termination_and_lifetime_tag(handle, region: str=None, termination_tag_name:str='terminationDateTag', lifetime_tag_name:str='lifetimeTag') -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_ec2_without_lifetime_tag Returns a List of instances which do not have a lifetime tag.\\n\",\n    \"\\n\",\n    \"        Assumed tag key format - terminationDateTag, lifetimeTag\\n\",\n    \"        Assumed Date format for both keys is -> dd-mm-yyyy\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Optional, Name of AWS Region\\n\",\n    \"\\n\",\n    \"        :type termination_tag_name: string\\n\",\n    \"        :param termination_tag_name: Optional, Name of the Termination Date Tag given to an EC2 instance. 
By default \\\"terminationDateTag\\\" is considered\\n\",\n    \"\\n\",\n    \"        :type lifetime_tag_name: string\\n\",\n    \"        :param lifetime_tag_name: Optional, Name of the Lifetime Date Tag given to an EC2 instance. By default \\\"lifetimeTag\\\" is considered\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple of status, instances which do not have terminationDateTag and lifetimeTag, and error\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    final_list=[]\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if region is None or len(region) == 0:\\n\",\n    \"        all_regions = aws_list_all_regions(handle=handle)\\n\",\n    \"    for r in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            ec2Client = handle.client('ec2', region_name=r)\\n\",\n    \"            all_reservations = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"            instances_without_tags = fetch_instances_from_valid_region(all_reservations, r, termination_tag_name, lifetime_tag_name)\\n\",\n    \"            if len(instances_without_tags)!=0:\\n\",\n    \"                final_list.append(instances_without_tags)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"    if len(final_list)!=0:\\n\",\n    \"        return (False, final_list)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"termination_tag_name\\\": \\\"termination_date_tag\\\",\\n\",\n    \"    \\\"lifetime_tag_name\\\": \\\"lifetime_tag\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not instance_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    
\"task.configure(outputName=\\\"untagged_instances\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_instances_without_termination_and_lifetime_tag, lego_printer=aws_filter_instances_without_termination_and_lifetime_tag_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9cd2a6cc-c0b6-48c7-837f-c623f8cf53d4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Untagged-Instances\\\">Create List of Untagged Instances</h3>\\n\",\n    \"<p>This action filters out regions that have no untagged EC2 instances and creates a list of the ones that do.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_untagged_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"ff6ee3fa-b94b-4679-943b-782b32c1a095\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-02T16:25:30.826Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of  Instances Without Termination and Lifetime Tag \",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of  Instances Without Termination and Lifetime Tag \"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_untagged_instances = []\\n\",\n    \"try:\\n\",\n    \"    if 
untagged_instances[0] == False:\\n\",\n    \"        if len(untagged_instances[1])!=0:\\n\",\n    \"            all_untagged_instances=untagged_instances[1]\\n\",\n    \"except Exception:\\n\",\n    \"    data_dict = {}\\n\",\n    \"    data_dict[\\\"region\\\"] = region\\n\",\n    \"    data_dict[\\\"instances\\\"] = instance_ids\\n\",\n    \"    all_untagged_instances.append(data_dict)\\n\",\n    \"print(all_untagged_instances)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c09c5eb2-a9a7-4119-aef4-b07e0fdd6c80\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Terminate-AWS-Instance\\\">Terminate AWS Instance</h3>\\n\",\n    \"<p>This action terminates EC2 instances which don't have a lifetime tag, as captured in Step 1\\ud83d\\udc46</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>instance_ids, region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"7f622b00-2f51-4f44-aeaa-18f67823a4ea\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"8744e8836d7a0aff41120620fa4d703dacff25b0dbb5c9c7b87b83783c6c9d18\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Terminate AWS Instances\",\n    \"id\": 192,\n    
\"index\": 192,\n    \"inputData\": [\n     {\n      \"instance_ids\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"instances\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_ids\": {\n        \"description\": \"List of instance IDs. For eg. [\\\"i-foo\\\", \\\"i-bar\\\"]\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Instance IDs\",\n        \"type\": \"array\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instance.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_ids\",\n       \"region\"\n      ],\n      \"title\": \"aws_terminate_instance\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"instance_ids\": \"instances\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_untagged_instances\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Terminate AWS Instances\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"instance_ids\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"if terminate==True\",\n    \"tags\": [\n     \"aws_terminate_instance\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from 
pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_terminate_instance_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_terminate_instance(handle, instance_ids: List, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_terminate_instance Returns a Dict with info about the terminated instances.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type instance_ids: List\\n\",\n    \"        :param instance_ids: List of instance IDs to terminate.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region of the instances.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with info about the terminated instances.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    res = ec2Client.terminate_instances(InstanceIds=instance_ids)\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_ids\\\": \\\"iter.get(\\\\\\\\\\\"instances\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_untagged_instances\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"instance_ids\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    
\\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"if terminate==True\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_terminate_instance, lego_printer=aws_terminate_instance_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"324ba188-b516-4100-aebd-18ec3ce8203c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS actions to filter untagged instances and terminate them. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Terminate EC2 Instances Without Valid Lifetime Tag\",\n   \"parameters\": [\n    \"region\",\n    \"terminate\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"instance_ids\": {\n     \"description\": \"List of EC2 instance IDs\",\n     \"title\": \"instance_ids\",\n     \"type\": \"array\"\n    },\n    \"lifetime_tag_name\": {\n  
   \"default\": \"lifetimeTag\",\n     \"description\": \"Tag name used to identify the lifetime of a given EC2 instance.\",\n     \"title\": \"lifetime_tag_name\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region to search for EC2 instances. Eg: \\\"us-west-2\\\"\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"terminate\": {\n     \"default\": false,\n     \"description\": \"Check parameter to terminate instances without the termination and lifetime tag. If changed to True, all instances without the termination and lifetime tag will be terminated. By default the value is false\",\n     \"title\": \"terminate\",\n     \"type\": \"boolean\"\n    },\n    \"termination_tag_name\": {\n     \"default\": \"terminationDateTag\",\n     \"description\": \"Tag name used to identify the termination date of a given EC2 instance.\",\n     \"title\": \"termination_tag_name\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Terminate_EC2_Instances_Without_Valid_Lifetime_Tag.json",
    "content": "{\n    \"name\": \"Terminate EC2 Instances Without Valid Lifetime Tag\",\n    \"description\": \"This runbook can be used to list all the EC2 instances which don't have a lifetime tag and then terminate them.\",\n    \"uuid\": \"29ce1935204c64d816fd1f01f4fe41e8d8bd47725b899535c6acee703a7bcf0d\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_COST_OPT\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Update_RDS_Instances_from_Old_to_New_Generation.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"e2fffe48-5eb4-4177-95ec-7955cc381ad8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&nbsp;&para;&para;\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\"><strong>To modify old generation RDS instances to a given instance class in AWS using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Modify-RDS-Instances-Using-Previous-Gen-Instance-Types-in-AWS\\\">AWS Update RDS Instances from Old to New Generation</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;&para;\\\">Steps Overview</h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>AWS Get Older Generation RDS Instances</li>\\n\",\n    \"<li>Modify the DB Instance Class</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"cbd771e6-6e0a-4ea0-a653-00f65120e145\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-19T14:15:05.316Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if not rds_instance_ids and rds_instance_type and region:\\n\",\n    \"    raise SystemExit(\\\"Provide rds_instance_ids!\\\")\\n\",\n    \"if not rds_instance_type:\\n\",\n    \"    raise SystemExit(\\\"Provide 
rds_instance_type!\\\")\\n\",\n    \"if not region and rds_instance_type and rds_instance_ids:\\n\",\n    \"    raise SystemExit(\\\"Provide region!\\\")\\n\",\n    \"if region is None:\\n\",\n    \"    region = \\\"\\\"\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"dbbf73ef-3c3e-49b7-8c4b-301e02614d84\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-Unattached-EBS-Volumes\\\">AWS Get Older Generation RDS Instances</h3>\\n\",\n    \"<p>Here we will use the unSkript&nbsp;<strong>AWS Get Older Generation RDS Instances</strong> action. This action filters all the RDS instances in the given region and returns a list of those running older generation instance types. It will only execute if the <code>rds_instance_ids</code>&nbsp;parameter is not passed.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>old_gen_rds_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"1924ed03-0486-43e1-a388-2b753939b386\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_RDS\"\n    ],\n    \"actionDescription\": \"AWS Get Older Generation RDS Instances action retrieves information about RDS instances using older generation instance types.\",\n    \"actionEntryFunction\": \"aws_get_older_generation_rds_instances\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    
\"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"AWS Get Older Generation RDS Instances\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"08da2db2f8fe2dbce378c314e54341b68ee2e9e99ae271f2acd044ef7e8bdee3\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"AWS Get Older Generation RDS Instances action retrieves information about RDS instances using older generation instance types.\",\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"AWS Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_get_older_generation_rds_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Older Generation RDS Instances\",\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"old_gen_rds_instances\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"show_tool_tip\": true,\n    \"startcondition\": \"not rds_instance_ids\",\n    \"tags\": [],\n    \"uuid\": \"08da2db2f8fe2dbce378c314e54341b68ee2e9e99ae271f2acd044ef7e8bdee3\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": 
[],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_older_generation_rds_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def is_previous_gen_instance(instance_type):\\n\",\n    \"    previous_gen_instance_types = ['db.m1', 'db.m2', 'db.t1']\\n\",\n    \"    for prev_gen_type in previous_gen_instance_types:\\n\",\n    \"        if instance_type.startswith(prev_gen_type):\\n\",\n    \"            return True\\n\",\n    \"    return False\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_older_generation_rds_instances(handle, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_get_older_generation_rds_instances Gets all older generation RDS DB instances\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Optional, Region of the RDS.\\n\",\n    \"\\n\",\n    \"        :rtype: Status, List of old RDS Instances\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            ec2Client = handle.client('rds', 
region_name=reg)\\n\",\n    \"            response = aws_get_paginator(ec2Client, \\\"describe_db_instances\\\", \\\"DBInstances\\\")\\n\",\n    \"            for db in response:\\n\",\n    \"                instance_type = \\\".\\\".join(db['DBInstanceClass'].split(\\\".\\\", 2)[:2])\\n\",\n    \"                is_old_gen = is_previous_gen_instance(instance_type)\\n\",\n    \"                if is_old_gen:\\n\",\n    \"                    db_instance_dict = {}\\n\",\n    \"                    db_instance_dict[\\\"region\\\"] = reg\\n\",\n    \"                    db_instance_dict[\\\"instance\\\"] = db['DBInstanceIdentifier']\\n\",\n    \"                    result.append(db_instance_dict)\\n\",\n    \"        except Exception:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not rds_instance_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"old_gen_rds_instances\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_older_generation_rds_instances, lego_printer=aws_get_older_generation_rds_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"a23d2c03-f186-470d-9947-ffba9bb49e63\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    
\"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Step-1-Output\\\">Modify Step-1 Output<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Modify-Step-1-Output\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this action, we modify the output from step 1 and return a list of aws cli commands for the older generation RDS instances.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>older_rds_instances</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"5cbcb4b2-149f-43f7-b723-e2f3766c9980\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T10:41:25.703Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Step-1 Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Step-1 Output\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"older_rds_instances = []\\n\",\n    \"try:\\n\",\n    \"    if old_gen_rds_instances[0] is False:\\n\",\n    \"        for instance in old_gen_rds_instances[1]:\\n\",\n    \"            instance['instance_type'] = rds_instance_type\\n\",\n    \"            command = \\\"aws rds modify-db-instance --db-instance-identifier \\\" + instance['instance'] + \\\" --db-instance-class \\\" + instance['instance_type'] + \\\" --region \\\" + instance['region'] + \\\" --apply-immediately\\\"\\n\",\n    \"            older_rds_instances.append(command)\\n\",\n    \"except Exception as e:\\n\",\n    \"    for i in rds_instance_ids:\\n\",\n    \"        command = \\\"aws rds modify-db-instance --db-instance-identifier \\\" + i + \\\" --db-instance-class \\\" + rds_instance_type + \\\" --region \\\" + region + \\\" 
--apply-immediately\\\"\\n\",\n    \"        older_rds_instances.append(command)\\n\",\n    \"    if not rds_instance_ids:\\n\",\n    \"        raise Exception(e)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"d1f1a3bf-e7d4-4243-8a99-6e1b66abef29\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<p><strong>Modify DB Instance Class</strong></p>\\n\",\n    \"<p>In this action, we pass an aws cli command to modify the RDS instance class.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>aws_command</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>modified_output</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"0886a33e-052f-41bc-980f-6dd500a35a71\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_AWS\",\n     \"CATEGORY_TYPE_AWS_CLI\"\n    ],\n    \"actionDescription\": \"Execute command using AWS CLI\",\n    \"actionEntryFunction\": \"aws_execute_cli_command\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_STR\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Run Command via AWS CLI\",\n    \"actionType\": \"LEGO_TYPE_AWS\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"1db371aff42291641eb6ba83d7acc3fe28e2468d83be1552e8258dc878c0f70d\",\n    \"collapsed\": true,\n    
\"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Execute command using AWS CLI\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-05-12T10:20:20.403Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputData\": [\n     {\n      \"aws_command\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"aws_command\": {\n        \"description\": \"AWS Command eg \\\"aws ec2 describe-instances\\\"\",\n        \"title\": \"AWS Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"aws_command\"\n      ],\n      \"title\": \"aws_execute_cli_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"aws_command\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"older_rds_instances\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Run Command via AWS CLI: Modify DB Instance Class\",\n    \"orderProperties\": [\n     \"aws_command\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"modified_output\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(older_rds_instances)!=0\",\n    \"tags\": [],\n    \"title\": \"Run Command via AWS CLI: Modify DB Instance Class\",\n    \"uuid\": \"1db371aff42291641eb6ba83d7acc3fe28e2468d83be1552e8258dc878c0f70d\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    
\"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command(handle, aws_command: str) -> str:\\n\",\n    \"\\n\",\n    \"    result = handle.aws_cli_command(aws_command)\\n\",\n    \"    if result is None or result.returncode != 0:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({aws_command}): {result}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"aws_command\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"older_rds_instances\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"aws_command\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(older_rds_instances)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"modified_output\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_execute_cli_command, lego_printer=aws_execute_cli_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   
\"id\": \"37022260-01cb-4cb7-9ed1-aeb30ac4ad64\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS actions to find older generation RDS instances and modify them to the given instance class. To view the full platform capabilities of unSkript, please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Update RDS Instances from Old to New Generation\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1169)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"rds_instance_ids\": {\n     \"description\": \"RDS Instance Ids.\",\n     \"title\": \"rds_instance_ids\",\n     \"type\": \"array\"\n    },\n    \"rds_instance_type\": {\n     \"description\": \"RDS Instance Type e.g. 
\\\"db.t3.micro\\\"\",\n     \"title\": \"rds_instance_type\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [\n    \"rds_instance_type\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"show_action_drag_hint_done\": {\n   \"environment_id\": \"1499f27c-6406-4fbd-bd1b-c6f92800018f\",\n   \"environment_name\": \"Staging\",\n   \"execution_id\": \"\",\n   \"inputs_for_searched_lego\": \"\",\n   \"notebook_id\": \"d4159cb3-6c83-4ba5-a2f7-d23c0777076b.ipynb\",\n   \"parameters\": null,\n   \"runbook_name\": \"gcp\",\n   \"search_string\": \"\",\n   \"show_tool_tip\": true,\n   \"tenant_id\": \"982dba5f-d9df-48ae-a5bf-ec1fc94d4882\",\n   \"tenant_url\": \"https://tenant-staging.alpha.unskript.io\",\n   \"user_email_id\": \"support+staging@unskript.com\",\n   \"workflow_id\": \"f8ead207-81c0-414a-a15b-76fcdefafe8d\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Update_RDS_Instances_from_Old_to_New_Generation.json",
    "content": "{\n  \"name\": \"AWS Update RDS Instances from Old to New Generation\",\n  \"description\": \"This runbook can be used to find the old generation RDS instances for the given AWS region and modify then to the given instance class.\",\n  \"uuid\": \"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\",\"CATEGORY_TYPE_SECOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/AWS_Update_Redshift_Database.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5331e347-9cea-40fe-9828-959657edd35d\",\n   \"metadata\": {\n    \"name\": \"Introduction\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Introduction\"\n   },\n   \"source\": [\n    \"<p>This Runbook takes data from an S3 bucket, and populates an AWS Redshift table with the data.</p>\\n\",\n    \"<p>The initial reason for this RunBook was to populate AWS Cost and Usage Reports (CUR) into Redshift &nbsp;The CUR is dumped into a S3 bucket. In order to run queries, it must be copied into a Redshift table.</p>\\n\",\n    \"<p>We have written a series of blog posts on this:</p>\\n\",\n    \"<p><a href=\\\"https://unskript.com/blog/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting/\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://unskript.com/blog/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting/</a></p>\\n\",\n    \"<p><a href=\\\"https://unskript.com/blog/cloud-costs-charting-daily-ec2-usage-and-cost/\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://unskript.com/blog/cloud-costs-charting-daily-ec2-usage-and-cost/</a></p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<h2>Prerequisites</h2>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>Here are the steps you need to complete before you can run this runbook:</p>\\n\",\n    \"<ol>\\n\",\n    \"<li>&nbsp;Create a Cost and Usage Report at AWS (here's a <a href=\\\"https://docs.aws.amazon.com/cur/latest/userguide/cur-create.html\\\">step by step guide</a>)</li>\\n\",\n    \"<li>Create a AWS Secret that in Secrets Manager that has access to your AWS Redshift.&nbsp;</li>\\n\",\n    \"<li>Once your CUR report has started populating, you'll need to create a table in Redshift &nbsp;In your S3 bucket, there will be a folder for the year/month. 
Inside will be a file that ends in RedshiftCommands.sql\\n\",\n    \"<ol>\\n\",\n    \"<li>The first line (it's really long) creates the table.&nbsp; Run this in the RedShift query editor (V2).&nbsp;</li>\\n\",\n    \"<li>The second line is the query to update the table.&nbsp;You'll need this for this runbook (create sql query - in the rebuildSQL variable)</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"</ol>\\n\",\n    \"<p>Every month, you'll need to create the new table in RedShift manually. (this is a TODO for anyone interested in contributing!)&nbsp;</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<h2>What this RunBook does</h2>\\n\",\n    \"<ol>\\n\",\n    \"<li>Gets the AWS SecretARN from Secrets Manager.&nbsp;Given the secret_name input - this action will return the ARN required to make Redshift Queries.</li>\\n\",\n    \"<li>Create SQL queries. There are 2 queries to be run:\\n\",\n    \"<ol>\\n\",\n    \"<li>Truncate Table - this deletes all existing data (but keeps the columns).</li>\\n\",\n    \"<li>rebuildSQL - This makes the query to update the table with the latest data from S3.&nbsp;This query requires the Query from your RedshiftCommands.sql.&nbsp;We just change the tablename into a variable so that it can be used month after month.&nbsp;</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"<li>AWS Redshift Query - truncate.&nbsp; This applies the Truncate table query to your RedShift table.</li>\\n\",\n    \"<li>AWS Get Redshift Query Details - checks to see that the first query has completed before running the 2nd query.</li>\\n\",\n    \"<li>AWS Redshift Query - truncate.</li>\\n\",\n    \"<li>AWS Redshift Query rebuild sql - this query repopulates the Redshift table.&nbsp; This may take a while. 
In this runbook - we do not look to see that the query has finished.&nbsp; We just wait a few minutes before making additional calls on the table.</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"id\": \"d6578210-4baf-4e05-9076-a6042696b231\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"1ce9f756a4f1503df353fd5e8df7ea32ebe801a93c607251fea1a5367861da89\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Given a Secret Name - this Action returns the Secret ARN\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T00:55:45.859Z\"\n    },\n    \"id\": 189,\n    \"index\": 189,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"secret_name\": {\n       \"constant\": false,\n       \"value\": \"\\\"awsuser-doug-redshift\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"secret_name\": {\n        \"description\": \"AWS Secret Name\",\n        \"title\": \"secret_name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"secret_name\"\n      ],\n      \"title\": \"aws_get_secrets_manager_secretARN\",\n  
   \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Secrets Manager SecretARN\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"secret_name\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"secretArn\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_get_secrets_manager_secretARN\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"from __future__ import annotations\\n\",\n    \"\\n\",\n    \"from typing import Optional\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_secrets_manager_secretARN_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint({\\\"secret\\\": output})\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_secrets_manager_secretARN(handle, region: str, secret_name: str) -> str:\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    # Create a Secrets Manager client\\n\",\n    \"\\n\",\n    \"    client = handle.client(\\n\",\n    \"        service_name='secretsmanager',\\n\",\n    \"        region_name=region\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    get_secret_value_response = client.get_secret_value(\\n\",\n    \"        SecretId=secret_name\\n\",\n    \"        )\\n\",\n    \"\\n\",\n    \"    #print(get_secret_value_response)\\n\",\n    \"    # Decrypts secret using the associated KMS key.\\n\",\n    \"    secretArn = get_secret_value_response['ARN']\\n\",\n    \"    return secretArn\\n\",\n    
\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"secret_name\\\": \\\"\\\\\\\\\\\"awsuser-doug-redshift\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"secretArn\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_secrets_manager_secretARN, lego_printer=aws_get_secrets_manager_secretARN_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 11,\n   \"id\": \"378bb118-1598-408e-9f2d-16b20a8f8a62\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T00:55:47.240Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"create Queries\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"create Queries\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import datetime\\n\",\n    \"\\n\",\n    \"today = datetime.datetime.now()\\n\",\n    \"\\n\",\n    \"yearmonth = today.strftime('%Y%m')\\n\",\n    \"month = today.strftime('%m')\\n\",\n    \"year = today.strftime('%Y')\\n\",\n    \"yearmonthday = yearmonth +\\\"01\\\"\\n\",\n    \"#print(\\\"yearmonthday\\\",yearmonthday)\\n\",\n    \"if int(month) < 12:\\n\",\n    \"    nextMonth = int(month)+1\\n\",\n    \"    if nextMonth < 10:\\n\",\n    \"        nextMonthStr = \\\"0\\\" + str(nextMonth)\\n\",\n    \"    else:\\n\",\n    \"        nextMonthStr = str(nextMonth)\\n\",\n    \"if int(month) == 12:\\n\",\n    \"    nextMonthStr = \\\"01\\\"\\n\",\n    \"    year = str(int(year) + 1)\\n\",\n    \"nextMonthYMD = year + nextMonthStr +\\\"01\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"tableName = 'awsbilling'+ 
yearmonth\\n\",\n    \"dateRange = yearmonthday+'-'+nextMonthYMD\\n\",\n    \"#print(\\\"dateRange\\\", dateRange)\\n\",\n    \"\\n\",\n    \"TruncateSQL = f\\\"truncate table {tableName}\\\"\\n\",\n    \"print(\\\"TruncateSQL\\\", TruncateSQL)\\n\",\n    \"RebuildSql = f\\\"copy {tableName} from 's3://unskript-billing-doug/all/unskript-billing-doug/{dateRange}/unskript-billing-doug-RedshiftManifest.json' credentials     'aws_iam_role=arn:aws:iam::100498623390:role/service-role/AmazonRedshift-CommandsAccessRole-20230103T181457' region 'us-west-2'    GZIP CSV IGNOREHEADER 1 TIMEFORMAT 'auto' manifest;\\\"\\n\",\n    \"print(\\\"RebuildSql\\\", RebuildSql)\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 12,\n   \"id\": \"f0244b7e-176d-46a4-97db-a438de322d02\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"edacb40b6b085473676c85af90fd36de2b23e8fd763ee25c787e8fd629c45773\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Make a SQL Query to the given AWS Redshift database\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T00:55:49.937Z\"\n    },\n    \"id\": 246,\n    \"index\": 246,\n    \"inputData\": [\n     {\n      \"cluster\": {\n       \"constant\": false,\n       \"value\": \"cluster\"\n      },\n      \"database\": {\n       \"constant\": false,\n       \"value\": \"database\"\n      },\n      \"query\": {\n      
 \"constant\": false,\n       \"value\": \"TruncateSQL\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"secretArn\": {\n       \"constant\": false,\n       \"value\": \"secretArn\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"cluster\": {\n        \"description\": \"Name of Redshift Cluster\",\n        \"title\": \"cluster\",\n        \"type\": \"string\"\n       },\n       \"database\": {\n        \"description\": \"Name of your Redshift database\",\n        \"title\": \"database\",\n        \"type\": \"string\"\n       },\n       \"query\": {\n        \"description\": \"sql query to run\",\n        \"title\": \"query\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"secretArn\": {\n        \"description\": \"Value of your Secrets Manager ARN\",\n        \"title\": \"secretArn\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"query\",\n       \"cluster\",\n       \"database\",\n       \"secretArn\"\n      ],\n      \"title\": \"aws_create_redshift_query\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Redshift Query truncate\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"query\",\n     \"cluster\",\n     \"database\",\n     \"secretArn\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"truncateId\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_create_redshift_query\"\n    ],\n    \"title\": \"AWS Redshift Query truncate\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   
\"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from __future__ import annotations\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_redshift_query(handle, region: str, cluster: str, database: str, secretArn: str, query: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_create_redshift_query Executes the given SQL statement on the Redshift cluster and returns the query Id.\\\"\\\"\\\"\\n\",\n    \"    client = handle.client('redshift-data', region_name=region)\\n\",\n    \"    # execute the query\\n\",\n    \"    response = client.execute_statement(\\n\",\n    \"        ClusterIdentifier=cluster,\\n\",\n    \"        Database=database,\\n\",\n    \"        SecretArn=secretArn,\\n\",\n    \"        Sql=query\\n\",\n    \"    )\\n\",\n    \"    resultId = response['Id']\\n\",\n    \"    print(response)\\n\",\n    \"    print(\\\"resultId\\\", resultId)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    return resultId\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    
\\\"cluster\\\": \\\"cluster\\\",\\n\",\n    \"    \\\"database\\\": \\\"database\\\",\\n\",\n    \"    \\\"query\\\": \\\"TruncateSQL\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"secretArn\\\": \\\"secretArn\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"truncateId\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_create_redshift_query, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 13,\n   \"id\": \"6d8836e9-0f5d-4d54-9aba-85563f8a4a3b\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"26435cb53d995eccf75fd1e0692e611fcdb1b7e09511bbfe365f0e9a5abc416f\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Given a QueryId, this Action will give you the status of the Query, along with other data like the number of lines/\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T00:55:52.723Z\"\n    },\n    \"id\": 210,\n    \"index\": 210,\n    \"inputData\": [\n     {\n      \"queryId\": {\n       \"constant\": false,\n       \"value\": \"truncateId\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      
\"properties\": {\n       \"queryId\": {\n        \"description\": \"Id of Redshift Query\",\n        \"title\": \"queryId\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"queryId\"\n      ],\n      \"title\": \"aws_get_redshift_query_details\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Redshift Query Details\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"queryId\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_get_redshift_query_details\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from __future__ import annotations\\n\",\n    \"##\\n\",\n    \"##  Copyright (c) 2023 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from typing import Optional\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_redshift_query_details(handle, region: str, queryId:str) -> Dict:\\n\",\n    \"\\n\",\n    \"    client = handle.client('redshift-data', region_name=region)\\n\",\n    \"    response = client.describe_statement(\\n\",\n    \"    Id=queryId\\n\",\n    \"    )\\n\",\n    \"    return response\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, 
tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"queryId\\\": \\\"truncateId\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_redshift_query_details, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 14,\n   \"id\": \"bc3de8c7-1498-4f4d-a5db-6683eebd778d\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"edacb40b6b085473676c85af90fd36de2b23e8fd763ee25c787e8fd629c45773\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Make a SQL Query to the given AWS Redshift database\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-07T00:56:01.597Z\"\n    },\n    \"id\": 246,\n    \"index\": 246,\n    \"inputData\": [\n     {\n      \"cluster\": {\n       \"constant\": 
false,\n       \"value\": \"cluster\"\n      },\n      \"database\": {\n       \"constant\": false,\n       \"value\": \"database\"\n      },\n      \"query\": {\n       \"constant\": false,\n       \"value\": \"RebuildSql\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"secretArn\": {\n       \"constant\": false,\n       \"value\": \"secretArn\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"cluster\": {\n        \"description\": \"Name of Redshift Cluster\",\n        \"title\": \"cluster\",\n        \"type\": \"string\"\n       },\n       \"database\": {\n        \"description\": \"Name of your Redshift database\",\n        \"title\": \"database\",\n        \"type\": \"string\"\n       },\n       \"query\": {\n        \"description\": \"sql query to run\",\n        \"title\": \"query\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"secretArn\": {\n        \"description\": \"Value of your Secrets Manager ARN\",\n        \"title\": \"secretArn\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"query\",\n       \"cluster\",\n       \"database\",\n       \"secretArn\"\n      ],\n      \"title\": \"aws_create_redshift_query\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Redshift Query rebuild sql\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"query\",\n     \"cluster\",\n     \"database\",\n     \"secretArn\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_create_redshift_query\"\n    ],\n    \"title\": \"AWS Redshift Query rebuild 
sql\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from __future__ import annotations\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_redshift_query(handle, region: str, cluster: str, database: str, secretArn: str, query: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_create_redshift_query Executes the given SQL statement on the Redshift cluster and returns the query Id.\\\"\\\"\\\"\\n\",\n    \"    client = handle.client('redshift-data', region_name=region)\\n\",\n    \"    # execute the query\\n\",\n    \"    response = client.execute_statement(\\n\",\n    \"        ClusterIdentifier=cluster,\\n\",\n    \"        Database=database,\\n\",\n    \"        SecretArn=secretArn,\\n\",\n    \"        Sql=query\\n\",\n    \"    )\\n\",\n    \"    resultId = response['Id']\\n\",\n    \"    print(response)\\n\",\n    \"    print(\\\"resultId\\\", resultId)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    return resultId\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    
\"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"cluster\\\": \\\"cluster\\\",\\n\",\n    \"    \\\"database\\\": \\\"database\\\",\\n\",\n    \"    \\\"query\\\": \\\"RebuildSql\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"secretArn\\\": \\\"secretArn\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_create_redshift_query, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Redshift Update Database\",\n   \"parameters\": [\n    \"database\",\n    \"region\",\n    \"secret_name\",\n    \"cluster\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"cluster\": {\n     \"default\": \"doug-billing-3\",\n     \"description\": \"Redshift Cluster Name\",\n     \"title\": \"cluster\",\n     \"type\": \"string\"\n    },\n    \"database\": {\n     \"default\": \"dev\",\n     \"description\": \"redshift database name\",\n     \"title\": \"database\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"AWS Region\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"secret_name\": {\n     \"default\": \"awsuser-doug-redshift\",\n     \"description\": \"AWS Secret Name that can access Redshift\",\n     \"title\": \"secret_name\",\n     \"type\": \"string\"\n    }\n   
},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Update_Redshift_Database.json",
    "content": "{\n    \"name\": \"AWS Redshift Update Database\",\n    \"description\": \"This runbook can be used to update a Redshift database from a SQL file stored in S3.\",  \n    \"uuid\": \"a79201f821993867e23dd9603ed7ef5123324353d717c566f902f7ca6e471f5c\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_CLOUDOPS\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Update_Resource_Tags.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"79251bc7-c6cd-4344-a8d5-754bf62eb17e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Update Tags for AWS Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Update Tags for AWS Resources\"\n   },\n   \"source\": [\n    \"<p><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"></p>\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks-&para;\\\">unSkript Runbooks <a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#-unSkript-Runbooks-\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks-&para;\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\"><strong>&nbsp;This runbook demonstrates how to update Tags for AWS Resources using unSkript Legos.</strong></div>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Enforce-Mandatory-Tags-Across-All-AWS-Resources&para;\\\">Update Tags for selected AWS Resources<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Enforce-Mandatory-Tags-Across-All-AWS-Resources\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Enforce-Mandatory-Tags-Across-All-AWS-Resources&para;\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview&para;\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>List all the Resource ARNs in the given region with the selected tag.</li>\\n\",\n    \"<li>We'll print a list of tagged resources along with the 
current value of the tag. Select and change as desired.</li>\\n\",\n    \"<li>Update the Selected tags at AWS.</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 29,\n   \"id\": \"0ec169e9-f3f2-400d-9b58-e4a598769e61\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": true,\n    \"action_uuid\": \"aee6cabb55096d5cf6098faa7e4a94135e8f5b0572b36d4b3252d7745fae595b\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"customCell\": true,\n    \"description\": \"AWS Get Untagged Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-07T15:26:03.051Z\"\n    },\n    \"id\": 187,\n    \"index\": 187,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"tag\": {\n       \"constant\": false,\n       \"value\": \"Tag_Key\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"tag\": {\n        \"default\": \"\\\"Tag_Key\\\"\",\n        \"description\": \"The Tag to search for\",\n        \"title\": \"tag\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"tag\"\n      ],\n      \"title\": \"aws_get_resources_with_tag\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Resources With Tag\",\n    \"nouns\": [\n     \"aws\",\n    
 \"resources\"\n    ],\n    \"orderProperties\": [\n     \"region\",\n     \"tag\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"taggedResources\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"service_id_enabled\": false,\n    \"tags\": [\n     \"aws_get_untagged_resources\"\n    ],\n    \"title\": \"AWS Get Resources With Tag\",\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_resources_with_tag_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(f\\\"there are {len(output)} resources with tag {Tag_Key}. 
We can fix a max of 20.\\\" )\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_resources_with_tag(handle, region: str, tag: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_get_resources_with_tag Returns a List of resources that have the given tag.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter resources.\\n\",\n    \"\\n\",\n    \"        :type tag: str\\n\",\n    \"        :param tag: The tag key to search for.\\n\",\n    \"\\n\",\n    \"        :rtype: List of matching resources with their ARN and tag value.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\\n\",\n    \"    result = []\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = aws_get_paginator(ec2Client, \\\"get_resources\\\", \\\"ResourceTagMappingList\\\")\\n\",\n    \"        for resources in response:\\n\",\n    \"            if resources[\\\"Tags\\\"]:\\n\",\n    \"                # the resource has tags; collect those whose key matches\\n\",\n    \"                for kv in resources['Tags']:\\n\",\n    \"                    key = kv[\\\"Key\\\"]\\n\",\n    \"                    if tag == key:\\n\",\n    \"                        temp = {\\\"arn\\\": resources[\\\"ResourceARN\\\"], \\\"value\\\": kv[\\\"Value\\\"]}\\n\",\n    \"                        result.append(temp)\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result.append({\\\"error\\\": error})\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"tag\\\": \\\"Tag_Key\\\"\\n\",\n    \"    }''')\\n\",\n    
\"task.configure(outputName=\\\"taggedResources\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_resources_with_tag, lego_printer=aws_get_resources_with_tag_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 54,\n   \"id\": \"151b0bf1-c0d4-45d7-b384-1b55077824da\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-07T15:45:25.213Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"tag updater\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"tag updater\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import ipywidgets as widgets\\n\",\n    \"from IPython.display import display\\n\",\n    \"\\n\",\n    \"# List of dictionaries with ARN and value\\n\",\n    \"data = taggedResources\\n\",\n    \"\\n\",\n    \"# Create an HBox for each dictionary entry with checkbox and text field\\n\",\n    \"checkboxes = []\\n\",\n    \"text_fields = []\\n\",\n    \"for entry in data:\\n\",\n    \"    checkbox = widgets.Checkbox(value=False, description=entry['arn'])\\n\",\n    \"    text_field = widgets.Text(description='Value:', value=entry['value'])\\n\",\n    \"    hbox = widgets.HBox([checkbox, text_field])\\n\",\n    \"    checkboxes.append(checkbox)\\n\",\n    \"    text_fields.append(text_field)\\n\",\n    \"\\n\",\n    \"# Output list of dictionaries\\n\",\n    \"updated_list = []\\n\",\n    \"\\n\",\n    \"# Function to handle checkbox change\\n\",\n    \"def checkbox_changed(change):\\n\",\n    \"    checkbox = change.owner\\n\",\n    \"    index = checkboxes.index(checkbox)\\n\",\n    \"    entry = data[index]\\n\",\n    \"    entry['checked'] = checkbox.value\\n\",\n    \"    update_output()\\n\",\n    \"\\n\",\n    \"# Function to handle text field change\\n\",\n  
  \"def text_field_changed(change):\\n\",\n    \"    text_field = change.owner\\n\",\n    \"    index = text_fields.index(text_field)\\n\",\n    \"    entry = data[index]\\n\",\n    \"    entry['value'] = text_field.value\\n\",\n    \"    update_output()\\n\",\n    \"\\n\",\n    \"# Register the checkbox change handler for each checkbox\\n\",\n    \"for checkbox in checkboxes:\\n\",\n    \"    checkbox.observe(checkbox_changed, 'value')\\n\",\n    \"\\n\",\n    \"# Register the text field change handler for each text field\\n\",\n    \"for text_field in text_fields:\\n\",\n    \"    text_field.observe(text_field_changed, 'value')\\n\",\n    \"\\n\",\n    \"# Function to update the output list of dictionaries\\n\",\n    \"def update_output():\\n\",\n    \"    global updated_list\\n\",\n    \"    updated_list = [{'arn': entry['arn'], 'value': entry['value']} for entry in data if 'checked' in entry and entry['checked']]\\n\",\n    \"\\n\",\n    \"# Display the checkboxes and text fields\\n\",\n    \"checkboxes_widgets = []\\n\",\n    \"for hbox in zip(checkboxes, text_fields):\\n\",\n    \"    checkboxes_widgets.append(widgets.HBox(hbox))\\n\",\n    \"display(*checkboxes_widgets)\\n\",\n    \"\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 55,\n   \"id\": \"4d8ec1e4-9b88-46cb-a15c-db64d4531ee2\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-07T15:45:32.204Z\"\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"print(output)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ce65fdd0-ee64-42d0-90a6-0fe1c0f54608\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Attach Tags to Resources\"\n   },\n   
\"source\": [\n    \"Here we will use the unSkript AWS Attach Tags to Resources Lego. This Lego takes handle, resource_arn: list, tag_key: str, tag_value: str, and region: str as inputs, which are used to update the tag on the selected resources.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 149,\n   \"id\": \"b0bf6aee-2b72-4348-8c38-fe3783619da6\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"878cb7819ecb4687ecfa8c6143365d10fe6b127adeb4a27fd71d06a3a2243d22\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Attach Tags to Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-04T03:20:41.106Z\"\n    },\n    \"id\": 260,\n    \"index\": 260,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"resource_arn\": {\n       \"constant\": false,\n       \"value\": \"checked_list\"\n      },\n      \"tag_key\": {\n       \"constant\": false,\n       \"value\": \"Tag_Key\"\n      },\n      \"tag_value\": {\n       \"constant\": false,\n       \"value\": \"\\\"01/01/2025\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n      
 },\n       \"resource_arn\": {\n        \"description\": \"Resource ARNs.\",\n        \"items\": {},\n        \"title\": \"Resource ARN\",\n        \"type\": \"array\"\n       },\n       \"tag_key\": {\n        \"description\": \"Resource Tag Key.\",\n        \"title\": \"Tag Key\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n        \"description\": \"Resource Tag Value.\",\n        \"title\": \"Tag Value\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"resource_arn\",\n       \"tag_key\",\n       \"tag_value\"\n      ],\n      \"title\": \"aws_attach_tags_to_resources\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"iterData\": [\n     {\n      \"iter_enabled\": false,\n      \"iter_item\": \"resource_arn\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"checked_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"resource_arn\",\n     \"tag_key\",\n     \"tag_value\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_attach_tags_to_resources\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_attach_tags_to_resources_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def 
aws_attach_tags_to_resources(\\n\",\n    \"    handle,\\n\",\n    \"    resource_arn: list,\\n\",\n    \"    tag_key: str,\\n\",\n    \"    tag_value: str,\\n\",\n    \"    region: str\\n\",\n    \"    ) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_attach_tags_to_resources Returns a Dict of resource info.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type resource_arn: list\\n\",\n    \"        :param resource_arn: Resource ARNs.\\n\",\n    \"\\n\",\n    \"        :type tag_key: str\\n\",\n    \"        :param tag_key: Resource Tag Key.\\n\",\n    \"\\n\",\n    \"        :type tag_value: str\\n\",\n    \"        :param tag_value: Resource Tag value.\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter resources.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict of resource info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.tag_resources(\\n\",\n    \"            ResourceARNList=resource_arn,\\n\",\n    \"            Tags={tag_key: tag_value}\\n\",\n    \"            )\\n\",\n    \"        result = response\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result[\\\"error\\\"] = error\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=False)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"resource_arn\\\": \\\"checked_list\\\",\\n\",\n    \"    \\\"tag_key\\\": \\\"Tag_Key\\\",\\n\",\n    \"    \\\"tag_value\\\": \\\"\\\\\\\\\\\"01/01/2025\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    
\"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": false,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"checked_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"resource_arn\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_attach_tags_to_resources, lego_printer=aws_attach_tags_to_resources_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a8280ac4-d504-44d2-b5ea-d97f7ca672c8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this Runbook, we demonstrated the use of unSkript's AWS legos to attach tags. This Runbook gets the list of all untagged resources of a given region, discovers tag keys of the given region and attaches mandatory tags to all the untagged resource. 
To view the full platform capabilities of unSkript please visit https://unskript.com\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Update Resource Tags\",\n   \"parameters\": [\n    \"Region\",\n    \"Tag_Key\",\n    \"Tag_Value\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1185)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"outputParameterSchema\": null,\n  \"parameterSchema\": {\n   \"definitions\": null,\n   \"properties\": {\n    \"Region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"Resources Region\",\n     \"title\": \"Region\",\n     \"type\": \"string\"\n    },\n    \"Tag_Key\": {\n     \"default\": \"owner\",\n     \"description\": \"Mandatory Tag key for resources (only use when tag need to be attached to all the resources)\",\n     \"title\": \"Tag_Key\",\n     \"type\": \"string\"\n    },\n    \"Tag_Value\": {\n     \"description\": \"Mandatory Tag Value for resources (only use when tag need to be attached to all the resources)\",\n     \"title\": \"Tag_Value\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Update_Resource_Tags.json",
    "content": "{\n    \"name\": \"AWS Update Resource Tags\",\n    \"description\": \"This runbook can be used to update an existing tag to any resource in an AWS Region.\",  \n    \"uuid\": \"a79201f821993867e23dd9603ed7ef5523324353d717c566f902f7ca6e471f5c\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_CLOUDOPS\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_Update_Resources_About_To_Expire.ipynb",
"content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"79251bc7-c6cd-4344-a8d5-754bf62eb17e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Update AWS Resources about to expire\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Update AWS Resources about to expire\"\n   },\n   \"source\": [\n    \"<p><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"></p>\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks-&para;\\\">unSkript Runbooks <a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#-unSkript-Runbooks-\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks-&para;\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\"><strong>&nbsp;This runbook updates the expiration tag for AWS Resources that are about to expire.</strong></div>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center><center>\\n\",\n    \"<h2 id=\\\"Update-AWS-Resources-about-to-expire&para;\\\">Update AWS Resources about to expire<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Enforce-Mandatory-Tags-Across-All-AWS-Resources\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Update-AWS-Resources-about-to-expire&para;\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview&para;\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>Get all resources in a given region with an expiration tag.</li>\\n\",\n    \"<li>Filter those that have expired or are about to expire.</li>\\n\",\n    
\"<li>Update manually with the date picker.</li>\\n\",\n    \"<li>Update the expiration tag on the selected AWS Resources.</li>\\n\",\n    \"<li>Send a Slack message with the number of expiring resources.</li>\\n\",\n    \"</ol>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>If this RunBook is run programmatically, no dates will be changed; only a Slack alert is sent.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"</center>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a49a1258-79d2-4846-8731-4ed74b36d6bc\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Get Resources with expiration tag\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Get Resources with expiration tag\"\n   },\n   \"source\": [\n    \"<p>Here we will use unSkript AWS Get Resources with the expiration tag&nbsp;- the tag name is an input parameter for the runbook.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 12,\n   \"id\": \"0ec169e9-f3f2-400d-9b58-e4a598769e61\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": true,\n    \"action_uuid\": \"aee6cabb55096d5cf6098faa7e4a94135e8f5b0572b36d4b3252d7745fae595b\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"customCell\": true,\n    \"description\": \"AWS Get Untagged Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-07T14:51:19.386Z\"\n    },\n    \"id\": 187,\n    \"index\": 187,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"tag\": {\n       \"constant\": 
false,\n       \"value\": \"expiration_tag\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"tag\": {\n        \"default\": \"\\\"Tag_Key\\\"\",\n        \"description\": \"The Tag to search for\",\n        \"title\": \"tag\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"tag\"\n      ],\n      \"title\": \"aws_get_resources_with_expiration_tag\",\n      \"type\": \"object\"\n     }\n    ],\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Resources With Expiration Tag\",\n    \"nouns\": [\n     \"aws\",\n     \"resources\"\n    ],\n    \"orderProperties\": [\n     \"region\",\n     \"tag\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"ExpirationResources\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"service_id_enabled\": false,\n    \"tags\": [\n     \"aws_get_untagged_resources\"\n    ],\n    \"title\": \"AWS Get Resources With Expiration Tag\",\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_resources_with_expiration_tag_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(f\\\"There are {len(output)} resources with the {expiration_tag} tag. 
We can fix a max of 20.\\\" )\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_resources_with_expiration_tag(handle, region: str, tag:str) -> List:\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\\n\",\n    \"    result = []\\n\",\n    \"    try:\\n\",\n    \"        response = aws_get_paginator(ec2Client, \\\"get_resources\\\", \\\"ResourceTagMappingList\\\")\\n\",\n    \"        for resources in response:\\n\",\n    \"            if resources[\\\"Tags\\\"]:\\n\",\n    \"                #has tags\\n\",\n    \"                tags = resources['Tags']\\n\",\n    \"                for kv in resources['Tags']:\\n\",\n    \"                    if kv[\\\"Key\\\"] == tag:\\n\",\n    \"                        #we have found an expiration tag\\n\",\n    \"                        temp ={'arn': [resources[\\\"ResourceARN\\\"]], 'expires':kv[\\\"Value\\\"]}\\n\",\n    \"                        print(temp)\\n\",\n    \"                        result.append(temp)\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result.append({\\\"error\\\":error})\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"tag\\\": \\\"expiration_tag\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"ExpirationResources\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_resources_with_expiration_tag, lego_printer=aws_get_resources_with_expiration_tag_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8be56b96-04b8-4518-afaa-157b4d530321\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    
},\n    \"name\": \"Filter the Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Filter the Resources\"\n   },\n   \"source\": [\n    \"<p>Now, we filter for only resources that have expired, or are about to expire, and display them in a list with a date picker.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>Updating the date will allow us to change the value of the expiration tag.</p>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"2f6628f1-6285-49fb-9423-2eeb0575043d\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-07T14:37:42.200Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"find resources about to expire\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"find resources about to expire\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from datetime import datetime, timedelta\\n\",\n    \"\\n\",\n    \"#print(ExpirationResources)\\n\",\n    \"expiringList = []\\n\",\n    \"# Get the current date\\n\",\n    \"current_date = datetime.now()\\n\",\n    \"\\n\",\n    \"# Calculate the date days_to_expire days from now\\n\",\n    \"future_date = current_date + timedelta(days=days_to_expire)\\n\",\n    \"\\n\",\n    \"for resource in ExpirationResources:\\n\",\n    \"    expires = datetime.strptime(resource['expires'], \\\"%m/%d/%Y\\\")\\n\",\n    \"    if expires < future_date:\\n\",\n    \"        expiringList.append(resource)\\n\",\n    \"\\n\",\n    \"print(expiringList)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"de6350ed-9d0c-45fe-8917-5e95d370eed7\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-07T14:37:47.898Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    
\"name\": \"select expirations to renew\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"select expirations to renew\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from datetime import datetime\\n\",\n    \"import ipywidgets as widgets\\n\",\n    \"from IPython.display import display\\n\",\n    \"\\n\",\n    \"# Sample list of dictionaries\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def update_expiration_date(expiration_date, arn):\\n\",\n    \"    for item in expiringList:\\n\",\n    \"        if item[\\\"arn\\\"][0] == arn:\\n\",\n    \"            item[\\\"expires\\\"] = expiration_date.strftime(\\\"%m/%d/%Y\\\")\\n\",\n    \"    print(expiringList)\\n\",\n    \"\\n\",\n    \"def on_date_change(change):\\n\",\n    \"    arn = change.owner.description\\n\",\n    \"    expiration_date = change.new\\n\",\n    \"    update_expiration_date(expiration_date, arn)\\n\",\n    \"\\n\",\n    \"# Create a date picker for each ARN\\n\",\n    \"for item in expiringList:\\n\",\n    \"    expiration_date = datetime.strptime(item[\\\"expires\\\"], \\\"%m/%d/%Y\\\").date()\\n\",\n    \"    date_picker = widgets.DatePicker(description=item[\\\"arn\\\"][0], \\n\",\n    \"                                     style=dict(description_width='initial'),\\n\",\n    \"                                     layout=dict(width='80%'),\\n\",\n    \"                                     value=expiration_date)\\n\",\n    \"    date_picker.observe(on_date_change, names='value')\\n\",\n    \"    display(date_picker)\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"14eaf63e-750e-40d1-aa57-2fde82fefba8\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-06T18:40:55.402Z\"\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    
\"print(expiringList)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ce65fdd0-ee64-42d0-90a6-0fe1c0f54608\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Attach Tags to Resources\"\n   },\n   \"source\": [\n    \"<p>Here we will use unSkript AWS Attach Tags to Resources Lego.&nbsp;</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>The updated dates from the date picker will be used to replace the current value in the expiration tag.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 32,\n   \"id\": \"b0bf6aee-2b72-4348-8c38-fe3783619da6\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_COST_OPT\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_AWS\"\n    ],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"878cb7819ecb4687ecfa8c6143365d10fe6b127adeb4a27fd71d06a3a2243d22\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Attach Tags to Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-05T21:11:47.279Z\"\n    },\n    \"id\": 260,\n    \"index\": 260,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"resource_arn\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"arn\\\\\\\\\\\")\\\"\"\n      },\n      
\"tag_key\": {\n       \"constant\": false,\n       \"value\": \"expiration_tag\"\n      },\n      \"tag_value\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"expires\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"resource_arn\": {\n        \"description\": \"Resource ARNs.\",\n        \"items\": {},\n        \"title\": \"Resource ARN\",\n        \"type\": \"array\"\n       },\n       \"tag_key\": {\n        \"description\": \"Resource Tag Key.\",\n        \"title\": \"Tag Key\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n        \"description\": \"Resource Tag Value.\",\n        \"title\": \"Tag Value\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\",\n       \"resource_arn\",\n       \"tag_key\",\n       \"tag_value\"\n      ],\n      \"title\": \"aws_attach_tags_to_resources\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"resource_arn\": \"arn\",\n       \"tag_value\": \"expires\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"[x for x in expiringList]\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"resource_arn\",\n     \"tag_key\",\n     \"tag_value\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"test\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    
},\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_attach_tags_to_resources\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_attach_tags_to_resources_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_attach_tags_to_resources(\\n\",\n    \"    handle,\\n\",\n    \"    resource_arn: list,\\n\",\n    \"    tag_key: str,\\n\",\n    \"    tag_value: str,\\n\",\n    \"    region: str\\n\",\n    \"    ) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_attach_tags_to_resources Returns an Dict of resource info.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type resource_arn: list\\n\",\n    \"        :param resource_arn: Resource ARNs.\\n\",\n    \"\\n\",\n    \"        :type tag_key: str\\n\",\n    \"        :param tag_key: Resource Tag Key.\\n\",\n    \"\\n\",\n    \"        :type tag_value: str\\n\",\n    \"        :param tag_value: Resource Tag value.\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter resources.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict of resource info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.tag_resources(\\n\",\n    \"            ResourceARNList=resource_arn,\\n\",\n    \"            Tags={tag_key: tag_value}\\n\",\n    \"            
)\\n\",\n    \"        result = response\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result[\\\"error\\\"] = error\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=False)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"resource_arn\\\": \\\"iter.get(\\\\\\\\\\\"arn\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"tag_key\\\": \\\"expiration_tag\\\",\\n\",\n    \"    \\\"tag_value\\\": \\\"iter.get(\\\\\\\\\\\"expires\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"[x for x in expiringList]\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"resource_arn\\\",\\\"tag_value\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"test\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_attach_tags_to_resources, lego_printer=aws_attach_tags_to_resources_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"id\": \"67e94cc1-d88f-4eaf-a419-62903a7e8c7a\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_SLACK\"\n    ],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": 
false,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-06-07T14:42:45.225Z\"\n    },\n    \"id\": 106,\n    \"index\": 106,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"\\\"devrel_doug_test1\\\"\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"f\\\"There are {len(expiringList)} AWS resources set to expire in the next {days_to_expire} days! Use the AWS Resources About To Expire RunBook to manually update these dates to avoid any deletion of important resources\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of slack channel.\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message for slack channel.\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights 
reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from beartype import beartype\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message_printer(output):\\n\",\n    \"    if output is not None:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"    else:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> str:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return f\\\"Successfully Sent Message on Channel: #{channel}\\\"\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        if e.response['error'] == 'channel_not_found':\\n\",\n    \"            raise Exception('Channel Not Found') from e\\n\",\n    \"        if e.response['error'] == 'duplicate_channel_not_found':\\n\",\n    \"            raise Exception('Channel associated with the message_id not valid') from e\\n\",\n    \"        if e.response['error'] == 'not_in_channel':\\n\",\n    \"            raise Exception('Cannot post message to channel user is not in') from e\\n\",\n    \"        if e.response['error'] == 'is_archived':\\n\",\n    \"            raise Exception('Channel has been archived') from e\\n\",\n    \"        if e.response['error'] == 'msg_too_long':\\n\",\n    \"            raise Exception('Message text is 
too long') from e\\n\",\n    \"        if e.response['error'] == 'no_text':\\n\",\n    \"            raise Exception('Message text was not provided') from e\\n\",\n    \"        if e.response['error'] == 'restricted_action':\\n\",\n    \"            raise Exception('Workspace preference prevents user from posting') from e\\n\",\n    \"        if e.response['error'] == 'restricted_action_read_only_channel':\\n\",\n    \"            raise Exception('Cannot Post message, read-only channel') from e\\n\",\n    \"        if e.response['error'] == 'team_access_not_granted':\\n\",\n    \"            raise Exception('The token used is not granted access to the workspace') from e\\n\",\n    \"        if e.response['error'] == 'not_authed':\\n\",\n    \"            raise Exception('No Authentication token provided') from e\\n\",\n    \"        if e.response['error'] == 'invalid_auth':\\n\",\n    \"            raise Exception('Some aspect of Authentication cannot be validated. Request denied') from e\\n\",\n    \"        if e.response['error'] == 'access_denied':\\n\",\n    \"            raise Exception('Access to a resource specified in the request denied') from e\\n\",\n    \"        if e.response['error'] == 'account_inactive':\\n\",\n    \"            raise Exception('Authentication token is for a deleted user') from e\\n\",\n    \"        if e.response['error'] == 'token_revoked':\\n\",\n    \"            raise Exception('Authentication token for a deleted user has been revoked') from e\\n\",\n    \"        if e.response['error'] == 'no_permission':\\n\",\n    \"            raise Exception('The workspace token used does not have necessary permission to send message') from e\\n\",\n    \"        if e.response['error'] == 'ratelimited':\\n\",\n    \"            raise Exception('The request has been ratelimited. 
Retry sending message later') from e\\n\",\n    \"        if e.response['error'] == 'service_unavailable':\\n\",\n    \"            raise Exception('The service is temporarily unavailable') from e\\n\",\n    \"        if e.response['error'] == 'fatal_error':\\n\",\n    \"            raise Exception('The server encountered a catastrophic error while sending message') from e\\n\",\n    \"        if e.response['error'] == 'internal_error':\\n\",\n    \"            raise Exception('The server could not complete operation, likely due to a transient issue') from e\\n\",\n    \"        if e.response['error'] == 'request_timeout':\\n\",\n    \"            raise Exception('Sending message error via POST: either message was missing or truncated') from e\\n\",\n    \"        else:\\n\",\n    \"            raise Exception(f'Failed Sending Message to slack channel {channel} Error: {e.response[\\\"error\\\"]}') from e\\n\",\n    \"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {str(e)}\\\")\\n\",\n    \"        return f\\\"Unable to send message on {channel}\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"\\\\\\\\\\\"devrel_doug_test1\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"message\\\": \\\"f\\\\\\\\\\\"There are {len(expiringList)} AWS resources set to expire in the next {days_to_expire} days! 
Use the AWS Resources About To Expire RunBook to manually update these dates to avoid any deletion of important resources\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(slack_post_message, lego_printer=slack_post_message_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a8280ac4-d504-44d2-b5ea-d97f7ca672c8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this Runbook, we demonstrated the use of unSkript's AWS legos to update expiration tags. This Runbook finds all resources in a given region that have an expiration tag, filters those that have expired or are about to expire, lets you update their expiration dates, and sends a Slack alert with the number of expiring resources. To view the full platform capabilities of unSkript please visit https://unskript.com\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"AWS Add Tags Across Selected AWS Resources\",\n   \"parameters\": [\n    \"expiration_tag\",\n    \"days_to_expire\",\n    \"Region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1185)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"outputParameterSchema\": null,\n  \"parameterSchema\": {\n   \"definitions\": null,\n   \"properties\": {\n    \"Region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"Resources Region\",\n     \"title\": \"Region\",\n     \"type\": 
\"string\"\n    },\n    \"days_to_expire\": {\n     \"default\": 15,\n     \"description\": \"Find resources set to expire in the next days_to_expire days.\",\n     \"title\": \"days_to_expire\",\n     \"type\": \"number\"\n    },\n    \"expiration_tag\": {\n     \"default\": \"expiration\",\n     \"description\": \"The name of the tag that is used to identify the Resource expiration\",\n     \"title\": \"expiration_tag\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_Update_Resources_About_To_Expire.json",
    "content": "{\n    \"name\": \"AWS Add Tags Across Selected AWS Resources\",\n    \"description\": \"This finds resources missing a tag, and allows you to choose which resources should add a specific tag/value pair.\",  \n    \"uuid\": \"a79201f821993867e23dd9603ed7ef5523324353d717c566f902f7ac6e471f5c\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_CLOUDOPS\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "AWS/AWS_encrypt_unencrypted_S3_buckets.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"cbabc8b5-57b4-45b8-890c-370bb1ed6f02\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<strong>This runbook demonstrates how to encrypt unencrypted S3 buckets.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Encrypt-unencrypted-S3-buckets\\\">Encrypt unencrypted S3 buckets<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Encrypt-unencrypted-S3-buckets\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>Filter all the S3 buckets which are unencrypted.</li>\\n\",\n    \"<li>Apply encryption on unencrypted S3 buckets.</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"904610fd-51a8-40f8-9850-a288f4cd1ca5\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification \",\n    \"orderProperties\": [],\n    \"tags\": 
[],\n    \"title\": \"Input Verification \"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if bucket_name and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide region for the S3 Bucket!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0a291f67-3cb7-46b2-b0eb-1dc1bedecb5e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Gather Information\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Gather Information\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"AWS-List-All-Regions\\\">AWS List All Regions<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#AWS-List-All-Regions\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this action, we list all the available regions from AWS if the user does not provide a region as a parameter.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>region</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"id\": \"53f85394-1036-40b4-922f-c8d72c50acd6\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"708ea4af5f8fe7096a15b3a52c4a657606bab9e177386fad7a847341ed607d64\",\n    \"condition_enabled\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"List all available AWS Regions\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-03T04:42:43.566Z\"\n    },\n    \"id\": 215,\n    \"index\": 215,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"title\": \"aws_list_all_regions\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     
\"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS List All Regions\",\n    \"nouns\": [\n     \"regions\",\n     \"aws\"\n    ],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"region\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not region\",\n    \"tags\": [\n     \"aws_list_all_regions\"\n    ],\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict, List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_all_regions_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_all_regions(handle) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_list_all_regions lists all the AWS regions\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :rtype: List of AWS region names\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    result = handle.aws_cli_command(\\\"aws ec2 describe-regions --all-regions --query 'Regions[].{Name:RegionName}' --output text\\\")\\n\",\n    \"    if result is None or result.returncode != 0:\\n\",\n    \"        print(\\\"Error while executing command : {}\\\".format(result))\\n\",\n    \"        return []\\n\",\n    \"    result_op = list(result.stdout.split(\\\"\\\\n\\\"))\\n\",\n    \"    list_region = [x for x in result_op if x != '']\\n\",\n    \"    return list_region\\n\",\n    \"\\n\",\n    \"\\n\",\n  
  \"task = Task(Workflow())\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not region\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"region\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_all_regions, lego_printer=aws_list_all_regions_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"38f0ef87-76cb-4505-b012-5681855c9920\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-AWS-Unencrypted-S3-Buckets\\\">Filter AWS Unencrypted S3 Buckets<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Filter-Unattached-EBS-Volumes\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Filter Unencrypted S3 Buckets</strong> action. This action filters all the S3 buckets from the given region and returns a list of those S3 buckets without encryption. 
It will execute if the bucket_name parameter is not given.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>unencrypted_buckets</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"14360884-0e4a-4b33-8e08-f0f5c3cf7ad5\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"2fa5c0d3a9ed5951fbf2a1390610941af8e145521c244fa07b597d6ca6665a43\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Filter AWS Unencrypted S3 Buckets\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-03T04:44:13.354Z\"\n    },\n    \"id\": 235,\n    \"index\": 235,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"title\": \"aws_filter_unencrypted_s3_buckets\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"region\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    
\"name\": \"Filter AWS Unencrypted S3 Buckets\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"unencrypted_buckets\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not bucket_name\",\n    \"tags\": [\n     \"aws_filter_unencrypted_s3_buckets\"\n    ],\n    \"title\": \"Filter AWS Unencrypted S3 Buckets\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional\\n\",\n    \"from unskript.legos.utils import CheckOutput, CheckOutputStatus\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unencrypted_s3_buckets_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    if isinstance(output, CheckOutput):\\n\",\n    \"        print(output.json())\\n\",\n    \"    else:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unencrypted_s3_buckets(handle, region: str = \\\"\\\") -> CheckOutput:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_unencrypted_s3_buckets List of unencrypted S3 bucket name .\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Filter S3 buckets.\\n\",\n    \"\\n\",\n    \"        :rtype: CheckOutput with status result and 
list of unencrypted S3 bucket name.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            s3Client = handle.client('s3', region_name=reg)\\n\",\n    \"            response = s3Client.list_buckets()\\n\",\n    \"            # List unencrypted S3 buckets\\n\",\n    \"            for bucket in response['Buckets']:\\n\",\n    \"                try:\\n\",\n    \"                    response = s3Client.get_bucket_encryption(Bucket=bucket['Name'])\\n\",\n    \"                    encRules = response['ServerSideEncryptionConfiguration']['Rules']\\n\",\n    \"                except ClientError as e:\\n\",\n    \"                    bucket_dict = {}\\n\",\n    \"                    bucket_dict[\\\"region\\\"] = reg\\n\",\n    \"                    bucket_dict[\\\"bucket\\\"] = bucket['Name']\\n\",\n    \"                    result.append(bucket_dict)\\n\",\n    \"        except Exception as error:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return CheckOutput(status=CheckOutputStatus.FAILED,\\n\",\n    \"                   objects=result,\\n\",\n    \"                   error=str(\\\"\\\"))\\n\",\n    \"    else:\\n\",\n    \"        return CheckOutput(status=CheckOutputStatus.SUCCESS,\\n\",\n    \"                   objects=result,\\n\",\n    \"                   error=str(\\\"\\\"))\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(outputName=\\\"unencrypted_buckets\\\")\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": 
true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"region\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not bucket_name\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_unencrypted_s3_buckets, lego_printer=aws_filter_unencrypted_s3_buckets_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"f2ed3b50-50f4-4983-b409-690aecf27b1c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Unencrypted-S3-Buckets-Output\\\">Modify Unencrypted S3 Buckets Output</h3>\\n\",\n    \"<p>In this action, we modify the output from step 1 and return a list of dictionary items for the unencrypted S3 buckets.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>bucket_list</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"47117b25-2533-4021-b4f3-329b7fee165e\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-10T10:31:04.455Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Step-1 Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Step-1 Output\",\n    \"trusted\": true\n   },\n   \"outputs\": 
[],\n   \"source\": [\n    \"bucket_list = []\\n\",\n    \"\\n\",\n    \"try:\\n\",\n    \"    for k, v in unencrypted_buckets.items():\\n\",\n    \"        if v.status == CheckOutputStatus.FAILED:\\n\",\n    \"            for bucket in v.objects:\\n\",\n    \"                bucket_list.append(bucket)\\n\",\n    \"except Exception as e:\\n\",\n    \"    for i in bucket_name:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"region\\\"] = region[0]\\n\",\n    \"        data_dict[\\\"bucket\\\"] = i\\n\",\n    \"        bucket_list.append(data_dict)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"0a1ba685-0340-4af8-9bc7-32e9beff2837\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Apply-AWS-Default-Encryption-for-S3-Bucket\\\">Apply AWS Default Encryption for S3 Bucket</h3>\\n\",\n    \"<p>Here we will use the unSkript <strong>Apply AWS Default Encryption for the S3 Buckets</strong> action. 
In this action, we will apply the default encryption configuration to the unencrypted S3 buckets by passing the list of unencrypted S3 buckets from step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>name</code>, <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>apply_output</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"id\": \"80b2e9a4-023a-4235-99ba-dce06988eb6e\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"eb57da3b21aec38d005bf0355a48ba53937c7ac62f98e9c968c9501412d72008\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Apply a New AWS Policy for S3 Bucket\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-08-26T20:00:28.237Z\"\n    },\n    \"id\": 135,\n    \"index\": 135,\n    \"inputData\": [\n     {\n      \"name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"bucket\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"name\": {\n        \"default\": \"\",\n        \"description\": \"Name of the bucket.\",\n        \"title\": \"Bucket name\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS region of the bucket.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      
},\n      \"required\": [\n       \"name\",\n       \"policy\",\n       \"region\"\n      ],\n      \"title\": \"aws_put_bucket_policy\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"name\": \"bucket\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"bucket_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Apply AWS Default Encryption for S3 Bucket\",\n    \"nouns\": [\n     \"aws\",\n     \"policy\",\n     \"bucket\"\n    ],\n    \"orderProperties\": [\n     \"name\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"apply_output\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(bucket_list) > 0\",\n    \"tags\": [\n     \"aws_put_bucket_policy\"\n    ],\n    \"title\": \"Apply AWS Default Encryption for S3 Bucket\",\n    \"verbs\": [\n     \"apply\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"import json\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_put_bucket_encryption_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_put_bucket_encryption(handle, name: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_put_bucket_encryption Puts default encryption configuration for bucket.\\n\",\n    
\"\\n\",\n    \"          :type handle: object\\n\",\n    \"          :param handle: Object returned by the task.validate(...) method.\\n\",\n    \"\\n\",\n    \"          :type name: string\\n\",\n    \"          :param name: Name of the S3 bucket.\\n\",\n    \"\\n\",\n    \"          :type region: string\\n\",\n    \"          :param region: Location of the bucket.\\n\",\n    \"\\n\",\n    \"          :rtype: Dict with the response info.\\n\",\n    \"      \\\"\\\"\\\"\\n\",\n    \"    s3Client = handle.client('s3',\\n\",\n    \"                             region_name=region)\\n\",\n    \"\\n\",\n    \"    # Setup default encryption configuration \\n\",\n    \"    response = s3Client.put_bucket_encryption(\\n\",\n    \"        Bucket=name,\\n\",\n    \"        ServerSideEncryptionConfiguration={\\n\",\n    \"            \\\"Rules\\\": [\\n\",\n    \"                {\\\"ApplyServerSideEncryptionByDefault\\\": {\\\"SSEAlgorithm\\\": \\\"AES256\\\"}}\\n\",\n    \"            ]},\\n\",\n    \"        )\\n\",\n    \"    return response\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"name\\\": \\\"iter.get(\\\\\\\\\\\"bucket\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"bucket_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"name\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(bucket_list) > 0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"apply_output\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n 
   \"    task.execute(aws_put_bucket_encryption, lego_printer=aws_put_bucket_encryption_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"dea3003f-03e9-4dff-86fb-b4073ee4ef79\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS legos to filter all unencrypted S3 buckets and apply default encryption configuration to the buckets. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Encrypt unencrypted S3 buckets\",\n   \"parameters\": [\n    \"bucket_name\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"base\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.12\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"bucket_name\": {\n     \"description\": \"list of S3 bucket Name\",\n     \"title\": \"bucket_name\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region e.g.[\\\"us-west-2\\\"]\",\n     \"title\": \"region\",\n     \"type\": \"array\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null,\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"5e269198fab4eb2ea6fe7c886c38b87b334869f0501ab924e1d16d60aeba5d23\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/AWS_encrypt_unencrypted_S3_buckets.json",
    "content": "{\n  \"name\": \"Encrypt unencrypted S3 buckets\",\n  \"description\": \"This runbook finds all unencrypted S3 buckets in the given regions and applies default server-side encryption to them.\",\n  \"uuid\": \"50d9c6abd7dce3ff9183d4135353e82859bc5a9639455b35bd229331be6048df\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\",\"CATEGORY_TYPE_SECOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/Add_new_IAM_user.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8a97b231-94d6-4e10-a24c-6eac9a4572e4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Add New IAM User\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Add New IAM User\"\n   },\n   \"source\": [\n    \"<center>\\n\",\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <h3> Objective</h3> <br>\\n\",\n    \"    <b>To add a new IAM user using unSkript actions.</b>\\n\",\n    \"</div>\\n\",\n    \"</center>\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Add New IAM User</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"   1)[Create IAM User](#1)</br>\\n\",\n    \"   2)[Create login profile](#2)</br>\\n\",\n    \"   3)[Check the caller identity](#3)</br>\\n\",\n    \"   4)[Post slack message](#4)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"07e691b1-dd70-4c51-b871-47f608ecd89b\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-01-06T13:27:50.928Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Gathering Information\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Gathering Information\",\n    \"trusted\": true,\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"tag_key = \\\"Name\\\"\\n\",\n    \"tag_value = username\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"6cb8f37f-8bf2-41a0-b1ae-d946038ea808\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": 
false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Creating-an-IAM-User\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Creating an IAM User</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Create New IAM User</strong> action. This action creates an IAM user in AWS and assigns the given tag to the user.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>user_name</code>, <code>tag_key</code>, <code>tag_value</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>user_details</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"9fe78a10-d76f-4961-8e5c-bf381c5b3cc9\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"3f71dd060d5955f5dc9104dbaf418bf957b2222c510cb3afd09ded8e41e433d9\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Create New IAM User\",\n    \"id\": 222,\n    \"index\": 222,\n    \"inputData\": [\n     {\n      \"tag_key\": {\n       \"constant\": false,\n       \"value\": \"tag_key\"\n      },\n      \"tag_value\": {\n       \"constant\": false,\n       \"value\": \"tag_value\"\n      },\n      \"user_name\": {\n       \"constant\": false,\n       \"value\": \"username\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"tag_key\": {\n        \"description\": \"Tag Key to new IAM User.\",\n        \"title\": \"Tag Key\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n   
     \"description\": \"Tag Value to new IAM User.\",\n        \"title\": \"Tag Value\",\n        \"type\": \"string\"\n       },\n       \"user_name\": {\n        \"description\": \"IAM User Name.\",\n        \"title\": \"User Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"user_name\",\n       \"tag_key\",\n       \"tag_value\"\n      ],\n      \"title\": \"aws_create_iam_user\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Create New IAM User\",\n    \"nouns\": [\n     \"aws\",\n     \"IAM\",\n     \"user\"\n    ],\n    \"orderProperties\": [\n     \"user_name\",\n     \"tag_key\",\n     \"tag_value\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"user_details\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_create_iam_user\"\n    ],\n    \"title\": \"Create New IAM User\",\n    \"verbs\": [\n     \"create\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_iam_user_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_iam_user(handle, user_name: str, tag_key: str, tag_value: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_create_iam_user Creates new IAM User.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object 
returned by the task.validate(...) method\n",\n    "\n",\n    "        :type user_name: string\n",\n    "        :param user_name: Name of new IAM User.\n",\n    "\n",\n    "        :type tag_key: string\n",\n    "        :param tag_key: Tag Key assigned to the new User.\n",\n    "\n",\n    "        :type tag_value: string\n",\n    "        :param tag_value: Tag Value assigned to the new User.\n",\n    "\n",\n    "        :rtype: Dict with the created IAM User details.\n",\n    "    \"\"\"\n",\n    "\n",\n    "    iamClient = handle.client(\"iam\")\n",\n    "    result = {}\n",\n    "    try:\n",\n    "        response = iamClient.create_user(\n",\n    "            UserName=user_name,\n",\n    "            Tags=[\n",\n    "                {\n",\n    "                    'Key': tag_key,\n",\n    "                    'Value': tag_value\n",\n    "                }])\n",\n    "        result = response\n",\n    "    except ClientError as error:\n",\n    "        # Return the error response as-is (e.g. EntityAlreadyExists)\n",\n    "        result = error.response\n",\n    "\n",\n    "    return result\n",\n    "\n",\n    "\n",\n    "task = Task(Workflow())\n",\n    "task.configure(printOutput=True)\n",\n    "task.configure(inputParamsJson='''{\n",\n    "    \"tag_key\": \"tag_key\",\n",\n    "    \"tag_value\": \"tag_value\",\n",\n    "    \"user_name\": \"username\"\n",\n    "    }''')\n",\n    "task.configure(outputName=\"user_details\")\n",\n    "\n",\n    "(err, hdl, args) = task.validate(vars=vars())\n",\n    "if err is None:\n",\n    "    task.execute(aws_create_iam_user, lego_printer=aws_create_iam_user_printer, hdl=hdl, args=args)"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": 
\"c174d638-f107-450f-ab2d-d28cf097a722\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-login-Profile\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Create login Profile</h3>\\n\",\n    \"<p>This action only executes when step 1 successfully creates a user. In this action, we will pass the newly created username and temporary password, which will create an user profile for the user in AWS.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>user_name</code>, <code>password</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>profile_details</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"35887cbc-bdb1-4f3b-8f59-a2bb78e9b605\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"7b52e5fdfddd113a1c489d95d5fd8c9a98043c6ea721588531db6a5261434975\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Create Login profile for IAM User\",\n    \"id\": 166,\n    \"index\": 166,\n    \"inputData\": [\n     {\n      \"password\": {\n       \"constant\": false,\n       \"value\": \"password\"\n      },\n      \"user_name\": {\n       \"constant\": false,\n       \"value\": \"username\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"password\": {\n        \"description\": \"Password for IAM User.\",\n        
\"title\": \"Password\",\n        \"type\": \"string\"\n       },\n       \"user_name\": {\n        \"description\": \"IAM User Name.\",\n        \"title\": \"User Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"user_name\",\n       \"password\"\n      ],\n      \"title\": \"aws_create_user_login_profile\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Create Login profile for IAM User\",\n    \"nouns\": [\n     \"aws\",\n     \"IAM\",\n     \"login\"\n    ],\n    \"orderProperties\": [\n     \"user_name\",\n     \"password\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"profile_details\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"'User' in UserInfo\",\n    \"tags\": [\n     \"aws_create_user_login_profile\"\n    ],\n    \"verbs\": [\n     \"create\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_user_login_profile_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_user_login_profile(handle, user_name: str, password: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_create_user_login_profile Create login profile for IAM User.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the 
task.validate(...) method.\n",\n    "\n",\n    "        :type user_name: string\n",\n    "        :param user_name: Name of new IAM User.\n",\n    "\n",\n    "        :type password: string\n",\n    "        :param password: Password for the new login profile.\n",\n    "\n",\n    "        :rtype: Dict with the Profile Creation status info.\n",\n    "    \"\"\"\n",\n    "\n",\n    "    iamClient = handle.client(\"iam\")\n",\n    "    result = {}\n",\n    "    try:\n",\n    "        response = iamClient.create_login_profile(\n",\n    "            UserName=user_name,\n",\n    "            Password=password,\n",\n    "            PasswordResetRequired=True)\n",\n    "\n",\n    "        result = response\n",\n    "    except ClientError as error:\n",\n    "        # Return the error response as-is (e.g. EntityAlreadyExists)\n",\n    "        result = error.response\n",\n    "\n",\n    "    return result\n",\n    "\n",\n    "\n",\n    "task = Task(Workflow())\n",\n    "task.configure(printOutput=True)\n",\n    "task.configure(inputParamsJson='''{\n",\n    "    \"password\": \"password\",\n",\n    "    \"user_name\": \"username\"\n",\n    "    }''')\n",\n    "task.configure(conditionsJson='''{\n",\n    "    \"condition_enabled\": true,\n",\n    "    \"condition_cfg\": \"'User' in UserInfo\",\n",\n    "    \"condition_result\": true\n",\n    "    }''')\n",\n    "task.configure(outputName=\"profile_details\")\n",\n    "\n",\n    "(err, hdl, args) = task.validate(vars=vars())\n",\n    "if err is None:\n",\n    "    task.execute(aws_create_user_login_profile, lego_printer=aws_create_user_login_profile_printer, hdl=hdl, args=args)"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"29511895-d1cc-4a01-9990-8928642b5006\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-3\",\n    
\"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Check-Caller-Identity\\\"><a id=\\\"3\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Check Caller Identity</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Get Caller Identity Action</strong> action. These Action does not take any inputs. shows the caller's identity for the current user.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>caller_details</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"dd1e1542-ddd7-4b86-86a2-17e999458fbd\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"afacde59-a401-4a8b-901d-46c4b3970b78\",\n    \"createTime\": \"2022-07-27T16:51:48Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"v0.0.0\",\n    \"description\": \"Test\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-02T16:44:27.574Z\"\n    },\n    \"id\": 100001,\n    \"index\": 100001,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"required\": [\n       \"instance_ids\",\n       \"region\"\n      ],\n      \"title\": \"aws_restart_ec2_instances_test\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get Caller Identity \",\n    \"nouns\": [],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"caller_details\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"title\": \"Get Caller Identity \",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n  
 \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_caller_identity_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_caller_identity(handle) -> Dict:\\n\",\n    \"    ec2Client = handle.client('sts')\\n\",\n    \"    response = ec2Client.get_caller_identity()\\n\",\n    \"\\n\",\n    \"    return response\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(outputName=\\\"caller_details\\\")\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_caller_identity, lego_printer=aws_get_caller_identity_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"d1f05583-fa8c-4f8c-a357-3f6154df4620\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-4\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-4\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Post-Slack-Message\\\"><a id=\\\"4\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Post Slack Message</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Post Slack Message</strong> action. 
This action sends a message to the Slack channel with the newly created username.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>channel</code>, <code>message</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>send_status</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"8cacd129-1fed-4c9e-9f2f-70da41c43c88\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-02T16:45:12.785Z\"\n    },\n    \"id\": 62,\n    \"index\": 62,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"Channel_Name\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"\\\"New IAM user {}\\\".format(user_name)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"default\": \"\",\n        \"description\": \"Name of the slack channel where the message is to be posted\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"default\": \"\",\n        \"description\": \"Message to be sent\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": 
\"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [\n     \"slack\",\n     \"message\"\n    ],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"send_status\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"'User' in UserInfo and not channel\",\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"verbs\": [\n     \"post\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message_printer(output):\\n\",\n    \"    if output is not None:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"    else:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> str:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return f\\\"Successfuly Sent Message on Channel: #{channel}\\\"\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack 
channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        if e.response['error'] == 'channel_not_found':\\n\",\n    \"            raise Exception('Channel Not Found')\\n\",\n    \"        elif e.response['error'] == 'duplicate_channel_not_found':\\n\",\n    \"            raise Exception('Channel associated with the message_id not valid')\\n\",\n    \"        elif e.response['error'] == 'not_in_channel':\\n\",\n    \"            raise Exception('Cannot post message to channel user is not in')\\n\",\n    \"        elif e.response['error'] == 'is_archived':\\n\",\n    \"            raise Exception('Channel has been archived')\\n\",\n    \"        elif e.response['error'] == 'msg_too_long':\\n\",\n    \"            raise Exception('Message text is too long')\\n\",\n    \"        elif e.response['error'] == 'no_text':\\n\",\n    \"            raise Exception('Message text was not provided')\\n\",\n    \"        elif e.response['error'] == 'restricted_action':\\n\",\n    \"            raise Exception('Workspace preference prevents user from posting')\\n\",\n    \"        elif e.response['error'] == 'restricted_action_read_only_channel':\\n\",\n    \"            raise Exception('Cannot Post message, read-only channel')\\n\",\n    \"        elif e.response['error'] == 'team_access_not_granted':\\n\",\n    \"            raise Exception('The token used is not granted access to the workspace')\\n\",\n    \"        elif e.response['error'] == 'not_authed':\\n\",\n    \"            raise Exception('No Authentication token provided')\\n\",\n    \"        elif e.response['error'] == 'invalid_auth':\\n\",\n    \"            raise Exception('Some aspect of Authentication cannot be validated. 
Request denied')\\n\",\n    \"        elif e.response['error'] == 'access_denied':\\n\",\n    \"            raise Exception('Access to a resource specified in the request denied')\\n\",\n    \"        elif e.response['error'] == 'account_inactive':\\n\",\n    \"            raise Exception('Authentication token is for a deleted user')\\n\",\n    \"        elif e.response['error'] == 'token_revoked':\\n\",\n    \"            raise Exception('Authentication token for a deleted user has been revoked')\\n\",\n    \"        elif e.response['error'] == 'no_permission':\\n\",\n    \"            raise Exception('The workspace token used does not have the necessary permission to send message')\\n\",\n    \"        elif e.response['error'] == 'ratelimited':\\n\",\n    \"            raise Exception('The request has been ratelimited. Retry sending message later')\\n\",\n    \"        elif e.response['error'] == 'service_unavailable':\\n\",\n    \"            raise Exception('The service is temporarily unavailable')\\n\",\n    \"        elif e.response['error'] == 'fatal_error':\\n\",\n    \"            raise Exception('The server encountered a catastrophic error while sending message')\\n\",\n    \"        elif e.response['error'] == 'internal_error':\\n\",\n    \"            raise Exception('The server could not complete the operation, likely due to a transient issue')\\n\",\n    \"        elif e.response['error'] == 'request_timeout':\\n\",\n    \"            raise Exception('Sending message error via POST: either message was missing or truncated')\\n\",\n    \"        else:\\n\",\n    \"            raise Exception(f'Failed Sending Message to slack channel {channel} Error: {e.response[\\\"error\\\"]}')\\n\",\n    \"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return f\\\"Unable to send 
message on {channel}\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"Channel_Name\\\",\\n\",\n    \"    \\\"message\\\": \\\"\\\\\\\\\\\"New IAM user {}\\\\\\\\\\\".format(user_name)\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"'User' in UserInfo and not channel\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"send_status\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(slack_post_message, lego_printer=slack_post_message_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"e9df5398-15b1-4279-92b8-d4c62372afed\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"In this Runbook, we demonstrated the use of unSkript's AWS and Slack actions to create a new IAM user, create a login profile, and show the caller identity of the user. On success, a message about the user creation is posted to the Slack channel. 
To view the full platform capabilities of unSkript please visit https://us.app.unskript.io\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Create a new AWS IAM User\",\n   \"parameters\": [\n    \"channel\",\n    \"password\",\n    \"username\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"channel\": {\n     \"description\": \"Slack Channel Name to send the new User Information. Example random, general\",\n     \"title\": \"channel\",\n     \"type\": \"string\"\n    },\n    \"password\": {\n     \"description\": \"Login profile password for new IAM user.\",\n     \"format\": \"password\",\n     \"title\": \"password\",\n     \"type\": \"string\",\n     \"writeOnly\": true\n    },\n    \"username\": {\n     \"description\": \"Name of the user that needs to be created\",\n     \"title\": \"username\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [\n    \"username\",\n    \"password\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Add_new_IAM_user.json",
    "content": "{\n  \"name\": \"Create a new AWS IAM User\",\n  \"description\": \"AWS has a built-in identity and access management system known as AWS IAM. IAM supports the concepts of users, groups, roles and privileges. An IAM user is an identity that can be created and assigned privileges. This runbook can be used to create an AWS IAM User\",\n  \"uuid\": \"924025582b6c1b3ea3c8c834f1ee430a2df8bd42c5119191cb5c5da3121f1d18\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/Configure_url_endpoint_on_a_cloudwatch_alarm.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"02550ae3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<hr><center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&para;\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective&para;\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Configure url endpoint to the SNS associated with a cloudwatch alarm</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Stop-Untagged-EC2-Instances&para;\\\"><u>Attach a webhook endpoint to AWS Cloudwatch alarm</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;\\\">Steps Overview</h1>\\n\",\n    \"<p>1)&nbsp;<a href=\\\"#1\\\">Attach a webhook endpoint to AWS Cloudwatch alarm</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"943a923f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-all-Untagged-EC2-Instances\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Attach a webhook endpoint to AWS Cloudwatch alarms</h3>\\n\",\n    \"<p>Here we will configure the url endpoint to the SNS associated with a cloudwatch alarm.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>alarm_name, region, url</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This 
action captures the following output: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"60e41cc8-b61f-4104-a41c-f084bce38740\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"e591113f7afc699ee564d67ef912ea2d689acc91d7640a2a05e68c039153bd33\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Attach a webhook endpoint to one of the SNS attached to the AWS Cloudwatch alarm.\",\n    \"id\": 213,\n    \"index\": 213,\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"alarm_name\": {\n        \"description\": \"Cloudwatch alarm name.\",\n        \"title\": \"Alarm name\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the cloudwatch.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"url\": {\n        \"description\": \"URL where the alarm notification needs to be sent. 
URL should start with http or https.\",\n        \"title\": \"URL\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"alarm_name\",\n       \"region\",\n       \"url\"\n      ],\n      \"title\": \"aws_cloudwatch_attach_webhook_notification_to_alarm\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Attach a webhook endpoint to AWS Cloudwatch alarm\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"alarm_name\",\n     \"region\",\n     \"url\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_cloudwatch_attach_webhook_notification_to_alarm\"\n    ],\n    \"title\": \"Attach a webhook endpoint to AWS Cloudwatch alarm\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.legos.aws.aws_get_handle.aws_get_handle import Session\\n\",\n    \"from urllib.parse import urlparse\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_cloudwatch_attach_webhook_notification_to_alarm_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint({\\\"Subscription ARN\\\" : output})\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_cloudwatch_attach_webhook_notification_to_alarm(\\n\",\n    \"    hdl: Session,\\n\",\n    \"    alarm_name: str,\\n\",\n    \"    region: str,\\n\",\n    \"    url: str\\n\",\n    \") -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_cloudwatch_attach_webhook_notification_to_alarm returns 
subscriptionArn\\n\",\n    \"\\n\",\n    \"        :type alarm_name: string\\n\",\n    \"        :param alarm_name: Cloudwatch alarm name.\\n\",\n    \"\\n\",\n    \"        :type url: string\\n\",\n    \"        :param url: URL where the alarm notification needs to be sent.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region of the cloudwatch.\\n\",\n    \"\\n\",\n    \"        :rtype: Returns subscriptionArn\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    cloudwatchClient = hdl.client(\\\"cloudwatch\\\", region_name=region)\\n\",\n    \"\\n\",\n    \"    # Get the configured SNS(es) to this alarm.\\n\",\n    \"    alarmDetail = cloudwatchClient.describe_alarms(\\n\",\n    \"        AlarmNames=[alarm_name]\\n\",\n    \"    )\\n\",\n    \"    if alarmDetail is None:\\n\",\n    \"        return f'Alarm {alarm_name} not found in AWS region {region}'\\n\",\n    \"    # Need to get the AlarmActions from either composite or metric field.\\n\",\n    \"    if len(alarmDetail['CompositeAlarms']) > 0:\\n\",\n    \"        snses = alarmDetail['CompositeAlarms'][0]['AlarmActions']\\n\",\n    \"    else:\\n\",\n    \"        snses = alarmDetail['MetricAlarms'][0]['AlarmActions']\\n\",\n    \"\\n\",\n    \"    #Pick any sns to configure the url endpoint.\\n\",\n    \"    if len(snses) == 0:\\n\",\n    \"        return f'No SNS configured for alarm {alarm_name}'\\n\",\n    \"\\n\",\n    \"    snsArn = snses[0]\\n\",\n    \"    print(f'Configuring url endpoint on SNS {snsArn}')\\n\",\n    \"\\n\",\n    \"    snsClient = hdl.client('sns', region_name=region)\\n\",\n    \"    # Figure out the protocol from the url\\n\",\n    \"    try:\\n\",\n    \"        parsedURL = urlparse(url)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(f'Invalid URL {url}, {e}')\\n\",\n    \"        raise e\\n\",\n    \"\\n\",\n    \"    if parsedURL.scheme != 'http' and parsedURL.scheme != 'https':\\n\",\n    \"        return f'Invalid 
URL {url}'\\n\",\n    \"\\n\",\n    \"    protocol = parsedURL.scheme\\n\",\n    \"    try:\\n\",\n    \"       response = snsClient.subscribe(\\n\",\n    \"            TopicArn=snsArn,\\n\",\n    \"            Protocol=protocol,\\n\",\n    \"            Endpoint=url,\\n\",\n    \"            ReturnSubscriptionArn=True)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(f'Subscribe to SNS topic arn {snsArn} failed, {e}')\\n\",\n    \"        raise e\\n\",\n    \"    subscriptionArn = response['SubscriptionArn']\\n\",\n    \"    print(f'URL {url} subscribed to SNS {snsArn}, subscription ARN {subscriptionArn}')\\n\",\n    \"    return subscriptionArn\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_cloudwatch_attach_webhook_notification_to_alarm, lego_printer=aws_cloudwatch_attach_webhook_notification_to_alarm_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"93fbb5a1\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS legos to configure the url endpoint to the SNS associated with a cloudwatch alarm. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Configure URL endpoint on a AWS CloudWatch alarm\",\n   \"parameters\": [\n    \"Region\",\n    \"URL\",\n    \"AlarmName\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"AlarmName\": {\n     \"description\": \"Name of the AWS Alarm\",\n     \"title\": \"AlarmName\",\n     \"type\": \"string\"\n    },\n    \"Region\": {\n     \"description\": \"AWS Region of the alarm\",\n     \"title\": \"Region\",\n     \"type\": \"string\"\n    },\n    \"URL\": {\n     \"description\": \"URL to be attached to the SNS \",\n     \"title\": \"URL\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [\n    \"AlarmName\",\n    \"Region\",\n    \"URL\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Configure_url_endpoint_on_a_cloudwatch_alarm.json",
    "content": "{\n  \"name\": \"Configure URL endpoint on a AWS CloudWatch alarm\",\n  \"description\": \"Configures the URL endpoint on the SNS topic associated with a CloudWatch alarm. This allows external functions to be invoked within unSkript when an alert is generated. Handlers can then be attached to alarms to perform data enrichment or remediation.\",\n  \"uuid\": \"196a6ad5bd13b29d0a3acbf3227b134f7a38777cb1051928f0cb456845c643e0\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/Copy_ami_to_all_given_AWS_regions.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"07894245-58bd-475f-b722-8d7513fbe063\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Copy AMI to All Given AWS Regions\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Copy AMI to All Given AWS Regions\"\n   },\n   \"source\": [\n    \"<center>\\n\",\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b> This runbook demonstrates how to copy an AMI from one region to multiple AWS regions using unSkript actions.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"</center>\\n\",\n    \"<center><h2>Copy AMI To All Given AWS Regions</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"   1)[Get All AWS Regions](#1)</br>\\n\",\n    \"   2)[Get AMI Name](#2)</br>\\n\",\n    \"   3)[Copy AMI To Other AWS Regions](#3)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5573f1a7-bf52-4f6c-a458-c7c84092f8b9\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"### <a id=\\\"1\\\"> Get All AWS Regions</a>\\n\",\n    \"Here in step-1 we will collect all the regions from the AWS by using a command `aws ec2 describe-regions --all-regions --query 'Regions[].{Name:RegionName}' --output text`. 
This action only executes when the user doesn't provide the destination regions.\\n\",\n    \"\\n\",\n    \">Input parameters: `aws_command`\\n\",\n    \"\\n\",\n    \">Output variable: `regions`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"5437ce43-a9a6-4f99-b6ce-3c3e585512ff\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"1db371aff42291641eb6ba83d7acc3fe28e2468d83be1552e8258dc878c0f70d\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \" Execute command using AWS CLI\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-06T05:26:25.710Z\"\n    },\n    \"id\": 190,\n    \"index\": 190,\n    \"inputData\": [\n     {\n      \"aws_command\": {\n       \"constant\": false,\n       \"value\": \"\\\"aws ec2 describe-regions --all-regions --query 'Regions[].{Name:RegionName}' --output text\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"aws_command\": {\n        \"description\": \"AWS Command eg \\\"aws ec2 describe-instances\\\"\",\n        \"title\": \"AWS Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"aws_command\"\n      ],\n      \"title\": \"aws_execute_cli_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \" Get All AWS Regions\",\n    \"nouns\": [\n     \"command\",\n     \"aws\"\n    ],\n    \"orderProperties\": [\n     \"aws_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    
\"outputParams\": {\n     \"output_name\": \"regions\",\n     \"output_name_enabled\": true\n    },\n    \"startcondition\": \"not destination_regions\",\n    \"tags\": [\n     \"aws_execute_cli_command\"\n    ],\n    \"title\": \" Get All AWS Regions\",\n    \"verbs\": [\n     \"run\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command(handle, aws_command: str) -> str:\\n\",\n    \"\\n\",\n    \"    result = handle.aws_cli_command(aws_command)\\n\",\n    \"    if result is None or result.returncode != 0:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({aws_command}): {result}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not destination_regions\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"regions\\\")\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"aws_command\\\": \\\"\\\\\\\\\\\"aws ec2 describe-regions --all-regions --query 'Regions[].{Name:RegionName}' --output text\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    
task.execute(aws_execute_cli_command, lego_printer=aws_execute_cli_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"dbe210d2-4ae1-4208-971c-b227f428374c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1A\"\n   },\n   \"source\": [\n    \"### Modify Region Output\\n\",\n    \"In this action, we convert the output from step 1 to a python list, which will be consumed in subsequent cells.\\n\",\n    \"\\n\",\n    \">Output variable: `destination_regions`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"7ad05b8a-17ce-410d-8d21-c7c87834259e\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-06T17:48:02.215Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Region Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Region Output\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"try:\\n\",\n    \"    if regions:\\n\",\n    \"        result_op = list(regions.split(\\\"\\\\n\\\"))\\n\",\n    \"        destination_regions = [x for x in result_op if x != '']\\n\",\n    \"except Exception as e:\\n\",\n    \"    pass\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"7004eefd-6ae7-4b3f-b71a-d9ca92080b4c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"### <a id=\\\"2\\\"> Get AMI Name</a>\\n\",\n    \"Here in step-2 we will findout the name of AMI using the given AMI Id with `aws ec2 describe-images --region \\\"+ source_region +\\\" --image-ids \\\"+ ami_id +\\\" --query 'Images[*].[Name]' 
--output text`.\\n\",\n    \"\\n\",\n    \">Input parameters: `aws_command`\\n\",\n    \"\\n\",\n    \">Output variable: `ami_name`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"4decf7c0-8770-4e21-8377-422e21344d65\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"1db371aff42291641eb6ba83d7acc3fe28e2468d83be1552e8258dc878c0f70d\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \" Execute command using AWS CLI\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-06T05:26:55.776Z\"\n    },\n    \"id\": 190,\n    \"index\": 190,\n    \"inputData\": [\n     {\n      \"aws_command\": {\n       \"constant\": false,\n       \"value\": \"\\\"aws ec2 describe-images --region \\\"+ source_region +\\\" --image-ids \\\"+ ami_id +\\\" --query 'Images[*].[Name]' --output text\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"aws_command\": {\n        \"description\": \"AWS Command eg \\\"aws ec2 describe-instances\\\"\",\n        \"title\": \"AWS Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"aws_command\"\n      ],\n      \"title\": \"aws_execute_cli_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get Name of AMI\",\n    \"nouns\": [\n     \"command\",\n     \"aws\"\n    ],\n    \"orderProperties\": [\n     \"aws_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"ami_name\",\n     
\"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"aws_execute_cli_command\"\n    ],\n    \"title\": \"Get Name of AMI\",\n    \"verbs\": [\n     \"run\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command(handle, aws_command: str) -> str:\\n\",\n    \"\\n\",\n    \"    result = handle.aws_cli_command(aws_command)\\n\",\n    \"    if result is None or result.returncode != 0:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({aws_command}): {result}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"aws_command\\\": \\\"\\\\\\\\\\\"aws ec2 describe-images --region \\\\\\\\\\\"+ source_region +\\\\\\\\\\\" --image-ids \\\\\\\\\\\"+ ami_id +\\\\\\\\\\\" --query 'Images[*].[Name]' --output text\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"ami_name\\\")\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_execute_cli_command, lego_printer=aws_execute_cli_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b52bf456-17b8-40bf-9f4b-30096bad3071\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2A\",\n    
\"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2A\"\n   },\n   \"source\": [\n    \"### Create Batch Command to Copy AMI\\n\",\n    \"In this action, we create the batch command to copy AMI to the given region(s) and pass the output to step 3 as iterator.\\n\",\n    \"\\n\",\n    \">Output variable: `aws_command`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b8963ffb-25b7-4e8b-91c4-94cfaf2d9fb0\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-06T05:31:54.697Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Create Batch Command to Copy AMI\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create Batch Command to Copy AMI\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"aws_command = []\\n\",\n    \"copy_command = \\\"aws ec2 copy-image --source-image-id {AMI_Id} --source-region {SourceRegion} --region {DestinationRegion} --name {Name}\\\"\\n\",\n    \"if destination_regions:\\n\",\n    \"    for region in destination_regions:\\n\",\n    \"        command = copy_command.format(AMI_Id=ami_id, SourceRegion=source_region,\\n\",\n    \"                                     DestinationRegion=region, Name=ami_name.replace('\\\\n',''))\\n\",\n    \"        aws_command.append(command)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"96407399-f877-484b-8fd3-96ed27eea1d4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-3\"\n   },\n   \"source\": [\n    \"### <a id=\\\"3\\\"> Copy AMI To Other AWS Regions</a>\\n\",\n    \"In this action, we will provide all the necessary inputs to copy the AMI from one region to another. 
If the user did not provide the destination region, we use the output from step 1 and copy the AMI to all destination regions by using `aws ec2 copy-image --source-image-id <ami-id> --source-region <source-region> --region <dest-region> --name <ami-name>`.\\n\",\n    \"\\n\",\n    \">Input parameters: `aws_command`\\n\",\n    \"\\n\",\n    \">Output variable: `command_output`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"446d05f7-eded-46ab-8de4-e7a6ccd2dbf9\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"1db371aff42291641eb6ba83d7acc3fe28e2468d83be1552e8258dc878c0f70d\",\n    \"condition_enabled\": false,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \" Execute command using AWS CLI\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-06T05:32:10.076Z\"\n    },\n    \"id\": 190,\n    \"index\": 190,\n    \"inputData\": [\n     {\n      \"aws_command\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"aws_command\": {\n        \"description\": \"AWS Command eg \\\"aws ec2 describe-instances\\\"\",\n        \"title\": \"AWS Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"aws_command\"\n      ],\n      \"title\": \"aws_execute_cli_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"aws_command\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": 
\"aws_command\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \" Copy AMI To Other AWS Regions\",\n    \"nouns\": [\n     \"command\",\n     \"aws\"\n    ],\n    \"orderProperties\": [\n     \"aws_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"command_output\",\n     \"output_name_enabled\": true\n    },\n    \"startcondition\": \"abcd\",\n    \"tags\": [\n     \"aws_execute_cli_command\"\n    ],\n    \"title\": \" Copy AMI To Other AWS Regions\",\n    \"verbs\": [\n     \"run\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_execute_cli_command(handle, aws_command: str) -> str:\\n\",\n    \"\\n\",\n    \"    result = handle.aws_cli_command(aws_command)\\n\",\n    \"    if result is None or result.returncode != 0:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({aws_command}): {result}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"aws_command\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": 
false,\\n\",\n    \"    \\\"iter_list\\\": \\\"aws_command\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"aws_command\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": false,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"abcd\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"command_output\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_execute_cli_command, lego_printer=aws_execute_cli_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c0303e5a-8cb0-4889-a8d0-4c743fef8e17\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"   In this Runbook, we demonstrated the use of unSkript's AWS actions to copy an AMI from one region to multiple AWS regions.\\n\",\n    \"   To view the full platform capabilities of unSkript please visit https://us.app.unskript.io\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Copy AMI to All Given AWS Regions\",\n   \"parameters\": [\n    \"ami_id\",\n    \"destination_regions\",\n    \"source_region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"ami_id\": {\n     \"description\": \"ID of the Amazon Machine Image (AMI), which provides the information required to 
launch an instance.\",\n     \"title\": \"ami_id\",\n     \"type\": \"string\"\n    },\n    \"destination_regions\": {\n     \"description\": \"Regions to which the AMI needs to be copied. If left blank, all available AWS regions are selected automatically. e.g. [\\\"us-west-1\\\",\\\"eu-east-1\\\"]\",\n     \"title\": \"destination_regions\",\n     \"type\": \"array\"\n    },\n    \"source_region\": {\n     \"description\": \"Name of the region from which the AMI needs to be copied\",\n     \"title\": \"source_region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null,\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Copy_ami_to_all_given_AWS_regions.json",
    "content": "{\n  \"name\": \"Copy AMI to All Given AWS Regions\",\n  \"description\": \"This runbook copies an AMI from one region to multiple AWS regions using unSkript legos that run AWS CLI commands. If no destination regions are provided, all available AWS regions are fetched via the AWS CLI.\",\n  \"uuid\": \"5d35a9ac871745c75ebc757d8c64e864fd62dc9eac03e624f0a91e6ebc897368\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/Delete_Unused_AWS_NAT_Gateways.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"82eebdfd-c880-40df-bd6d-5b546c92164b\",\n   \"metadata\": {\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Find and Delete unused NAT Gateways</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-Unused-AWS-Log-Streams\\\"><u>Delete Unuse</u><span style=\\\"text-decoration: underline;\\\">d <strong style=\\\"color: rgb(0, 0, 0); text-decoration: underline;\\\">N</strong><strong style=\\\"color: rgb(0, 0, 0); text-decoration: underline;\\\">AT Gateways</strong></span></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Find unused NAT gateways</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete unused NAT gateways</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"1290c59b-9107-46c0-8f0b-8dce39e91ef9\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    
\"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if nat_gateway_ids and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide a region for the NAT Gateway IDs!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"93889351-e132-4fb5-9e3f-43fbba454161\",\n   \"metadata\": {\n    \"name\": \"Step 1A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-AWS-Regions\\\">List all AWS Regions</h3>\\n\",\n    \"<p>This action fetches all AWS Regions to execute Step 1\\ud83d\\udc47. This action will only execute if no <code>region</code> is provided.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>region</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"id\": \"8a580cb0-7c57-4c8a-af46-f23f607931fa\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionOutputType\": null,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"708ea4af5f8fe7096a15b3a52c4a657606bab9e177386fad7a847341ed607d64\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"credentialsJson\": {},\n    \"description\": \"List all available AWS Regions\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-18T14:15:19.579Z\"\n    },\n    \"id\": 1,\n    \"index\": 1,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"title\": \"aws_list_all_regions\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     
\"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"metadata\": {\n     \"action_bash_command\": false,\n     \"action_categories\": [\n      \"CATEGORY_TYPE_CLOUDOPS\",\n      \"CATEGORY_TYPE_DEVOPS\",\n      \"CATEGORY_TYPE_SRE\",\n      \"CATEGORY_TYPE_AWS\"\n     ],\n     \"action_description\": \"List all available AWS Regions\",\n     \"action_entry_function\": \"aws_list_all_regions\",\n     \"action_is_check\": false,\n     \"action_is_remediation\": false,\n     \"action_needs_credential\": true,\n     \"action_next_hop\": null,\n     \"action_next_hop_parameter_mapping\": null,\n     \"action_nouns\": [\n      \"regions\",\n      \"aws\"\n     ],\n     \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n     \"action_supports_iteration\": true,\n     \"action_supports_poll\": true,\n     \"action_title\": \"AWS List All Regions\",\n     \"action_type\": \"LEGO_TYPE_AWS\",\n     \"action_verbs\": [\n      \"list\"\n     ],\n     \"action_version\": \"1.0.0\"\n    },\n    \"name\": \"AWS List All Regions\",\n    \"orderProperties\": [],\n    \"outputParams\": {\n     \"output_name\": \"region\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not region\",\n    \"tags\": [\n     \"aws_list_all_regions\"\n    ],\n    \"uuid\": \"708ea4af5f8fe7096a15b3a52c4a657606bab9e177386fad7a847341ed607d64\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict, List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_all_regions_printer(output):\\n\",\n    \"    if output is 
None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_all_regions(handle) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_list_all_regions lists all the AWS regions\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :rtype: Result List of result\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    result = handle.aws_cli_command(\\\"aws ec2 --region us-west-2 describe-regions --all-regions --query 'Regions[].{Name:RegionName}' --output text\\\")\\n\",\n    \"    if result is None or result.returncode != 0:\\n\",\n    \"        print(\\\"Error while executing command : {}\\\".format(result))\\n\",\n    \"        return str()\\n\",\n    \"    result_op = list(result.stdout.split(\\\"\\\\n\\\"))\\n\",\n    \"    list_region = [x for x in result_op if x != '']\\n\",\n    \"    return list_region\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not region\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"region\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_all_regions, lego_printer=aws_list_all_regions_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2020e8d0-ba3b-4c71-84b2-10917465a27e\",\n   \"metadata\": {\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-unused-log-streams\\\"><a id=\\\"1\\\" target=\\\"_self\\\" 
rel=\\\"nofollow\\\"></a>Filter unused NAT Gateways</h3>\\n\",\n    \"<p>Using unSkript's Filter AWS Find Unused NAT Gateways action, we will find unused gateways given a threshold number of days from the metric <span style=\\\"color: rgb(53, 152, 219);\\\">BytesIn</span>. If the metric gives an empty result, we consider the NAT Gateway to be unused in the last <em>x days.</em></p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>region, threhold_days</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>unused_gateways</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"73c6c8db-6fca-4f7b-9fa8-a2f57da9b2c1\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionOutputType\": null,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"0f0c137beaf6a9246508393d1e868cea529d30a88631cd0f321799acbfbd47bb\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"This action to get all of the Nat gateways that have zero traffic over those\",\n    \"id\": 4,\n    \"index\": 4,\n    \"inputData\": [\n     {\n      \"number_of_days\": {\n       \"constant\": false,\n       \"value\": \"int(threshold_days)\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"number_of_days\": {\n        \"description\": \"Number of days to check the Datapoints.\",\n        \"title\": \"Number of Days\",\n        \"type\": \"integer\"\n       },\n       
\"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"aws_filter_unused_nat_gateway\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"region\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"metadata\": {\n     \"action_bash_command\": false,\n     \"action_categories\": [\n      \"CATEGORY_TYPE_CLOUDOPS\",\n      \"CATEGORY_TYPE_SECOPS\",\n      \"CATEGORY_TYPE_AWS\",\n      \"CATEGORY_TYPE_AWS_NAT_GATEWAY\",\n      \"CATEGORY_TYPE_AWS_EC2\"\n     ],\n     \"action_description\": \"This action to get all of the Nat gateways that have zero traffic over those\",\n     \"action_entry_function\": \"aws_filter_unused_nat_gateway\",\n     \"action_is_check\": true,\n     \"action_is_remediation\": false,\n     \"action_needs_credential\": true,\n     \"action_next_hop\": [],\n     \"action_next_hop_parameter_mapping\": {},\n     \"action_nouns\": null,\n     \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n     \"action_supports_iteration\": true,\n     \"action_supports_poll\": true,\n     \"action_title\": \"AWS Find Unused NAT Gateways\",\n     \"action_type\": \"LEGO_TYPE_AWS\",\n     \"action_verbs\": null,\n     \"action_version\": \"1.0.0\"\n    },\n    \"name\": \"AWS Find Unused NAT Gateways\",\n    \"orderProperties\": [\n     \"region\",\n     \"number_of_days\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unused_gateways\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    
\"startcondition\": \"not nat_gateway_ids\",\n    \"tags\": [\n     \"aws_filter_unused_nat_gateway\"\n    ],\n    \"uuid\": \"0f0c137beaf6a9246508393d1e868cea529d30a88631cd0f321799acbfbd47bb\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unused_nat_gateway_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def is_nat_gateway_used(handle, nat_gateway, start_time, end_time,number_of_days):\\n\",\n    \"    datapoints = []\\n\",\n    \"    if nat_gateway['State'] != 'deleted':\\n\",\n    \"        # Get the metrics data for the specified NAT Gateway over the last 7 days\\n\",\n    \"        metrics_data = handle.get_metric_statistics(\\n\",\n    \"            Namespace='AWS/NATGateway',\\n\",\n    \"            MetricName='ActiveConnectionCount',\\n\",\n    \"            Dimensions=[\\n\",\n    \"                {\\n\",\n    \"                    'Name': 'NatGatewayId',\\n\",\n    \"                    'Value': nat_gateway['NatGatewayId']\\n\",\n    \"                },\\n\",\n    \"            ],\\n\",\n    \"            StartTime=start_time,\\n\",\n    \"            EndTime=end_time,\\n\",\n    \"            Period=86400*number_of_days,\\n\",\n    \"            Statistics=['Sum']\\n\",\n    \"        )\\n\",\n    \"        datapoints += metrics_data['Datapoints']\\n\",\n    \"    if len(datapoints) == 0 or 
metrics_data['Datapoints'][0]['Sum']==0:\\n\",\n    \"        return False\\n\",\n    \"    else:\\n\",\n    \"        return True\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unused_nat_gateway(handle, number_of_days: int = 7, region: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_unused_nat_gateway Returns a list of unused NAT gateways.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region to filter NAT Gateways.\\n\",\n    \"\\n\",\n    \"        :type number_of_days: int\\n\",\n    \"        :param number_of_days: Number of days to check the Datapoints.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple of (status, list of unused NAT gateways).\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    end_time = datetime.utcnow()\\n\",\n    \"    start_time = end_time - timedelta(days=number_of_days)\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            ec2Client = handle.client('ec2', region_name=reg)\\n\",\n    \"            cloudwatch = handle.client('cloudwatch', region_name=reg)\\n\",\n    \"            response = ec2Client.describe_nat_gateways()\\n\",\n    \"            for nat_gateway in response['NatGateways']:\\n\",\n    \"                nat_gateway_info = {}\\n\",\n    \"                if not is_nat_gateway_used(cloudwatch, nat_gateway, start_time, end_time,number_of_days):\\n\",\n    \"                    nat_gateway_info[\\\"nat_gateway_id\\\"] = nat_gateway['NatGatewayId']\\n\",\n    \"                    nat_gateway_info[\\\"reg\\\"] = reg\\n\",\n    \"                    result.append(nat_gateway_info)\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    
\"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"number_of_days\\\": \\\"int(threshold_days)\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"region\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not nat_gateway_ids\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"unused_gateways\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_unused_nat_gateway, lego_printer=aws_filter_unused_nat_gateway_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a311041f-620a-4b6b-914f-e52c6c3a71f4\",\n   \"metadata\": {\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-Unused-Log-Streams&para;\\\">Create List of Unused NAT Gateways</h3>\\n\",\n    \"<p>This action filters regions that have no unused gateways and creates a list of those that have them.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_unused_gateways</code></p>\\n\",\n    
\"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b85ce542-bdf0-44d2-9e75-213002d5c036\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Unused NAT Gateways\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Unused NAT Gateways\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_unused_gateways = []\\n\",\n    \"dummy = []\\n\",\n    \"try:\\n\",\n    \"    for reg,res in unused_gateways.items():\\n\",\n    \"        if res[0]==False:\\n\",\n    \"            if len(res[1])!=0:\\n\",\n    \"                dummy = res[1]\\n\",\n    \"                for x in dummy:\\n\",\n    \"                    all_unused_gateways.append(x)\\n\",\n    \"except Exception:\\n\",\n    \"    for nat_id in nat_gateway_ids:\\n\",\n    \"        data_dict = {}\\n\",\n    \"        data_dict[\\\"reg\\\"] = region[0]\\n\",\n    \"        data_dict[\\\"nat_gateway_id\\\"] = nat_id\\n\",\n    \"        all_unused_gateways.append(data_dict)\\n\",\n    \"print(all_unused_gateways)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9fb3704a-9b19-49c4-96ab-a982217bbcd3\",\n   \"metadata\": {\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-unused-log-streams\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete unused NAT Gateways</h3>\\n\",\n    \"<p>This action deleted unused log streams found in Step 1.&nbsp;</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>region, nat_gateway_id</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": 
\"84d27641-52db-4efc-9cb7-e52995729c2f\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionOutputType\": null,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"c24c20b1d1d8a9f31ddbf6f2adf96cbd37df3a0fcf99e4a9a85b1f8b897ad8d4\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"AWS Delete NAT Gateway\",\n    \"id\": 2,\n    \"index\": 2,\n    \"inputData\": [\n     {\n      \"nat_gateway_id\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"nat_gateway_id\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"reg\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"nat_gateway_id\": {\n        \"description\": \"ID of the NAT Gateway.\",\n        \"title\": \"NAT Gateway ID\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the bucket.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"nat_gateway_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_delete_nat_gateway\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"nat_gateway_id\": \"nat_gateway_id\",\n       \"region\": \"reg\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_unused_gateways\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"metadata\": {\n     
\"action_bash_command\": false,\n     \"action_categories\": [\n      \"CATEGORY_TYPE_DEVOPS\",\n      \"CATEGORY_TYPE_SRE\",\n      \"CATEGORY_TYPE_AWS\"\n     ],\n     \"action_description\": \"AWS Delete NAT Gateway\",\n     \"action_entry_function\": \"aws_delete_nat_gateway\",\n     \"action_is_check\": false,\n     \"action_is_remediation\": false,\n     \"action_needs_credential\": true,\n     \"action_next_hop\": null,\n     \"action_next_hop_parameter_mapping\": null,\n     \"action_nouns\": null,\n     \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n     \"action_supports_iteration\": true,\n     \"action_supports_poll\": true,\n     \"action_title\": \"AWS Delete NAT Gateway\",\n     \"action_type\": \"LEGO_TYPE_AWS\",\n     \"action_verbs\": null,\n     \"action_version\": \"1.0.0\"\n    },\n    \"name\": \"AWS Delete NAT Gateway\",\n    \"orderProperties\": [\n     \"nat_gateway_id\",\n     \"region\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_unused_gateways)!=0\",\n    \"tags\": [\n     \"aws_delete_nat_gateway\"\n    ],\n    \"title\": \"AWS Delete NAT Gateway\",\n    \"uuid\": \"c24c20b1d1d8a9f31ddbf6f2adf96cbd37df3a0fcf99e4a9a85b1f8b897ad8d4\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_nat_gateway_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_delete_nat_gateway(handle, nat_gateway_id: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_delete_nat_gateway Returns an dict of NAT gateways 
information.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region to filter instances.\\n\",\n    \"\\n\",\n    \"        :type nat_gateway_id: string\\n\",\n    \"        :param nat_gateway_id: ID of the NAT Gateway.\\n\",\n    \"\\n\",\n    \"        :rtype: dict of NAT gateways information.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    try:\\n\",\n    \"        ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"        response = ec2Client.delete_nat_gateway(NatGatewayId=nat_gateway_id)\\n\",\n    \"        return response\\n\",\n    \"    except Exception as e:\\n\",\n    \"        raise Exception(e)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"nat_gateway_id\\\": \\\"iter.get(\\\\\\\\\\\"nat_gateway_id\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"reg\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_unused_gateways\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"nat_gateway_id\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_unused_gateways)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_delete_nat_gateway, lego_printer=aws_delete_nat_gateway_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9c7430c8-3660-45bd-90ef-9ceab77e3daa\",\n   \"metadata\": {\n    
\"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we were able to filter unused NAT Gateways given a threshold number of days and delete them. This runbook enables us to saves cost as AWS charges us based on the number of hours the gateway was available and the data (GB) it processes. To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Delete Unused AWS NAT Gateways\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"outputParameterSchema\": {\n   \"properties\": {},\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"nat_gateway_ids\": {\n     \"description\": \"List of NAT Gateway ID's. \",\n     \"title\": \"nat_gateway_ids\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS Region(s) to search for unused NAT Gateway. 
Eg: [\\\"us-west-2\\\",\\\"ap-south-1\\\"]\",\n     \"title\": \"region\",\n     \"type\": \"array\"\n    },\n    \"threshold_days\": {\n     \"default\": 7,\n     \"description\": \"Threshold number of days to check if a NAT Gateway was used.\",\n     \"title\": \"threshold_days\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Delete_Unused_AWS_NAT_Gateways.json",
    "content": "{\n    \"name\": \"Delete Unused AWS NAT Gateways\",\n    \"description\": \"This runbook can be used to identify and remove any unused NAT Gateways. This allows us to adhere to best practices and avoid unnecessary costs. NAT gateways are used to connect a private instance with outside networks. When a NAT gateway is provisioned, AWS charges you based on the number of hours it was available and the data (GB) it processes.\",\n    \"uuid\": \"26da5206a0a18b30a83f9a72e0dc61408237920bf84831165974610c79875bfb\",\n    \"icon\": \"CONNECTOR_TYPE_AWS\",\n    \"categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n    \"version\": \"1.0.0\"\n  }\n  \n  "
  },
  {
    "path": "AWS/Detach_Instance_from_ASG.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9a175295-d9f6-47f1-bab9-c4b9d6cdf375\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center>\\n\",\n    \"<h1 id=\\\"\\\"><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"></h1>\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong></h3>\\n\",\n    \"<strong>Detach EC2 Instance from Auto Scaling Group</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Detach-EC2-Instance-from-Auto-Scaling-Group\\\"><strong>Detach EC2 Instance from Auto Scaling Group</strong></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\"><sub>Steps Overview</sub></h1>\\n\",\n    \"<p>1. 
&nbsp;Get Unhealthy instances from ASG</p>\\n\",\n    \"<p>2.&nbsp; AWS Detach Instances From AutoScaling Group</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 11,\n   \"id\": \"d4246eb1-a222-4926-8d78-39ed59991674\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T13:09:35.318Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if instance_id and not region:\\n\",\n    \"    raise SystemExit(\\\"Provide region for the instance!\\\")\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"3125e39b-1f1a-4927-b0ad-8589898dce2e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-AWS-AutoScaling-Group-Instances\\\">Get AWS AutoScaling Group Instances</h3>\\n\",\n    \"<p>Using unSkript's <strong>Get AWS AutoScaling Group Instances</strong> action, we list all the EC2 instances in a given region along with their Auto Scaling Group names. 
This action only executes if the instance_id and region have been given as parameters.</p>\\n\",\n    \"<ul>\\n\",\n    \"<li><strong>Input parameters:</strong>&nbsp; <code>instance_ids, region</code></li>\\n\",\n    \"<li><strong>Output variable:</strong>&nbsp; <code>asg_instance</code></li>\\n\",\n    \"</ul>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"id\": \"425ed4aa-fca2-43e1-a99f-378af3939198\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4baa10996438c3e1acea659c68a4e383d0be4484f8ec6fe2a6d4b883fcb592c3\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Use This Action to Get AWS AutoScaling Group Instances\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T13:17:46.043Z\"\n    },\n    \"id\": 171,\n    \"index\": 171,\n    \"inputData\": [\n     {\n      \"instance_ids\": {\n       \"constant\": false,\n       \"value\": \"instance_id\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_ids\": {\n        \"description\": \"List of instances.\",\n        \"items\": {},\n        \"title\": \"Instance IDs\",\n        \"type\": \"array\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instances.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_ids\",\n       \"region\"\n      ],\n      \"title\": 
\"aws_get_auto_scaling_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"region\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS AutoScaling Group Instances\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"instance_ids\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"asg_instance\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(instance_id)>0\",\n    \"tags\": [\n     \"aws_get_auto_scaling_instances\"\n    ],\n    \"title\": \"Get AWS AutoScaling Group Instances\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_auto_scaling_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(tabulate(output, headers='keys'))\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_auto_scaling_instances(handle, instance_ids: list, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_get_auto_scaling_instances List of Dict with instanceId and attached groups.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        
:type instance_ids: list\\n\",\n    \"        :param instance_ids: List of instances.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region to filter instances.\\n\",\n    \"\\n\",\n    \"        :rtype: List of Dict with instanceId and attached groups.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    ec2Client = handle.client('autoscaling', region_name=region)\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.describe_auto_scaling_instances(InstanceIds=instance_ids)\\n\",\n    \"        for group in response[\\\"AutoScalingInstances\\\"]:\\n\",\n    \"            group_dict = {}\\n\",\n    \"            group_dict[\\\"InstanceId\\\"] = group[\\\"InstanceId\\\"]\\n\",\n    \"            group_dict[\\\"AutoScalingGroupName\\\"] = group[\\\"AutoScalingGroupName\\\"]\\n\",\n    \"            group_dict[\\\"region\\\"] = region\\n\",\n    \"            result.append(group_dict)\\n\",\n    \"    except Exception as error:\\n\",\n    \"        pass\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_ids\\\": \\\"instance_id\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"region\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(instance_id)>0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"asg_instance\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    
\"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_auto_scaling_instances, lego_printer=aws_get_auto_scaling_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"4063787e-593e-4273-8298-795d9bcb218c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Gather Information\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Gather Information\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"AWS-List-All-Regions\\\">AWS List All Regions<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#AWS-List-All-Regions\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this action, we list all the available regions from AWS if the user does not provide a region as a parameter.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>region</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"09ac66fd-9282-4e66-b899-23577859adcb\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"708ea4af5f8fe7096a15b3a52c4a657606bab9e177386fad7a847341ed607d64\",\n    \"condition_enabled\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"List all available AWS Regions\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T13:17:57.248Z\"\n    },\n    \"id\": 215,\n    \"index\": 215,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"title\": \"aws_list_all_regions\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     
\"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS List All Regions\",\n    \"nouns\": [\n     \"regions\",\n     \"aws\"\n    ],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"region\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not region\",\n    \"tags\": [\n     \"aws_list_all_regions\"\n    ],\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict, List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_all_regions_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_all_regions(handle) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_list_all_regions lists all the AWS regions\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :rtype: Result List of result\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    result = handle.aws_cli_command(\\\"aws ec2 describe-regions --all-regions --query 'Regions[].{Name:RegionName}' --output text\\\")\\n\",\n    \"    if result is None or result.returncode != 0:\\n\",\n    \"        print(\\\"Error while executing command : {}\\\".format(result))\\n\",\n    \"        return str()\\n\",\n    \"    result_op = list(result.stdout.split(\\\"\\\\n\\\"))\\n\",\n    \"    list_region = [x for x in result_op if x != '']\\n\",\n    \"    return list_region\\n\",\n    \"\\n\",\n    \"\\n\",\n  
  \"task = Task(Workflow())\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not region\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"region\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_all_regions, lego_printer=aws_list_all_regions_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"561775c0-545a-4ca2-9c79-11b919f7dac0\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 B\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 B\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-Unhealthy-instances-from-ASG\\\">Get Unhealthy instances from ASG</h3>\\n\",\n    \"<p>Here we will use unSkript&nbsp;<strong>Get Unhealthy instances from ASG</strong> action. This action filters all the unhealthy instances from the Auto Scaling Group. 
It will execute if the <code>instance_id</code> parameter is not given.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>unhealthy_instance</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"28d0cedd-44e9-4deb-abc3-5e05442a46a9\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"5de92ab7221455580796b1ebe93c61e3fec51d5dac22e907f96b6e0d7564e0ad\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get Unhealthy instances from Auto Scaling Group\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T13:18:20.322Z\"\n    },\n    \"id\": 172,\n    \"index\": 172,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region of the ASG.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"title\": \"aws_filter_unhealthy_instances_from_asg\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"region\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n  
  \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get Unhealthy instances from ASG\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"unhealthy_instance\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not instance_id\",\n    \"tags\": [\n     \"aws_filter_unhealthy_instances_from_asg\"\n    ],\n    \"title\": \"Get Unhealthy instances from ASG\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.legos.utils import CheckOutput, CheckOutputStatus\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"from unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unhealthy_instances_from_asg_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    if isinstance(output, CheckOutput):\\n\",\n    \"        print(output.json())\\n\",\n    \"    else:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_unhealthy_instances_from_asg(handle, region: str = \\\"\\\") -> CheckOutput:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_unhealthy_instances_from_asg gives unhealthy instances from ASG\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS region.\\n\",\n    \"\\n\",\n    \"        :rtype: CheckOutput with status result and list of unhealthy instances from ASG.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"  
  result = []\\n\",\n    \"    all_regions = [region]\\n\",\n    \"    if not region:\\n\",\n    \"        all_regions = aws_list_all_regions(handle)\\n\",\n    \"\\n\",\n    \"    for reg in all_regions:\\n\",\n    \"        try:\\n\",\n    \"            asg_client = handle.client('autoscaling', region_name=reg)\\n\",\n    \"            response = aws_get_paginator(asg_client, \\\"describe_auto_scaling_instances\\\", \\\"AutoScalingInstances\\\")\\n\",\n    \"\\n\",\n    \"            # filter instances to only include those that are in an \\\"unhealthy\\\" state\\n\",\n    \"            for instance in response:\\n\",\n    \"                data_dict = {}\\n\",\n    \"                if instance['HealthStatus'] == 'Unhealthy':\\n\",\n    \"                    data_dict[\\\"InstanceId\\\"] = instance[\\\"InstanceId\\\"]\\n\",\n    \"                    data_dict[\\\"AutoScalingGroupName\\\"] = instance[\\\"AutoScalingGroupName\\\"]\\n\",\n    \"                    data_dict[\\\"region\\\"] = reg\\n\",\n    \"                    result.append(data_dict)\\n\",\n    \"\\n\",\n    \"        except Exception as e:\\n\",\n    \"            pass\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return CheckOutput(status=CheckOutputStatus.FAILED,\\n\",\n    \"                   objects=result,\\n\",\n    \"                   error=str(\\\"\\\"))\\n\",\n    \"    else:\\n\",\n    \"        return CheckOutput(status=CheckOutputStatus.SUCCESS,\\n\",\n    \"                   objects=result,\\n\",\n    \"                   error=str(\\\"\\\"))\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    
\\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"region\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not instance_id\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"unhealthy_instance\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_unhealthy_instances_from_asg, lego_printer=aws_filter_unhealthy_instances_from_asg_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"32d0f938-ad56-453c-89be-52c139228017\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Output\\\">Modify Output</h3>\\n\",\n    \"<p>In this action, we modify the output from step 1 A and step 1 B to return a list of dictionary items for the unhealthy instances from ASG.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: detach_instance_list</p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 12,\n   \"id\": \"e47022b7-ec19-4149-a7a7-3e2ebde54f87\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T13:23:56.168Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Output\",\n    
\"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"detach_instance_list = []\\n\",\n    \"try:\\n\",\n    \"    for k, v in asg_instance.items():\\n\",\n    \"        for i in v:\\n\",\n    \"            detach_instance_list.append(i)\\n\",\n    \"except Exception as e:\\n\",\n    \"    if unhealthy_instance and not asg_name:\\n\",\n    \"        for k, v in unhealthy_instance.items():\\n\",\n    \"            if v.status == CheckOutputStatus.FAILED:\\n\",\n    \"                for instance in v.objects:\\n\",\n    \"                    detach_instance_list.append(instance)\\n\",\n    \"    else:\\n\",\n    \"        for k, v in unhealthy_instance.items():\\n\",\n    \"            if v.status == CheckOutputStatus.FAILED:\\n\",\n    \"                for instance in v.objects:\\n\",\n    \"                    if asg_name in instance[\\\"AutoScalingGroupName\\\"]:\\n\",\n    \"                        detach_instance_list.append(instance)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"614ed424-9394-449e-9dc6-5547f765470a\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"AWS-Detach-Instances-From-AutoScaling-Group\\\">AWS Detach Instances From AutoScaling Group</h3>\\n\",\n    \"<p>In this action, we detach the AWS unhealthy instances from the Auto Scaling Group which we get from step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>instance_ids, group_name, region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>detach_output</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"id\": \"95603003-ac39-493a-af8a-f1910784a6f2\",\n   \"metadata\": {\n    \"accessType\": 
\"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"8e6e08f606d40e2f4481128d356cc67d30be72349074c513627b3f03a178cf6e\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Use This Action to AWS Detach Instances From AutoScaling Group\",\n    \"id\": 284,\n    \"index\": 284,\n    \"inputData\": [\n     {\n      \"group_name\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"AutoScalingGroupName\\\\\\\\\\\")\\\"\"\n      },\n      \"instance_ids\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"InstanceId\\\\\\\\\\\")\\\"\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"group_name\": {\n        \"description\": \"Name of AutoScaling Group.\",\n        \"title\": \"Group Name\",\n        \"type\": \"string\"\n       },\n       \"instance_ids\": {\n        \"description\": \"List of instances.\",\n        \"title\": \"Instance IDs\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of autoscaling group.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_ids\",\n       \"group_name\",\n       \"region\"\n      ],\n      \"title\": \"aws_detach_autoscaling_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"group_name\": \"AutoScalingGroupName\",\n 
      \"instance_ids\": \"InstanceId\",\n       \"region\": \"region\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"detach_instance_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Detach Instances From AutoScaling Group\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"instance_ids\",\n     \"group_name\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"detach_output\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(detach_instance_list)>0\",\n    \"tags\": [\n     \"aws_detach_autoscaling_instances\"\n    ],\n    \"title\": \"AWS Detach Instances From AutoScaling Group\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_detach_autoscaling_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_detach_autoscaling_instances(\\n\",\n    \"    handle,\\n\",\n    \"    instance_ids: str,\\n\",\n    \"    group_name: str,\\n\",\n    \"    region: str\\n\",\n    \") -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_detach_autoscaling_instances detach instances from autoscaling group.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type instance_ids: string\\n\",\n    \"        
:param instance_ids: Name of instances.\\n\",\n    \"\\n\",\n    \"        :type group_name: string\\n\",\n    \"        :param group_name: Name of AutoScaling Group.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: AWS Region of autoscaling group.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the detach instance info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client(\\\"autoscaling\\\", region_name=region)\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.detach_instances(\\n\",\n    \"            InstanceIds=[instance_ids],\\n\",\n    \"            AutoScalingGroupName=group_name,\\n\",\n    \"            ShouldDecrementDesiredCapacity=True\\n\",\n    \"            )\\n\",\n    \"        result = response\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result[\\\"error\\\"] = error\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"group_name\\\": \\\"iter.get(\\\\\\\\\\\"AutoScalingGroupName\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"instance_ids\\\": \\\"iter.get(\\\\\\\\\\\"InstanceId\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"region\\\": \\\"iter.get(\\\\\\\\\\\"region\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"detach_instance_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"instance_ids\\\",\\\"group_name\\\",\\\"region\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(detach_instance_list)>0\\\",\\n\",\n    \"    \\\"condition_result\\\": 
true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"detach_output\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_detach_autoscaling_instances, lego_printer=aws_detach_autoscaling_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"346d8d07-6708-4663-bf8c-5d17c8b6506f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS actions. This runbook helps to detach the instances from the Auto Scaling Group. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Detach EC2 Instance from ASG\",\n   \"parameters\": [\n    \"asg_name\",\n    \"instance_id\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 839)\",\n   \"name\": \"python_kubernetes\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"asg_name\": {\n     \"description\": \"Auto Scaling Group Name. Note: if ASG name is given no need to give region.\",\n     \"title\": \"asg_name\",\n     \"type\": \"string\"\n    },\n    \"instance_id\": {\n     \"description\": \"Instance Ids that are attached to Auto Scaling Group. 
Note: if instance id is given then the region is mandatory.\",\n     \"title\": \"instance_id\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS region e.g.[\\\"us-west-2\\\"]\",\n     \"title\": \"region\",\n     \"type\": \"array\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Detach_Instance_from_ASG.json",
    "content": "{\n  \"name\": \"Detach EC2 Instance from ASG\",\n  \"description\": \"This runbook can be used to detach an instance from an Auto Scaling Group. You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move it out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\",\n  \"uuid\": \"5ef84b8b1ddc1b41112bc18d14fdda95535f0b271a31232c821f7b56753b77fd\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/Detect_ECS_failed_deployment.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"111d99a7\",\n   \"metadata\": {},\n   \"source\": [\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b> This runbook demonstrates How to detect failed ECS deployment.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Detect Failed ECS Deployment</h2></center>\\n\",\n    \"\\n\",\n    \"## Steps Overview\\n\",\n    \" 1. Filter Out the Failed Deployments for the given cluster\\n\",\n    \" 2. Construct the list of failed Deployments\\n\",\n    \" 3. Post a Slack Message with the list \\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"415ed2a1-9cc4-44a6-b74e-27dc9ee66256\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"32a78f4ec627183ee0b4a1c3737064d6ced94c093070151890c7557750d94fc0\",\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"List of stopped tasks, associated with a deployment, along with their stopped reason\",\n    \"id\": 102,\n    \"index\": 102,\n    \"inputData\": [\n     {\n      \"cluster_name\": {\n       \"constant\": false,\n       \"value\": \"ClusterName\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"service_name\": {\n       \"constant\": false,\n       \"value\": \"ServiceName\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"cluster_name\": {\n        \"description\": \"ECS Cluster 
name\",\n        \"title\": \"Cluster name\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the ECS service.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"service_name\": {\n        \"description\": \"ECS Service name in the specified cluster.\",\n        \"title\": \"Service name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"cluster_name\",\n       \"service_name\",\n       \"region\"\n      ],\n      \"title\": \"aws_ecs_detect_failed_deployment\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"ECS detect failed deployment \",\n    \"nouns\": [\n     \"ecs\",\n     \"failed\",\n     \"deployment\"\n    ],\n    \"orderProperties\": [\n     \"cluster_name\",\n     \"service_name\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_ecs_detect_failed_deployment\"\n    ],\n    \"verbs\": [\n     \"detect\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_ecs_detect_failed_deployment(handle, cluster_name: str, service_name: str, region: str) -> List:\\n\",\n    \"    ecsClient = handle.client('ecs', region_name=region)\\n\",\n    \"    try:\\n\",\n    \"        serviceStatus = ecsClient.describe_services(cluster=cluster_name, services=[service_name])\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(f'Failed to get service status for {service_name}, cluster {cluster_name}, {e}')\\n\",\n    \"        return None\\n\",\n    \"   
 # When the deployment is in progress, there will be 2 deployment entries, one PRIMARY and one ACTIVE. PRIMARY will eventually replace\\n\",\n    \"    # ACTIVE, if its successful.\\n\",\n    \"    deployments = serviceStatus.get('services')[0].get('deployments')\\n\",\n    \"    if deployments is None:\\n\",\n    \"        print(\\\"Empty deployment\\\")\\n\",\n    \"        return None\\n\",\n    \"\\n\",\n    \"    deploymentInProgress = False\\n\",\n    \"    for deployment in deployments:\\n\",\n    \"        if deployment['status'] == \\\"PRIMARY\\\":\\n\",\n    \"            primaryDeploymentID = deployment['id']\\n\",\n    \"        else:\\n\",\n    \"            deploymentInProgress = True\\n\",\n    \"\\n\",\n    \"    if deploymentInProgress is False:\\n\",\n    \"        print(\\\"No deployment in progress\\\")\\n\",\n    \"        return None\\n\",\n    \"\\n\",\n    \"    # Check if there are any stopped tasks because of this deployment\\n\",\n    \"    stoppedTasks = ecsClient.list_tasks(cluster=cluster_name, startedBy=primaryDeploymentID, desiredStatus=\\\"STOPPED\\\").get('taskArns')\\n\",\n    \"    if len(stoppedTasks) == 0:\\n\",\n    \"        print(f'No stopped tasks associated with the deploymentID {primaryDeploymentID}, service {service_name}, cluster {cluster_name}')\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    # Get the reason for the stopped tasks\\n\",\n    \"    taskDetails = ecsClient.describe_tasks(cluster=cluster_name, tasks=stoppedTasks)\\n\",\n    \"    output = []\\n\",\n    \"    for taskDetail in taskDetails.get('tasks'):\\n\",\n    \"        output.append({\\\"TaskARN\\\":taskDetail['taskArn'], \\\"StoppedReason\\\":taskDetail['stoppedReason']})\\n\",\n    \"    return output\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"cluster_name\\\": \\\"ClusterName\\\",\\n\",\n    \"    \\\"region\\\": 
\\\"Region\\\",\\n\",\n    \"    \\\"service_name\\\": \\\"ServiceName\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(aws_ecs_detect_failed_deployment, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"220f3413\",\n   \"metadata\": {},\n   \"source\": [\n    \"## 2 Construct List of Failed Deployments\\n\",\n    \"\\n\",\n    \"Here we gather the output from the previous cell execution and iterate over it to find out which tasks\\n\",\n    \"failed to run and the reason for each failure.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"b98ddacd-413b-443d-ba06-4aef7acf0b5d\",\n   \"metadata\": {\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-05-03T18:41:56.162Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Form slack message\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Form slack message\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from tabulate import tabulate\\n\",\n    \"message = \\\"\\\"\\n\",\n    \"if len(task.output) > 0:\\n\",\n    \"    tasks = []\\n\",\n    \"    for i in task.output:\\n\",\n    \" 
       tasks.append([i.get('TaskARN'), i.get('StoppedReason')])\\n\",\n    \"    message = f'Stopped tasks in cluster {ClusterName}, service {ServiceName} \\\\n {tabulate(tasks, headers=[\\\"TaskARN\\\", \\\"Stopped Reason\\\"], tablefmt=\\\"grid\\\")}'\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"1c5898d5\",\n   \"metadata\": {},\n   \"source\": [\n    \"## 3 Post Slack Message\\n\",\n    \"\\n\",\n    \"We post the failed list of deployments on to the given Slack Channel\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"3583ea2f-8d64-4ae7-8379-ee360f061bc7\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"id\": 44,\n    \"index\": 44,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"Channel\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"message\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of the slack channel where the message to be posted\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message to be sent\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n   
 \"name\": \"Post Slack Message\",\n    \"nouns\": [\n     \"slack\",\n     \"message\"\n    ],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"verbs\": [\n     \"post\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"def legoPrinter(func):\\n\",\n    \"    def Printer(*args, **kwargs):\\n\",\n    \"        output = func(*args, **kwargs)\\n\",\n    \"        if output:\\n\",\n    \"            channel = kwargs[\\\"channel\\\"]\\n\",\n    \"            pp.pprint(print(f\\\"Message sent to Slack channel {channel}\\\"))\\n\",\n    \"        return output\\n\",\n    \"    return Printer\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@legoPrinter\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> bool:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return True\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        return False\\n\",\n    \"    except Exception as e:\\n\",\n    \"        
print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return False\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"Channel\\\",\\n\",\n    \"    \\\"message\\\": \\\"message\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(slack_post_message, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f69039f5\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Conclusion\\n\",\n    \"\\n\",\n    \"In this runbook we saw how easy it is to piece together a Runbook from pre-built and custom Legos that identifies failed deployments and posts them to Slack. 
To learn more about the full capabilities of the unSkript platform, please visit https://us.app.unskript.io \"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Detect ECS failed deployment\",\n   \"parameters\": [\n    \"Channel\",\n    \"ClusterName\",\n    \"Region\",\n    \"ServiceName\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.9.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"Channel\": {\n     \"description\": \"Slack channel name\",\n     \"title\": \"Channel\",\n     \"type\": \"string\"\n    },\n    \"ClusterName\": {\n     \"description\": \"ECS Cluster name\",\n     \"title\": \"ClusterName\",\n     \"type\": \"string\"\n    },\n    \"Region\": {\n     \"description\": \"AWS Region of the ECS cluster\",\n     \"title\": \"Region\",\n     \"type\": \"string\"\n    },\n    \"ServiceName\": {\n     \"description\": \"ECS Service name under the cluster\",\n     \"title\": \"ServiceName\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [\n    \"Channel\",\n    \"ClusterName\",\n    \"Region\",\n    \"ServiceName\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"Channel\": null,\n   \"ClusterName\": null,\n   \"Region\": null,\n   \"ServiceName\": null\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Detect_ECS_failed_deployment.json",
    "content": "{\n  \"name\": \"Detect ECS failed deployment\",\n  \"description\": \"This runbook checks whether a failed deployment is in progress for a service in an ECS cluster. If it finds one, it sends the list of stopped tasks associated with the deployment, along with their stopped reasons, to Slack.\",\n  \"uuid\": \"d5a5b8447076f47e99d7603527a6e82fed42e6ae22bffe8eabf446220765e0bc\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/Enforce_Mandatory_Tags_Across_All_AWS_Resources.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"79251bc7-c6cd-4344-a8d5-754bf62eb17e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Enforce Mandatory Tags Across All AWS Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Enforce Mandatory Tags Across All AWS Resources\"\n   },\n   \"source\": [\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b> This runbook demonstrates How to Enforce Mandatory Tags Across All AWS Resources using unSkript legos.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Enforce Mandatory Tags Across All AWS Resources</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"   1. List all the Untagged Resources ARNs in the given region.\\n\",\n    \"   2. Get tag keys of all Resources.\\n\",\n    \"   3. Attach Mandatory Tags to All the AWS Resources.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a49a1258-79d2-4846-8731-4ed74b36d6bc\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Get Untagged Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Get Untagged Resources\"\n   },\n   \"source\": [\n    \"Here we will use unSkript AWS Get Untagged Resources Lego. This lego take region: str as input. 
This input is used to find all Untagged Resources.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"id\": \"0ec169e9-f3f2-400d-9b58-e4a598769e61\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"aee6cabb55096d5cf6098faa7e4a94135e8f5b0572b36d4b3252d7745fae595b\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Get Untagged Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-20T16:26:29.705Z\"\n    },\n    \"id\": 187,\n    \"index\": 187,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\"\n      ],\n      \"title\": \"aws_get_untagged_resources\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Untagged Resources\",\n    \"nouns\": [\n     \"aws\",\n     \"resources\"\n    ],\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"UntaggedResources\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"aws_get_untagged_resources\"\n    ],\n    \"title\": \"AWS Get Untagged Resources\",\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright 
(c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_untagged_resources_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_untagged_resources(handle, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_get_untagged_resources Returns an List of Untagged Resources.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter resources.\\n\",\n    \"\\n\",\n    \"        :rtype: List of untagged resources.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\\n\",\n    \"    result = []\\n\",\n    \"    try:\\n\",\n    \"        response = aws_get_paginator(ec2Client, \\\"get_resources\\\", \\\"ResourceTagMappingList\\\")\\n\",\n    \"        for resources in response:\\n\",\n    \"            if not resources[\\\"Tags\\\"]:\\n\",\n    \"               result.append(resources[\\\"ResourceARN\\\"])\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result.append({\\\"error\\\":error})\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"UntaggedResources\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = 
task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_untagged_resources, lego_printer=aws_get_untagged_resources_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"be97efa2-dbb5-40b2-8d07-cc000278ba84\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Get Tags Keys Of All Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Get Tags Keys Of All Resources\"\n   },\n   \"source\": [\n    \"Here we will use unSkript Get Tag Keys Of All Resources Lego. This lego take region: str as input. This input is used to find out all Tag Keys of Resources.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"id\": \"363de8c8-6aa8-40f4-8856-a62a2f0a69f5\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"db00e432f32042fe9e14ba89a69a4fb86f88f8554c5d45af4cd287a6e5e01532\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Get Tags of All Resources\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-21T16:57:34.499Z\"\n    },\n    \"id\": 132,\n    \"index\": 132,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"region\"\n      ],\n      \"title\": \"aws_resources_tags\",\n      \"type\": \"object\"\n     
}\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Tag Keys of All Resources\",\n    \"nouns\": [\n     \"aws\",\n     \"resources\"\n    ],\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_resources_tags\"\n    ],\n    \"title\": \"AWS Get Tag Keys of All Resources\",\n    \"trusted\": true,\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_resources_tags_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_resources_tags(handle, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_resources_tags Returns an List of all Resources Tags.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter resources.\\n\",\n    \"\\n\",\n    \"        :rtype: List of all Resources Tags.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\\n\",\n    \"    result = []\\n\",\n    \"    try:\\n\",\n    \"        response = aws_get_paginator(ec2Client, \\\"get_tag_keys\\\", \\\"TagKeys\\\")\\n\",\n    \"        result = response\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result.append({\\\"error\\\":error})\\n\",\n    
\"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_resources_tags, lego_printer=aws_resources_tags_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ce65fdd0-ee64-42d0-90a6-0fe1c0f54608\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Attach Tags to Resources\"\n   },\n   \"source\": [\n    \"Here we will use unSkript AWS Attach Tags to Resources Lego. This lego take handle, resource_arn: list, tag_key: str, tag_value: str, region: str as input. This input is used to attach mandatory tags to all untagged Resources.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ebbc96ac\",\n   \"metadata\": {},\n   \"source\": []\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e7815002-3aaf-4b3b-a3fe-12d1c3b1edba\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"878cb7819ecb4687ecfa8c6143365d10fe6b127adeb4a27fd71d06a3a2243d22\",\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Attach Tags to Resources\",\n    \"id\": 167,\n    \"index\": 167,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      
\"resource_arn\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      },\n      \"tag_key\": {\n       \"constant\": false,\n       \"value\": \"Tag_Key\"\n      },\n      \"tag_value\": {\n       \"constant\": false,\n       \"value\": \"Tag_Value\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"resource_arn\": {\n        \"description\": \"Resource ARNs.\",\n        \"items\": {},\n        \"title\": \"Resource ARN\",\n        \"type\": \"array\"\n       },\n       \"tag_key\": {\n        \"description\": \"Resource Tag Key.\",\n        \"title\": \"Tag Key\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n        \"description\": \"Resource Tag Value.\",\n        \"title\": \"Tag Value\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"resource_arn\",\n       \"tag_key\",\n       \"tag_value\",\n       \"region\"\n      ],\n      \"title\": \"aws_tag_resources\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"resource_arn\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"UntaggedResources\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Attach Tags to Resources\",\n    \"nouns\": [\n     \"aws\",\n     \"resources\"\n    ],\n    \"orderProperties\": [\n     \"resource_arn\",\n     \"tag_key\",\n     \"tag_value\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_tag_resources\"\n    ],\n    \"title\": \"AWS Attach Tags to Resources\",\n    \"verbs\": [\n     \"dict\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    
\"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_tag_resources_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_tag_resources(handle, resource_arn: list, tag_key: str, tag_value: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_tag_resources Returns an Dict of resource info.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type resource_arn: list\\n\",\n    \"        :param resource_arn: Resource ARNs.\\n\",\n    \"\\n\",\n    \"        :type tag_key: str\\n\",\n    \"        :param tag_key: Resource Tag Key.\\n\",\n    \"\\n\",\n    \"        :type tag_value: str\\n\",\n    \"        :param tag_value: Resource Tag value.\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter resources.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict of resource info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.tag_resources(\\n\",\n    \"            ResourceARNList=resource_arn,\\n\",\n    \"            Tags={tag_key: tag_value}\\n\",\n    \"            )\\n\",\n    \"        result = response\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result[\\\"error\\\"] = error\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    
\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"resource_arn\\\": \\\"iter_item\\\",\\n\",\n    \"    \\\"tag_key\\\": \\\"Tag_Key\\\",\\n\",\n    \"    \\\"tag_value\\\": \\\"Tag_Value\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"UntaggedResources\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"resource_arn\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_tag_resources, lego_printer=aws_tag_resources_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a8280ac4-d504-44d2-b5ea-d97f7ca672c8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this Runbook, we demonstrated the use of unSkript's AWS legos to attach tags. This Runbook gets the list of all untagged resources in a given region, discovers the region's tag keys, and attaches mandatory tags to all the untagged resources. 
To view the full platform capabilities of unSkript please visit https://unskript.com\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Enforce Mandatory Tags Across All AWS Resources\",\n   \"parameters\": [\n    \"Region\",\n    \"Tag_Key\",\n    \"Tag_Value\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"Region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"Resources Region\",\n     \"title\": \"Region\",\n     \"type\": \"string\"\n    },\n    \"Tag_Key\": {\n     \"default\": \"Description\",\n     \"description\": \"Mandatory Tag key for resources (only use when tag need to be attached to all the resources)\",\n     \"title\": \"Tag_Key\",\n     \"type\": \"string\"\n    },\n    \"Tag_Value\": {\n     \"default\": \"Unskript\",\n     \"description\": \"Mandatory Tag Value for resources (only use when tag need to be attached to all the resources)\",\n     \"title\": \"Tag_Value\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"Tag_Key\": \"Description\",\n   \"Tag_Value\": \"Unskript\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Enforce_Mandatory_Tags_Across_All_AWS_Resources.json",
    "content": "{\n  \"name\": \"Enforce Mandatory Tags Across All AWS Resources\",\n  \"description\": \"This runbook can be used to enforce mandatory tags across all AWS resources. It gets all the untagged resources in the given region, discovers the region's tag keys, and attaches mandatory tags to all the untagged resources.\",\n  \"uuid\": \"67e7f3a96603a7ce5b67cbe7b6228ce09c91a889c3fe2e93dc4b2a54100c1d3e\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/Find_EC2_Instances_Scheduled_to_retire.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"96ddce27-c542-40c7-b8af-76426cc7dc54\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Find EC2 Instances Schedule to Retire\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Find EC2 Instances Schedule to Retire\"\n   },\n   \"source\": [\n    \"\\n\",\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b> This runbook demonstrates How to Find EC2 Instances Scheduled to Retire soon using unSkript legos.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Find EC2 Instances Scheduled to Retire</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"    1) Filter all AWS EC2 instances\\n\",\n    \"    2) Get the event.code for the scheduled event instance-retirement.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"77b0652c-eff5-4527-8daa-f830d49bdb23\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Filter ALL AWS EC2 Instances\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Filter ALL AWS EC2 Instances\"\n   },\n   \"source\": [\n    \"Here we will use unSkript Filter ALL AWS EC2 Instances Lego. This lego takes region as input. 
This input is used to discover all the EC2 instances.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"id\": \"b23ce2bb-a142-4517-9881-9e9beec177fb\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"abe9fc82a53b80dc1dd4d5a89e31d22b0338e73e86d2ca859576f38cc6d19f48\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Filter AWS EC2 Instance by Tag\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-08-30T20:27:22.911Z\"\n    },\n    \"id\": 155,\n    \"index\": 155,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"tag_key\",\n       \"tag_value\",\n       \"region\"\n      ],\n      \"title\": \"aws_filter_ec2_by_tags\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Filter All AWS EC2 Instances\",\n    \"nouns\": [\n     \"aws\",\n     \"ec2\",\n     \"instances\",\n     \"tag\"\n    ],\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"instance_ids\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"aws_filter_ec2_by_tags\"\n    ],\n    \"title\": \"Filter All AWS EC2 Instances\",\n    
\"verbs\": [\n     \"filter\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_ec2_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint({\\\"Instances\\\": output})\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_ec2_instances(handle, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_ec2_by_tags Returns an array of instances.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Used to filter the volume for specific region.\\n\",\n    \"\\n\",\n    \"        :rtype: Array of instances.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    # Input param validation.\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\")\\n\",\n    \"\\n\",\n    \"    result = []\\n\",\n    \"    for reservation in res:\\n\",\n    \"        for instance in reservation['Instances']:\\n\",\n    \"            result.append(instance['InstanceId'])\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"instance_ids\\\")\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_ec2_instances, 
lego_printer=aws_filter_ec2_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"62c27136-3d0f-4984-ac2b-c6bc3abf2a97\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Get AWS Scheduled to Retire Instances\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Get AWS Scheduled to Retire Instances\"\n   },\n   \"source\": [\n    \"Here we will use unSkript Get AWS Scheduled to Retire Instances Lego. This lego takes instance_ids list and region as input. This input is used to discover EC2 instances which are schedule to retire soon.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"id\": \"3c981e53-49f2-47d0-baf2-e6553acc59b7\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"aa1e026ca8002b906315feba401e5c46889d459270adce3b65d480dc9530311f\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Use This Action to Get Details about an AWS EC2 Instance\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-08-30T10:55:26.731Z\"\n    },\n    \"id\": 122,\n    \"index\": 122,\n    \"inputData\": [\n     {\n      \"instance_ids\": {\n       \"constant\": false,\n       \"value\": \"instance_ids\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_ids\": {\n        \"default\": \"\",\n        \"description\": \"Instance Ids\",\n        \"title\": \"instance_ids\",\n        \"type\": \"array\"\n       
},\n       \"region\": {\n        \"default\": \"\",\n        \"description\": \"AWS Region of the instance.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_instance_details\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS Scheduled to Retire Instances\",\n    \"nouns\": [\n     \"instance\",\n     \"details\"\n    ],\n    \"orderProperties\": [\n     \"region\",\n     \"instance_ids\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_get_instance_details\"\n    ],\n    \"title\": \"Get AWS Scheduled to Retire Instances\",\n    \"verbs\": [\n     \"get\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_schedule_to_retire_instances(\\n\",\n    \"    handle,\\n\",\n    \"    instance_ids: list,\\n\",\n    \"    region: str,\\n\",\n    \") -> List:\\n\",\n    \"\\n\",\n    \"    ec2client = handle.client('ec2', region_name=region)\\n\",\n    \"    instances = []\\n\",\n    \"    response = ec2client.describe_instance_status(\\n\",\n    \"        Filters=[\\n\",\n    \"        {\\n\",\n    \"            'Name': 'event.code',\\n\",\n    \"            'Values': ['instance-retirement']}],\\n\",\n    \"        InstanceIds=instance_ids)\\n\",\n    \"\\n\",\n    \"    for instance in response['InstanceStatuses']:\\n\",\n    \"        instance_id = instance['InstanceId']\\n\",\n    \"        
instances.append(instance_id)\\n\",\n    \"\\n\",\n    \"    return instances\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def aws_get_schedule_to_retire_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint({\\\"Instances\\\": output})\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_ids\\\": \\\"instance_ids\\\",\\n\",\n    \"    \\\"region\\\": \\\"Region\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_schedule_to_retire_instances, lego_printer=aws_get_schedule_to_retire_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"3627e67c-5900-4ba0-ae45-6d00957b7d83\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this Runbook, we demonstrated the use of unSkript's AWS legos to get the EC2 instances that are scheduled to retire soon. 
To view the full platform capabilities of unSkript please visit https://unskript.com\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Handle AWS EC2 Instance Scheduled to retire\",\n   \"parameters\": null\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"Region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"AWS Region\",\n     \"title\": \"Region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"Region\": \"us-west-2\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Find_EC2_Instances_Scheduled_to_retire.json",
"content": "{\n  \"name\": \"Handle AWS EC2 Instance Scheduled to retire\",\n  \"description\": \"To avoid unexpected interruptions, it's a good practice to check whether there are any EC2 instances scheduled to retire. This runbook can be used to list the EC2 instances that are scheduled to retire. To handle the instance retirement, the user can stop and restart the instance before the retirement date. That action moves the instance over to a more stable host.\",\n  \"uuid\": \"6684091dbbcd51c416f37c3070df6efd9fcb029c06047fcab62f32ee4c2f0596\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/IAM_security_least_privilege.ipynb",
"content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c212a0cc-7a58-4451-9f19-1b058bc7aec8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"intro\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"intro\"\n   },\n   \"source\": [\n    \"<h1>Create IAM Access Policy based on Usage</h1>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>IAM Access should follow the policy of least privilege. This means that the credentials give \\\"exactly enough\\\" access to perform the required task, but no more.&nbsp; That way, if the credentials were ever to be compromised, the blast radius is minimized.</p>\\n\",\n    \"<p>This RunBook will take an active IAM profile, and analyze its access over the last &lt;threshold&gt; hours.&nbsp; Using CloudTrail logs, we can determine what was accessed, and create a new IAM profile that gives access to just these features.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<h2>Input parameters:</h2>\\n\",\n    \"<h3>&nbsp;Credentials</h3>\\n\",\n    \"<p>You will need two IAM accounts to complete this Runbook:</p>\\n\",\n    \"<ol>\\n\",\n    \"<li>admin_iam:&nbsp; Use these credentials to run each Action - creating IAM policies requires admin access.</li>\\n\",\n    \"<li>reference_iam_arn: This parameter should have the ARN for the reference IAM account.&nbsp; We'll use the activity from this account to generate a new IAM policy.</li>\\n\",\n    \"</ol>\\n\",\n    \"<h3>Inputs</h3>\\n\",\n    \"<ol>\\n\",\n    \"<li>CloudTrail ARN:&nbsp; This is the ARN of the CloudTrail log that you wish to query. 
If you are not sure which ARN you wish to use, you can use the \\\"AWS Describe Cloudtrails\\\" Action to get a list of all your trails.</li>\\n\",\n    \"<li>Region: the AWS Region.</li>\\n\",\n    \"<li>threshold: The number of hours of cloudtrail logs to examine for activity.</li>\\n\",\n    \"<li>policy_name: the name of your new IAM access policy</li>\\n\",\n    \"<li>user_name: The new IAM user you will create with the policy_name attached.</li>\\n\",\n    \"</ol>\\n\",\n    \"<h2>Steps</h2>\\n\",\n    \"<ol>\\n\",\n    \"<li><strong>AWS Describe Cloudtrails:</strong> Gets a list of all the Cloudtrail logs in a region. Use this to get your CloudTrailARN.&nbsp; This step requires a region.\\n\",\n    \"<ol>\\n\",\n    \"<li>If you know the Cloudtrail ARN - you can safely delete this Step</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"<li><strong>AWS Start IAM Policy Generation</strong>: Begins the process of creating an IAM Policy.&nbsp; Note that you can only create one policy at a time, so if a previous policy is still in progress, this may throw an error. \\n\",\n    \"<ol>\\n\",\n    \"<li>Inputs:&nbsp;\\n\",\n    \"<ol>\\n\",\n    \"<li>Region: AWS Region</li>\\n\",\n    \"<li>CloudTrailARN: ARN of the Cloudtrail - default is to use the runbook input parameter.</li>\\n\",\n    \"<li>IAMPrincipalARN:&nbsp; The IAM user whose access is being duplicated</li>\\n\",\n    \"<li>AccessRole: IAM access role for \\\"AccessAnalyzerMonitorServiceRole\\\". 
You'll need to create this in your AWS Console</li>\\n\",\n    \"<li>hours: number of hours of logs to examine:&nbsp; Default is threshold input parameter.</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"<li>Output:\\n\",\n    \"<ol>\\n\",\n    \"<li>JobId - the UUID of the Policy creation</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"<li><strong>AWS Get Generated Policy</strong> Gets the policy generated in the previous step.&nbsp; Note that generation can take some time, and the response (in variable generatedPolicy) has a status (generatedPolicy['jobDetails']['status']). When this reads \\\"SUCCEEDED\\\", the runbook can be continued.\\n\",\n    \"<ol>\\n\",\n    \"<li>Inputs\\n\",\n    \"<ol>\\n\",\n    \"<li>Region</li>\\n\",\n    \"<li>JobId (from the start generation step)</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"<li>Output:\\n\",\n    \"<ol>\\n\",\n    \"<li>Response from the Call</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"<li><strong>AWS Get Account Number</strong> This Action retrieves your AWS Account number.&nbsp; It is required to clean up the policy that is returned from step 3.&nbsp; Output is the accountNumber</li>\\n\",\n    \"<li><strong>Clean Up Policy: </strong>This step reads the generated policy (generatedPolicy['generatedPolicyResult']['generatedPolicies'][0]['policy']), and does some cleanup.&nbsp; In the generated policy, there are variables that must be given concrete values.&nbsp; In our test runs, the following changes have been made (you may need to do further cleanup in this Action to continue):\\n\",\n    \"<ol>\\n\",\n    \"<li>policy = policy.replace('${Region}', \\\"us-west-2\\\")</li>\\n\",\n    \"<li>policy = policy.replace('${Account}', accountNumber)</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"<li><strong>AWS Create IAM Policy</strong> Takes the policy that was cleaned in the last 
step, and creates a new policy in your AWS Account. The name of the policy is based on the policy_name input variable.&nbsp; Policy names must be unique, so once a policy is created, this value will need to be changed.</li>\\n\",\n    \"<li><strong>Create New IAM User</strong> Creates a new IAM user (using the user_name input).&nbsp;&nbsp;</li>\\n\",\n    \"<li><strong>AWS Attach New Policy to User&nbsp;</strong> Attaches the created policy to the created user.</li>\\n\",\n    \"</ol>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"3f89abe5-c88d-41b9-a7fa-4bb909c9282f\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"856a647c-33c8-4c2a-8a29-33d9fb6cfacd\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"2023-03-24T10:38:24Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"v0.0.0\",\n    \"customCell\": true,\n    \"description\": \"Describe all CloudTrail Logs in a Region\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-13T21:50:22.735Z\"\n    },\n    \"id\": 100353,\n    \"index\": 100353,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"default\": \"\\\"us-west-2\\\"\",\n        \"description\": \"AWS Region\",\n        \"required\": true,\n        \"title\": \"region\",\n        \"type\": 
\"string\"\n       }\n      },\n      \"required\": [\n       \"region\"\n      ],\n      \"title\": \"aws_describe_cloudtrail\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Describe Cloudtrails \",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"title\": \"AWS Describe Cloudtrails \",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field, SecretStr\\n\",\n    \"from typing import Dict, List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_describe_cloudtrail_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_describe_cloudtrail(handle, region:str) -> Dict:\\n\",\n    \"    # Create a client object for CloudTrail\\n\",\n    \"    cloudtrail_client = handle.client('cloudtrail', region_name=region)\\n\",\n    \"\\n\",\n    \"    # Use the describe_trails method to get information about the available trails\\n\",\n    \"    trails = cloudtrail_client.describe_trails()\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    return trails\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_describe_cloudtrail, 
lego_printer=aws_describe_cloudtrail_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e18563ce-5ea1-44c8-b903-5143129ae002\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"8f56b9e753e57065e02e107dfd472df3e3b6e3440bd8156f37dc752a1f337909\",\n    \"checkEnabled\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"List all AWS IAM Users\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-13T21:50:27.772Z\"\n    },\n    \"id\": 265,\n    \"index\": 265,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"title\": \"aws_list_all_iam_users\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS List All IAM Users\",\n    \"nouns\": [\n     \"users\",\n     \"iam\",\n     \"aws\"\n    ],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_list_all_iam_users\"\n    ],\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field, SecretStr\\n\",\n    \"from typing import Dict, List\\n\",\n    \"import pprint\\n\",\n    
\"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_all_iam_users_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_all_iam_users(handle) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_list_all_iam_users lists all the IAM users\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"        :rtype: Result List of all IAM users\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    client = handle.client('iam')\\n\",\n    \"    users_list=[]\\n\",\n    \"    response = client.list_users()\\n\",\n    \"    try:\\n\",\n    \"        for x in response['Users']:\\n\",\n    \"            users_list.append(x['UserName'])\\n\",\n    \"    except Exception as e:\\n\",\n    \"        users_list.append(e)\\n\",\n    \"    return users_list\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_all_iam_users, lego_printer=aws_list_all_iam_users_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"ee69210c-1492-43e0-8bf9-3a6a046fb1a2\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": true,\n    \"action_uuid\": 
\"306d6f72-62a4-4313-abc0-4dab0e8d5442\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"2023-03-24T10:37:02Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"v0.0.0\",\n    \"customCell\": true,\n    \"description\": \"Using the Access Analyzer, take an existing IAM profile and track its usage over a period of time.  Generate a credentials profile based on that usage.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-14T17:02:12.053Z\"\n    },\n    \"id\": 100352,\n    \"index\": 100352,\n    \"inputData\": [\n     {\n      \"AccessRole\": {\n       \"constant\": false,\n       \"value\": \"\\\"arn:aws:iam::100498623390:role/service-role/AccessAnalyzerMonitorServiceRole_CTBKDXMCCK\\\"\"\n      },\n      \"CloudTrailARN\": {\n       \"constant\": false,\n       \"value\": \"CloudTrailArn\"\n      },\n      \"IAMPrincipalARN\": {\n       \"constant\": false,\n       \"value\": \"reference_iam_arn\"\n      },\n      \"hours\": {\n       \"constant\": false,\n       \"value\": \"24\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"AccessRole\": {\n        \"default\": \"\\\"arn:aws:iam::100498623390:role/service-role/AccessAnalyzerMonitorServiceRole_CTBKDXMCCK\\\"\",\n        \"description\": \"Access Role that can query the CloudTrail Logs\",\n        \"required\": true,\n        \"title\": \"AccessRole\",\n        \"type\": \"string\"\n       },\n       \"CloudTrailARN\": {\n        \"default\": \"\",\n        \"description\": \"Cloud Trail ARN\",\n        \"required\": true,\n        \"title\": \"CloudTrailARN\",\n        \"type\": \"string\"\n       },\n       \"IAMPrincipalARN\": {\n        \"default\": \"\",\n        \"description\": \"IAM ARN we are copying the profile into.\",\n        \"required\": true,\n        \"title\": \"IAMPrincipalARN\",\n      
  \"type\": \"string\"\n       },\n       \"hours\": {\n        \"default\": 24,\n        \"description\": \"Hours of data to examine\",\n        \"required\": true,\n        \"title\": \"hours\",\n        \"type\": \"number\"\n       },\n       \"region\": {\n        \"default\": \"\\\"us-west-2\\\"\",\n        \"description\": \"AWS Region\",\n        \"required\": true,\n        \"title\": \"region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"AccessRole\",\n       \"CloudTrailARN\",\n       \"IAMPrincipalARN\",\n       \"hours\",\n       \"region\"\n      ],\n      \"title\": \"AWS_Start_IAM_Policy_Generation\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Start IAM Policy Generation \",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"CloudTrailARN\",\n     \"IAMPrincipalARN\",\n     \"AccessRole\",\n     \"hours\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"jobId\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"title\": \"AWS Start IAM Policy Generation \",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2023 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field, SecretStr\\n\",\n    \"from typing import Dict, List\\n\",\n    \"import pprint\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def AWS_Start_IAM_Policy_Generation_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def 
AWS_Start_IAM_Policy_Generation(handle, region:str, CloudTrailARN:str, IAMPrincipalARN:str, AccessRole:str, hours:float) -> str:\\n\",\n    \"\\n\",\n    \"    client = handle.client('accessanalyzer', region_name=region)\\n\",\n    \"    policyGenerationDict = {'principalArn': IAMPrincipalARN}\\n\",\n    \"    myTrail = {'cloudTrailArn': CloudTrailARN,\\n\",\n    \"                   'regions': [region],\\n\",\n    \"                   'allRegions': False\\n\",\n    \"              }\\n\",\n    \"    endTime = datetime.now()\\n\",\n    \"    endTime = endTime.strftime(\\\"%Y-%m-%dT%H:%M:%S\\\")\\n\",\n    \"    startTime = datetime.now()- timedelta(hours =hours)\\n\",\n    \"    startTime =startTime.strftime(\\\"%Y-%m-%dT%H:%M:%S\\\")\\n\",\n    \"    response = client.start_policy_generation(    \\n\",\n    \"        policyGenerationDetails=policyGenerationDict,\\n\",\n    \"        cloudTrailDetails={\\n\",\n    \"            'trails': [myTrail],\\n\",\n    \"            'accessRole': AccessRole,\\n\",\n    \"            'startTime': startTime,\\n\",\n    \"            'endTime': endTime\\n\",\n    \"        }\\n\",\n    \"    )\\n\",\n    \"    jobId = response['jobId']\\n\",\n    \"    return jobId\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"AccessRole\\\": \\\"\\\\\\\\\\\"arn:aws:iam::100498623390:role/service-role/AccessAnalyzerMonitorServiceRole_CTBKDXMCCK\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"CloudTrailARN\\\": \\\"CloudTrailArn\\\",\\n\",\n    \"    \\\"IAMPrincipalARN\\\": \\\"reference_iam_arn\\\",\\n\",\n    \"    \\\"hours\\\": \\\"float(24)\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"jobId\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    
task.execute(AWS_Start_IAM_Policy_Generation, lego_printer=AWS_Start_IAM_Policy_Generation_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"14a19146-ef56-4643-8d21-8e742c0beb6e\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"5548a22f-4669-4ad2-9f61-268507f818c7\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"2023-03-24T10:41:12Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"v0.0.0\",\n    \"customCell\": true,\n    \"description\": \"Once an Access Policy has been generated, this Action retrieves the policy.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-17T13:16:02.591Z\"\n    },\n    \"id\": 100355,\n    \"index\": 100355,\n    \"inputData\": [\n     {\n      \"jobId\": {\n       \"constant\": false,\n       \"value\": \"jobId\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"jobId\": {\n        \"default\": \"\",\n        \"description\": \"Policy JobId\",\n        \"required\": true,\n        \"title\": \"jobId\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"default\": \"\\\"us-west-2\\\"\",\n        \"description\": \"region\",\n        \"required\": true,\n        \"title\": \"region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       
\"jobId\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_generated_policy\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Generated Policy\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"region\",\n     \"jobId\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"generatedPolicy\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [],\n    \"title\": \"AWS Get Generated Policy\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2023 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field, SecretStr\\n\",\n    \"from typing import Dict, List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_generated_policy_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_generated_policy(handle, region:str,jobId:str) -> Dict:\\n\",\n    \"    client = handle.client('accessanalyzer', region_name=region)\\n\",\n    \"    response = client.get_generated_policy(\\n\",\n    \"        jobId=jobId,\\n\",\n    \"        includeResourcePlaceholders=True,\\n\",\n    \"        includeServiceLevelTemplate=True\\n\",\n    \"    )\\n\",\n    \"    result = {}\\n\",\n    \"    result['generatedPolicyResult'] = response['generatedPolicyResult']\\n\",\n    \"    result['generationStatus'] = response['jobDetails']['status']\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"jobId\\\": \\\"jobId\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"   
 }''')\\n\",\n    \"task.configure(outputName=\\\"generatedPolicy\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_generated_policy, lego_printer=aws_get_generated_policy_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"44929a58-d92e-4c2f-b3f4-a730ab7aed92\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-17T13:16:49.500Z\"\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"print(generatedPolicy['generationStatus'])\\n\",\n    \"\\n\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b6f9b8b8-de1b-4e64-8e3d-23cb7ee71bed\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"8f56b9e753e57065e02e107dfd472df3e3b6e3440bd8156f37dc752a1f337909\",\n    \"checkEnabled\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"customCell\": true,\n    \"description\": \"List all AWS IAM Users\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-13T21:55:05.777Z\"\n    },\n    \"id\": 265,\n    \"index\": 265,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"required\": [],\n      
\"title\": \"aws_get_acount_number\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get AWS Account Number\",\n    \"nouns\": [\n     \"users\",\n     \"iam\",\n     \"aws\"\n    ],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"accountNumber\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_list_all_iam_users\"\n    ],\n    \"title\": \"AWS Get AWS Account Number\",\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field, SecretStr\\n\",\n    \"from typing import Dict, List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_acount_number_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_acount_number(handle) -> str:\\n\",\n    \"    # Create a client object for the AWS Identity and Access Management (IAM) service\\n\",\n    \"    iam_client = handle.client('iam')\\n\",\n    \"\\n\",\n    \"    # Call the get_user() method to get information about the current user\\n\",\n    \"    response = iam_client.get_user()\\n\",\n    \"\\n\",\n    \"    # Extract the account ID from the ARN (Amazon Resource Name) of the user\\n\",\n    \"    account_id = response['User']['Arn'].split(':')[4]\\n\",\n    \"\\n\",\n    \"    # Print the account ID\\n\",\n    \"    return account_id\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    
\"task.configure(outputName=\\\"accountNumber\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_acount_number, lego_printer=aws_get_acount_number_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"5be8f9ec-7591-4a70-b87d-be9bcddd070b\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-17T13:16:57.042Z\"\n    },\n    \"name\": \"clean up policy\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"clean up policy\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import json\\n\",\n    \"import re \\n\",\n    \"\\n\",\n    \"\\n\",\n    \"policy = generatedPolicy['generatedPolicyResult']['generatedPolicies'][0]['policy']\\n\",\n    \"#print(policy)\\n\",\n    \"\\n\",\n    \"policy = json.dumps(policy)\\n\",\n    \"\\n\",\n    \"policy = policy.replace('${Region}', \\\"us-west-2\\\")\\n\",\n    \"policy = policy.replace('${Account}', accountNumber)\\n\",\n    \"policy = re.sub(\\\"\\\\${[A-Za-z]*}\\\", \\\"*\\\", policy)\\n\",\n    \"policy = json.loads(policy)\\n\",\n    \"policy = str(policy)\\n\",\n    \"print(type(policy), policy)\\n\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"2e2f8b0b-4cd4-4601-b5ea-735c5c9cf6c2\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    
\"actionSupportsPoll\": true,\n    \"action_modified\": true,\n    \"action_uuid\": \"4a3d4143-35d2-4f52-aa36-791cc1d6d2d0\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"2023-03-24T10:42:47Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"v0.0.0\",\n    \"customCell\": true,\n    \"description\": \"Takes a generated policy and saves it as an IAM policy that can be applied to any IAM user.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-14T16:38:15.659Z\"\n    },\n    \"id\": 100356,\n    \"index\": 100356,\n    \"inputData\": [\n     {\n      \"PolicyName\": {\n       \"constant\": false,\n       \"value\": \"policy_name\"\n      },\n      \"policyDocument\": {\n       \"constant\": false,\n       \"value\": \"policy\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"PolicyName\": {\n        \"default\": \"\",\n        \"description\": \"Name of Policy to generate at AWS\",\n        \"required\": true,\n        \"title\": \"PolicyName\",\n        \"type\": \"string\"\n       },\n       \"policyDocument\": {\n        \"default\": \"\",\n        \"description\": \"Stringified JSON policy\",\n        \"required\": true,\n        \"title\": \"policyDocument\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"PolicyName\",\n       \"policyDocument\"\n      ],\n      \"title\": \"aws_create_IAMpolicy\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Create IAM Policy\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"policyDocument\",\n     \"PolicyName\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"createdPolicy\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": 
[],\n    \"title\": \"AWS Create IAM Policy\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field, SecretStr\\n\",\n    \"from typing import Dict, List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_IAMpolicy_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_IAMpolicy(handle, policyDocument:str, PolicyName:str) -> Dict:\\n\",\n    \"\\n\",\n    \"    client = handle.client('iam')\\n\",\n    \"    response = client.create_policy(\\n\",\n    \"        PolicyName=PolicyName,\\n\",\n    \"        PolicyDocument=policyDocument,\\n\",\n    \"        Description='generated Via unSkript',\\n\",\n    \"\\n\",\n    \"    )\\n\",\n    \"    return response\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"PolicyName\\\": \\\"policy_name\\\",\\n\",\n    \"    \\\"policyDocument\\\": \\\"policy\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"createdPolicy\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_create_IAMpolicy, lego_printer=aws_create_IAMpolicy_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"81841c82-639c-4fb7-8a1e-aab82134f4e9\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-03-30T14:53:52.572Z\"\n    },\n    \"jupyter\": {\n     
\"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"print(createdPolicy['Policy']['Arn'])\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"22deeb36-539b-493a-9d18-ce9c214cdacd\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"3f71dd060d5955f5dc9104dbaf418bf957b2222c510cb3afd09ded8e41e433d9\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Create New IAM User\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-04-14T16:38:20.334Z\"\n    },\n    \"id\": 204,\n    \"index\": 204,\n    \"inputData\": [\n     {\n      \"tag_key\": {\n       \"constant\": false,\n       \"value\": \"\\\"test\\\"\"\n      },\n      \"tag_value\": {\n       \"constant\": false,\n       \"value\": \"\\\"test\\\"\"\n      },\n      \"user_name\": {\n       \"constant\": false,\n       \"value\": \"user_name\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"tag_key\": {\n        \"description\": \"Tag Key to new IAM User.\",\n        \"title\": \"Tag Key\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n        \"description\": \"Tag Value to new IAM User.\",\n        \"title\": \"Tag Value\",\n        \"type\": \"string\"\n    
   },\n       \"user_name\": {\n        \"description\": \"IAM User Name.\",\n        \"title\": \"User Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"user_name\",\n       \"tag_key\",\n       \"tag_value\"\n      ],\n      \"title\": \"aws_create_iam_user\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Create New IAM User\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"user_name\",\n     \"tag_key\",\n     \"tag_value\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_create_iam_user\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_iam_user_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_create_iam_user(handle, user_name: str, tag_key: str, tag_value: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_create_iam_user Creates new IAM User.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) 
method\\n\",\n    \"\\n\",\n    \"        :type user_name: string\\n\",\n    \"        :param user_name: Name of new IAM User.\\n\",\n    \"\\n\",\n    \"        :type tag_key: string\\n\",\n    \"        :param tag_key: Tag Key assign to new User.\\n\",\n    \"\\n\",\n    \"        :type tag_value: string\\n\",\n    \"        :param tag_value: Tag Value assign to new User.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the stopped instances state info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client(\\\"iam\\\")\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = ec2Client.create_user(\\n\",\n    \"            UserName=user_name,\\n\",\n    \"            Tags=[\\n\",\n    \"                {\\n\",\n    \"                    'Key': tag_key,\\n\",\n    \"                    'Value': tag_value\\n\",\n    \"                }])\\n\",\n    \"        result = response\\n\",\n    \"    except ClientError as error:\\n\",\n    \"        if error.response['Error']['Code'] == 'EntityAlreadyExists':\\n\",\n    \"            result = error.response\\n\",\n    \"        else:\\n\",\n    \"            result = error.response\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"tag_key\\\": \\\"\\\\\\\\\\\"test\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"tag_value\\\": \\\"\\\\\\\\\\\"test\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"user_name\\\": \\\"user_name\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_create_iam_user, lego_printer=aws_create_iam_user_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": 
\"bebb49ec-d4f7-4ca5-910c-d0a836d882a2\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"dee9134df84f6c675edab485389572795169495347e40abbdf81f24ec807a85c\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Attach New Policy to User\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-03-30T14:54:31.327Z\"\n    },\n    \"id\": 215,\n    \"index\": 215,\n    \"inputData\": [\n     {\n      \"policy_name\": {\n       \"constant\": false,\n       \"value\": \"createdPolicy['Policy']['Arn']\"\n      },\n      \"user_name\": {\n       \"constant\": false,\n       \"value\": \"user_name\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"policy_name\": {\n        \"description\": \"Policy name to apply the permissions to the user.\",\n        \"title\": \"Policy Name\",\n        \"type\": \"string\"\n       },\n       \"user_name\": {\n        \"description\": \"IAM user whose policies need to fetched.\",\n        \"title\": \"User Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"user_name\",\n       \"policy_name\"\n      ],\n      \"title\": \"aws_attache_iam_policy\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Attach New Policy to User\",\n    
\"nouns\": [],\n    \"orderProperties\": [\n     \"user_name\",\n     \"policy_name\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_attache_iam_policy\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_attach_iam_policy_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_attach_iam_policy(handle, user_name: str, policy_name: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_attache_iam_policy used to provide user permissions.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type user_name: string\\n\",\n    \"        :param user_name: Dictionary of credentials info.\\n\",\n    \"\\n\",\n    \"        :type policy_name: string\\n\",\n    \"        :param policy_name: Policy name to apply the permissions to the user.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with User policy information.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = {}\\n\",\n    \"    iamResource = handle.resource('iam')\\n\",\n    \"    try:\\n\",\n    \"        user = iamResource.User(user_name)\\n\",\n    \"        response = user.attach_policy(\\n\",\n    \"            PolicyArn='arn:aws:iam::aws:policy/'+policy_name\\n\",\n    \"            )\\n\",\n    \"        result = response\\n\",\n    \"    except ClientError as error:\\n\",\n    \"        result 
= error.response\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"policy_name\\\": \\\"createdPolicy['Policy']['Arn']\\\",\\n\",\n    \"    \\\"user_name\\\": \\\"user_name\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_attach_iam_policy, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Create an IAM user using Principle of Least Privilege\",\n   \"parameters\": [\n    \"user_name\",\n    \"CloudTrailArn\",\n    \"policy_name\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1039)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"CloudTrailArn\": {\n     \"default\": \"arn:aws:cloudtrail:us-west-2:100498623390:trail/management-events\",\n     \"description\": \"ARN of the CloudTrail to be 
queried\",\n     \"title\": \"CloudTrailArn\",\n     \"type\": \"string\"\n    },\n    \"policy_name\": {\n     \"default\": \"generated_iam_policy_11\",\n     \"description\": \"IAM Policy to be created\",\n     \"title\": \"policy_name\",\n     \"type\": \"string\"\n    },\n    \"reference_iam_arn\": {\n     \"default\": \"arn:aws:iam::100498623390:user/doug-billing-s3\",\n     \"description\": \"The arn of the Reference IAM. We will build a new policy based on the activity of this account.\",\n     \"title\": \"reference_iam_arn\",\n     \"type\": \"string\"\n    },\n    \"region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"AWS Region\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"threshold\": {\n     \"default\": 24,\n     \"description\": \"Number of hours to examine\",\n     \"title\": \"threshold\",\n     \"type\": \"number\"\n    },\n    \"user_name\": {\n     \"default\": \"Doug_generated_iam_14\",\n     \"description\": \"IAM user to be created\",\n     \"title\": \"user_name\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/IAM_security_least_privilege.json",
    "content": "{\n  \"name\": \"Create an IAM user using Principle of Least Privilege\",\n  \"description\": \"Extract usage details from Cloudtrail of an existing user. Apply the usage to a new IAM Policy, and connect it to a new IAM profile.\",\n  \"uuid\": \"65d8f7ea1d41ccf49b4a624b70cdbde0d16ad9ba348829e4ddbd59a83ce644bc\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SECOPS\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/Monitor_AWS_DynamoDB_provision_capacity.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"1fe9e993-6175-40e2-be4d-b38474f610c4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Monitor AWS DynamoDB provision capacity\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Monitor AWS DynamoDB provision capacity\"\n   },\n   \"source\": [\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b>This runbook demonstrates how to monitor AWS DynamoDB provisioned capacity using unSkript Legos.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Monitor AWS DynamoDB provision capacity</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"Collect the CloudWatch metrics related to DynamoDB provisioned capacity:\\n\",\n    \"- PROVISIONED READ CAPACITY UNITS\\n\",\n    \"- PROVISIONED WRITE CAPACITY UNITS\\n\",\n    \"- ACCOUNT PROVISIONED READ CAPACITY UTILIZATION\\n\",\n    \"- ACCOUNT PROVISIONED WRITE CAPACITY UTILIZATION\\n\",\n    \"- MAX PROVISIONED TABLE WRITE CAPACITY UTILIZATION\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e7ff4abf-4104-4673-98a2-f35cf8ec2cd2\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Provisioned Read Capacity Units\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Provisioned Read Capacity Units\"\n   },\n   \"source\": [\n    \"Here we will use the unSkript Get AWS CloudWatch Metrics for AWS/DynamoDB Lego. This lego takes metric_name, dimensions, period, timeSince, statistics, and region as input. 
This input is used to get cloudwatch metrics of DynamoDB for PROVISIONED READ CAPACITY UNITS.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 49,\n   \"id\": \"7d2eadc7-110e-426b-b73b-77b14f747c19\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5baddb8c2b083c19c73cec00f89256d9f79aac1c6ecaca3333864240201c85fd\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get AWS CloudWatch Metrics for AWS DynamoDB\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-29T12:49:36.763Z\"\n    },\n    \"id\": 157,\n    \"index\": 157,\n    \"inputData\": [\n     {\n      \"dimensions\": {\n       \"constant\": false,\n       \"value\": \"[{\\\"Name\\\":\\\"TableName\\\",\\\"Value\\\":\\\"test\\\"}]\"\n      },\n      \"metric_name\": {\n       \"constant\": true,\n       \"value\": \"ProvisionedReadCapacityUnits\"\n      },\n      \"period\": {\n       \"constant\": false,\n       \"value\": \"Period\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"statistics\": {\n       \"constant\": true,\n       \"value\": \"Average\"\n      },\n      \"timeSince\": {\n       \"constant\": false,\n       \"value\": \"Time_Since\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"definitions\": {\n       \"DynamoDBMetrics\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"AccountMaxReads\",\n         \"AccountMaxTableLevelReads\",\n         \"AccountMaxTableLevelWrites\",\n         \"AccountMaxWrites\",\n         
\"AccountProvisionedReadCapacityUtilization\",\n         \"AccountProvisionedWriteCapacityUtilization\",\n         \"AgeOfOldestUnreplicatedRecord\",\n         \"ConditionalCheckFailedRequests\",\n         \"ConsumedChangeDataCaptureUnits\",\n         \"ConsumedReadCapacityUnits\",\n         \"ConsumedWriteCapacityUnits\",\n         \"FailedToReplicateRecordCount\",\n         \"MaxProvisionedTableWriteCapacityUtilization\",\n         \"OnlineIndexConsumedWriteCapacity\",\n         \"OnlineIndexPercentageProgress\",\n         \"OnlineIndexThrottleEvents\",\n         \"PendingReplicationCount\",\n         \"ProvisionedReadCapacityUnits\",\n         \"ProvisionedWriteCapacityUnits\",\n         \"ReadThrottleEvents\",\n         \"ReplicationLatency\",\n         \"ReturnedBytes\",\n         \"ReturnedItemCount\",\n         \"ReturnedRecordsCount\",\n         \"SuccessfulRequestLatency\",\n         \"SystemErrors\",\n         \"TimeToLiveDeletedItemCount\",\n         \"ThrottledPutRecordCount\",\n         \"ThrottledRequests\",\n         \"TransactionConflict\",\n         \"UserErrors\",\n         \"WriteThrottleEvents\"\n        ],\n        \"title\": \"DynamoDBMetrics\"\n       },\n       \"StatisticsType\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"SampleCount\",\n         \"Average\",\n         \"Sum\",\n         \"Minimum\",\n         \"Maximum\",\n         \"Percentile\"\n        ],\n        \"title\": \"StatisticsType\"\n       }\n      },\n      \"properties\": {\n       \"dimensions\": {\n        \"description\": \"A dimension is a name/value pair that is part of the identity of a metric.\",\n        \"items\": {\n         \"type\": \"object\"\n        },\n        \"title\": \"Dimensions\",\n        \"type\": \"array\"\n       },\n       \"metric_name\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/DynamoDBMetrics\"\n         }\n        ],\n        \"description\": \"The name of the metric, with 
or without spaces.\",\n        \"title\": \"Metric Name\"\n       },\n       \"period\": {\n        \"default\": 60,\n        \"description\": \"The granularity, in seconds, of the returned data points.\",\n        \"title\": \"Period\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the cloudwatch.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"statistics\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/StatisticsType\"\n         }\n        ],\n        \"description\": \"Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum\",\n        \"title\": \"Statistics\"\n       },\n       \"timeSince\": {\n        \"description\": \"Starting from now, window (in seconds) for which you want to get the datapoints for.\",\n        \"title\": \"Time Since\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [\n       \"metric_name\",\n       \"dimensions\",\n       \"timeSince\",\n       \"statistics\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_cloudwatch_metrics_dynamodb\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Monitor PROVISIONED READ CAPACITY UNITS  for DynamoDB\",\n    \"nouns\": [\n     \"aws\",\n     \"cloudwatch\",\n     \"metrics\",\n     \"dynamodb\"\n    ],\n    \"orderProperties\": [\n     \"metric_name\",\n     \"dimensions\",\n     \"period\",\n     \"timeSince\",\n     \"statistics\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_get_cloudwatch_metrics_dynamodb\"\n    ],\n    \"title\": \"Monitor PROVISIONED READ CAPACITY UNITS  for DynamoDB\",\n    \"trusted\": true,\n    \"verbs\": [\n     \"get\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    
\"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import enum\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, List\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"from unskript.legos.aws.aws_get_handle import Session\\n\",\n    \"from unskript.enums.aws_cloudwatch_enums import DynamoDBMetrics\\n\",\n    \"from unskript.enums.aws_k8s_enums import StatisticsType\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_cloudwatch_metrics_dynamodb_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    plt.show()\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_cloudwatch_metrics_dynamodb(\\n\",\n    \"    hdl: Session,\\n\",\n    \"    metric_name: DynamoDBMetrics,\\n\",\n    \"    dimensions: List[dict],\\n\",\n    \"    timeSince: int,\\n\",\n    \"    statistics: StatisticsType,\\n\",\n    \"    region: str,\\n\",\n    \"    period: int = 60,\\n\",\n    \") -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_get_cloudwatch_metrics_dynamodb shows plotted AWS cloudwatch statistics for Dynamodb.\\n\",\n    \"\\n\",\n    \"    :type metric_name: DynamoDBMetrics\\n\",\n    \"    :param metric_name: The name of the metric, with or without spaces.\\n\",\n    \"\\n\",\n    \"    :type dimensions: List[dict]\\n\",\n    \"    :param dimensions: A dimension is a name/value pair that is part of the identity of a metric.\\n\",\n    \"\\n\",\n    \"    :type period: int\\n\",\n    \"    :param period: The granularity, in seconds, of the returned data points.\\n\",\n    \"\\n\",\n    \"    :type timeSince: int\\n\",\n    \"    :param timeSince: Starting from now, 
window (in seconds) for which you want to get the datapoints for.\\n\",\n    \"\\n\",\n    \"    :type statistics: StatisticsType\\n\",\n    \"    :param statistics: Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum.\\n\",\n    \"\\n\",\n    \"    :type region: string\\n\",\n    \"    :param region: AWS Region of the cloudwatch.\\n\",\n    \"\\n\",\n    \"    :rtype: Shows plotted statistics.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    metric_name = metric_name.value if metric_name else None\\n\",\n    \"    statistics = statistics.value if statistics else None\\n\",\n    \"    cloudwatchClient = hdl.client(\\\"cloudwatch\\\", region_name=region)\\n\",\n    \"    # Gets metric data.\\n\",\n    \"    res = cloudwatchClient.get_metric_data(\\n\",\n    \"        MetricDataQueries=[\\n\",\n    \"            {\\n\",\n    \"                'Id': metric_name.lower(),\\n\",\n    \"                'MetricStat': {\\n\",\n    \"                    'Metric': {\\n\",\n    \"                        'Namespace': 'AWS/DynamoDB',\\n\",\n    \"                        'MetricName': metric_name,\\n\",\n    \"                        'Dimensions': dimensions\\n\",\n    \"                    },\\n\",\n    \"                    'Period': period,\\n\",\n    \"                    'Stat': statistics,\\n\",\n    \"                },\\n\",\n    \"            },\\n\",\n    \"        ],\\n\",\n    \"        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\\n\",\n    \"        EndTime=datetime.utcnow(),\\n\",\n    \"        ScanBy='TimestampAscending'\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"    timestamps = []\\n\",\n    \"    values = []\\n\",\n    \"\\n\",\n    \"    for timestamp in res['MetricDataResults'][0]['Timestamps']:\\n\",\n    \"        timestamps.append(timestamp)\\n\",\n    \"    for value in res['MetricDataResults'][0]['Values']:\\n\",\n    \"        values.append(value)\\n\",\n    \"\\n\",\n    \"    # Data points already arrive in ascending order (ScanBy='TimestampAscending');\\n\",\n    \"    # sorting timestamps and values independently would break their pairing.\\n\",\n    \"\\n\",\n    \"    plt.plot_date(timestamps, values, \\\"-o\\\")\\n\",\n    \"\\n\",\n    \"    data = []\\n\",\n    \"    for dt, val in zip(res['MetricDataResults'][0]['Timestamps'], res['MetricDataResults'][0]['Values']):\\n\",\n    \"        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\\n\",\n    \"    head = [\\\"Timestamp\\\", \\\"Value\\\"]\\n\",\n    \"    table = tabulate(data, headers=head, tablefmt=\\\"grid\\\")\\n\",\n    \"\\n\",\n    \"    return table\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"dimensions\\\": \\\"[{\\\\\\\\\\\"Name\\\\\\\\\\\":\\\\\\\\\\\"TableName\\\\\\\\\\\",\\\\\\\\\\\"Value\\\\\\\\\\\":\\\\\\\\\\\"test\\\\\\\\\\\"}]\\\",\\n\",\n    \"    \\\"metric_name\\\": \\\"DynamoDBMetrics.PROVISIONEDREADCAPACITYUNITS\\\",\\n\",\n    \"    \\\"period\\\": \\\"Period\\\",\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"statistics\\\": \\\"StatisticsType.SAMPLE_COUNT\\\",\\n\",\n    \"    \\\"timeSince\\\": \\\"Time_Since\\\"\\n\",\n    \"    }''')\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_cloudwatch_metrics_dynamodb, lego_printer=aws_get_cloudwatch_metrics_dynamodb_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c5a24402-4e87-4c9f-bebd-d926f57d5415\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Account Provisioned Read Capacity Units\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Account Provisioned Read Capacity Units\"\n   },\n   \"source\": [\n    \"Here we will use the unSkript Get AWS CloudWatch Metrics for AWS/DynamoDB Lego. This lego takes metric_name, dimensions, period, timeSince, statistics, and region as input. 
These inputs are used to get CloudWatch metrics of DynamoDB for Account Provisioned Read Capacity Units.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 15,\n   \"id\": \"a9219574-6a0b-4563-a0ed-0dc0ea65e588\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5baddb8c2b083c19c73cec00f89256d9f79aac1c6ecaca3333864240201c85fd\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get AWS CloudWatch Metrics for AWS DynamoDB\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-29T12:26:22.805Z\"\n    },\n    \"id\": 157,\n    \"index\": 157,\n    \"inputData\": [\n     {\n      \"dimensions\": {\n       \"constant\": false,\n       \"value\": \"[{\\\"Name\\\":\\\"TableName\\\",\\\"Value\\\":\\\"test\\\"}]\"\n      },\n      \"metric_name\": {\n       \"constant\": true,\n       \"value\": \"AccountProvisionedReadCapacityUtilization\"\n      },\n      \"period\": {\n       \"constant\": false,\n       \"value\": \"Period\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"statistics\": {\n       \"constant\": true,\n       \"value\": \"SampleCount\"\n      },\n      \"timeSince\": {\n       \"constant\": false,\n       \"value\": \"Time_Since\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"definitions\": {\n       \"DynamoDBMetrics\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"AccountMaxReads\",\n         \"AccountMaxTableLevelReads\",\n         \"AccountMaxTableLevelWrites\",\n         \"AccountMaxWrites\",\n         
\"AccountProvisionedReadCapacityUtilization\",\n         \"AccountProvisionedWriteCapacityUtilization\",\n         \"AgeOfOldestUnreplicatedRecord\",\n         \"ConditionalCheckFailedRequests\",\n         \"ConsumedChangeDataCaptureUnits\",\n         \"ConsumedReadCapacityUnits\",\n         \"ConsumedWriteCapacityUnits\",\n         \"FailedToReplicateRecordCount\",\n         \"MaxProvisionedTableWriteCapacityUtilization\",\n         \"OnlineIndexConsumedWriteCapacity\",\n         \"OnlineIndexPercentageProgress\",\n         \"OnlineIndexThrottleEvents\",\n         \"PendingReplicationCount\",\n         \"ProvisionedReadCapacityUnits\",\n         \"ProvisionedWriteCapacityUnits\",\n         \"ReadThrottleEvents\",\n         \"ReplicationLatency\",\n         \"ReturnedBytes\",\n         \"ReturnedItemCount\",\n         \"ReturnedRecordsCount\",\n         \"SuccessfulRequestLatency\",\n         \"SystemErrors\",\n         \"TimeToLiveDeletedItemCount\",\n         \"ThrottledPutRecordCount\",\n         \"ThrottledRequests\",\n         \"TransactionConflict\",\n         \"UserErrors\",\n         \"WriteThrottleEvents\"\n        ],\n        \"title\": \"DynamoDBMetrics\"\n       },\n       \"StatisticsType\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"SampleCount\",\n         \"Average\",\n         \"Sum\",\n         \"Minimum\",\n         \"Maximum\",\n         \"Percentile\"\n        ],\n        \"title\": \"StatisticsType\"\n       }\n      },\n      \"properties\": {\n       \"dimensions\": {\n        \"description\": \"A dimension is a name/value pair that is part of the identity of a metric.\",\n        \"items\": {\n         \"type\": \"object\"\n        },\n        \"title\": \"Dimensions\",\n        \"type\": \"array\"\n       },\n       \"metric_name\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/DynamoDBMetrics\"\n         }\n        ],\n        \"description\": \"The name of the metric, with 
or without spaces.\",\n        \"title\": \"Metric Name\"\n       },\n       \"period\": {\n        \"default\": 60,\n        \"description\": \"The granularity, in seconds, of the returned data points.\",\n        \"title\": \"Period\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the cloudwatch.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"statistics\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/StatisticsType\"\n         }\n        ],\n        \"description\": \"Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum\",\n        \"title\": \"Statistics\"\n       },\n       \"timeSince\": {\n        \"description\": \"Starting from now, window (in seconds) for which you want to get the datapoints for.\",\n        \"title\": \"Time Since\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [\n       \"metric_name\",\n       \"dimensions\",\n       \"timeSince\",\n       \"statistics\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_cloudwatch_metrics_dynamodb\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Monitor ACCOUNT PROVISIONED READ CAPACITY UTILIZATION  for DynamoDB\",\n    \"nouns\": [\n     \"aws\",\n     \"cloudwatch\",\n     \"metrics\",\n     \"dynamodb\"\n    ],\n    \"orderProperties\": [\n     \"metric_name\",\n     \"dimensions\",\n     \"period\",\n     \"timeSince\",\n     \"statistics\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_get_cloudwatch_metrics_dynamodb\"\n    ],\n    \"title\": \"Monitor ACCOUNT PROVISIONED READ CAPACITY UTILIZATION  for DynamoDB\",\n    \"trusted\": true,\n    \"verbs\": [\n     \"get\"\n    ]\n   },\n   \"outputs\": [],\n   
\"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import enum\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, List\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"from unskript.legos.aws.aws_get_handle import Session\\n\",\n    \"from unskript.enums.aws_cloudwatch_enums import DynamoDBMetrics\\n\",\n    \"from unskript.enums.aws_k8s_enums import StatisticsType\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_cloudwatch_metrics_dynamodb_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    plt.show()\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_cloudwatch_metrics_dynamodb(\\n\",\n    \"    hdl: Session,\\n\",\n    \"    metric_name: DynamoDBMetrics,\\n\",\n    \"    dimensions: List[dict],\\n\",\n    \"    timeSince: int,\\n\",\n    \"    statistics: StatisticsType,\\n\",\n    \"    region: str,\\n\",\n    \"    period: int = 60,\\n\",\n    \") -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_get_cloudwatch_metrics_dynamodb shows plotted AWS cloudwatch statistics for Dynamodb.\\n\",\n    \"\\n\",\n    \"    :type metric_name: DynamoDBMetrics\\n\",\n    \"    :param metric_name: The name of the metric, with or without spaces.\\n\",\n    \"\\n\",\n    \"    :type dimensions: List[dict]\\n\",\n    \"    :param dimensions: A dimension is a name/value pair that is part of the identity of a metric.\\n\",\n    \"\\n\",\n    \"    :type period: int\\n\",\n    \"    :param period: The granularity, in seconds, of the returned data points.\\n\",\n    \"\\n\",\n    \"    :type timeSince: int\\n\",\n    \"    :param timeSince: 
Starting from now, window (in seconds) for which you want to get the datapoints for.\\n\",\n    \"\\n\",\n    \"    :type statistics: StatisticsType\\n\",\n    \"    :param statistics: Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum.\\n\",\n    \"\\n\",\n    \"    :type region: string\\n\",\n    \"    :param region: AWS Region of the cloudwatch.\\n\",\n    \"\\n\",\n    \"    :rtype: Shows plotted statistics.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    metric_name = metric_name.value if metric_name else None\\n\",\n    \"    statistics = statistics.value if statistics else None\\n\",\n    \"    cloudwatchClient = hdl.client(\\\"cloudwatch\\\", region_name=region)\\n\",\n    \"    # Gets metric data.\\n\",\n    \"    res = cloudwatchClient.get_metric_data(\\n\",\n    \"        MetricDataQueries=[\\n\",\n    \"            {\\n\",\n    \"                'Id': metric_name.lower(),\\n\",\n    \"                'MetricStat': {\\n\",\n    \"                    'Metric': {\\n\",\n    \"                        'Namespace': 'AWS/DynamoDB',\\n\",\n    \"                        'MetricName': metric_name,\\n\",\n    \"                        'Dimensions': dimensions\\n\",\n    \"                    },\\n\",\n    \"                    'Period': period,\\n\",\n    \"                    'Stat': statistics,\\n\",\n    \"                },\\n\",\n    \"            },\\n\",\n    \"        ],\\n\",\n    \"        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\\n\",\n    \"        EndTime=datetime.utcnow(),\\n\",\n    \"        ScanBy='TimestampAscending'\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"    timestamps = []\\n\",\n    \"    values = []\\n\",\n    \"\\n\",\n    \"    for timestamp in res['MetricDataResults'][0]['Timestamps']:\\n\",\n    \"        timestamps.append(timestamp)\\n\",\n    \"    for value in res['MetricDataResults'][0]['Values']:\\n\",\n    \"        values.append(value)\\n\",\n    \"\\n\",\n    \"    
timestamps.sort()\\n\",\n    \"    values.sort()\\n\",\n    \"\\n\",\n    \"    plt.plot_date(timestamps, values, \\\"-o\\\")\\n\",\n    \"\\n\",\n    \"    data = []\\n\",\n    \"    for dt, val in zip(res['MetricDataResults'][0]['Timestamps'], res['MetricDataResults'][0]['Values']):\\n\",\n    \"        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\\n\",\n    \"    head = [\\\"Timestamp\\\", \\\"Value\\\"]\\n\",\n    \"    table = tabulate(data, headers=head, tablefmt=\\\"grid\\\")\\n\",\n    \"\\n\",\n    \"    return table\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"dimensions\\\": \\\"[{\\\\\\\\\\\"Name\\\\\\\\\\\":\\\\\\\\\\\"TableName\\\\\\\\\\\",\\\\\\\\\\\"Value\\\\\\\\\\\":\\\\\\\\\\\"test\\\\\\\\\\\"}]\\\",\\n\",\n    \"    \\\"metric_name\\\": \\\"DynamoDBMetrics.ACCOUNTPROVISIONEDREADCAPACITYUTILIZATION\\\",\\n\",\n    \"    \\\"period\\\": \\\"Period\\\",\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"statistics\\\": \\\"StatisticsType.SAMPLE_COUNT\\\",\\n\",\n    \"    \\\"timeSince\\\": \\\"Time_Since\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_cloudwatch_metrics_dynamodb, lego_printer=aws_get_cloudwatch_metrics_dynamodb_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5313405e-6e04-4cde-8cfc-d902cb932cd9\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Account Provisioned Write Capacity Utilization\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Account Provisioned Write Capacity Utilization\"\n   },\n   \"source\": [\n    \"Here we will use unSkript Get AWS CloudWatch Metrics for AWS/DynamoDB Lego. 
This Lego takes metric_name, dimensions, period, timeSince, statistics, and region as input. These inputs are used to get CloudWatch metrics of DynamoDB for Account Provisioned Write Capacity Utilization.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 25,\n   \"id\": \"a99bfd3e-b179-4df2-97e4-bf0410f920f6\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5baddb8c2b083c19c73cec00f89256d9f79aac1c6ecaca3333864240201c85fd\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get AWS CloudWatch Metrics for AWS DynamoDB\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-08-18T11:36:20.135Z\"\n    },\n    \"id\": 157,\n    \"index\": 157,\n    \"inputData\": [\n     {\n      \"dimensions\": {\n       \"constant\": false,\n       \"value\": \"[{\\\"Name\\\":\\\"TableName\\\",\\\"Value\\\":\\\"test\\\"}]\"\n      },\n      \"metric_name\": {\n       \"constant\": true,\n       \"value\": \"AccountProvisionedWriteCapacityUtilization\"\n      },\n      \"period\": {\n       \"constant\": false,\n       \"value\": \"Period\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"statistics\": {\n       \"constant\": true,\n       \"value\": \"SampleCount\"\n      },\n      \"timeSince\": {\n       \"constant\": false,\n       \"value\": \"Time_Since\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"definitions\": {\n       \"DynamoDBMetrics\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"AccountMaxReads\",\n         \"AccountMaxTableLevelReads\",\n         
\"AccountMaxTableLevelWrites\",\n         \"AccountMaxWrites\",\n         \"AccountProvisionedReadCapacityUtilization\",\n         \"AccountProvisionedWriteCapacityUtilization\",\n         \"AgeOfOldestUnreplicatedRecord\",\n         \"ConditionalCheckFailedRequests\",\n         \"ConsumedChangeDataCaptureUnits\",\n         \"ConsumedReadCapacityUnits\",\n         \"ConsumedWriteCapacityUnits\",\n         \"FailedToReplicateRecordCount\",\n         \"MaxProvisionedTableWriteCapacityUtilization\",\n         \"OnlineIndexConsumedWriteCapacity\",\n         \"OnlineIndexPercentageProgress\",\n         \"OnlineIndexThrottleEvents\",\n         \"PendingReplicationCount\",\n         \"ProvisionedReadCapacityUnits\",\n         \"ProvisionedWriteCapacityUnits\",\n         \"ReadThrottleEvents\",\n         \"ReplicationLatency\",\n         \"ReturnedBytes\",\n         \"ReturnedItemCount\",\n         \"ReturnedRecordsCount\",\n         \"SuccessfulRequestLatency\",\n         \"SystemErrors\",\n         \"TimeToLiveDeletedItemCount\",\n         \"ThrottledPutRecordCount\",\n         \"ThrottledRequests\",\n         \"TransactionConflict\",\n         \"UserErrors\",\n         \"WriteThrottleEvents\"\n        ],\n        \"title\": \"DynamoDBMetrics\"\n       },\n       \"StatisticsType\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"SampleCount\",\n         \"Average\",\n         \"Sum\",\n         \"Minimum\",\n         \"Maximum\",\n         \"Percentile\"\n        ],\n        \"title\": \"StatisticsType\"\n       }\n      },\n      \"properties\": {\n       \"dimensions\": {\n        \"description\": \"A dimension is a name/value pair that is part of the identity of a metric.\",\n        \"items\": {\n         \"type\": \"object\"\n        },\n        \"title\": \"Dimensions\",\n        \"type\": \"array\"\n       },\n       \"metric_name\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/DynamoDBMetrics\"\n      
   }\n        ],\n        \"description\": \"The name of the metric, with or without spaces.\",\n        \"title\": \"Metric Name\"\n       },\n       \"period\": {\n        \"default\": 60,\n        \"description\": \"The granularity, in seconds, of the returned data points.\",\n        \"title\": \"Period\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the cloudwatch.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"statistics\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/StatisticsType\"\n         }\n        ],\n        \"description\": \"Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum\",\n        \"title\": \"Statistics\"\n       },\n       \"timeSince\": {\n        \"description\": \"Starting from now, window (in seconds) for which you want to get the datapoints for.\",\n        \"title\": \"Time Since\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [\n       \"metric_name\",\n       \"dimensions\",\n       \"timeSince\",\n       \"statistics\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_cloudwatch_metrics_dynamodb\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Monitor ACCOUNT PROVISIONED WRITE CAPACITY UTILIZATION  for DynamoDB\",\n    \"nouns\": [\n     \"aws\",\n     \"cloudwatch\",\n     \"metrics\",\n     \"dynamodb\"\n    ],\n    \"orderProperties\": [\n     \"metric_name\",\n     \"dimensions\",\n     \"period\",\n     \"timeSince\",\n     \"statistics\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_get_cloudwatch_metrics_dynamodb\"\n    ],\n    \"title\": \"Monitor ACCOUNT PROVISIONED WRITE CAPACITY UTILIZATION  for DynamoDB\",\n    \"verbs\": [\n     \"get\"\n    ]\n   },\n   
\"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import enum\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, List\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"from unskript.legos.aws.aws_get_handle import Session\\n\",\n    \"from unskript.enums.aws_cloudwatch_enums import DynamoDBMetrics\\n\",\n    \"from unskript.enums.aws_k8s_enums import StatisticsType\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_cloudwatch_metrics_dynamodb_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    plt.show()\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_cloudwatch_metrics_dynamodb(\\n\",\n    \"    hdl: Session,\\n\",\n    \"    metric_name: DynamoDBMetrics,\\n\",\n    \"    dimensions: List[dict],\\n\",\n    \"    timeSince: int,\\n\",\n    \"    statistics: StatisticsType,\\n\",\n    \"    region: str,\\n\",\n    \"    period: int = 60,\\n\",\n    \") -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_get_cloudwatch_metrics_dynamodb shows plotted AWS cloudwatch statistics for Dynamodb.\\n\",\n    \"\\n\",\n    \"    :type metric_name: DynamoDBMetrics\\n\",\n    \"    :param metric_name: The name of the metric, with or without spaces.\\n\",\n    \"\\n\",\n    \"    :type dimensions: List[dict]\\n\",\n    \"    :param dimensions: A dimension is a name/value pair that is part of the identity of a metric.\\n\",\n    \"\\n\",\n    \"    :type period: int\\n\",\n    \"    :param period: The granularity, in seconds, of the returned data points.\\n\",\n    \"\\n\",\n    \"    :type timeSince: int\\n\",\n    \"    
:param timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\\n\",\n    \"\\n\",\n    \"    :type statistics: StatisticsType\\n\",\n    \"    :param statistics: Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum.\\n\",\n    \"\\n\",\n    \"    :type region: string\\n\",\n    \"    :param region: AWS Region of the cloudwatch.\\n\",\n    \"\\n\",\n    \"    :rtype: Shows plotted statistics.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    metric_name = metric_name.value if metric_name else None\\n\",\n    \"    statistics = statistics.value if statistics else None\\n\",\n    \"    cloudwatchClient = hdl.client(\\\"cloudwatch\\\", region_name=region)\\n\",\n    \"    # Gets metric data.\\n\",\n    \"    res = cloudwatchClient.get_metric_data(\\n\",\n    \"        MetricDataQueries=[\\n\",\n    \"            {\\n\",\n    \"                'Id': metric_name.lower(),\\n\",\n    \"                'MetricStat': {\\n\",\n    \"                    'Metric': {\\n\",\n    \"                        'Namespace': 'AWS/DynamoDB',\\n\",\n    \"                        'MetricName': metric_name,\\n\",\n    \"                        'Dimensions': dimensions\\n\",\n    \"                    },\\n\",\n    \"                    'Period': period,\\n\",\n    \"                    'Stat': statistics,\\n\",\n    \"                },\\n\",\n    \"            },\\n\",\n    \"        ],\\n\",\n    \"        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\\n\",\n    \"        EndTime=datetime.utcnow(),\\n\",\n    \"        ScanBy='TimestampAscending'\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"    timestamps = []\\n\",\n    \"    values = []\\n\",\n    \"\\n\",\n    \"    for timestamp in res['MetricDataResults'][0]['Timestamps']:\\n\",\n    \"        timestamps.append(timestamp)\\n\",\n    \"    for value in res['MetricDataResults'][0]['Values']:\\n\",\n    \"        values.append(value)\\n\",\n    
\"\\n\",\n    \"    timestamps.sort()\\n\",\n    \"    values.sort()\\n\",\n    \"\\n\",\n    \"    plt.plot_date(timestamps, values, \\\"-o\\\")\\n\",\n    \"\\n\",\n    \"    data = []\\n\",\n    \"    for dt, val in zip(res['MetricDataResults'][0]['Timestamps'], res['MetricDataResults'][0]['Values']):\\n\",\n    \"        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\\n\",\n    \"    head = [\\\"Timestamp\\\", \\\"Value\\\"]\\n\",\n    \"    table = tabulate(data, headers=head, tablefmt=\\\"grid\\\")\\n\",\n    \"\\n\",\n    \"    return table\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"dimensions\\\": \\\"[{\\\\\\\\\\\"Name\\\\\\\\\\\":\\\\\\\\\\\"TableName\\\\\\\\\\\",\\\\\\\\\\\"Value\\\\\\\\\\\":\\\\\\\\\\\"test\\\\\\\\\\\"}]\\\",\\n\",\n    \"    \\\"metric_name\\\": \\\"DynamoDBMetrics.ACCOUNTPROVISIONEDWRITECAPACITYUTILIZATION\\\",\\n\",\n    \"    \\\"period\\\": \\\"Period\\\",\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"statistics\\\": \\\"StatisticsType.SAMPLE_COUNT\\\",\\n\",\n    \"    \\\"timeSince\\\": \\\"Time_Since\\\"\\n\",\n    \"    }''')\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_cloudwatch_metrics_dynamodb, lego_printer=aws_get_cloudwatch_metrics_dynamodb_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"6790d98a-6039-4eb3-8360-5be866013461\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Max Provisioned Table Write Capacity Utilization\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Max Provisioned Table Write Capacity Utilization\"\n   },\n   \"source\": [\n    \"Here we will use unSkript Get AWS CloudWatch Metrics for AWS/DynamoDB Lego. 
This Lego takes metric_name, dimensions, period, timeSince, statistics, and region as input. These inputs are used to get CloudWatch metrics of DynamoDB for Max Provisioned Table Write Capacity Utilization.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 26,\n   \"id\": \"e42453c3-b288-4c7a-8722-0237dc783cf4\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5baddb8c2b083c19c73cec00f89256d9f79aac1c6ecaca3333864240201c85fd\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get AWS CloudWatch Metrics for AWS DynamoDB\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-08-18T11:53:22.625Z\"\n    },\n    \"id\": 157,\n    \"index\": 157,\n    \"inputData\": [\n     {\n      \"dimensions\": {\n       \"constant\": false,\n       \"value\": \"[{\\\"Name\\\":\\\"TableName\\\",\\\"Value\\\":\\\"test\\\"}]\"\n      },\n      \"metric_name\": {\n       \"constant\": true,\n       \"value\": \"MaxProvisionedTableWriteCapacityUtilization\"\n      },\n      \"period\": {\n       \"constant\": false,\n       \"value\": \"Period\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"statistics\": {\n       \"constant\": true,\n       \"value\": \"SampleCount\"\n      },\n      \"timeSince\": {\n       \"constant\": false,\n       \"value\": \"Time_Since\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"definitions\": {\n       \"DynamoDBMetrics\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"AccountMaxReads\",\n         \"AccountMaxTableLevelReads\",\n         
\"AccountMaxTableLevelWrites\",\n         \"AccountMaxWrites\",\n         \"AccountProvisionedReadCapacityUtilization\",\n         \"AccountProvisionedWriteCapacityUtilization\",\n         \"AgeOfOldestUnreplicatedRecord\",\n         \"ConditionalCheckFailedRequests\",\n         \"ConsumedChangeDataCaptureUnits\",\n         \"ConsumedReadCapacityUnits\",\n         \"ConsumedWriteCapacityUnits\",\n         \"FailedToReplicateRecordCount\",\n         \"MaxProvisionedTableWriteCapacityUtilization\",\n         \"OnlineIndexConsumedWriteCapacity\",\n         \"OnlineIndexPercentageProgress\",\n         \"OnlineIndexThrottleEvents\",\n         \"PendingReplicationCount\",\n         \"ProvisionedReadCapacityUnits\",\n         \"ProvisionedWriteCapacityUnits\",\n         \"ReadThrottleEvents\",\n         \"ReplicationLatency\",\n         \"ReturnedBytes\",\n         \"ReturnedItemCount\",\n         \"ReturnedRecordsCount\",\n         \"SuccessfulRequestLatency\",\n         \"SystemErrors\",\n         \"TimeToLiveDeletedItemCount\",\n         \"ThrottledPutRecordCount\",\n         \"ThrottledRequests\",\n         \"TransactionConflict\",\n         \"UserErrors\",\n         \"WriteThrottleEvents\"\n        ],\n        \"title\": \"DynamoDBMetrics\"\n       },\n       \"StatisticsType\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"SampleCount\",\n         \"Average\",\n         \"Sum\",\n         \"Minimum\",\n         \"Maximum\",\n         \"Percentile\"\n        ],\n        \"title\": \"StatisticsType\"\n       }\n      },\n      \"properties\": {\n       \"dimensions\": {\n        \"description\": \"A dimension is a name/value pair that is part of the identity of a metric.\",\n        \"items\": {\n         \"type\": \"object\"\n        },\n        \"title\": \"Dimensions\",\n        \"type\": \"array\"\n       },\n       \"metric_name\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/DynamoDBMetrics\"\n      
   }\n        ],\n        \"description\": \"The name of the metric, with or without spaces.\",\n        \"title\": \"Metric Name\"\n       },\n       \"period\": {\n        \"default\": 60,\n        \"description\": \"The granularity, in seconds, of the returned data points.\",\n        \"title\": \"Period\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the cloudwatch.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"statistics\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/StatisticsType\"\n         }\n        ],\n        \"description\": \"Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum\",\n        \"title\": \"Statistics\"\n       },\n       \"timeSince\": {\n        \"description\": \"Starting from now, window (in seconds) for which you want to get the datapoints for.\",\n        \"title\": \"Time Since\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [\n       \"metric_name\",\n       \"dimensions\",\n       \"timeSince\",\n       \"statistics\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_cloudwatch_metrics_dynamodb\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Monitor MAX PROVISIONED TABLE WRITE CAPACITY UTILIZATION  for DynamoDB\",\n    \"nouns\": [\n     \"aws\",\n     \"cloudwatch\",\n     \"metrics\",\n     \"dynamodb\"\n    ],\n    \"orderProperties\": [\n     \"metric_name\",\n     \"dimensions\",\n     \"period\",\n     \"timeSince\",\n     \"statistics\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_get_cloudwatch_metrics_dynamodb\"\n    ],\n    \"title\": \"Monitor MAX PROVISIONED TABLE WRITE CAPACITY UTILIZATION  for DynamoDB\",\n    \"verbs\": [\n     \"get\"\n    ]\n   
},\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import enum\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, List\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"from unskript.legos.aws.aws_get_handle import Session\\n\",\n    \"from unskript.enums.aws_cloudwatch_enums import DynamoDBMetrics\\n\",\n    \"from unskript.enums.aws_k8s_enums import StatisticsType\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_cloudwatch_metrics_dynamodb_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    plt.show()\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_cloudwatch_metrics_dynamodb(\\n\",\n    \"    hdl: Session,\\n\",\n    \"    metric_name: DynamoDBMetrics,\\n\",\n    \"    dimensions: List[dict],\\n\",\n    \"    timeSince: int,\\n\",\n    \"    statistics: StatisticsType,\\n\",\n    \"    region: str,\\n\",\n    \"    period: int = 60,\\n\",\n    \") -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_get_cloudwatch_metrics_dynamodb shows plotted AWS cloudwatch statistics for Dynamodb.\\n\",\n    \"\\n\",\n    \"    :type metric_name: DynamoDBMetrics\\n\",\n    \"    :param metric_name: The name of the metric, with or without spaces.\\n\",\n    \"\\n\",\n    \"    :type dimensions: List[dict]\\n\",\n    \"    :param dimensions: A dimension is a name/value pair that is part of the identity of a metric.\\n\",\n    \"\\n\",\n    \"    :type period: int\\n\",\n    \"    :param period: The granularity, in seconds, of the returned data points.\\n\",\n    \"\\n\",\n    \"    :type timeSince: int\\n\",\n    
\"    :param timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\\n\",\n    \"\\n\",\n    \"    :type statistics: StatisticsType\\n\",\n    \"    :param statistics: Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum.\\n\",\n    \"\\n\",\n    \"    :type region: string\\n\",\n    \"    :param region: AWS Region of the cloudwatch.\\n\",\n    \"\\n\",\n    \"    :rtype: Shows ploted statistics.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    metric_name = metric_name.value if metric_name else None\\n\",\n    \"    statistics = statistics.value if statistics else None\\n\",\n    \"    cloudwatchClient = hdl.client(\\\"cloudwatch\\\", region_name=region)\\n\",\n    \"    # Gets metric data.\\n\",\n    \"    res = cloudwatchClient.get_metric_data(\\n\",\n    \"        MetricDataQueries=[\\n\",\n    \"            {\\n\",\n    \"                'Id': metric_name.lower(),\\n\",\n    \"                'MetricStat': {\\n\",\n    \"                    'Metric': {\\n\",\n    \"                        'Namespace': 'AWS/DynamoDB',\\n\",\n    \"                        'MetricName': metric_name,\\n\",\n    \"                        'Dimensions': dimensions\\n\",\n    \"                    },\\n\",\n    \"                    'Period': period,\\n\",\n    \"                    'Stat': statistics,\\n\",\n    \"                },\\n\",\n    \"            },\\n\",\n    \"        ],\\n\",\n    \"        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\\n\",\n    \"        EndTime=datetime.utcnow(),\\n\",\n    \"        ScanBy='TimestampAscending'\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"    timestamps = []\\n\",\n    \"    values = []\\n\",\n    \"\\n\",\n    \"    for timestamp in res['MetricDataResults'][0]['Timestamps']:\\n\",\n    \"        timestamps.append(timestamp)\\n\",\n    \"    for value in res['MetricDataResults'][0]['Values']:\\n\",\n    \"        values.append(value)\\n\",\n   
 \"\\n\",\n    \"    timestamps.sort()\\n\",\n    \"    values.sort()\\n\",\n    \"\\n\",\n    \"    plt.plot_date(timestamps, values, \\\"-o\\\")\\n\",\n    \"\\n\",\n    \"    data = []\\n\",\n    \"    for dt, val in zip(res['MetricDataResults'][0]['Timestamps'], res['MetricDataResults'][0]['Values']):\\n\",\n    \"        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\\n\",\n    \"    head = [\\\"Timestamp\\\", \\\"Value\\\"]\\n\",\n    \"    table = tabulate(data, headers=head, tablefmt=\\\"grid\\\")\\n\",\n    \"\\n\",\n    \"    return table\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"dimensions\\\": \\\"[{\\\\\\\\\\\"Name\\\\\\\\\\\":\\\\\\\\\\\"TableName\\\\\\\\\\\",\\\\\\\\\\\"Value\\\\\\\\\\\":\\\\\\\\\\\"test\\\\\\\\\\\"}]\\\",\\n\",\n    \"    \\\"metric_name\\\": \\\"DynamoDBMetrics.MAXPROVISIONEDTABLEWRITECAPACITYUTILIZATION\\\",\\n\",\n    \"    \\\"period\\\": \\\"Period\\\",\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"statistics\\\": \\\"StatisticsType.SAMPLE_COUNT\\\",\\n\",\n    \"    \\\"timeSince\\\": \\\"Time_Since\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_cloudwatch_metrics_dynamodb, lego_printer=aws_get_cloudwatch_metrics_dynamodb_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f493f8af-147b-42b8-8b17-7c54e9063968\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Provisioned Write Capacity Units\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Provisioned Write Capacity Units\"\n   },\n   \"source\": [\n    \"Here we will use unSkript Get AWS CloudWatch Metrics for AWS/DynamoDB Lego. This lego takes metric_name, dimensions, period, timeSince, statistics, region and period as input. 
These inputs are used to get CloudWatch metrics of DynamoDB for Provisioned Write Capacity Units.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 27,\n   \"id\": \"fdd1519a-ccc1-4933-96c1-09ebd8230863\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5baddb8c2b083c19c73cec00f89256d9f79aac1c6ecaca3333864240201c85fd\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get AWS CloudWatch Metrics for AWS DynamoDB\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-08-18T11:55:46.643Z\"\n    },\n    \"id\": 157,\n    \"index\": 157,\n    \"inputData\": [\n     {\n      \"dimensions\": {\n       \"constant\": false,\n       \"value\": \"[{\\\"Name\\\":\\\"TableName\\\",\\\"Value\\\":\\\"test\\\"}]\"\n      },\n      \"metric_name\": {\n       \"constant\": true,\n       \"value\": \"ProvisionedWriteCapacityUnits\"\n      },\n      \"period\": {\n       \"constant\": false,\n       \"value\": \"Period\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"statistics\": {\n       \"constant\": true,\n       \"value\": \"SampleCount\"\n      },\n      \"timeSince\": {\n       \"constant\": false,\n       \"value\": \"Time_Since\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"definitions\": {\n       \"DynamoDBMetrics\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"AccountMaxReads\",\n         \"AccountMaxTableLevelReads\",\n         \"AccountMaxTableLevelWrites\",\n         \"AccountMaxWrites\",\n         \"AccountProvisionedReadCapacityUtilization\",\n         
\"AccountProvisionedWriteCapacityUtilization\",\n         \"AgeOfOldestUnreplicatedRecord\",\n         \"ConditionalCheckFailedRequests\",\n         \"ConsumedChangeDataCaptureUnits\",\n         \"ConsumedReadCapacityUnits\",\n         \"ConsumedWriteCapacityUnits\",\n         \"FailedToReplicateRecordCount\",\n         \"MaxProvisionedTableWriteCapacityUtilization\",\n         \"OnlineIndexConsumedWriteCapacity\",\n         \"OnlineIndexPercentageProgress\",\n         \"OnlineIndexThrottleEvents\",\n         \"PendingReplicationCount\",\n         \"ProvisionedReadCapacityUnits\",\n         \"ProvisionedWriteCapacityUnits\",\n         \"ReadThrottleEvents\",\n         \"ReplicationLatency\",\n         \"ReturnedBytes\",\n         \"ReturnedItemCount\",\n         \"ReturnedRecordsCount\",\n         \"SuccessfulRequestLatency\",\n         \"SystemErrors\",\n         \"TimeToLiveDeletedItemCount\",\n         \"ThrottledPutRecordCount\",\n         \"ThrottledRequests\",\n         \"TransactionConflict\",\n         \"UserErrors\",\n         \"WriteThrottleEvents\"\n        ],\n        \"title\": \"DynamoDBMetrics\"\n       },\n       \"StatisticsType\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"SampleCount\",\n         \"Average\",\n         \"Sum\",\n         \"Minimum\",\n         \"Maximum\",\n         \"Percentile\"\n        ],\n        \"title\": \"StatisticsType\"\n       }\n      },\n      \"properties\": {\n       \"dimensions\": {\n        \"description\": \"A dimension is a name/value pair that is part of the identity of a metric.\",\n        \"items\": {\n         \"type\": \"object\"\n        },\n        \"title\": \"Dimensions\",\n        \"type\": \"array\"\n       },\n       \"metric_name\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/DynamoDBMetrics\"\n         }\n        ],\n        \"description\": \"The name of the metric, with or without spaces.\",\n        \"title\": \"Metric 
Name\"\n       },\n       \"period\": {\n        \"default\": 60,\n        \"description\": \"The granularity, in seconds, of the returned data points.\",\n        \"title\": \"Period\",\n        \"type\": \"integer\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the cloudwatch.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"statistics\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/StatisticsType\"\n         }\n        ],\n        \"description\": \"Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum\",\n        \"title\": \"Statistics\"\n       },\n       \"timeSince\": {\n        \"description\": \"Starting from now, window (in seconds) for which you want to get the datapoints for.\",\n        \"title\": \"Time Since\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [\n       \"metric_name\",\n       \"dimensions\",\n       \"timeSince\",\n       \"statistics\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_cloudwatch_metrics_dynamodb\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Monitor PROVISIONED WRITE CAPACITY UNITS  for DynamoDB\",\n    \"nouns\": [\n     \"aws\",\n     \"cloudwatch\",\n     \"metrics\",\n     \"dynamodb\"\n    ],\n    \"orderProperties\": [\n     \"metric_name\",\n     \"dimensions\",\n     \"period\",\n     \"timeSince\",\n     \"statistics\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_get_cloudwatch_metrics_dynamodb\"\n    ],\n    \"title\": \"Monitor PROVISIONED WRITE CAPACITY UNITS  for DynamoDB\",\n    \"verbs\": [\n     \"get\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n  
  \"\\n\",\n    \"import enum\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, List\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"from unskript.legos.aws.aws_get_handle import Session\\n\",\n    \"from unskript.enums.aws_cloudwatch_enums import DynamoDBMetrics\\n\",\n    \"from unskript.enums.aws_k8s_enums import StatisticsType\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_cloudwatch_metrics_dynamodb_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    plt.show()\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_cloudwatch_metrics_dynamodb(\\n\",\n    \"    hdl: Session,\\n\",\n    \"    metric_name: DynamoDBMetrics,\\n\",\n    \"    dimensions: List[dict],\\n\",\n    \"    timeSince: int,\\n\",\n    \"    statistics: StatisticsType,\\n\",\n    \"    region: str,\\n\",\n    \"    period: int = 60,\\n\",\n    \") -> str:\\n\",\n    \"    \\\"\\\"\\\"aws_get_cloudwatch_metrics_dynamodb shows plotted AWS cloudwatch statistics for Dynamodb.\\n\",\n    \"\\n\",\n    \"    :type metric_name: DynamoDBMetrics\\n\",\n    \"    :param metric_name: The name of the metric, with or without spaces.\\n\",\n    \"\\n\",\n    \"    :type dimensions: List[dict]\\n\",\n    \"    :param dimensions: A dimension is a name/value pair that is part of the identity of a metric.\\n\",\n    \"\\n\",\n    \"    :type period: int\\n\",\n    \"    :param period: The granularity, in seconds, of the returned data points.\\n\",\n    \"\\n\",\n    \"    :type timeSince: int\\n\",\n    \"    :param timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\\n\",\n    \"\\n\",\n    \"    :type statistics: 
StatisticsType\\n\",\n    \"    :param statistics: Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum.\\n\",\n    \"\\n\",\n    \"    :type region: string\\n\",\n    \"    :param region: AWS Region of the cloudwatch.\\n\",\n    \"\\n\",\n    \"    :rtype: Shows plotted statistics.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    metric_name = metric_name.value if metric_name else None\\n\",\n    \"    statistics = statistics.value if statistics else None\\n\",\n    \"    cloudwatchClient = hdl.client(\\\"cloudwatch\\\", region_name=region)\\n\",\n    \"    # Gets metric data.\\n\",\n    \"    res = cloudwatchClient.get_metric_data(\\n\",\n    \"        MetricDataQueries=[\\n\",\n    \"            {\\n\",\n    \"                'Id': metric_name.lower(),\\n\",\n    \"                'MetricStat': {\\n\",\n    \"                    'Metric': {\\n\",\n    \"                        'Namespace': 'AWS/DynamoDB',\\n\",\n    \"                        'MetricName': metric_name,\\n\",\n    \"                        'Dimensions': dimensions\\n\",\n    \"                    },\\n\",\n    \"                    'Period': period,\\n\",\n    \"                    'Stat': statistics,\\n\",\n    \"                },\\n\",\n    \"            },\\n\",\n    \"        ],\\n\",\n    \"        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\\n\",\n    \"        EndTime=datetime.utcnow(),\\n\",\n    \"        ScanBy='TimestampAscending'\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"    timestamps = []\\n\",\n    \"    values = []\\n\",\n    \"\\n\",\n    \"    for timestamp in res['MetricDataResults'][0]['Timestamps']:\\n\",\n    \"        timestamps.append(timestamp)\\n\",\n    \"    for value in res['MetricDataResults'][0]['Values']:\\n\",\n    \"        values.append(value)\\n\",\n    \"\\n\",\n    \"    # Results already arrive in ascending timestamp order (ScanBy='TimestampAscending').\\n\",\n    \"    plt.plot_date(timestamps, values, \\\"-o\\\")\\n\",\n    
\"\\n\",\n    \"    data = []\\n\",\n    \"    for dt, val in zip(res['MetricDataResults'][0]['Timestamps'], res['MetricDataResults'][0]['Values']):\\n\",\n    \"        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\\n\",\n    \"    head = [\\\"Timestamp\\\", \\\"Value\\\"]\\n\",\n    \"    table = tabulate(data, headers=head, tablefmt=\\\"grid\\\")\\n\",\n    \"\\n\",\n    \"    return table\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"dimensions\\\": \\\"[{\\\\\\\\\\\"Name\\\\\\\\\\\":\\\\\\\\\\\"TableName\\\\\\\\\\\",\\\\\\\\\\\"Value\\\\\\\\\\\":\\\\\\\\\\\"test\\\\\\\\\\\"}]\\\",\\n\",\n    \"    \\\"metric_name\\\": \\\"DynamoDBMetrics.PROVISIONEDWRITECAPACITYUNITS\\\",\\n\",\n    \"    \\\"period\\\": \\\"Period\\\",\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"statistics\\\": \\\"StatisticsType.SAMPLE_COUNT\\\",\\n\",\n    \"    \\\"timeSince\\\": \\\"Time_Since\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_cloudwatch_metrics_dynamodb, lego_printer=aws_get_cloudwatch_metrics_dynamodb_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a9ff1794-1d3c-40e6-9070-ee2dcc9054d0\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this Runbook, we demonstrated the use of unSkript's AWS Legos to collect CloudWatch metrics for DynamoDB provisioned capacity. 
To view the full platform capabilities of unSkript please visit https://unskript.com\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Monitor AWS DynamoDB provision capacity\",\n   \"parameters\": [\n    \"Period\",\n    \"Region\",\n    \"Time_Since\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.9.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"Period\": {\n     \"default\": 300,\n     \"description\": \"The granularity, in seconds, of the returned data points.\",\n     \"title\": \"Period\",\n     \"type\": \"number\"\n    },\n    \"Region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"AWS Region\",\n     \"title\": \"Region\",\n     \"type\": \"string\"\n    },\n    \"Time_Since\": {\n     \"default\": 20800,\n     \"description\": \"Starting from now, window (in seconds) for which you want to get the datapoints for.\",\n     \"title\": \"Time_Since\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"Period\": null,\n   \"Region\": null,\n   \"Time_Since\": null\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Monitor_AWS_DynamoDB_provision_capacity.json",
    "content": "{\n  \"name\": \"Monitor AWS DynamoDB provision capacity\",\n  \"description\": \"This runbook can be used to collect CloudWatch data related to AWS DynamoDB provisioned capacity.\",\n  \"uuid\": \"bada29f95fb658b8b5912f7bfdeac4b21ba236c14391e0b7e877421092412780\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/README.md",
    "content": "# AWS RunBooks\n* [AWS Access Key Rotation for IAM users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Access_Key_Rotation.ipynb): This runbook can be used to configure AWS Access Key rotation. Changing access keys (which consist of an access key ID and a secret access key) on a regular schedule is a well-known security best practice because it shortens the period an access key is active and therefore reduces the business impact if they are compromised. Having an established process that is run regularly also ensures the operational steps around key rotation are verified, so changing a key is never a scary step.\n* [Add Lifecycle Policy to S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Lifecycle_Policy_To_S3_Buckets.ipynb): Attaching lifecycle policies to AWS S3 buckets enables us to automate the management of object lifecycle in your storage buckets. By configuring lifecycle policies, you can define rules that determine the actions to be taken on objects based on their age or other criteria. This includes transitioning objects to different storage classes, such as moving infrequently accessed data to lower-cost storage tiers or archiving them to Glacier, as well as setting expiration dates for objects. By attaching lifecycle policies to your S3 buckets, you can optimize storage costs by automatically moving data to the most cost-effective storage tier based on its lifecycle. Additionally, it allows you to efficiently manage data retention and comply with regulatory requirements or business policies regarding data expiration. This runbook helps us find all the buckets without any lifecycle policy and attach one to them.\n* [AWS Add Mandatory tags to EC2](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Mandatory_tags_to_EC2.ipynb): This xRunBook is a set of example actions that could be used to establish mandatory tagging to EC2 instances.  
It first tests instances for compliance and creates reports of instances that are missing the required tags. There is also an action to add tags to an instance, to help bring it into tag compliance.\n* [AWS Update Resources about to expire](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Tag_Across_Selected_AWS_Resources.ipynb): This finds resources that have an expiration tag that is about to expire. It can either send a Slack message in 'auto' mode, or can be used to manually remediate the issue interactively.\n* [AWS Bulk Update Resource Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Bulk_Update_Resource_Tag.ipynb): This runbook will find all AWS Resources tagged with a given key:value tag. It will then update the tag's value to a new value. This can be used to bulk update the owner of resources, or for any other reason you might need to change the tag value for many AWS resources.\n* [Change AWS EBS Volume To GP3 Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Change_EBS_Volume_To_GP3_Type.ipynb): This runbook can be used to change the type of an EBS volume to GP3 (General Purpose 3). The GP3 volume type has a number of advantages over its predecessors; GP3 volumes are ideal for a wide variety of applications that require high performance at low cost.\n* [Change AWS Route53 TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Change_Route53_TTL.ipynb): For a record in a hosted zone, a lower TTL means that more queries arrive at the name servers because the cached values expire sooner. If you configure a higher TTL for your records, then the intermediate resolvers cache the records for a longer time. As a result, fewer queries are received by the name servers. This configuration reduces the charges corresponding to the DNS queries answered. 
However, a higher TTL slows the propagation of record changes because the previous values are cached for longer periods. This Runbook can be used to configure a higher TTL value.\n* [Create IAM User with policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Create_New_IAM_User_With_Policy.ipynb): Creates a new IAM user with a security policy. Sends confirmation to Slack.\n* [Delete EBS Volume Attached to Stopped Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_EBS_Volumes_Attached_To_Stopped_Instances.ipynb): EBS (Elastic Block Storage) volumes are attached to EC2 Instances as storage devices. Unused (Unattached) EBS Volumes can keep accruing costs even when their associated EC2 instances are no longer running. These volumes need to be deleted if the instances they are attached to are no longer required. This runbook helps us find such volumes and delete them.\n* [Delete EBS Volume With Low Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_EBS_Volumes_With_Low_Usage.ipynb): This runbook can help us identify low usage Amazon Elastic Block Store (EBS) volumes and delete these volumes in order to lower the cost of your AWS bill. This is calculated using the VolumeUsage metric, which measures the percentage of the total storage space that is currently being used by an EBS volume. This metric is reported as a percentage value between 0 and 100.\n* [Delete ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_ECS_Clusters_with_Low_CPU_Utilization.ipynb): ECS clusters are a managed service that allows users to run Docker containers on AWS, making it easier to manage and scale containerized applications. However, running ECS clusters with low CPU utilization can result in wasted resources and unnecessary costs. 
AWS charges for the resources allocated to a cluster, regardless of whether they are fully utilized or not. By deleting clusters that are not being fully utilized, you can reduce the number of resources being allocated and lower the overall cost of running ECS. Furthermore, deleting unused or low-utilization clusters can also improve overall system performance by freeing up resources for other applications that require more processing power. This runbook helps us to identify such clusters and delete them.\n* [Delete AWS ELBs With No Targets Or Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_ELBs_With_No_Targets_Or_Instances.ipynb): ELBs are used to distribute incoming traffic across multiple targets or instances, but if those targets or instances are no longer in use, then the ELBs may be unnecessary and can be deleted to save costs. Deleting ELBs with no targets or instances is a simple but effective way to optimize costs in your AWS environment. By identifying and removing these unused ELBs, you can reduce the number of resources you are paying for and avoid unnecessary charges. This runbook helps you identify all types of ELBs (Network, Application, Classic) that don't have any target groups or instances attached to them.\n* [Delete IAM profile](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_IAM_User.ipynb): This runbook is the inverse of Create IAM user with profile - it removes the profile, the login and then the IAM user itself.\n* [Delete Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Old_EBS_Snapshots.ipynb): Amazon Elastic Block Store (EBS) snapshots are created incrementally: an initial snapshot will include all the data on the disk, and subsequent snapshots will only store the blocks on the volume that have changed since the prior snapshot. Unchanged data is not stored, but referenced using the previous snapshot. 
This runbook helps us to find old EBS snapshots and thereby lower storage costs.\n* [Delete RDS Instances with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_RDS_Instances_with_Low_CPU_Utilization.ipynb): Deleting RDS instances with low CPU utilization is a cost optimization strategy that involves identifying RDS instances with consistently low CPU usage and deleting them to save costs. This approach helps to eliminate unnecessary costs associated with running idle database instances that are not being fully utilized. This runbook helps us to find and delete such instances.\n* [Delete Redshift Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Redshift_Clusters_with_Low_CPU_Utilization.ipynb): Redshift clusters are the basic units of compute and storage in Amazon Redshift, and they can be configured to meet specific performance and cost requirements. In order to optimize the cost and performance of Redshift clusters, it is important to regularly monitor their CPU utilization. If a cluster is consistently showing low CPU utilization over an extended period of time, it may be a good idea to delete the cluster to save costs. This runbook helps us find such clusters and delete them.\n* [Delete Unattached AWS EBS Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unattached_EBS_Volume.ipynb): This runbook can be used to delete all unattached EBS Volumes within an AWS region. You can delete an Amazon EBS volume that you no longer need. After deletion, its data is gone and the volume can't be attached to any instance. 
So before deletion, you can store a snapshot of the volume, which you can use to re-create the volume later.\n* [Delete Unused AWS Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_AWS_Secrets.ipynb): This runbook can be used to delete unused secrets in AWS.\n* [Delete Unused AWS Log Streams](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_Log_Streams.ipynb): Cloudwatch will retain empty Log Streams after the data retention time period. Those log streams should be deleted in order to save costs. This runbook can find unused log streams over a threshold number of days and help you delete them.\n* [Delete Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_NAT_Gateways.ipynb): This runbook searches all regions for unused NAT gateways and deletes them.\n* [Delete Unused Route53 HealthChecks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_Route53_Healthchecks.ipynb): When we associate health checks with an endpoint, Amazon Route53 sends health check requests to the endpoint IP address. These health checks validate that the endpoint IP addresses are operating as intended. There may be multiple reasons that health checks are lying unused, for example: a health check was mistakenly configured against your application by another customer; a health check was configured from your account for testing purposes but wasn't deleted when testing was complete; a health check was based on domain names and hence requests were sent due to DNS caching; the Elastic Load Balancing service updated its public IP addresses due to scaling, and the IP addresses were reassigned to your load balancer; and many more. 
This runbook finds such health checks and deletes them to save AWS costs.\n* [AWS Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Detach_ec2_Instance_from_ASG.ipynb): This runbook can be used to detach an instance from an Auto Scaling group. You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move an instance out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\n* [AWS EC2 Disk Cleanup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_EC2_Disk_Cleanup.ipynb): This runbook locates large files in an EC2 instance and backs them up into a given S3 bucket. Afterwards, it deletes the backed-up files and sends a message on a specified Slack channel. It uses SSH and Linux commands to perform the functions it needs.\n* [Enforce HTTP Redirection across all AWS ALB instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Enforce_HTTP_Redirection_across_AWS_ALB.ipynb): This runbook can be used to enforce HTTP redirection across all AWS ALBs. Web encryption protocols like SSL and TLS have been around for nearly three decades. By securing web data in transit, these security measures ensure that third parties can’t simply intercept unencrypted data and cause harm. HTTPS uses the underlying SSL/TLS technology and is the standard way to communicate web data in an encrypted and authenticated manner instead of using the insecure HTTP protocol. 
In this runbook, we implement the industry best practice of redirecting all unencrypted HTTP data to the secure HTTPS protocol.\n* [AWS Ensure Redshift Clusters have Paused Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Ensure_Redshift_Clusters_have_Paused_Resume_Enabled.ipynb): This runbook finds Redshift clusters that don't have pause-resume enabled and schedules the pause-resume for the cluster.\n* [AWS Get unhealthy EC2 instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Elb_Unhealthy_Instances.ipynb): This runbook can be used to list unhealthy EC2 instances from an ELB. Sometimes it is difficult to determine why Amazon EC2 Auto Scaling didn't terminate an unhealthy instance from Activity History alone. You can find further details about an unhealthy instance's state, and how to terminate that instance, by checking a few extra things.\n* [AWS Redshift Get Daily Costs from AWS Products](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Redshift_Daily_Product_Costs.ipynb): This runbook can be used to create charts and alerts around your AWS product usage. It requires a Cost and Usage report to be live in Redshift.\n* [AWS Redshift Get Daily Costs from EC2 Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Redshift_EC2_Daily_Costs.ipynb): This runbook can be used to create charts and alerts around AWS EC2 usage. It requires a Cost and Usage report to be live in Redshift.\n* [AWS Lowering CloudTrail Costs by Removing Redundant Trails](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Lowering_AWS_CloudTrail_Costs_by_Removing_Redundant_Trails.ipynb): The AWS CloudTrail service allows developers to enable policies managing compliance, governance, and auditing of their AWS account. 
In addition, AWS CloudTrail offers logging, monitoring, and storage of any activity around actions related to your AWS structures. The service activates from the moment you set up your AWS account, and while it provides real-time activity visibility, it also means higher AWS costs. This runbook finds redundant trails in AWS.\n* [List unused Amazon EC2 key pairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Notify_About_Unused_Keypairs.ipynb): This runbook finds all EC2 key pairs that are not used by an EC2 instance and notifies a Slack channel about them. Optionally it can delete the key pairs based on user configuration.\n* [Publicly Accessible Amazon RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Publicly_Accessible_Amazon_RDS_Instances.ipynb): This runbook can be used to find the publicly accessible RDS instances for the given AWS region.\n* [Purchase Reserved Nodes For Long Running AWS ElastiCache Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Cache_Nodes_For_Long_Running_ElastiCache_Clusters.ipynb): Ensuring that long-running AWS ElastiCache clusters have Reserved Nodes purchased for them is an effective cost optimization strategy for AWS users. By committing to a specific capacity of ElastiCache nodes for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for ElastiCache clusters that are expected to run for an extended period and have predictable usage patterns. 
This runbook helps us optimize costs by ensuring that Reserved Nodes are purchased for these ElastiCache clusters.\n* [Purchase Reserved Instances For Long Running AWS RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Instances_For_Long_Running_RDS_Instances.ipynb): Ensuring that long-running AWS RDS instances have Reserved Instances purchased for them is an important cost optimization strategy for AWS users. By committing to a specific capacity of RDS instances for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for RDS instances that are expected to run for an extended period and have predictable usage patterns. This runbook helps us to optimize costs by ensuring that Reserved Instances are purchased for these RDS instances.\n* [Purchase Reserved Nodes For Long Running AWS Redshift Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Nodes_For_Long_Running_Redshift_Clusters.ipynb): Ensuring that long-running AWS Redshift Clusters have Reserved Nodes purchased for them is a critical cost optimization strategy. By committing to a specific capacity of Redshift nodes for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for Redshift Clusters that are expected to run for an extended period and have predictable usage patterns. 
This runbook helps us to ensure that Reserved Nodes are purchased for these clusters so that users can effectively plan ahead, reduce their AWS bill, and optimize their costs over time.\n* [Release Unattached AWS Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Release_Unattached_Elastic_IPs.ipynb): A disassociated Elastic IP address remains allocated to your account until you explicitly release it. AWS imposes a small hourly charge for Elastic IP addresses that are not associated with a running instance. This runbook can be used to release those unattached AWS Elastic IP addresses.\n* [Remediate unencrypted S3 buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Remediate_unencrypted_S3_buckets.ipynb): This runbook can be used to filter all the S3 buckets which are unencrypted and apply encryption on unencrypted S3 buckets.\n* [Renew AWS SSL Certificates that are close to expiration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Renew_SSL_Certificate.ipynb): This runbook can be used to list all AWS SSL (ACM) Certificates that need to be renewed within a given threshold number of days. Optionally it can renew the certificate using the AWS ACM service.\n* [AWS Restart unhealthy services in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Restart_Unhealthy_Services_Target_Group.ipynb): This runbook restarts unhealthy services in a target group. 
The restart command is provided via a tag attached to the instance.\n* [Restrict S3 Buckets with READ/WRITE Permissions to all Authenticated Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Restrict_S3_Buckets_with_READ_WRITE_Permissions.ipynb): This runbook will list all the S3 buckets, filter the buckets which have public READ/WRITE ACL permissions, and change those permissions to private in the given region.\n* [Secure Publicly accessible Amazon RDS Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Secure_Publicly_accessible_Amazon_RDS_Snapshot.ipynb): This runbook can be used to list all the manual database snapshots in the given region, get the publicly accessible DB snapshots in RDS, and modify those snapshots to private.\n* [Stop Idle EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Stop_Idle_EC2_Instances.ipynb): This runbook can be used to stop all EC2 instances that are idle, using a given CPU threshold and duration.\n* [Stop all Untagged AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Stop_Untagged_EC2_Instances.ipynb): This runbook can be used to stop all EC2 instances that are untagged.\n* [Terminate EC2 Instances Without Valid Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Terminate_EC2_Instances_Without_Valid_Lifetime_Tag.ipynb): This runbook can be used to list all the EC2 instances which don't have a lifetime tag and then terminate them.\n* [AWS Update RDS Instances from Old to New Generation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_RDS_Instances_from_Old_to_New_Generation.ipynb): This runbook can be used to find the old generation RDS instances for the given AWS region and modify them to the given instance class.\n* [AWS Redshift Update 
Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Redshift_Database.ipynb): This runbook can be used to update a Redshift database from a SQL file stored in S3.\n* [AWS Update Resource Tags](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Resource_Tags.ipynb): This runbook can be used to update an existing tag on any resource in an AWS region.\n* [AWS Add Tags Across Selected AWS Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Resources_About_To_Expire.ipynb): This finds resources missing a tag, and allows you to choose which resources should receive a specific tag/value pair.\n* [Encrypt unencrypted S3 buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_encrypt_unencrypted_S3_buckets.ipynb): This runbook can be used to find all unencrypted S3 buckets and apply encryption to them.\n* [Create a new AWS IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Add_new_IAM_user.ipynb): AWS has an inbuilt identity and access management system known as AWS IAM. IAM supports the concept of users, groups, roles, and privileges. An IAM user is an identity that can be created and assigned some privileges. This runbook can be used to create an AWS IAM user.\n* [Configure URL endpoint on a AWS CloudWatch alarm](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Configure_url_endpoint_on_a_cloudwatch_alarm.ipynb): Configures the URL endpoint on the SNS topic associated with a CloudWatch alarm. This allows external functions to be invoked within unSkript in response to an alert being generated.
Alarms can be attached to the handlers to perform data enrichment or remediation.\n* [Copy AMI to All Given AWS Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Copy_ami_to_all_given_AWS_regions.ipynb): This runbook can be used to copy an AMI from one region to multiple AWS regions using unSkript legos with AWS CLI commands. We can get all the available regions using AWS CLI commands.\n* [Delete Unused AWS NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_Unused_AWS_NAT_Gateways.ipynb): This runbook can be used to identify and remove any unused NAT Gateways. This allows us to adhere to best practices and avoid unnecessary costs. NAT gateways are used to connect a private instance with outside networks. When a NAT gateway is provisioned, AWS charges you based on the number of hours it was available and the data (GB) it processes.\n* [Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detach_Instance_from_ASG.ipynb): This runbook can be used to detach an instance from an Auto Scaling group. You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move it out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\n* [Detect ECS failed deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detect_ECS_failed_deployment.ipynb): This runbook checks if there is a failed deployment in progress for a service in an ECS cluster.
If it finds one, it sends the list of stopped tasks associated with this deployment, along with their stopped reason, to Slack.\n* [Enforce Mandatory Tags Across All AWS Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Enforce_Mandatory_Tags_Across_All_AWS_Resources.ipynb): This runbook can be used to enforce mandatory tags across all AWS resources. It gets all the untagged resources of the given region, discovers the tag keys of the given region, and attaches mandatory tags to all the untagged resources.\n* [Handle AWS EC2 Instance Scheduled to retire](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Find_EC2_Instances_Scheduled_to_retire.ipynb): To avoid unexpected interruptions, it's a good practice to check whether any EC2 instances are scheduled to retire. This runbook can be used to list the EC2 instances that are scheduled to retire. To handle the instance retirement, users can stop and restart the instance before the retirement date. That action moves the instance over to a more stable host.\n* [Create an IAM user using Principle of Least Privilege](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/IAM_security_least_privilege.ipynb): Extract usage details from CloudTrail for an existing user, apply the usage to a new IAM policy, and connect it to a new IAM profile.\n* [Monitor AWS DynamoDB provision capacity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Monitor_AWS_DynamoDB_provision_capacity.ipynb): This runbook can be used to collect CloudWatch data related to AWS DynamoDB provisioned capacity.\n* [Resize EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_EBS_Volume.ipynb): This runbook resizes the EBS volume to a specified amount. This runbook can be attached to disk-usage-related CloudWatch alarms to do the appropriate resizing.
It also extends the filesystem to use the new volume size.\n* [Resize list of pvcs.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_List_Of_Pvcs.ipynb): This runbook can be used to resize a list of PVCs in a namespace. By default, all PVCs in the namespace are resized.\n* [Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_PVC.ipynb): This runbook resizes the PVC to the input size.\n* [Restart AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Restart_AWS_EC2_Instances_By_Tag.ipynb): This runbook can be used to restart AWS EC2 instances.\n* [Launch AWS EC2 from AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Run_EC2_from_AMI.ipynb): This lego can be used to launch an AWS EC2 instance from an AMI in the given region.\n* [Troubleshooting Your EC2 Configuration in a Private Subnet](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Troubleshooting_Your_EC2_Configuration_in_Private_Subnet.ipynb): This runbook can be used to troubleshoot the configuration of an EC2 instance in a private subnet by capturing the VPC ID for a given instance ID. It uses the VPC ID to get Internet Gateway details, then tries to SSH and connect to the internet.\n* [Update and Manage AWS User permission](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Update_and_Manage_AWS_User_Permission.ipynb): This runbook can be used to update and manage AWS IAM user permissions.\n\n# AWS Actions\n* [AWS Start IAM Policy Generation ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/AWS_Start_IAM_Policy_Generation/README.md): Given a region, a CloudTrail ARN (where the logs are being recorded), a reference IAM ARN (whose usage we will parse), and a service role, this will begin the generation of an IAM policy.
The output is a String of the generation Id.\n* [Add Lifecycle Configuration to AWS S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_add_lifecycle_configuration_to_s3_bucket/README.md): Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration.\n* [Apply AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_apply_default_encryption_for_s3_buckets/README.md): Apply AWS Default Encryption for S3 Bucket\n* [Attach an EBS volume to an AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_ebs_to_instances/README.md): Attach an EBS volume to an AWS EC2 Instance\n* [AWS Attach New Policy to User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_iam_policy/README.md): AWS Attach New Policy to User\n* [AWS Attach Tags to Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_tags_to_resources/README.md): AWS Attach Tags to Resources\n* [AWS Change ACL Permission of public S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_change_acl_permissions_of_buckets/README.md): AWS Change ACL Permission of public S3 Bucket\n* [AWS Check if RDS instances are not M5 or T3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_rds_non_m5_t3_instances/README.md): AWS Check if RDS instances are not M5 or T3\n* [Check SSL Certificate Expiry](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_ssl_certificate_expiry/README.md): Check ACM SSL Certificate expiry date\n* [Attach a webhook endpoint to AWS Cloudwatch alarm](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_cloudwatch_attach_webhook_notification_to_alarm/README.md): Attach a webhook endpoint to one of the
SNS topics attached to the AWS Cloudwatch alarm.\n* [AWS Create IAM Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_IAMpolicy/README.md): Given an AWS policy (as a string), and the name for the policy, this will create an IAM policy.\n* [AWS Create Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_access_key/README.md): Create a new Access Key for the User\n* [Create AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_bucket/README.md): Create a new AWS S3 Bucket\n* [Create New IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_iam_user/README.md): Create New IAM User\n* [AWS Redshift Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_redshift_query/README.md): Make a SQL Query to the given AWS Redshift database\n* [Create Login profile for IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_user_login_profile/README.md): Create Login profile for IAM User\n* [AWS Create Snapshot For Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_volumes_snapshot/README.md): Create a snapshot of an EBS volume of the EC2 Instance for backing up the data stored in EBS\n* [AWS Delete Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_access_key/README.md): Delete an Access Key for a User\n* [Delete AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_bucket/README.md): Delete an AWS S3 Bucket\n* [AWS Delete Classic Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_classic_load_balancer/README.md): Delete Classic Elastic Load Balancers\n* [AWS Delete EBS
Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ebs_snapshot/README.md): Delete EBS Snapshot for an EC2 instance\n* [AWS Delete ECS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ecs_cluster/README.md): Delete AWS ECS Cluster\n* [AWS Delete Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_load_balancer/README.md): AWS Delete Load Balancer\n* [AWS Delete Log Stream](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_log_stream/README.md): AWS Delete Log Stream\n* [AWS Delete NAT Gateway](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_nat_gateway/README.md): AWS Delete NAT Gateway\n* [AWS Delete RDS Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_rds_instance/README.md): Delete AWS RDS Instance\n* [AWS Delete Redshift Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_redshift_cluster/README.md): Delete AWS Redshift Cluster\n* [AWS Delete Route 53 HealthCheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_route53_health_check/README.md): AWS Delete Route 53 HealthCheck\n* [Delete AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_s3_bucket_encryption/README.md): Delete AWS Default Encryption for S3 Bucket\n* [AWS Delete Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_secret/README.md): AWS Delete Secret\n* [Delete AWS EBS Volume by Volume ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_volume_by_id/README.md): Delete AWS Volume by Volume ID\n* [ Deregisters AWS Instances from a Load 
Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_deregister_instances/README.md): Deregisters AWS Instances from a Load Balancer\n* [AWS Describe Cloudtrails ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_describe_cloudtrail/README.md): Given an AWS Region, this Action returns a Dict with all of the Cloudtrail logs being recorded\n* [Detach an AWS Instance from an Elastic Block Store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_ebs_to_instances/README.md): Detach an AWS Instance from an Elastic Block Store.\n* [AWS Detach Instances From AutoScaling Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_instances_from_autoscaling_group/README.md): Use this Action to detach instances from an AWS AutoScaling Group\n* [EBS Modify Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ebs_modify_volume/README.md): Modify/Resize volume for Elastic Block Storage (EBS).\n* [AWS ECS Describe Task Definition.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_describe_task_definition/README.md): Describe AWS ECS Task Definition.\n* [ECS detect failed deployment ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_detect_failed_deployment/README.md): List of stopped tasks, associated with a deployment, along with their stopped reason\n* [Restart AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_service_restart/README.md): Restart an AWS ECS Service\n* [Update AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_update_service/README.md): Update AWS ECS Service\n* [ Copy EKS Pod logs to
bucket.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_copy_pod_logs_to_bucket/README.md):  Copy given EKS pod logs to given S3 Bucket.\n* [ Delete EKS POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_delete_pod/README.md):  Delete a EKS POD in a given Namespace\n* [List of EKS dead pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_dead_pods/README.md): Get list of all dead pods in a given EKS cluster\n* [List of EKS Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_namespaces/README.md): Get list of all Namespaces in a given EKS cluster\n* [List of EKS pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_pods/README.md): Get list of all pods in a given EKS cluster\n* [ List of EKS deployment for given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_deployments_name/README.md):  Get list of EKS deployment names for given Namespace\n* [Get CPU and memory utilization of node.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_node_cpu_memory/README.md):  Get CPU and memory utilization of given node.\n* [ Get EKS Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_nodes/README.md):  Get EKS Nodes\n* [ List of EKS pods not in RUNNING State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_not_running_pods/README.md):  Get list of all pods in a given EKS cluster that are not running.\n* [Get pod CPU and Memory usage from given namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_cpu_memory/README.md): Get all pod CPU and Memory usage from given namespace\n* [ EKS Get pod 
status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_status/README.md):  Get a Status of given POD in a given Namespace and EKS cluster name\n* [ EKS Get Running Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_running_pods/README.md):  Get a list of running pods from given namespace and EKS cluster name\n* [ Run Kubectl commands on EKS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_run_kubectl_cmd/README.md): This action runs a kubectl command on an AWS EKS Cluster\n* [Get AWS EMR Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_emr_get_instances/README.md): Get a list of EC2 Instances for an EMR cluster. Filtered by node type (MASTER|CORE|TASK)\n* [Run Command via AWS CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_cli_command/README.md): Execute command using AWS CLI\n* [ Run Command via SSM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_command_ssm/README.md):  Execute command on EC2 instance(s) using SSM\n* [AWS Filter All Manual Database Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_all_manual_database_snapshots/README.md): Use This Action to AWS Filter All Manual Database Snapshots\n* [Filter AWS Unattached EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_unattached_volumes/README.md): Filter AWS Unattached EBS Volume\n* [Filter AWS EBS Volume with Low IOPS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_volumes_with_low_iops/README.md): IOPS (Input/Output Operations Per Second) is a metric used to measure the amount of input/output operations that an EBS volume can perform per second.\n* [Filter AWS EC2 
Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_tags/README.md): Filter AWS EC2 Instance\n* [Filter AWS EC2 instance by VPC Ids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_vpc/README.md): Use this Action to Filter AWS EC2 Instance by VPC Ids\n* [Filter All AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_instances/README.md): Filter All AWS EC2 Instance\n* [Filter AWS EC2 Instances Without Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_without_lifetime_tag/README.md): Filter AWS EC2 Instances Without Lifetime Tag\n* [Filter AWS EC2 Instances Without Termination and Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/README.md): Filter AWS EC2 Instances Without Termination and Lifetime Tag, and check whether they are valid\n* [AWS Filter Large EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_large_ec2_instances/README.md): This Action filters all instances whose instanceType contains Large or xLarge, and that DO NOT have the large tag key/value.\n* [AWS Find Long Running EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_long_running_instances/README.md): This Action lists all instances that are older than the threshold\n* [AWS Filter Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_old_ebs_snapshots/README.md): This Action lists all snapshot details that are older than the threshold\n* [Get AWS public S3 Buckets using ACL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_public_s3_buckets_by_acl/README.md): Get AWS public S3 Buckets using
ACL\n* [Filter AWS Target groups by tag name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_target_groups_by_tags/README.md): Filter AWS Target groups which have the provided tag attached to them. It also returns the value of that tag for each target group\n* [Filter AWS Unencrypted S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unencrypted_s3_buckets/README.md): Filter AWS Unencrypted S3 Buckets\n* [Get Unhealthy instances from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unhealthy_instances_from_asg/README.md): Get Unhealthy instances from Auto Scaling Group\n* [Filter AWS Untagged EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_untagged_ec2_instances/README.md): Filter AWS Untagged EC2 Instances\n* [Filter AWS Unused Keypairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_keypairs/README.md): Filter AWS Unused Keypairs\n* [AWS Filter Unused Log Stream](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_log_streams/README.md): This Action lists all log streams, across all log groups, that have been unused beyond the given threshold.\n* [AWS Find Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_nat_gateway/README.md): This Action gets all of the NAT gateways that have zero traffic\n* [Find AWS ELBs with no targets or instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_elbs_with_no_targets_or_instances/README.md): Find AWS ELBs with no targets or instances attached to them.\n* [AWS Find Idle Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_idle_instances/README.md): Find Idle EC2 instances\n* [AWS Filter Lambdas
with Long Runtime](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_long_running_lambdas/README.md): This Action retrieves a list of all Lambda functions and searches the log events of each function for the given runtime (duration).\n* [AWS Find Low Connections RDS instances Per Day](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_low_connection_rds_instances/README.md): This Action will find RDS DB instances with a number of connections below the specified minimum in the specified region.\n* [AWS Find EMR Clusters of Old Generation Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_old_gen_emr_clusters/README.md): This Action lists EMR clusters of old-generation instances.\n* [AWS Find RDS Instances with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_rds_instances_with_low_cpu_utilization/README.md): This lego finds RDS instances that are not utilizing their CPU resources to their full potential.\n* [AWS Find Redshift Cluster without Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/README.md): Use this Action to find Redshift clusters for which Pause/Resume is not enabled\n* [AWS Find Redshift Clusters with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_clusters_with_low_cpu_utilization/README.md): Find underutilized Redshift clusters in terms of CPU utilization.\n* [AWS Find S3 Buckets without Lifecycle Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_s3_buckets_without_lifecycle_policies/README.md): S3 lifecycle policies enable you to automatically transition objects to different storage classes or delete them when they are no longer needed.
This Action finds all S3 buckets without lifecycle policies.\n* [Finding Redundant Trails in AWS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_finding_redundant_trails/README.md): This Action finds redundant CloudTrail trails by checking whether the attribute IncludeGlobalServiceEvents is true and then looking for multiple duplicate trails.\n* [AWS Get AWS Account Number](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_acount_number/README.md): Some AWS functions require the AWS Account number. This programmatically retrieves it.\n* [Get AWS CloudWatch Alarms List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alarms_list/README.md): Get AWS CloudWatch Alarms List\n* [Get AWS ALB Listeners Without HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alb_listeners_without_http_redirect/README.md): Get AWS ALB Listeners Without HTTP Redirection\n* [Get AWS EC2 Instances All ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_ec2_instances/README.md): Use this Action to get all AWS EC2 instances\n* [AWS Get All Load Balancers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_load_balancers/README.md): AWS Get All Load Balancers\n* [AWS Get All Service Names v3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_service_names/README.md): Get a list of all service names in a region\n* [AWS Get Untagged Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_untagged_resources/README.md): AWS Get Untagged Resources\n* [Get AWS AutoScaling Group Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_auto_scaling_instances/README.md): Use this Action to get AWS AutoScaling Group instances\n* [Get AWS Bucket
Size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_bucket_size/README.md): Get an AWS Bucket Size\n* [Get AWS EBS Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ebs/README.md): Get AWS CloudWatch Statistics for EBS volumes\n* [Get AWS EC2 Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2/README.md): Get AWS CloudWatch Metrics for EC2 instances. These could be CPU, Network, Disk based measurements\n* [Get AWS EC2 CPU Utilization Statistics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2_cpuutil/README.md): Get AWS CloudWatch Statistics for cpu utilization for EC2 instances\n* [Get AWS CloudWatch Metrics for AWS/ApplicationELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_applicationelb/README.md): Get AWS CloudWatch Metrics for AWS/ApplicationELB\n* [Get AWS CloudWatch Metrics for AWS/ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_classic_elb/README.md): Get AWS CloudWatch Metrics for Classic Loadbalancer\n* [Get AWS CloudWatch Metrics for AWS/DynamoDB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_dynamodb/README.md): Get AWS CloudWatch Metrics for AWS DynamoDB\n* [Get AWS CloudWatch Metrics for AWS/AutoScaling](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/README.md): Get AWS CloudWatch Metrics for AWS EC2 AutoScaling groups\n* [Get AWS CloudWatch Metrics for AWS/GatewayELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/README.md): Get AWS CloudWatch Metrics for AWS/GatewayELB\n* [Get 
AWS CloudWatch Metrics for AWS/Lambda](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_lambda/README.md): Get AWS CloudWatch Metrics for AWS/Lambda\n* [Get AWS CloudWatch Metrics for AWS/NetworkELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_network_elb/README.md): Get AWS CloudWatch Metrics for Network Loadbalancer\n* [Get AWS CloudWatch Metrics for AWS/RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_rds/README.md): Get AWS CloudWatch Metrics for AWS/RDS\n* [Get AWS CloudWatch Metrics for AWS/Redshift](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_redshift/README.md): Get AWS CloudWatch Metrics for AWS/Redshift\n* [Get AWS CloudWatch Metrics for AWS/SQS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_sqs/README.md): Get AWS CloudWatch Metrics for AWS/SQS\n* [Get AWS CloudWatch Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_statistics/README.md): Get AWS CloudWatch Statistics\n* [AWS Get Costs For All Services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_all_services/README.md): Get Costs for all AWS services in a given time period.\n* [AWS Get Costs For Data Transfer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_data_transfer/README.md): Get daily cost for Data Transfer in AWS\n* [AWS Get Daily Total Spend](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_daily_total_spend/README.md): AWS get daily total spend from Cost Explorer\n* [AWS Get EBS Volumes for Low 
Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volume_for_low_usage/README.md): This Action lists low-usage EBS volumes that used <10% of their capacity over the given threshold days.\n* [Get EBS Volumes By Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_by_type/README.md): Get EBS Volumes By Type\n* [Get AWS EBS Volume Without GP3 Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_without_gp3_type/README.md): AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\n* [Get EC2 CPU Consumption For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_cpu_consumption/README.md): Get EC2 CPU Consumption For All Instances\n* [Get EC2 Data Traffic In and Out For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_data_traffic/README.md): Get EC2 Data Traffic In and Out For All Instances\n* [Get Age of all EC2 Instances in Days](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_instance_age/README.md): Get Age of all EC2 Instances in Days\n* [AWS ECS Instances without AutoScaling policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_instances_without_autoscaling/README.md): AWS ECS Instances without AutoScaling policy.\n* [Get AWS ECS Service Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_services_status/README.md): Get the Status of an AWS ECS Service\n* [AWS ECS Services without AutoScaling policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_services_without_autoscaling/README.md): AWS ECS Services without AutoScaling policy.\n* [AWS
Get Generated Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_generated_policy/README.md): Given a Region and the ID of a policy generation job, this Action will return the policy (once it has been completed).\n* [Get AWS boto3 handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_handle/README.md): Get AWS boto3 handle\n* [AWS List IAM users without password policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_iam_users_without_password_policies/README.md): Get a list of all IAM users that have no password policy attached to them.\n* [AWS Get Idle EMR Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_idle_emr_clusters/README.md): This action lists EMR clusters that have been idle for more than the specified time.\n* [Get AWS Instance Details with Matching Private DNS Name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_detail_with_private_dns_name/README.md): Use this action to get details of an AWS EC2 Instance that matches a Private DNS Name\n* [Get AWS Instances Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_details/README.md): Get AWS Instances Details\n* [List All AWS EC2 Instances Under the ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instances/README.md):  Get a list of all AWS EC2 Instances from a given ELB\n* [AWS Get Internet Gateway by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_internet_gateway_by_vpc/README.md): AWS Get Internet Gateway by VPC ID\n* [Find AWS Lambdas Not Using ARM64 Graviton2 Processor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_not_using_arm_graviton2_processor/README.md): Find all AWS Lambda functions that are not 
using the Arm-based AWS Graviton2 processor for their runtime architecture\n* [Get AWS Lambdas With High Error Rate](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_with_high_error_rate/README.md): Get AWS Lambda Functions that exceed a given threshold error rate.\n* [AWS Get Long Running ElastiCache clusters Without Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/README.md): This action gets information about long-running ElastiCache clusters and their status, and checks if they have any reserved nodes associated with them.\n* [AWS Get Long Running RDS Instances Without Reserved Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_rds_instances_without_reserved_instances/README.md): This action gets information about long-running RDS instances and their status, and checks if they have any reserved instances associated with them.\n* [AWS Get Long Running Redshift Clusters Without Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_redshift_clusters_without_reserved_nodes/README.md): This action gets information about long-running clusters and their status, and checks if they have any reserved nodes associated with them.\n* [AWS Get NAT Gateway Info by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nat_gateway_by_vpc/README.md): This action is used to get details about the NAT gateways configured for a VPC.\n* [Get all Targets for Network Load Balancer (NLB)](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlb_targets/README.md): Use this action to get all targets for Network Load Balancer (NLB)\n* [AWS Get Network Load Balancer (NLB) without 
Targets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlbs_without_targets/README.md): Use this action to get AWS Network Load Balancer (NLB) without Targets\n* [AWS Get Older Generation RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_older_generation_rds_instances/README.md): AWS Get Older Generation RDS Instances action retrieves information about RDS instances using older generation instance types.\n* [AWS Get Private Address from NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_private_address_from_nat_gateways/README.md): This action is used to get private addresses from NAT gateways.\n* [Get AWS EC2 Instances with a public IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_public_ec2_instances/README.md): Lists all EC2 instances with a public IP\n* [AWS Get Publicly Accessible RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_instances/README.md): AWS Get Publicly Accessible RDS Instances\n* [AWS Get Publicly Accessible DB Snapshots in RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_snapshots/README.md): AWS Get Publicly Accessible DB Snapshots in RDS\n* [Get AWS RDS automated db snapshots above retention period](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_rds_automated_snapshots_above_retention_period/README.md): This Action gets the snapshots above a certain retention period.\n* [AWS Get Redshift Query Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_query_details/README.md): Given a QueryId, this Action will give you the status of the Query, along with other data like the number of lines/\n* [AWS Get Redshift 
Result](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_result/README.md): Given a QueryId, get the Query Result and format it into a List\n* [AWS Get EC2 Instances About To Be Retired](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_reserved_instances_about_to_retired/README.md): AWS Get EC2 Instances About To Be Retired\n* [AWS Get Resources Missing Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_missing_tag/README.md): Gets a list of all AWS resources that are missing the tag in the input parameters.\n* [AWS Get Resources With Expiration Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_expiration_tag/README.md): AWS Get all Resources with an expiration tag\n* [AWS Get Resources With Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_tag/README.md): For a given tag and region, get every AWS resource with that tag.\n* [Get AWS S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_s3_buckets/README.md): Get AWS S3 Buckets\n* [Get Schedule To Retire AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_schedule_to_retire_instances/README.md): Get Schedule To Retire AWS EC2 Instance\n* [ Get secrets from secretsmanager](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secret_from_secretmanager/README.md):  Get secrets from AWS secretsmanager\n* [AWS Get Secrets Manager Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secret/README.md): Get string (of JSON) containing Secret details\n* [AWS Get Secrets Manager SecretARN](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secretARN/README.md): 
Given a Secret Name - this Action returns the Secret ARN\n* [Get AWS Security Group Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_security_group_details/README.md): Get details about a security group, given its ID.\n* [AWS Get Service Quota for a Specific ServiceName](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quota_details/README.md): Given an AWS Region, Service Code and Quota Code, this Action will output the quota information for the specified service.\n* [AWS Get Quotas for a Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quotas/README.md): Given inputs of the AWS Region, and the Service_Code for a service, this Action will output all of the Service Quotas and limits.\n* [Get Stopped Instance Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_stopped_instance_volumes/README.md): This action helps to list the volumes that are attached to stopped instances.\n* [Get STS Caller Identity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_sts_caller_identity/README.md): Get STS Caller Identity\n* [AWS Get Tags of All Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_tags_of_all_resources/README.md): AWS Get Tags of All Resources\n* [Get Timed Out AWS Lambdas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_timed_out_lambdas/README.md): Get AWS Lambda functions that have exceeded the maximum amount of time in seconds that a Lambda function can run.\n* [AWS Get TTL For Route53 Records](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_for_route53_records/README.md): Get TTL for Route53 records for a hosted zone.\n* [AWS: Check for short Route 53 
TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_under_given_hours/README.md): AWS: Check for short Route 53 TTL\n* [Get UnHealthy EC2 Instances for Classic ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances/README.md): Get UnHealthy EC2 Instances for Classic ELB\n* [Get Unhealthy instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances_from_elb/README.md): Get Unhealthy instances from Elastic Load Balancer\n* [AWS get Unused Route53 Health Checks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unused_route53_health_checks/README.md): AWS get Unused Route53 Health Checks\n* [AWS Get IAM Users with Old Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_users_with_old_access_keys/README.md): This Lego collects the access keys that have never been used or the access keys that have been used but are older than the threshold.\n* [Launch AWS EC2 Instance From an AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_launch_instance_from_ami/README.md): Use this Action to launch an AWS EC2 instance from an AMI\n* [AWS List Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_access_keys/README.md): List all Access Keys for the User\n* [AWS List All IAM Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_iam_users/README.md): List all AWS IAM Users\n* [AWS List All Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_regions/README.md): List all available AWS Regions\n* [AWS List Application LoadBalancers ARNs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_application_loadbalancers/README.md): AWS List 
Application LoadBalancers ARNs\n* [AWS List Attached User Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_attached_user_policies/README.md): AWS List Attached User Policies\n* [AWS List ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_clusters_with_low_utilization/README.md): This action searches for clusters that have low CPU utilization.\n* [AWS List Expiring Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_access_keys/README.md): List Expiring IAM User Access Keys\n* [List Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_acm_certificates/README.md): List All Expiring ACM Certificates\n* [AWS List Hosted Zones](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_hosted_zones/README.md): List all AWS Hosted zones\n* [AWS List Unattached Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unattached_elastic_ips/README.md): This action lists Elastic IP addresses and checks whether each is associated with an instance or network interface.\n* [AWS List Unhealthy Instances in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unhealthy_instances_in_target_group/README.md): List Unhealthy Instances in a target group\n* [AWS List Unused Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unused_secrets/README.md): This action lists all the unused secrets from AWS by comparing the last used date with the given threshold.\n* [AWS List IAM Users With Old Passwords](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_users_with_old_passwords/README.md): This Lego gets all the IAM users' login profiles, and if the 
login profile is available, checks whether the last password change is older than the given threshold, and lists those users.\n* [AWS List Instances behind a Load Balancer.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_loadbalancer_list_instances/README.md): List AWS Instances behind a Load Balancer\n* [Make AWS Bucket Public](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_make_bucket_public/README.md): Make an AWS Bucket Public!\n* [AWS Modify EBS Volume to GP3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_ebs_volume_to_gp3/README.md): AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\n* [AWS Modify ALB Listeners HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_listener_for_http_redirection/README.md): AWS Modify ALB Listeners HTTP Redirection\n* [AWS Modify Publicly Accessible RDS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_public_db_snapshots/README.md): AWS Modify Publicly Accessible RDS Snapshots\n* [Get AWS Postgresql Max Configured Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_get_configured_max_connections/README.md): Get AWS Postgresql Max Configured Connections\n* [Plot AWS PostgreSQL Active Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_plot_active_connections/README.md): Plot AWS PostgreSQL Active Connections\n* [AWS Purchase ElastiCache Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_elasticcache_reserved_node/README.md): This action purchases a reserved cache node offering.\n* [AWS Purchase RDS Reserved 
Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_rds_reserved_instance/README.md): This action purchases a reserved DB instance offering.\n* [AWS Purchase Redshift Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_redshift_reserved_node/README.md): This action purchases reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. You can purchase one or more of the offerings.\n* [ Apply CORS Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_cors/README.md):  Apply CORS Policy for S3 Bucket\n* [Apply AWS New Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_policy/README.md): Apply a New AWS Policy for S3 Bucket\n* [Read AWS S3 Object](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_read_object/README.md): Read an AWS S3 Object\n* [ Register AWS Instances with a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_register_instances/README.md):  Register AWS Instances with a Load Balancer\n* [AWS Release Elastic IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_release_elastic_ip/README.md): AWS Release Elastic IP for both VPC and Standard\n* [Renew Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_renew_expiring_acm_certificates/README.md): Renew Expiring ACM Certificates\n* [AWS_Request_Service_Quota_Increase](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_request_service_quota_increase/README.md): Given an AWS Region, Service Code, quota code and a new value for the quota, this Action sends a request to AWS for a new value. 
Your Connector must have servicequotas:RequestServiceQuotaIncrease enabled for this to work.\n* [Restart AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_restart_ec2_instances/README.md): Restart AWS EC2 Instances\n* [AWS Revoke Policy from IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_revoke_policy_from_iam_user/README.md): AWS Revoke Policy from IAM User\n* [Start AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_run_instances/README.md): Start AWS EC2 Instances\n* [AWS Schedule Redshift Cluster Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_schedule_pause_resume_enabled/README.md): AWS Schedule Redshift Cluster Pause Resume Enabled\n* [AWS Service Quota Limits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits/README.md): Input a List of Service Quotas, and get back which of your instances are above the warning percentage of the quota\n* [AWS VPC service quota limit](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits_vpc/README.md): This Action queries all VPC Storage quotas, and returns all usage over warning_percentage.\n* [Stop AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_stop_instances/README.md): Stop an AWS Instance\n* [Tag AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_tag_ec2_instances/README.md): Tag AWS Instances\n* [AWS List Instances in an ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_instances/README.md): List AWS Instances in an ELBv2 Target Group\n* [ AWS List Unhealthy Instances in an ELBV2 Target 
Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_unhealthy_instances/README.md):  List AWS Unhealthy Instances in an ELBv2 Target Group\n* [AWS Register/Unregister Instances from a Target Group.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_register_unregister_instances/README.md): Register/Unregister AWS Instances from a Target Group\n* [Terminate AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_terminate_ec2_instances/README.md): This Action will Terminate AWS EC2 Instances\n* [AWS Update Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_access_key/README.md): Update the status of the Access Key\n* [AWS Update TTL for Route53 Record](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_ttl_for_route53_records/README.md): Update TTL for an existing record in a hosted zone.\n* [Upload file to S3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_upload_file_to_s3/README.md): Upload a local file to S3\n* [AWS_VPC_service_quota_warning](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_vpc_service_quota_warning/README.md): Given an AWS Region and a warning percentage, this Action queries all VPC quota limits, and returns any Quotas that are over the alert value.\n"
  },
  {
    "path": "AWS/Resize_EBS_Volume.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"cd97199b\",\n   \"metadata\": {},\n   \"source\": [\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b> This runbook demonstrates how to resize an EBS volume using unSkript legos.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Resize EBS Volume</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"1) Extract the instanceID and device name from the AWS Cloudwatch alert.\\n\",\n    \"2) Get the PrivateIP address and ssh key pair for the instance.\\n\",\n    \"3) Since we don't support ssh key pair to credential mapping at this time, this runbook will need to be configured with the ssh credential corresponding to the instance(s).\\n\",\n    \"4) Extract the parent block device from the partition name.\\n\",\n    \"5) Extract the EBS volume corresponding to the parent block device.\\n\",\n    \"6) Modify the EBS volume by the SizeToIncreaseBy amount.\\n\",\n    \"7) Extend the file system.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5dbc3089-480d-4735-9f08-cf8025ebf488\",\n   \"metadata\": {\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Requirements\"\n   },\n   \"source\": [\n    \"- [ ] This runbook ONLY works for Linux instances.\\n\",\n    \"- [ ] The instance needs to have ebsnvme-id installed. 
This is used to map the nvme block device to the EBS volume ID.\\n\",\n    \"- [ ] Only the xfs file system is supported.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"45647e9b-eb55-47b0-965c-5b9cf405eb8f\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"collapsed\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"orderProperties\": [],\n    \"runWithPrevious\": true,\n    \"tags\": [],\n    \"title\": \"SNS Alert object processing\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Get the instance ID and device name\\n\",\n    \"for dim in AlertObject['Trigger']['Dimensions']:\\n\",\n    \"    if dim['name'] == 'InstanceId':\\n\",\n    \"        InstanceID = dim['value']\\n\",\n    \"    if dim['name'] == 'device':\\n\",\n    \"        DeviceName = \\\"/dev/\\\"+dim['value']\\n\",\n    \"    if dim['name'] == 'fstype':\\n\",\n    \"        FSType = dim['value']\\n\",\n    \"\\n\",\n    \"# Get the region from the AlarmArn\\n\",\n    \"alarmArn = AlertObject['AlarmArn']\\n\",\n    \"Region = alarmArn.split(':')[3]\\n\",\n    \"print(f'InstanceID {InstanceID} Region {Region} FSType {FSType} Device {DeviceName}')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f80d6539\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we will use unSkript Get AWS Instance Details Lego. This lego takes instance_id and region as input. 
This input is used to discover all the details of EC2 instance.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e951f9f2-c75a-497b-8bbc-2144bbfdd070\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"aa1e026ca8002b906315feba401e5c46889d459270adce3b65d480dc9530311f\",\n    \"collapsed\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Use This Action to Get Details about an AWS EC2 Instance\",\n    \"id\": 103,\n    \"index\": 103,\n    \"inputData\": [\n     {\n      \"instance_id\": {\n       \"constant\": false,\n       \"value\": \"InstanceID\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_id\": {\n        \"description\": \"ID of the instance.\",\n        \"title\": \"Instance Id\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instance.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_instance_details\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS Instance Details\",\n    \"nouns\": [\n     \"instance\",\n     \"details\"\n    ],\n    \"orderProperties\": [\n     \"instance_id\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"InstanceDetail\",\n     \"output_name_enabled\": true\n    },\n    
\"tags\": [\n     \"aws_get_instance_details\"\n    ],\n    \"verbs\": [\n     \"get\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_instance_details(\\n\",\n    \"    handle,\\n\",\n    \"    instance_id: str,\\n\",\n    \"    region: str,\\n\",\n    \"):\\n\",\n    \"\\n\",\n    \"    ec2client = handle.client('ec2', region_name=region)\\n\",\n    \"    instances = []\\n\",\n    \"    response = ec2client.describe_instances(\\n\",\n    \"        Filters=[{\\\"Name\\\": \\\"instance-id\\\", \\\"Values\\\": [instance_id]}])\\n\",\n    \"    for reservation in response[\\\"Reservations\\\"]:\\n\",\n    \"        for instance in reservation[\\\"Instances\\\"]:\\n\",\n    \"            instances.append(instance)\\n\",\n    \"\\n\",\n    \"    return instances[0]\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_id\\\": \\\"InstanceID\\\",\\n\",\n    \"    \\\"region\\\": \\\"Region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"InstanceDetail\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(aws_get_instance_details, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in 
task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"id\": \"2f549ed5-b14b-4e2b-9190-34b3e5135869\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"collapsed\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Get the parent block device from partition name\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# lsblk -no pkname /dev/nvme0n1p1\\n\",\n    \"PrivateIPAddress = InstanceDetail.get('PrivateIPAddress')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c2fc0efa\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we will use unSkript SSH Execute Remote Command Lego. This lego takes hosts, command and sudo as input. 
This input is used to SSH Execute Remote Command and get the block device details.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"d6b0c43c-4941-4641-b4d4-d60eebfb619e\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5279b2046bb2eb4a691ba748086f4af9e580a849faae557694bb12a8c2b7b379\",\n    \"collapsed\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"SSH Execute Remote Command\",\n    \"id\": 58,\n    \"index\": 58,\n    \"inputData\": [\n     {\n      \"command\": {\n       \"constant\": false,\n       \"value\": \"\\\"lsblk -no pkname \\\" + DeviceName\"\n      },\n      \"hosts\": {\n       \"constant\": false,\n       \"value\": \"[PrivateIPAddress]\"\n      },\n      \"sudo\": {\n       \"constant\": true,\n       \"value\": true\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"command\": {\n        \"description\": \"Command to be executed on the remote server.\",\n        \"title\": \"Command\",\n        \"type\": \"string\"\n       },\n       \"hosts\": {\n        \"description\": \"List of hosts to connect to. For eg. 
[\\\"host1\\\", \\\"host2\\\"].\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Hosts\",\n        \"type\": \"array\"\n       },\n       \"sudo\": {\n        \"default\": false,\n        \"description\": \"Run the command with sudo.\",\n        \"title\": \"Run with sudo\",\n        \"type\": \"boolean\"\n       }\n      },\n      \"required\": [\n       \"hosts\",\n       \"command\"\n      ],\n      \"title\": \"ssh_execute_remote_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"Get the Block Device\",\n    \"nouns\": [\n     \"ssh\",\n     \"command\"\n    ],\n    \"orderProperties\": [\n     \"hosts\",\n     \"command\",\n     \"sudo\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"BlockDeviceDetail\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"ssh_execute_remote_command\"\n    ],\n    \"title\": \"Get the Block Device\",\n    \"verbs\": [\n     \"execute\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import json\\n\",\n    \"import tempfile\\n\",\n    \"import os\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from pssh.clients import ParallelSSHClient\\n\",\n    \"from typing import List, Optional\\n\",\n    \"from unskript.connectors import ssh\\n\",\n    \"\\n\",\n    \"from unskript.legos.cellparams import CellParams\\n\",\n    \"from unskript import connectors\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command(sshClient, hosts: List[str], command: str, sudo: bool = False):\\n\",\n    \"\\n\",\n    \"    client = 
sshClient(hosts)\\n\",\n    \"    runCommandOutput = client.run_command(command=command, sudo=sudo)\\n\",\n    \"    client.join()\\n\",\n    \"    res = {}\\n\",\n    \"\\n\",\n    \"    for host_output in runCommandOutput:\\n\",\n    \"        hostname = host_output.host\\n\",\n    \"        output = []\\n\",\n    \"        for line in host_output.stdout:\\n\",\n    \"            output.append(line)\\n\",\n    \"        res[hostname] = output\\n\",\n    \"\\n\",\n    \"        o = \\\"\\\\n\\\".join(output)\\n\",\n    \"        print(f\\\"Output from host {hostname}\\\\n{o}\\\\n\\\")\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"command\\\": \\\"\\\\\\\\\\\"lsblk -no pkname \\\\\\\\\\\" + DeviceName\\\",\\n\",\n    \"    \\\"hosts\\\": \\\"[PrivateIPAddress]\\\",\\n\",\n    \"    \\\"sudo\\\": \\\"True\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"BlockDeviceDetail\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(ssh_execute_remote_command, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"id\": 
\"25340e10-e1a2-4b63-a6d3-9e6c9a3a49ff\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"collapsed\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"BlockDeviceName\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"BlockDeviceName = BlockDeviceDetail[PrivateIPAddress][0]\\n\",\n    \"print(BlockDeviceName)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"1ef6a217\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we will use unSkript SSH Execute Remote Command Lego. This lego takes hosts, command and sudo as input. This input is used to SSH Execute Remote Command and get the EBS volume details.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"fc870de5-f0d1-42a4-a91d-b543972fbc36\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5279b2046bb2eb4a691ba748086f4af9e580a849faae557694bb12a8c2b7b379\",\n    \"collapsed\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"SSH Execute Remote Command\",\n    \"id\": 58,\n    \"index\": 58,\n    \"inputData\": [\n     {\n      \"command\": {\n       \"constant\": false,\n       \"value\": \"\\\"/sbin/ebsnvme-id \\\" + DeviceName\"\n      },\n      \"hosts\": {\n       \"constant\": false,\n       \"value\": \"[PrivateIPAddress]\"\n      },\n      \"sudo\": {\n       \"constant\": true,\n       \"value\": true\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"command\": {\n        \"description\": \"Command to be executed on 
the remote server.\",\n        \"title\": \"Command\",\n        \"type\": \"string\"\n       },\n       \"hosts\": {\n        \"description\": \"List of hosts to connect to. For eg. [\\\"host1\\\", \\\"host2\\\"].\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Hosts\",\n        \"type\": \"array\"\n       },\n       \"sudo\": {\n        \"default\": false,\n        \"description\": \"Run the command with sudo.\",\n        \"title\": \"Run with sudo\",\n        \"type\": \"boolean\"\n       }\n      },\n      \"required\": [\n       \"hosts\",\n       \"command\"\n      ],\n      \"title\": \"ssh_execute_remote_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"Get the EBS volume corresponding to block device\",\n    \"nouns\": [\n     \"ssh\",\n     \"command\"\n    ],\n    \"orderProperties\": [\n     \"hosts\",\n     \"command\",\n     \"sudo\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"EBSVolumeDetail\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"ssh_execute_remote_command\"\n    ],\n    \"title\": \"Get the EBS volume corresponding to block device\",\n    \"verbs\": [\n     \"execute\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import json\\n\",\n    \"import tempfile\\n\",\n    \"import os\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from pssh.clients import ParallelSSHClient\\n\",\n    \"from typing import List, Optional\\n\",\n    \"from unskript.connectors import ssh\\n\",\n    \"\\n\",\n    \"from unskript.legos.cellparams import CellParams\\n\",\n    \"from unskript import connectors\\n\",\n    
\"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command(sshClient, hosts: List[str], command: str, sudo: bool = False):\\n\",\n    \"\\n\",\n    \"    client = sshClient(hosts)\\n\",\n    \"    runCommandOutput = client.run_command(command=command, sudo=sudo)\\n\",\n    \"    client.join()\\n\",\n    \"    res = {}\\n\",\n    \"\\n\",\n    \"    for host_output in runCommandOutput:\\n\",\n    \"        hostname = host_output.host\\n\",\n    \"        output = []\\n\",\n    \"        for line in host_output.stdout:\\n\",\n    \"            output.append(line)\\n\",\n    \"        res[hostname] = output\\n\",\n    \"\\n\",\n    \"        o = \\\"\\\\n\\\".join(output)\\n\",\n    \"        print(f\\\"Output from host {hostname}\\\\n{o}\\\\n\\\")\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"command\\\": \\\"\\\\\\\\\\\"/sbin/ebsnvme-id \\\\\\\\\\\" + DeviceName\\\",\\n\",\n    \"    \\\"hosts\\\": \\\"[PrivateIPAddress]\\\",\\n\",\n    \"    \\\"sudo\\\": \\\"True\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"EBSVolumeDetail\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(ssh_execute_remote_command, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        
print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 11,\n   \"id\": \"04f13f9b-b4e4-41de-9d46-93ba7c0f1d64\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"collapsed\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"EBSVolume\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"EBSVolume = EBSVolumeDetail[PrivateIPAddress][0].split(\\\":\\\")[1].lstrip()\\n\",\n    \"print(EBSVolume)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a60e2d42\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we will use unSkript EBS Modify Volume Lego. This lego takes volume_id, resize_option, resize_value and region as input. 
This input is used to Modify/Resize volume for Elastic Block Storage (EBS).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"73ae8acd-cd9a-48ba-a0ae-738036241167\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"45ea29baa63ca8b078fdba68742bc30e7ecd950bcab2e4bb572b3d6bbb984c12\",\n    \"collapsed\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Modify/Resize volume for Elastic Block Storage (EBS).\",\n    \"id\": 109,\n    \"index\": 109,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"resize_option\": {\n       \"constant\": false,\n       \"value\": \"\\\"Add\\\"\"\n      },\n      \"resize_value\": {\n       \"constant\": false,\n       \"value\": \"SizeToIncreaseBy\"\n      },\n      \"volume_id\": {\n       \"constant\": false,\n       \"value\": \"EBSVolume\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region of the volume.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"resize_option\": {\n        \"description\": \"\\n            Option to resize the volume. 2 options supported:\\n            1. Add - Use this option to resize by an amount.\\n            2. Multiple - Use this option if you want to resize by a multiple of the current volume size.\\n        \",\n        \"title\": \"Resize option\",\n        \"type\": \"string\"\n       },\n       \"resize_value\": {\n        \"description\": \"\\n            Based on the resize option chosen, specify the value. For eg, if you chose Add option, this\\n            value will be a value in Gb (like 100). 
If you chose Multiple option, this value will be a multiplying factor\\n            to the current volume size. So, if you want to double, you specify 2 here.\\n        \",\n        \"title\": \"Value\",\n        \"type\": \"number\"\n       },\n       \"volume_id\": {\n        \"description\": \"EBS Volume ID to resize.\",\n        \"title\": \"EBS Volume ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"volume_id\",\n       \"resize_option\",\n       \"resize_value\",\n       \"region\"\n      ],\n      \"title\": \"aws_ebs_modify_volume\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"EBS Modify Volume\",\n    \"nouns\": [\n     \"ebs\",\n     \"volume\"\n    ],\n    \"orderProperties\": [\n     \"volume_id\",\n     \"resize_option\",\n     \"resize_value\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"aws_ebs_modify_volume\"\n    ],\n    \"verbs\": [\n     \"modify\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import enum\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.legos.aws.aws_get_handle import Session\\n\",\n    \"from polling2 import poll_decorator\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"class SizingOption(enum.Enum):\\n\",\n    \"    Add = \\\"Add\\\"\\n\",\n    \"    Mutiple = \\\"Multiple\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_ebs_modify_volume(\\n\",\n    \"    hdl: Session,\\n\",\n    \"    volume_id: str,\\n\",\n    \"    resize_option: str,\\n\",\n    \"    resize_value: float,\\n\",\n    \"    region: str,\\n\",\n    \"):\\n\",\n    
\"    \\\"\\\"\\\"aws_ebs_modify_volume modifies the size of the EBS Volume.\\n\",\n    \"    You can either increase it by a provided value or by a multiple of the current size.\\n\",\n    \"\\n\",\n    \"    :type volume_id: string\\n\",\n    \"    :param volume_id: ebs volume id.\\n\",\n    \"\\n\",\n    \"    :type resize_option: string\\n\",\n    \"    :param resize_option: option to resize the volume, by a fixed amount or by a multiple of the existing size.\\n\",\n    \"\\n\",\n    \"    :type resize_value: float\\n\",\n    \"    :param resize_value: The value by which the volume should be modified, depending upon the resize option.\\n\",\n    \"\\n\",\n    \"    :type region: string\\n\",\n    \"    :param region: AWS Region of the volume.\\n\",\n    \"\\n\",\n    \"    :rtype: None.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = hdl.client(\\\"ec2\\\", region_name=region)\\n\",\n    \"    ec2Resource = hdl.resource(\\\"ec2\\\", region_name=region)\\n\",\n    \"    # Get the current volume size.\\n\",\n    \"    Volume = ec2Resource.Volume(volume_id)\\n\",\n    \"    currentSize = Volume.size\\n\",\n    \"\\n\",\n    \"    if resize_option == SizingOption.Add.value:\\n\",\n    \"        newSize = currentSize + resize_value\\n\",\n    \"    elif resize_option == SizingOption.Mutiple.value:\\n\",\n    \"        newSize = currentSize * resize_value\\n\",\n    \"\\n\",\n    \"    print(f'CurrentSize {currentSize}, NewSize {newSize}')\\n\",\n    \"\\n\",\n    \"    resp = ec2Client.modify_volume(\\n\",\n    \"        VolumeId=volume_id,\\n\",\n    \"        Size=newSize)\\n\",\n    \"\\n\",\n    \"    # Check the modification state\\n\",\n    \"    try:\\n\",\n    \"        check_modification_status(ec2Client, volume_id)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(f'Modify volumeID {volume_id} failed: {str(e)}')\\n\",\n    \"        raise e\\n\",\n    \"\\n\",\n    \"    print(f'Volume {volume_id} size modified successfully to 
{newSize}')\\n\",\n    \"    return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@poll_decorator(step=60, timeout=600, check_success=lambda x: x is True)\\n\",\n    \"def check_modification_status(ec2Client, volumeID) -> bool:\\n\",\n    \"    resp = ec2Client.describe_volumes_modifications(VolumeIds=[volumeID])\\n\",\n    \"    state = resp['VolumesModifications'][0]['ModificationState']\\n\",\n    \"    progress = resp['VolumesModifications'][0]['Progress']\\n\",\n    \"    print(f'Volume modification state {state}, Progress {progress}')\\n\",\n    \"    if state == 'completed' or state == None:\\n\",\n    \"        return True\\n\",\n    \"    elif state == 'failed':\\n\",\n    \"        raise Exception(\\\"Get Status Failed\\\")\\n\",\n    \"    return False\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"resize_option\\\": \\\"\\\\\\\\\\\"Add\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"volume_id\\\": \\\"EBSVolume\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(aws_ebs_modify_volume, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 
12,\n   \"id\": \"8b628a00-35e6-42eb-b579-9aa28fac1b69\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"collapsed\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create the growpart command\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# growpart /dev/nvme0n1 1\\n\",\n    \"growpartCommand = \\\"growpart /dev/\\\" + BlockDeviceName + \\\" 1\\\"\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"cdf4cb16\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we will use unSkript SSH Execute Remote Command Lego. This lego takes hosts, command and sudo as input. This input is used to SSH Execute Remote Command.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"3fcc063c-c1ab-42d8-8d89-18dd60f755ae\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5279b2046bb2eb4a691ba748086f4af9e580a849faae557694bb12a8c2b7b379\",\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"SSH Execute Remote Command\",\n    \"id\": 58,\n    \"index\": 58,\n    \"inputData\": [\n     {\n      \"command\": {\n       \"constant\": false,\n       \"value\": \"growpartCommand\"\n      },\n      \"hosts\": {\n       \"constant\": false,\n       \"value\": \"[PrivateIPAddress]\"\n      },\n      \"sudo\": {\n       \"constant\": true,\n       \"value\": true\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"command\": {\n        \"description\": \"Command to be executed on the remote server.\",\n        
\"title\": \"Command\",\n        \"type\": \"string\"\n       },\n       \"hosts\": {\n        \"description\": \"List of hosts to connect to. For eg. [\\\"host1\\\", \\\"host2\\\"].\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Hosts\",\n        \"type\": \"array\"\n       },\n       \"sudo\": {\n        \"default\": false,\n        \"description\": \"Run the command with sudo.\",\n        \"title\": \"Run with sudo\",\n        \"type\": \"boolean\"\n       }\n      },\n      \"required\": [\n       \"hosts\",\n       \"command\"\n      ],\n      \"title\": \"ssh_execute_remote_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"Executing growpart command\",\n    \"nouns\": [\n     \"ssh\",\n     \"command\"\n    ],\n    \"orderProperties\": [\n     \"hosts\",\n     \"command\",\n     \"sudo\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"ssh_execute_remote_command\"\n    ],\n    \"title\": \"Executing growpart command\",\n    \"verbs\": [\n     \"execute\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import json\\n\",\n    \"import tempfile\\n\",\n    \"import os\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from pssh.clients import ParallelSSHClient\\n\",\n    \"from typing import List, Optional\\n\",\n    \"from unskript.connectors import ssh\\n\",\n    \"\\n\",\n    \"from unskript.legos.cellparams import CellParams\\n\",\n    \"from unskript import connectors\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command(sshClient, hosts: List[str], command: str, sudo: bool = False):\\n\",\n    \"\\n\",\n    \"    
client = sshClient(hosts)\\n\",\n    \"    runCommandOutput = client.run_command(command=command, sudo=sudo)\\n\",\n    \"    client.join()\\n\",\n    \"    res = {}\\n\",\n    \"\\n\",\n    \"    for host_output in runCommandOutput:\\n\",\n    \"        hostname = host_output.host\\n\",\n    \"        output = []\\n\",\n    \"        for line in host_output.stdout:\\n\",\n    \"            output.append(line)\\n\",\n    \"        res[hostname] = output\\n\",\n    \"\\n\",\n    \"        o = \\\"\\\\n\\\".join(output)\\n\",\n    \"        print(f\\\"Output from host {hostname}\\\\n{o}\\\\n\\\")\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"command\\\": \\\"growpartCommand\\\",\\n\",\n    \"    \\\"hosts\\\": \\\"[PrivateIPAddress]\\\",\\n\",\n    \"    \\\"sudo\\\": \\\"True\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(ssh_execute_remote_command, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a673117a\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we will use unSkript SSH Execute Remote Command Lego. 
This lego takes hosts, command and sudo as input. This input is used to SSH Execute Remote Command and get Mount path details.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"3da58118-c5f6-415f-bdc9-97434703dff8\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5279b2046bb2eb4a691ba748086f4af9e580a849faae557694bb12a8c2b7b379\",\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"SSH Execute Remote Command\",\n    \"id\": 58,\n    \"index\": 58,\n    \"inputData\": [\n     {\n      \"command\": {\n       \"constant\": false,\n       \"value\": \"\\\"findmnt -nr -o target -S \\\" + DeviceName\"\n      },\n      \"hosts\": {\n       \"constant\": false,\n       \"value\": \"[PrivateIPAddress]\"\n      },\n      \"sudo\": {\n       \"constant\": true,\n       \"value\": false\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"command\": {\n        \"description\": \"Command to be executed on the remote server.\",\n        \"title\": \"Command\",\n        \"type\": \"string\"\n       },\n       \"hosts\": {\n        \"description\": \"List of hosts to connect to. For eg. 
[\\\"host1\\\", \\\"host2\\\"].\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Hosts\",\n        \"type\": \"array\"\n       },\n       \"sudo\": {\n        \"default\": false,\n        \"description\": \"Run the command with sudo.\",\n        \"title\": \"Run with sudo\",\n        \"type\": \"boolean\"\n       }\n      },\n      \"required\": [\n       \"hosts\",\n       \"command\"\n      ],\n      \"title\": \"ssh_execute_remote_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"Get the mount target from the device name\",\n    \"nouns\": [\n     \"ssh\",\n     \"command\"\n    ],\n    \"orderProperties\": [\n     \"hosts\",\n     \"command\",\n     \"sudo\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"MountPathDetail\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"ssh_execute_remote_command\"\n    ],\n    \"title\": \"Get the mount target from the device name\",\n    \"verbs\": [\n     \"execute\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import json\\n\",\n    \"import tempfile\\n\",\n    \"import os\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from pssh.clients import ParallelSSHClient\\n\",\n    \"from typing import List, Optional\\n\",\n    \"from unskript.connectors import ssh\\n\",\n    \"\\n\",\n    \"from unskript.legos.cellparams import CellParams\\n\",\n    \"from unskript import connectors\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command(sshClient, hosts: List[str], command: str, sudo: bool = False):\\n\",\n    \"\\n\",\n    \"    client = 
sshClient(hosts)\\n\",\n    \"    runCommandOutput = client.run_command(command=command, sudo=sudo)\\n\",\n    \"    client.join()\\n\",\n    \"    res = {}\\n\",\n    \"\\n\",\n    \"    for host_output in runCommandOutput:\\n\",\n    \"        hostname = host_output.host\\n\",\n    \"        output = []\\n\",\n    \"        for line in host_output.stdout:\\n\",\n    \"            output.append(line)\\n\",\n    \"        res[hostname] = output\\n\",\n    \"\\n\",\n    \"        o = \\\"\\\\n\\\".join(output)\\n\",\n    \"        print(f\\\"Output from host {hostname}\\\\n{o}\\\\n\\\")\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"command\\\": \\\"\\\\\\\\\\\"findmnt -nr -o target -S \\\\\\\\\\\" + DeviceName\\\",\\n\",\n    \"    \\\"hosts\\\": \\\"[PrivateIPAddress]\\\",\\n\",\n    \"    \\\"sudo\\\": \\\"False\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"MountPathDetail\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(ssh_execute_remote_command, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 15,\n   \"id\": 
\"bfbd0896-91ad-41ad-881b-41ebe96d8812\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Store the Mount Path\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"MountPath = MountPathDetail[PrivateIPAddress][0]\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c4fe887f\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we will use unSkript SSH Execute Remote Command Lego. This lego takes hosts, command and sudo as input. This input is used to SSH Execute Remote Command for Mount path.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e43b6a6b-c0e6-4eb5-ae47-bd474e00fb04\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5279b2046bb2eb4a691ba748086f4af9e580a849faae557694bb12a8c2b7b379\",\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"SSH Execute Remote Command\",\n    \"id\": 58,\n    \"index\": 58,\n    \"inputData\": [\n     {\n      \"command\": {\n       \"constant\": false,\n       \"value\": \"\\\"xfs_growfs -d \\\" + MountPath\"\n      },\n      \"hosts\": {\n       \"constant\": false,\n       \"value\": \"[PrivateIPAddress]\"\n      },\n      \"sudo\": {\n       \"constant\": true,\n       \"value\": true\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"command\": {\n        \"description\": \"Command to be executed on the remote server.\",\n        \"title\": \"Command\",\n        \"type\": \"string\"\n       },\n       \"hosts\": {\n        \"description\": \"List of hosts to connect to. For eg. 
[\\\"host1\\\", \\\"host2\\\"].\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Hosts\",\n        \"type\": \"array\"\n       },\n       \"sudo\": {\n        \"default\": false,\n        \"description\": \"Run the command with sudo.\",\n        \"title\": \"Run with sudo\",\n        \"type\": \"boolean\"\n       }\n      },\n      \"required\": [\n       \"hosts\",\n       \"command\"\n      ],\n      \"title\": \"ssh_execute_remote_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"Execute the extend file system command\",\n    \"nouns\": [\n     \"ssh\",\n     \"command\"\n    ],\n    \"orderProperties\": [\n     \"hosts\",\n     \"command\",\n     \"sudo\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"ssh_execute_remote_command\"\n    ],\n    \"title\": \"Execute the extend file system command\",\n    \"verbs\": [\n     \"execute\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import json\\n\",\n    \"import tempfile\\n\",\n    \"import os\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from pssh.clients import ParallelSSHClient\\n\",\n    \"from typing import List, Optional\\n\",\n    \"from unskript.connectors import ssh\\n\",\n    \"\\n\",\n    \"from unskript.legos.cellparams import CellParams\\n\",\n    \"from unskript import connectors\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command(sshClient, hosts: List[str], command: str, sudo: bool = False):\\n\",\n    \"\\n\",\n    \"    client = sshClient(hosts)\\n\",\n    \"    runCommandOutput = client.run_command(command=command, sudo=sudo)\\n\",\n    \"    
client.join()\\n\",\n    \"    res = {}\\n\",\n    \"\\n\",\n    \"    for host_output in runCommandOutput:\\n\",\n    \"        hostname = host_output.host\\n\",\n    \"        output = []\\n\",\n    \"        for line in host_output.stdout:\\n\",\n    \"            output.append(line)\\n\",\n    \"        res[hostname] = output\\n\",\n    \"\\n\",\n    \"        o = \\\"\\\\n\\\".join(output)\\n\",\n    \"        print(f\\\"Output from host {hostname}\\\\n{o}\\\\n\\\")\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"command\\\": \\\"\\\\\\\\\\\"xfs_growfs -d \\\\\\\\\\\" + MountPath\\\",\\n\",\n    \"    \\\"hosts\\\": \\\"[PrivateIPAddress]\\\",\\n\",\n    \"    \\\"sudo\\\": \\\"True\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(ssh_execute_remote_command, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name] = task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"7662bd37\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"In this Runbook, we demonstrated the use of unSkript's AWS and SSH Legos to resize an EBS volume by a specified amount. 
This runbook can be attached to disk-usage-related CloudWatch alarms to do the appropriate resizing. It also extends the filesystem to use the new volume size. To view the full platform capabilities of unSkript, please visit https://us.app.unskript.io\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Resize EBS Volume\",\n   \"parameters\": [\n    \"SizeToIncreaseBy\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.9.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"SizeToIncreaseBy\": {\n     \"description\": \"This is the size you want to increase the volume by. This value is in GiB.\",\n     \"title\": \"SizeToIncreaseBy\",\n     \"type\": \"number\",\n     \"value\": 0\n    }\n   },\n   \"required\": [],\n   \"title\": \"ResizeEBSVolumeOnDiskUtilizationAlert\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"SizeToIncreaseBy\": null\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Resize_EBS_Volume.json",
    "content": "{\n  \"name\": \"Resize EBS Volume\",\n  \"description\": \"This runbook resizes the EBS volume by a specified amount. It can be attached to disk-usage-related CloudWatch alarms to do the appropriate resizing. It also extends the filesystem to use the new volume size.\",  \n  \"uuid\": \"4b31bec81a76443dfcab7826e79dfe4154423986dce04ef389077bbc59d75ecf\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/Resize_List_Of_Pvcs.ipynb",
    "content": "{\n    \"cells\": [\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": null,\n            \"id\": \"7520a096-71b8-45b0-93be-a31be6fc0949\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionNeedsCredential\": true,\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_uuid\": \"0c96676c124796bc48e751c641ea0ccc722e7d29f1ffe665fe756a7106d756c5\",\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"List pvcs by namespace. By default, it will list all pvcs in all namespaces.\",\n                \"id\": 30,\n                \"index\": 30,\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"namespace\": {\n                                \"default\": \"\",\n                                \"description\": \"Kubernetes namespace\",\n                                \"title\": \"Namespace\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"title\": \"k8s_list_pvcs\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"List pvcs\",\n                \"nouns\": [\n                    \"pvc\"\n                ],\n                \"orderProperties\": [\n                    \"namespace\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"outputParams\": {\n                    \"output_name\": \"pvcsList\",\n     
               \"output_name_enabled\": true\n                },\n                \"tags\": [\n                    \"k8s_list_pvcs\"\n                ],\n                \"verbs\": [\n                    \"list\"\n                ],\n                \"credentialsJson\": {},\n                \"execution_data\": {},\n                \"execution_count\": {}\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2021 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from typing import Optional, List, Tuple\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"def legoPrinter(func):\\n\",\n                \"    def Printer(*args, **kwargs):\\n\",\n                \"        output = func(*args, **kwargs)\\n\",\n                \"        print(output)\\n\",\n                \"        return output\\n\",\n                \"\\n\",\n                \"    return Printer\\n\",\n                \"\\n\",\n                \"@legoPrinter\\n\",\n                \"@beartype\\n\",\n                \"def k8s_list_pvcs(handle, namespace: str = '') -> List:\\n\",\n                \"    if namespace == '':\\n\",\n                \"        kubectl_command = 'kubectl get pvc -A --output=jsonpath=\\\\'{range .items[*]}{@.metadata.namespace}{\\\",\\\"}{@.metadata.name}{\\\"\\\\\\\\n\\\"}{end}\\\\''\\n\",\n                \"    else:\\n\",\n                \"        kubectl_command = 'kubectl get pvc -n ' + namespace + ' --output=jsonpath=\\\\'{range .items[*]}{@.metadata.namespace}{\\\",\\\"}{@.metadata.name}{\\\"\\\\\\\\n\\\"}{end}\\\\''\\n\",\n                \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n                \"    if result is None or (hasattr(result, \\\"stderr\\\") and result.stderr):\\n\",\n                \"      
  print(\\n\",\n                \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n                \"        return None\\n\",\n                \"    names_list = [y for y in (x.strip() for x in result.stdout.splitlines()) if y]\\n\",\n                \"    output = []\\n\",\n                \"    for i in names_list:\\n\",\n                \"        ns, name = i.split(\\\",\\\")\\n\",\n                \"        output.append({\\\"Namespace\\\": ns, \\\"Name\\\":name})\\n\",\n                \"    return output\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(outputName=\\\"pvcsList\\\")\\n\",\n                \"\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.output = task.execute(k8s_list_pvcs, hdl=hdl, args=args)\\n\",\n                \"    if task.output_name != None:\\n\",\n                \"        globals().update({task.output_name: task.output[0]})\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": 21,\n            \"id\": \"46616499-6e96-462c-b9fc-b16b2538d6b2\",\n            \"metadata\": {\n                \"actionNeedsCredential\": false,\n                \"actionSupportsIteration\": false,\n                \"actionSupportsPoll\": false,\n                \"collapsed\": true,\n                \"jupyter\": {\n                    \"outputs_hidden\": true,\n                    \"source_hidden\": true\n                },\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"credentialsJson\": {}\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"##\\n\",\n                \"# Copyright (c) 2021 unSkript, Inc\\n\",\n                \"# All rights reserved.\\n\",\n                
\"##\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from unskript.connectors.infra import InfraConnector\\n\",\n                \"from typing import Optional\\n\",\n                \"import requests\\n\",\n                \"from polling2 import poll_decorator\\n\",\n                \"import html_to_json\\n\",\n                \"import uuid \\n\",\n                \"\\n\",\n                \"class Schema(BaseModel):\\n\",\n                \"    Namespace: Optional[str] = Field(\\n\",\n                \"        None, description='Namespace of the PVC', title='Namespace'\\n\",\n                \"    )\\n\",\n                \"    PVCName: Optional[str] = Field(None, description='Name of the PVC', title='PVCName')\\n\",\n                \"    ResizeOption: Optional[str] = Field(\\n\",\n                \"        'Add',\\n\",\n                \"        description='Option to resize the volume. 2 options supported:             1. Add - Use this option to resize by an amount.             2. 
Multiple - Use this option if you want to resize by a multiple of the current volume size.',\\n\",\n                \"        title='ResizeOption',\\n\",\n                \"    )\\n\",\n                \"    RestartPodsAfterResize: Optional[bool] = Field(\\n\",\n                \"        False,\\n\",\n                \"        description='Restart the pods after PVC resize',\\n\",\n                \"        title='RestartPodsAfterResize',\\n\",\n                \"    )\\n\",\n                \"    Channel: Optional[str] = Field(\\n\",\n                \"        None,\\n\",\n                \"        description='Slack Channel name where the notification will be sent.',\\n\",\n                \"        title='SlackChannelName',\\n\",\n                \"    )\\n\",\n                \"    Value: Optional[float] = Field(\\n\",\n                \"        None,\\n\",\n                \"        description='Based on the resize option chosen, specify the value. For eg, if you chose the Add option, this value is a size in GiB (like 100). If you chose the Multiple option, this value is a multiplying factor applied to the current volume size. 
For eg, to double, specify value as 2.',\\n\",\n                \"        title='Value',\\n\",\n                \"    )\\n\",\n                \"\\n\",\n                \"@poll_decorator(step=10, timeout=60, check_success=lambda x: x is True)\\n\",\n                \"def checkExecutionStatus(handle, tenantID, executionID) -> bool:\\n\",\n                \"    print(f'Checking execution status')\\n\",\n                \"    url = f'{env[\\\"TENANT_URL\\\"]}/executions/{executionID}'\\n\",\n                \"    try:\\n\",\n                \"        resp = handle.request('GET', url, params={'tenant_id': tenantID, \\\"summary\\\": True})\\n\",\n                \"        resp.raise_for_status()\\n\",\n                \"    except Exception as e:\\n\",\n                \"        print(f'Get execution {executionID} failed, {e}')\\n\",\n                \"        return False\\n\",\n                \"\\n\",\n                \"    try:\\n\",\n                \"        result = resp.json()\\n\",\n                \"    except Exception:\\n\",\n                \"        result = html_to_json.convert(resp.content)\\n\",\n                \"    if result['execution']['executionStatus'] == \\\"EXECUTION_STATUS_SUCCEEDED\\\" or result['execution']['executionStatus'] == \\\"EXECUTION_STATUS_FAILED\\\":\\n\",\n                \"        return True\\n\",\n                \"    else:\\n\",\n                \"        return False\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def call_pvc_resize_runbook(handle: InfraConnector, Namespace: str, PVCName: str, ResizeOption: str, RestartPodsAfterResize:bool, Value: float, Channel: str = None):\\n\",\n                \"    workflowIDToBeCalled = RunbookID\\n\",\n                \"    apiToken = APIToken\\n\",\n                \"    tenantID = env['TENANT_ID']\\n\",\n                \"    environmentID = 
env['ENVIRONMENT_ID']\\n\",\n                \"    userID = \\\"Bot-user\\\"\\n\",\n                \"\\n\",\n                \"    params = Schema()\\n\",\n                \"    params.Namespace = Namespace\\n\",\n                \"    params.PVCName = PVCName\\n\",\n                \"    params.Value = Value\\n\",\n                \"    params.ResizeOption = ResizeOption\\n\",\n                \"    params.Channel = Channel\\n\",\n                \"    payload = {\\n\",\n                \"        \\\"req_hdr\\\": {\\n\",\n                \"            \\\"tid\\\": str(uuid.uuid4())\\n\",\n                \"        },\\n\",\n                \"        \\\"tenant_id\\\": tenantID,\\n\",\n                \"        \\\"environment_id\\\": environmentID,\\n\",\n                \"        \\\"user_id\\\": userID,\\n\",\n                \"        \\\"params\\\": params.json()\\n\",\n                \"    }\\n\",\n                \"    handle = requests.Session()\\n\",\n                \"    authHeader = f'unskript-sha {apiToken}'\\n\",\n                \"    handle.headers.update({'Authorization': authHeader})\\n\",\n                \"    url = f'{env[\\\"TENANT_URL\\\"]}/workflows/{workflowIDToBeCalled}/run'\\n\",\n                \"\\n\",\n                \"    try:\\n\",\n                \"        resp = handle.request('POST', url, json=payload)\\n\",\n                \"        resp.raise_for_status()\\n\",\n                \"    except Exception as e:\\n\",\n                \"        print(f'Workflow run failed, {e}')\\n\",\n                \"        raise e\\n\",\n                \"\\n\",\n                \"    try:\\n\",\n                \"        result = resp.json()\\n\",\n                \"    except Exception:\\n\",\n                \"        result = html_to_json.convert(resp.content)\\n\",\n                \"\\n\",\n                \"    executionID = result['executionId']\\n\",\n                \"    print(f'ExecutionID {executionID}')\\n\",\n                
\"\\n\",\n                \"    try:\\n\",\n                \"        checkExecutionStatus(handle, tenantID, executionID)\\n\",\n                \"    except Exception as e:\\n\",\n                \"        handle.close()\\n\",\n                \"        print(f'Check execution status for {executionID} failed, {e}')\\n\",\n                \"        raise e\\n\",\n                \"\\n\",\n                \"    handle.close()\\n\",\n                \"    return\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"Namespace\\\": {\\n\",\n                \"        \\\"constant\\\": false,\\n\",\n                \"        \\\"value\\\": \\\"iter.get(\\\\\\\\\\\\\\\"Namespace\\\\\\\\\\\\\\\")\\\"\\n\",\n                \"    },\\n\",\n                \"    \\\"PVCName\\\": {\\n\",\n                \"        \\\"constant\\\": false,\\n\",\n                \"        \\\"value\\\": \\\"iter.get(\\\\\\\\\\\\\\\"Name\\\\\\\\\\\\\\\")\\\"\\n\",\n                \"    },\\n\",\n                \"    \\\"ResizeOption\\\": {\\n\",\n                \"        \\\"constant\\\": false,\\n\",\n                \"        \\\"value\\\": \\\"ResizeOption\\\"\\n\",\n                \"    },\\n\",\n                \"    \\\"RestartPodsAfterResize\\\": {\\n\",\n                \"        \\\"constant\\\": true,\\n\",\n                \"        \\\"value\\\": false\\n\",\n                \"    },\\n\",\n                \"    \\\"Channel\\\": {\\n\",\n                \"        \\\"constant\\\": false,\\n\",\n                \"        \\\"value\\\": \\\"Channel\\\"\\n\",\n                \"    },\\n\",\n                \"    \\\"Value\\\": {\\n\",\n                \"        \\\"constant\\\": false,\\n\",\n                \"        \\\"value\\\": \\\"Value\\\"\\n\",\n                \"    }\\n\",\n                \"}''')\\n\",\n                \"task.configure(iterJson='''{\\n\",\n 
               \"    \\\"iter_enabled\\\": true,\\n\",\n                \"    \\\"iter_list_is_const\\\": false,\\n\",\n                \"    \\\"iter_list\\\": \\\"pvcsList\\\",\\n\",\n                \"    \\\"iter_parameter\\\": [\\n\",\n                \"        \\\"Namespace\\\",\\n\",\n                \"        \\\"PVCName\\\"\\n\",\n                \"    ]\\n\",\n                \"}''')\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars(), infra=True)\\n\",\n                \"if err is None:\\n\",\n                \"    task.output = task.execute(call_pvc_resize_runbook, hdl, args)\\n\",\n                \"if hasattr(task, 'output'):\\n\",\n                \"    if isinstance(task.output, (list, tuple)):\\n\",\n                \"        for item in task.output:\\n\",\n                \"            print(f'item: {item}')\\n\",\n                \"    elif isinstance(task.output, dict):\\n\",\n                \"        for item in task.output.items():\\n\",\n                \"            print(f'item: {item}')\\n\",\n                \"    else:\\n\",\n                \"        print(task.output)\"\n            ]\n        }\n    ],\n    \"metadata\": {\n        \"execution_data\": {\n            \"runbook_name\": \"Resize list of pvcs.\",\n            \"parameters\": [\n                \"APIToken\",\n                \"Channel\",\n                \"ResizeOption\",\n                \"RunbookID\",\n                \"Value\"\n            ]\n        },\n        \"kernelspec\": {\n            \"display_name\": \"Python 3.9.6 64-bit\",\n            \"language\": \"python\",\n            \"name\": \"python3\"\n        },\n        \"language_info\": {\n            \"file_extension\": \".py\",\n            \"mimetype\": \"text/x-python\",\n            \"name\": \"python\",\n            \"pygments_lexer\": \"ipython3\",\n            \"version\": \"3.9.6\"\n        },\n        
\"parameterSchema\": {\n            \"properties\": {\n                \"APIToken\": {\n                    \"description\": \"API token used to talk to the unSkript APIs\",\n                    \"title\": \"APIToken\",\n                    \"type\": \"string\"\n                },\n                \"Channel\": {\n                    \"description\": \"Slack Channel name where the notification will be sent\",\n                    \"title\": \"Channel\",\n                    \"type\": \"string\"\n                },\n                \"ResizeOption\": {\n                    \"default\": \"Add\",\n                    \"description\": \"Option to resize the volume. 2 options supported:             1. Add - Use this option to resize by an amount.             2. Multiple - Use this option if you want to resize by a multiple of the current volume size.\",\n                    \"title\": \"ResizeOption\",\n                    \"type\": \"string\"\n                },\n                \"RunbookID\": {\n                    \"description\": \"UUID of the PVC Resize runbook\",\n                    \"title\": \"RunbookID\",\n                    \"type\": \"string\"\n                },\n                \"Value\": {\n                    \"description\": \"Based on the resize option chosen, specify the value. For eg, if you chose the Add option, this value is a size in GiB (like 100). If you chose the Multiple option, this value is a multiplying factor applied to the current volume size. 
For eg, to double, specify value as 2.\",\n                    \"title\": \"Value\",\n                    \"type\": \"number\"\n                }\n            },\n            \"required\": [],\n            \"title\": \"Schema\",\n            \"type\": \"object\"\n        },\n        \"parameterValues\": {},\n        \"vscode\": {\n            \"interpreter\": {\n                \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n            }\n        }\n    },\n    \"nbformat\": 4,\n    \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Resize_List_Of_Pvcs.json",
    "content": "{\n  \"name\": \"Resize list of pvcs.\",\n  \"description\": \"This runbook can be used to resize a list of PVCs in a namespace. By default, it resizes all PVCs in the namespace.\",\n  \"uuid\": \"40df55f0b809c1f77b7c5c5c106fc534f58b7eb93ac92993723e9798631e7359\", \n  \"icon\": \"CONNECTOR_TYPE_K8S\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/Restart_AWS_EC2_Instances_By_Tag.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5f773213-63a4-46c6-9336-b6e4edf494d5\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong></h3>\\n\",\n    \"<strong>To Restart AWS EC2 Instance by given tag using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Restart-EC2-Instance-By-Given-Tag\\\">Restart EC2 Instance By Given Tag</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>Filter AWS EC2 Instances by tag</li>\\n\",\n    \"<li>Restart AWS EC2 Instance</li>\\n\",\n    \"<li>Get AWS Instance Details</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"4d25f1c6-c93c-4c75-8046-f1eab26ab982\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Filter-AWS-EC2-Instances-by-tag\\\">Filter AWS EC2 Instances by tag</h3>\\n\",\n    \"<p>In this action, we search for all the instances from AWS for a given tag and region and return a list of instances.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>tag_key,&nbsp;tag_value,&nbsp;region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> 
<code>instance_list</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"id\": \"ef696074-ab97-4de7-b3ee-08faeacd22ff\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"a94de204575d5609dce3abee3f63e84913548ad792e51dd949333bf60ebd842a\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Filter AWS EC2 Instance\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-16T11:17:19.558Z\"\n    },\n    \"id\": 260,\n    \"index\": 260,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      },\n      \"tag_key\": {\n       \"constant\": false,\n       \"value\": \"tag_key\"\n      },\n      \"tag_value\": {\n       \"constant\": false,\n       \"value\": \"tag_value\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"tag_key\": {\n        \"description\": \"The key of the tag.\",\n        \"title\": \"Tag Key\",\n        \"type\": \"string\"\n       },\n       \"tag_value\": {\n        \"description\": \"The value of the key.\",\n        \"title\": \"Tag Value\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"tag_key\",\n       
\"tag_value\",\n       \"region\"\n      ],\n      \"title\": \"aws_filter_ec2_by_tags\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Filter AWS EC2 Instance by tag\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"tag_key\",\n     \"tag_value\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"instance_list\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_filter_ec2_by_tags\"\n    ],\n    \"title\": \"Filter AWS EC2 Instance by tag\",\n    \"trusted\": true,\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_ec2_by_tags_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint({\\\"Instances\\\": output})\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_filter_ec2_by_tags(handle, tag_key: str, tag_value: str, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_filter_ec2_by_tags Returns an array of instances matching tags.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) 
method.\\n\",\n    \"\\n\",\n    \"        :type tag_key: string\\n\",\n    \"        :param tag_key: Key for the EC2 instance tag.\\n\",\n    \"\\n\",\n    \"        :type tag_value: string\\n\",\n    \"        :param tag_value: value for the EC2 instance tag.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: EC2 instance region.\\n\",\n    \"\\n\",\n    \"        :rtype: Array of instances matching tags.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    res = aws_get_paginator(ec2Client, \\\"describe_instances\\\", \\\"Reservations\\\",\\n\",\n    \"                            Filters=[{'Name': 'tag:' + tag_key, 'Values': [tag_value]}])\\n\",\n    \"\\n\",\n    \"    result = []\\n\",\n    \"    for reservation in res:\\n\",\n    \"        for instance in reservation['Instances']:\\n\",\n    \"            result.append(instance['InstanceId'])\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"tag_key\\\": \\\"tag_key\\\",\\n\",\n    \"    \\\"tag_value\\\": \\\"tag_value\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"instance_list\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_filter_ec2_by_tags, lego_printer=aws_filter_ec2_by_tags_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"7b35320c-c614-4d36-8fbb-cc102c02f72b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Restart-AWS-EC2-Instances\\\">Restart AWS EC2 
Instances</h3>\\n\",\n    \"<p>Here we will use the unSkript&nbsp;<strong>Restart AWS EC2 Instances</strong> action. This action restarts the instances found in Step 1: the list of instance IDs from Step 1 is passed in as its input.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>instance_ids, region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>restart_instance</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 22,\n   \"id\": \"bb7a4450-1efc-4aec-85c6-d9b4a8635762\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"e7d021a8e955291cf31e811e64a86baa2a902ea2185cb76e7121ebbab261c320\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Restart AWS EC2 Instances\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-16T11:22:08.644Z\"\n    },\n    \"id\": 257,\n    \"index\": 257,\n    \"inputData\": [\n     {\n      \"instance_ids\": {\n       \"constant\": false,\n       \"value\": \"instance_list\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_ids\": {\n        \"description\": \"List of 
instance IDs. For eg. [\\\"i-foo\\\", \\\"i-bar\\\"]\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Instance IDs\",\n        \"type\": \"array\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instances.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_ids\",\n       \"region\"\n      ],\n      \"title\": \"aws_restart_ec2_instances\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Restart AWS EC2 Instances\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"instance_ids\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"restart_instance\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not dry_run_flag\",\n    \"tags\": [\n     \"aws_restart_ec2_instances\"\n    ],\n    \"trusted\": true,\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_restart_ec2_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_restart_ec2_instances(handle, instance_ids: List, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_restart_instances Restarts instances.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param 
handle: Object returned by the task.validate(...) method.\\n\",\n    \"\\n\",\n    \"        :type instance_ids: list\\n\",\n    \"        :param instance_ids: List of instance ids.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region for instance.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the restarted instances info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    res = ec2Client.reboot_instances(InstanceIds=instance_ids)\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_ids\\\": \\\"instance_list\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not dry_run_flag\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"restart_instance\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_restart_ec2_instances, lego_printer=aws_restart_ec2_instances_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5336e57c-1c40-4f67-938c-dceee50b42be\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-AWS-Instance-Details\\\">Get AWS Instance Details</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Get AWS Instance Details</strong> action to get the details of the instances. 
This action is used to get details of instances that we received in step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>instance_id, region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>instance_details</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 23,\n   \"id\": \"b7630f20-a68e-45eb-bb5c-193231b5d262\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"930c0624b3d32846a0946e0a54dac8e69d7a1ee0e28e10de7338c68f06df8420\",\n    \"checkEnabled\": false,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get AWS Instances Details\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-16T11:23:01.767Z\"\n    },\n    \"id\": 210,\n    \"index\": 210,\n    \"inputData\": [\n     {\n      \"instance_id\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_id\": {\n        \"description\": \"ID of the instance.\",\n        \"title\": \"Instance Id\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instance.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n  
     }\n      },\n      \"required\": [\n       \"instance_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_instance_details\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"instance_id\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"instance_list\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS Instances Details\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"instance_id\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"instance_details\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_get_instance_details\"\n    ],\n    \"trusted\": true,\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_instance_details(handle, instance_id: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_get_instance_details Returns instance details.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) 
method.\\n\",\n    \"\\n\",\n    \"        :type instance_id: string\\n\",\n    \"        :param instance_id: ID of the instance.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region for instance.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the instance details.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2client = handle.client('ec2', region_name=region)\\n\",\n    \"    instances = []\\n\",\n    \"    response = ec2client.describe_instances(\\n\",\n    \"        Filters=[{\\\"Name\\\": \\\"instance-id\\\", \\\"Values\\\": [instance_id]}])\\n\",\n    \"    for reservation in response[\\\"Reservations\\\"]:\\n\",\n    \"        for instance in reservation[\\\"Instances\\\"]:\\n\",\n    \"            instances.append(instance)\\n\",\n    \"\\n\",\n    \"    return instances[0]\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_id\\\": \\\"iter_item\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"instance_list\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"instance_id\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"instance_details\\\")\\n\",\n    
\"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_instance_details, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"bee3abf5-3864-4a20-8154-2293a5c8aa28\",\n   \"metadata\": {\n    \"name\": \"Step-3 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-3 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Instance-Details\\\">Get AWS Instance Details</h3>\\n\",\n    \"<p>In this action, we sort the output from step-3 and present the details of the instance in the good table.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 24,\n   \"id\": \"a773f2a8-24b3-4dd6-a3c9-6266c9bafa05\",\n   \"metadata\": {\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-16T11:23:04.116Z\"\n    },\n    \"inputData\": [\n     {}\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"required\": [],\n      \"title\": \"Instance Details\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Instance Details\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Instance Details\",\n    \"trusted\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import pprint\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"\\n\",\n    \"output = instance_details\\n\",\n    \"instance_list = instance_list\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def Instance_Details(output, instance_list: list):\\n\",\n    \"    data1 = []\\n\",\n    \"    Header = \\\"\\\"\\n\",\n    \"    for instance_id in instance_list:\\n\",\n    \"        if instance_id in output.keys():\\n\",\n    \"  
          output1 = output[instance_id]\\n\",\n    \"            if isinstance(output1, (list, tuple)):\\n\",\n    \"                for item in output1:\\n\",\n    \"                    print(f'item: {item}')\\n\",\n    \"            elif isinstance(output1, dict):\\n\",\n    \"                for key, value in output1.items():\\n\",\n    \"                    if isinstance(value, (list)):\\n\",\n    \"                        pass\\n\",\n    \"                    else:\\n\",\n    \"                        if key == \\\"InstanceId\\\":\\n\",\n    \"                            Header = value\\n\",\n    \"                        data1.append([key, value])\\n\",\n    \"                print(f'\\\\n\\\\033[1m Table for Instance ID : {Header} \\\\033[0;0m')\\n\",\n    \"                print(tabulate(data1))\\n\",\n    \"            else:\\n\",\n    \"                print(f'Output for {task.name}')\\n\",\n    \"                print(output1)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"Instance_Details(output, instance_list)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"b3f10b1c-f542-48da-9b6e-1123873385a8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS legos to restart the AWS EC2 instances and get the details. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Restart AWS EC2 Instances\",\n   \"parameters\": [\n    \"tag_value\",\n    \"region\",\n    \"tag_key\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 891)\",\n   \"name\": \"python_kubernetes\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"dry_run_flag\": {\n     \"default\": false,\n     \"description\": \"If set to false, the runbook finds the instances matching the given tag and restarts them; if set to true, it only lists the matching instances.\",\n     \"title\": \"dry_run_flag\",\n     \"type\": \"boolean\"\n    },\n    \"region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"AWS Region\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    },\n    \"tag_key\": {\n     \"default\": \"Service\",\n     \"description\": \"Tag Key\",\n     \"title\": \"tag_key\",\n     \"type\": \"string\"\n    },\n    \"tag_value\": {\n     \"default\": \"devmongodb\",\n     \"description\": \"Tag Value\",\n     \"title\": \"tag_value\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [\n    \"region\",\n    \"tag_key\",\n    \"tag_value\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"dry_run_flag\": false,\n   \"region\": \"us-west-2\",\n   \"tag_key\": \"Name\",\n   \"tag_value\": \"test-recreate-instance\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Restart_AWS_EC2_Instances_By_Tag.json",
    "content": "{\n  \"name\": \"Restart AWS EC2 Instances\",\n  \"description\": \"This runbook restarts AWS EC2 instances that match a given tag\",\n  \"uuid\": \"e6e51e94e093ff3730b95c689232afaa3fc4f337d6fdac0ebb644fb2d6380afd\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SRE\", \"CATEGORY_TYPE_TROUBLESHOOTING\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/Run_EC2_from_AMI.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0ecd43d6-5d15-4210-95d5-6b7052748b74\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong></h3>\\n\",\n    \"<strong>To&nbsp;Launch AWS EC2 from AMI using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Launch-AWS-EC2-from-AMI\\\">Launch AWS EC2 from AMI</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>Launch AWS EC2 Instance From an AMI</li>\\n\",\n    \"<li>Get AWS Instances Details</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f2e0dbee-429f-4ddd-8808-bc79df9e7686\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Launch-AWS-EC2-Instance-From-an-AMI\\\">Launch AWS EC2 Instance From an AMI</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Launch AWS EC2 instance from an AMI</strong> action. 
This action is used to launch an EC2 instance from an AMI.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>ami_id, region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>launch_instance</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"id\": \"cf78d7ed-4073-4231-bff5-54879ff27239\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"dc0cd6cd07b4a3c94ea019493659c3f455a7ae952ea7e5eefcb7c8d402271ef5\",\n    \"checkEnabled\": false,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Use this action to launch an AWS EC2 instance from an AMI\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-16T11:44:06.426Z\"\n    },\n    \"id\": 294,\n    \"index\": 294,\n    \"inputData\": [\n     {\n      \"ami_id\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"ami_id\": {\n        \"description\": \"AMI Id.\",\n        \"title\": \"AMI Id\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      
\"required\": [\n       \"ami_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_launch_instance_from_ami\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"ami_id\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"ami_id\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Launch AWS EC2 Instance From an AMI\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"ami_id\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"launch_instance\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_launch_instance_from_ami\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_launch_instance_from_ami_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_launch_instance_from_ami(handle, ami_id: str, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_launch_instance_from_ami Launch instances from a particular image.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type ami_id: string\\n\",\n    \"        :param ami_id: AMI Id Information required to launch an instance.\\n\",\n    
\"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region to filter instances.\\n\",\n    \"\\n\",\n    \"        :rtype: List with launched instances info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"\\n\",\n    \"    res = ec2Client.run_instances(ImageId=ami_id, MinCount=1, MaxCount=1)\\n\",\n    \"\\n\",\n    \"    return res['Instances']\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"ami_id\\\": \\\"iter_item\\\",\\n\",\n    \"    \\\"region\\\": \\\"region\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"ami_id\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"ami_id\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"launch_instance\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_launch_instance_from_ami, lego_printer=aws_launch_instance_from_ami_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"e6396a97-387f-4e18-9f41-efb3d0a6bf96\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Output\\\">Modify Output</h3>\\n\",\n    \"<p>In this action, we parse the output from step-1 and collect the instance IDs.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 15,\n   \"id\": \"5c2feffc-68fe-44bd-bb9b-b6e20244efe3\",\n   \"metadata\": {\n    
\"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-16T11:48:17.267Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Output\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"instance_ids = []\\n\",\n    \"if launch_instance:\\n\",\n    \"    for k, v in launch_instance.items():\\n\",\n    \"        for i in v:\\n\",\n    \"            instance_ids.append(i[\\\"InstanceId\\\"])\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"939c7878-2dc1-42f5-9945-7248fc6b85ba\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-AWS-Instance-Details\\\">Get AWS Instance Details</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Get AWS Instance Details</strong> action to get the details of the instances. 
This action is used to get details of instances that we received in step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>instance_id, region</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>instance_details</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 16,\n   \"id\": \"966fb848-2bfb-4530-91da-a085ca6c9cd0\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"930c0624b3d32846a0946e0a54dac8e69d7a1ee0e28e10de7338c68f06df8420\",\n    \"checkEnabled\": false,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get AWS Instances Details\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-16T11:51:06.278Z\"\n    },\n    \"id\": 210,\n    \"index\": 210,\n    \"inputData\": [\n     {\n      \"instance_id\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_id\": {\n        \"description\": \"ID of the instance.\",\n        \"title\": \"Instance Id\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instance.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n  
     }\n      },\n      \"required\": [\n       \"instance_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_instance_details\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"instance_id\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"instance_ids\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS Instances Details\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"instance_id\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"instance_details\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"aws_get_instance_details\"\n    ],\n    \"title\": \"Get AWS Instances Details\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_instances_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_instance_details(handle, instance_id: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_get_instance_details Returns instance details.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned by the task.validate(...) 
method.\\n\",\n    \"\\n\",\n    \"        :type instance_id: string\\n\",\n    \"        :param instance_id: ID of the instance.\\n\",\n    \"\\n\",\n    \"        :type region: string\\n\",\n    \"        :param region: Region for instance.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with the instance details.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2client = handle.client('ec2', region_name=region)\\n\",\n    \"    instances = []\\n\",\n    \"    response = ec2client.describe_instances(\\n\",\n    \"        Filters=[{\\\"Name\\\": \\\"instance-id\\\", \\\"Values\\\": [instance_id]}])\\n\",\n    \"    for reservation in response[\\\"Reservations\\\"]:\\n\",\n    \"        for instance in reservation[\\\"Instances\\\"]:\\n\",\n    \"            instances.append(instance)\\n\",\n    \"\\n\",\n    \"    return instances[0]\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"region\\\",\\n\",\n    \"    \\\"instance_id\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"instance_ids\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"instance_id\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"instance_details\\\")\\n\",\n    
\"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_instance_details, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"466d72b2-48f8-45cc-b587-08b23129f43e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Instance-Details\\\">Instance Details</h3>\\n\",\n    \"<p>In this action, we sort the output from step-2 and present the details of the instance in the good table.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 17,\n   \"id\": \"7ae145b4-6660-4441-942e-74e984318779\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-16T11:51:11.921Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Instance Details\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Instance Details\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import pprint\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"\\n\",\n    \"output = instance_details\\n\",\n    \"instance_list = instance_ids\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def Instance_Details(output, instance_list: list):\\n\",\n    \"    data1 = []\\n\",\n    \"    Header = \\\"\\\"\\n\",\n    \"    for instance_id in instance_list:\\n\",\n    \"        if instance_id in output.keys():\\n\",\n    \"            output1 = output[instance_id]\\n\",\n    \"            if isinstance(output1, (list, 
tuple)):\\n\",\n    \"                for item in output1:\\n\",\n    \"                    print(f'item: {item}')\\n\",\n    \"            elif isinstance(output1, dict):\\n\",\n    \"                for key, value in output1.items():\\n\",\n    \"                    if isinstance(value, (list)):\\n\",\n    \"                        pass\\n\",\n    \"                    else:\\n\",\n    \"                        if key == \\\"InstanceId\\\":\\n\",\n    \"                            Header = value\\n\",\n    \"                        data1.append([key,value])\\n\",\n    \"                print(f'\\\\n\\\\033[1m Table for Instance ID : {Header} \\\\033[0;0m')\\n\",\n    \"                print(tabulate(data1))\\n\",\n    \"            else:\\n\",\n    \"                print(f'Output for {task.name}')\\n\",\n    \"                print(output1)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"Instance_Details(output, instance_list)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"1e18a18c-f822-452d-9a94-719f23734fa4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's AWS legos to perform AWS actions and this runbook launched EC2 instances from AMI and show the details of the instance. 
To view the full platform capabilities of unSkript, please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Launch AWS EC2 from AMI\",\n   \"parameters\": [\n    \"ami_id\",\n    \"region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 891)\",\n   \"name\": \"python_kubernetes\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"ami_id\": {\n     \"description\": \"List of AMI IDs to launch instances from.\",\n     \"title\": \"ami_id\",\n     \"type\": \"array\"\n    },\n    \"region\": {\n     \"description\": \"AWS region\",\n     \"title\": \"region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [\n    \"ami_id\",\n    \"region\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Run_EC2_from_AMI.json",
    "content": "{\n  \"name\": \"Launch AWS EC2 from AMI\",\n  \"description\": \"This runbook can be used to launch an AWS EC2 instance from an AMI in the given region.\",\n  \"uuid\": \"61fc20fd176f9b1d491d4d6cb58aab4d33759405874fbf8c83716c67bcdb52cc\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/Troubleshooting_Your_EC2_Configuration_in_Private_Subnet.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2d5c877a-6cb6-46fa-b902-3a631c5798b4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Troubleshooting Your EC2 Configuration in a Private Subnet\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Troubleshooting Your EC2 Configuration in a Private Subnet\"\n   },\n   \"source\": [\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b> This runbook demonstrates how to troubleshoot your EC2 configuration in a private subnet using unSkript legos.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Troubleshooting Your EC2 Configuration in a Private Subnet</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"    1) Get all details of the given instance ID and capture the VPC ID.\\n\",\n    \"    2) Using the VPC ID, get the NAT Gateway details.\\n\",\n    \"    3) Using the VPC ID, get the Internet Gateway details.\\n\",\n    \"    4) SSH to the given instance and try to connect to the internet.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8113d1a2-51f9-4eba-82c0-9066643d7a26\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Get AWS Instance Details\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Get AWS Instance Details\"\n   },\n   \"source\": [\n    \"Here we will use unSkript Get AWS Instance Details Lego. This lego takes instance_id: str and region: str as input. 
This input is used to discover all details about given EC2 instance.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b71a097f-91ba-400a-9f4c-e91fbd92b606\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"aa1e026ca8002b906315feba401e5c46889d459270adce3b65d480dc9530311f\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Use This Action to Get Details about an AWS EC2 Instance\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-20T18:14:32.832Z\"\n    },\n    \"id\": 123,\n    \"index\": 123,\n    \"inputData\": [\n     {\n      \"instance_id\": {\n       \"constant\": false,\n       \"value\": \"Instance_id\"\n      },\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"instance_id\": {\n        \"description\": \"ID of the instance.\",\n        \"title\": \"Instance Id\",\n        \"type\": \"string\"\n       },\n       \"region\": {\n        \"description\": \"AWS Region of the instance.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_instance_details\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"Get AWS Instance Details\",\n    \"nouns\": [\n     \"instance\",\n     \"details\"\n    ],\n    \"orderProperties\": [\n     \"instance_id\",\n     \"region\"\n    ],\n    
\"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"IncidentDetails\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"aws_get_instance_details\"\n    ],\n    \"verbs\": [\n     \"get\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_instance_details(\\n\",\n    \"    handle,\\n\",\n    \"    instance_id: str,\\n\",\n    \"    region: str,\\n\",\n    \") -> Dict:\\n\",\n    \"\\n\",\n    \"    ec2client = handle.client('ec2', region_name=region)\\n\",\n    \"    instances = []\\n\",\n    \"    response = ec2client.describe_instances(\\n\",\n    \"        Filters=[{\\\"Name\\\": \\\"instance-id\\\", \\\"Values\\\": [instance_id]}])\\n\",\n    \"    for reservation in response[\\\"Reservations\\\"]:\\n\",\n    \"        for instance in reservation[\\\"Instances\\\"]:\\n\",\n    \"            instances.append(instance)\\n\",\n    \"    return instances[0]\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"instance_id\\\": \\\"Instance_id\\\",\\n\",\n    \"    \\\"region\\\": \\\"Region\\\"\\n\",\n    \"    }''')\\n\",\n    
\"task.configure(outputName=\\\"IncidentDetails\\\")\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_instance_details, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"34ea4f4f-b016-4c82-99c9-c967d10248b9\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Get NAT Gateway by VPC ID\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Get NAT Gateway by VPC ID\"\n   },\n   \"source\": [\n    \"Here we will use unSkript AWS Get NAT Gateway Info by VPC ID Lego. This lego takes vpc_id: str, region: str as input. This input is used to discover details about NAT gateway by using VPC ID.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"0cc9ee2a-9ff6-481e-bb42-6a400816bfb9\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"d091c88bc74a85efb6038f5afd6da34047b53faf70ec2c203af85599b2154f76\",\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Get NAT Gateway Info by VPC ID\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-20T17:31:19.042Z\"\n    },\n    \"id\": 146,\n    \"index\": 146,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"vpc_id\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n    
    \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"vpc_id\": {\n        \"description\": \"VPC ID of the Instance.\",\n        \"title\": \"VPC ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"vpc_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_natgateway_by_vpc\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"vpc_id\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"[n['VpcId'] for n in IncidentDetails.get('NetworkInterfaces') if n.get('VpcId') != None ]\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get NAT Gateway Info by VPC ID\",\n    \"nouns\": [\n     \"aws\",\n     \"ec2\",\n     \"nat gateways\",\n     \"vpc\"\n    ],\n    \"orderProperties\": [\n     \"vpc_id\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"NATOutput\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"aws_get_natgateway_by_vpc\"\n    ],\n    \"title\": \"AWS Get NAT Gateway Info by VPC ID\",\n    \"verbs\": [\n     \"dict\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_natgateway_by_vpc_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def 
aws_get_natgateway_by_vpc(handle, vpc_id: str, region: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_get_natgateway_by_vpc Returns an Dict of NAT Gateway info.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type vpc_id: str\\n\",\n    \"        :param vpc_id: VPC ID to find NAT Gateway.\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter instance.\\n\",\n    \"\\n\",\n    \"        :rtype: Dict of NAT Gateway info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    result = {}\\n\",\n    \"    try:\\n\",\n    \"        response = aws_get_paginator(ec2Client, \\\"describe_nat_gateways\\\", \\\"NatGateways\\\",\\n\",\n    \"                                Filters=[{'Name': 'vpc-id','Values': [vpc_id]}])\\n\",\n    \"        for nat_info in response:\\n\",\n    \"            if \\\"NatGatewayId\\\" in nat_info:\\n\",\n    \"                result[\\\"NatGatewayId\\\"] = nat_info[\\\"NatGatewayId\\\"]\\n\",\n    \"            if \\\"State\\\" in nat_info:\\n\",\n    \"                result[\\\"State\\\"] = nat_info[\\\"State\\\"]\\n\",\n    \"            if \\\"SubnetId\\\" in nat_info:\\n\",\n    \"                result[\\\"SubnetId\\\"] = nat_info[\\\"SubnetId\\\"]\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result[\\\"error\\\"] = error\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"vpc_id\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    
\\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"[n['VpcId'] for n in IncidentDetails.get('NetworkInterfaces') if n.get('VpcId') != None ]\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"vpc_id\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"NATOutput\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_natgateway_by_vpc, lego_printer=aws_get_natgateway_by_vpc_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"0f97c43c-ce42-4fef-96bd-09e266c9cda3\",\n   \"metadata\": {\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-20T17:31:21.530Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"NAT Gateway Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"NAT Gateway Output\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import pprint\\n\",\n    \"pprint.pprint(NATOutput)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"1aa9a0cb-3002-4e95-b5aa-c6a70c939d94\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS Get Internet Gateway by VPC ID\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Get Internet Gateway by VPC ID\"\n   },\n   \"source\": [\n    \"Here we will use unSkript AWS Get Internet Gateway by VPC ID Lego. This lego takes vpc_id: str, region: str as input. 
This input is used to discover name of Internet gateway by using VPC ID.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"10d7d677-1856-4cf7-af80-fd1378e5e4ef\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"c52c6a51e5c0b38f2d4ee38dfb402497b2b91f8bc63d4bd62afebb769ece7ee3\",\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"AWS Get Internet Gateway by VPC ID\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-20T17:32:45.290Z\"\n    },\n    \"id\": 209,\n    \"index\": 209,\n    \"inputData\": [\n     {\n      \"region\": {\n       \"constant\": false,\n       \"value\": \"Region\"\n      },\n      \"vpc_id\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"region\": {\n        \"description\": \"AWS Region.\",\n        \"title\": \"Region\",\n        \"type\": \"string\"\n       },\n       \"vpc_id\": {\n        \"description\": \"VPC ID of the Instance.\",\n        \"title\": \"VPC ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"vpc_id\",\n       \"region\"\n      ],\n      \"title\": \"aws_get_internet_gateway_by_vpc\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"vpc_id\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"[n['VpcId'] for n in IncidentDetails.get('NetworkInterfaces') if n.get('VpcId') != None ]\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     
\"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Get Internet Gateway by VPC ID\",\n    \"nouns\": [\n     \"aws\",\n     \"ec2\",\n     \"internet gateways\",\n     \"vpc\"\n    ],\n    \"orderProperties\": [\n     \"vpc_id\",\n     \"region\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"InternetOutput\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"aws_get_internet_gateway_by_vpc\"\n    ],\n    \"title\": \"AWS Get Internet Gateway by VPC ID\",\n    \"verbs\": [\n     \"list\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.aws import aws_get_paginator\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_internet_gateway_by_vpc_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_get_internet_gateway_by_vpc(handle, vpc_id: str, region: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_get_internet_gateway_by_vpc Returns an List of internet Gateway.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"        :type vpc_id: str\\n\",\n    \"        :param vpc_id: VPC ID to find NAT Gateway.\\n\",\n    \"\\n\",\n    \"        :type region: str\\n\",\n    \"        :param region: Region to filter instance.\\n\",\n    \"\\n\",\n    \"        :rtype: List of Internet Gateway.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    ec2Client = handle.client('ec2', region_name=region)\\n\",\n    \"    result 
= []\\n\",\n    \"    try:\\n\",\n    \"        response = aws_get_paginator(ec2Client, \\\"describe_internet_gateways\\\", \\\"InternetGateways\\\",\\n\",\n    \"                                Filters=[{'Name': 'attachment.vpc-id','Values': [vpc_id]}])\\n\",\n    \"        for nat_info in response:\\n\",\n    \"            if \\\"InternetGatewayId\\\" in nat_info:\\n\",\n    \"                result.append(nat_info[\\\"InternetGatewayId\\\"])\\n\",\n    \"\\n\",\n    \"    except Exception as error:\\n\",\n    \"        result.append({\\\"error\\\":error})\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"region\\\": \\\"Region\\\",\\n\",\n    \"    \\\"vpc_id\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"[n['VpcId'] for n in IncidentDetails.get('NetworkInterfaces') if n.get('VpcId') != None ]\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"vpc_id\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"InternetOutput\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_get_internet_gateway_by_vpc, lego_printer=aws_get_internet_gateway_by_vpc_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b5b0d239-39e8-4960-953c-a9746c2d12ff\",\n   \"metadata\": {\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-20T17:32:47.027Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Internet Gateway Output\",\n    \"orderProperties\": [],\n    
\"tags\": [],\n    \"title\": \"Internet Gateway Output\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import pprint\\n\",\n    \"pprint.pprint(InternetOutput)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8e35ab9c-c545-4324-9f94-1854042b86fc\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"SSH Execute Remote Command\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"SSH Execute Remote Command\"\n   },\n   \"source\": [\n    \"Here we will use unSkript SSH Execute Remote Command Lego. This lego takes hosts: List[str], command: str, sudo: bool as input. This inputs is used to connect to instance and check weather the the instance connected to internet.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"483b3bb9-cb21-4330-b874-cbca159fdfe1\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5279b2046bb2eb4a691ba748086f4af9e580a849faae557694bb12a8c2b7b379\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"SSH Execute Remote Command\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-21T12:53:09.438Z\"\n    },\n    \"id\": 77,\n    \"index\": 77,\n    \"inputData\": [\n     {\n      \"command\": {\n       \"constant\": false,\n       \"value\": \"Command\"\n      },\n      \"hosts\": {\n       \"constant\": false,\n       \"value\": \"[n['PrivateIpAddress'] for n in IncidentDetails.get('NetworkInterfaces') if n.get('PrivateIpAddress') != None ]\"\n      },\n      \"sudo\": {\n       \"constant\": true,\n       \"value\": false\n  
    }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"command\": {\n        \"description\": \"Command to be executed on the remote server.\",\n        \"title\": \"Command\",\n        \"type\": \"string\"\n       },\n       \"hosts\": {\n        \"description\": \"List of hosts to connect to. For eg. [\\\"host1\\\", \\\"host2\\\"].\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Hosts\",\n        \"type\": \"array\"\n       },\n       \"sudo\": {\n        \"default\": false,\n        \"description\": \"Run the command with sudo.\",\n        \"title\": \"Run with sudo\",\n        \"type\": \"boolean\"\n       }\n      },\n      \"required\": [\n       \"hosts\",\n       \"command\"\n      ],\n      \"title\": \"ssh_execute_remote_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"SSH Execute Remote Command\",\n    \"nouns\": [\n     \"ssh\",\n     \"command\"\n    ],\n    \"orderProperties\": [\n     \"hosts\",\n     \"command\",\n     \"sudo\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"ssh_execute_remote_command\"\n    ],\n    \"verbs\": [\n     \"execute\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(\\\"\\\\n\\\")\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def 
ssh_execute_remote_command(sshClient, hosts: List[str], command: str, sudo: bool = False) -> Dict:\\n\",\n    \"\\n\",\n    \"    client = sshClient(hosts)\\n\",\n    \"    runCommandOutput = client.run_command(command=command, sudo=sudo)\\n\",\n    \"    client.join()\\n\",\n    \"    res = {}\\n\",\n    \"\\n\",\n    \"    for host_output in runCommandOutput:\\n\",\n    \"        hostname = host_output.host\\n\",\n    \"        output = []\\n\",\n    \"        for line in host_output.stdout:\\n\",\n    \"            output.append(line)\\n\",\n    \"\\n\",\n    \"        o = \\\"\\\\n\\\".join(output)\\n\",\n    \"        res[hostname] = o\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"command\\\": \\\"Command\\\",\\n\",\n    \"    \\\"hosts\\\": \\\"[n['PrivateIpAddress'] for n in IncidentDetails.get('NetworkInterfaces') if n.get('PrivateIpAddress') != None ]\\\",\\n\",\n    \"    \\\"sudo\\\": \\\"False\\\"\\n\",\n    \"    }''')\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(ssh_execute_remote_command, lego_printer=ssh_execute_remote_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"6d0c973a-a674-4d7c-9548-80376c41b318\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this Runbook, we demonstrated the use of unSkript's AWS legos to collect the instance details and then, using the VPC ID, check for the NAT Gateway and Internet Gateway. 
To view the full platform capabilities of unSkript please visit https://unskript.com\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Troubleshooting Your EC2 Configuration in a Private Subnet\",\n   \"parameters\": [\n    \"Instance_id\",\n    \"Region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"Command\": {\n     \"default\": \"ping -c 5 8.8.8.8\",\n     \"description\": \"Command to test connectivity\",\n     \"title\": \"Command\",\n     \"type\": \"string\"\n    },\n    \"Instance_id\": {\n     \"default\": \"i-0dd687a6b0eb4da63\",\n     \"description\": \"Instance ID\",\n     \"title\": \"Instance_id\",\n     \"type\": \"string\"\n    },\n    \"Region\": {\n     \"default\": \"us-west-2\",\n     \"description\": \"Region\",\n     \"title\": \"Region\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"Command\": \"ping -c 5 8.8.8.8\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Troubleshooting_Your_EC2_Configuration_in_Private_Subnet.json",
    "content": "{\n  \"name\": \"Troubleshooting Your EC2 Configuration in a Private Subnet\",\n  \"description\": \"This runbook can be used to troubleshoot an EC2 instance configuration in a private subnet. It captures the VPC ID for a given instance ID, uses the VPC ID to get NAT Gateway and Internet Gateway details, and then connects to the instance over SSH to test internet connectivity.\",\n  \"uuid\": \"c123bb9eff909c27f2d330792689c63110889e0b7754041e2e24ade22ca16615\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_TROUBLESHOOTING\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "AWS/Update_and_Manage_AWS_User_Permission.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f0865bbe-2a84-4654-9f9a-f794657031b8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Update and Manage AWS User Permission\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Update and Manage AWS User Permission\"\n   },\n   \"source\": [\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b> This runbook demonstrates how to update and manage AWS user permissions using unSkript legos.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Update and Manage AWS User Permissions</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"1) List all the policies attached to the IAM user. \\n\",\n    \"2) Attach a new policy to the given user.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"568e67dc-f423-4354-8223-6a4cf1fab87e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"AWS List Attached User Policies\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS List Attached User Policies\"\n   },\n   \"source\": [\n    \"Here we will use unSkript AWS List Attached User Policies Lego. This lego takes UserName as input. 
This input is used to list all the policies for the given user.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"id\": \"88c08055-bf9e-417a-907e-a282fcf10d98\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"afacde59-a401-4a8b-901d-46c4b3970b78\",\n    \"continueOnError\": false,\n    \"createTime\": \"2022-07-27T16:51:48Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"v0.0.0\",\n    \"description\": \"Test\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-29T18:57:37.385Z\"\n    },\n    \"id\": 100001,\n    \"index\": 100001,\n    \"inputData\": [\n     {\n      \"UserName\": {\n       \"constant\": false,\n       \"value\": \"UserName\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"UserName\": {\n        \"default\": \"\",\n        \"description\": \"IAM User Name whose attached policies need to be fetched\",\n        \"title\": \"UserName\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"UserName\"\n      ],\n      \"title\": \"aws_list_attached_user_policies\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS List Attached User Policies\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"UserName\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [],\n    \"title\": \"AWS List Attached User Policies\",\n    \"trusted\": true,\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_attached_user_policies_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_list_attached_user_policies(handle, UserName: str) -> List:\\n\",\n    \"    \\\"\\\"\\\"aws_list_attached_user_policies returns the list of policies attached to the user.\\n\",\n    \"\\n\",\n    \"        :type UserName: string\\n\",\n    \"        :param UserName: IAM user whose policies need to be fetched.\\n\",\n    \"\\n\",\n    \"        :rtype: List with the attached policy names.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    iamClient = handle.client('iam')\\n\",\n    \"    try:\\n\",\n    \"        response = iamClient.list_attached_user_policies(UserName=UserName)\\n\",\n    \"        for i in response[\\\"AttachedPolicies\\\"]:\\n\",\n    \"            result.append(i['PolicyName'])\\n\",\n    \"\\n\",\n    \"    except ClientError as error:\\n\",\n    \"        result.append(error.response)\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"UserName\\\": \\\"UserName\\\"\\n\",\n    \"    }''')\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_list_attached_user_policies, lego_printer=aws_list_attached_user_policies_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"3814f2a7-bd7f-4745-8321-a55be4cf0853\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    
},\n    \"name\": \"AWS Attach New Policy to User\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"AWS Attach New Policy to User\"\n   },\n   \"source\": [\n    \"Here we will use unSkript AWS Attach IAM Policy Lego. This lego takes UserName and PolicyName as input. This inputs is used to Attach a new policy to the given user and provide specified permissions.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 13,\n   \"id\": \"4de7fc11-3301-4ff7-9b1a-8ea0cd61c4bf\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"afacde59-a401-4a8b-901d-46c4b3970b78\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"2022-07-27T16:51:48Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"v0.0.0\",\n    \"description\": \"Test\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-09-05T16:25:19.900Z\"\n    },\n    \"id\": 100001,\n    \"index\": 100001,\n    \"inputData\": [\n     {\n      \"PolicyName\": {\n       \"constant\": false,\n       \"value\": \"Policy_Name\"\n      },\n      \"UserName\": {\n       \"constant\": false,\n       \"value\": \"UserName\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"PolicyName\": {\n        \"default\": \"\",\n        \"description\": \"Name of the policy which permissions needs to attached for the user.\",\n        \"title\": \"PolicyName\",\n        \"type\": \"string\"\n       },\n       \"UserName\": {\n        \"default\": \"\",\n        \"description\": \"IAM User Name where policy needs to attached\",\n        \"title\": \"UserName\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"instance_ids\",\n       \"region\"\n    
  ],\n      \"title\": \"aws_restart_ec2_instances_test\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_AWS\",\n    \"name\": \"AWS Attach New Policy to User\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"UserName\",\n     \"PolicyName\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [],\n    \"title\": \"AWS Attach New Policy to User\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from botocore.exceptions import ClientError\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def aws_attache_iam_policy_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def aws_attache_iam_policy(handle, UserName: str, PolicyName: str) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"aws_attache_iam_policy used to provide user permissions.\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object containing global params for the notebook.\\n\",\n    \"\\n\",\n    \"        :type UserName: dict\\n\",\n    \"        :param UserName: Dictionary of credentials info.\\n\",\n    \"\\n\",\n    \"        :type PolicyName: string\\n\",\n    \"        :param PolicyName: Policy name to apply the permissions to the user .\\n\",\n    \"\\n\",\n    \"        :rtype: Dict with User policy info.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = {}\\n\",\n    \"    iamResource = handle.resource('iam')\\n\",\n    \"    try:\\n\",\n    \"        user = 
iamResource.User(UserName)\\n\",\n    \"        response = user.attach_policy(\\n\",\n    \"            PolicyArn='arn:aws:iam::aws:policy/'+PolicyName\\n\",\n    \"            )\\n\",\n    \"        result = response\\n\",\n    \"    except ClientError as error:\\n\",\n    \"        result = error.response\\n\",\n    \"\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"PolicyName\\\": \\\"Policy_Name\\\",\\n\",\n    \"    \\\"UserName\\\": \\\"UserName\\\"\\n\",\n    \"    }''')\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(aws_attach_iam_policy, lego_printer=aws_attach_iam_policy_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"de59d49d-edb6-4849-addd-bf18fcf2683d\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this Runbook, we demonstrated the use of unSkript's AWS Actions. This runbook lists all the policies attached to an IAM user and attaches a new policy to the same user. 
To view the full platform capabilities of unSkript, please visit https://unskript.com\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Update and Manage AWS User permission\",\n   \"parameters\": [\n    \"UserName\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 618)\",\n   \"name\": \"python_kubernetes\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"Policy_Name\": {\n     \"default\": \"CloudWatchFullAccess\",\n     \"description\": \"Policy name used to apply the permissions to the user\",\n     \"title\": \"Policy_Name\",\n     \"type\": \"string\"\n    },\n    \"UserName\": {\n     \"default\": \"TestRunbook\",\n     \"description\": \"IAM user name to which the policy needs to be attached\",\n     \"title\": \"UserName\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"Policy_Name\": \"CloudWatchFullAccess\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "AWS/Update_and_Manage_AWS_User_Permission.json",
    "content": "{\n  \"name\": \"Update and Manage AWS User permission\",\n  \"description\": \"This runbook can be used Update and Manage AWS IAM User Permission\",\n  \"uuid\": \"79c167af0209e60fc45455bf4943b733904d4ab8654028d8434d193d1bf8c16c\",\n  \"icon\": \"CONNECTOR_TYPE_AWS\",\n  \"categories\": [ \"CATEGORY_TYPE_IAM\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_AWS\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "AWS/__init__.py",
    "content": "#\n# unSkript (c) 2022\n#\n"
  },
  {
    "path": "AWS/legos/AWS_Start_IAM_Policy_Generation/AWS_Start_IAM_Policy_Generation.json",
    "content": "{\n  \"action_title\": \"AWS Start IAM Policy Generation \",\n  \"action_description\": \"Given a region, a CloudTrail ARN (where the logs are being recorded), a reference IAM ARN (whose usage we will parse), and a Service role, this will begin the generation of a IAM policy.  The output is a String of the generation Id.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"AWS_Start_IAM_Policy_Generation\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_IAM\"]\n}"
  },
  {
    "path": "AWS/legos/AWS_Start_IAM_Policy_Generation/AWS_Start_IAM_Policy_Generation.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef AWS_Start_IAM_Policy_Generation_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef AWS_Start_IAM_Policy_Generation(\n        handle,\n        region:str,\n        CloudTrailARN:str,\n        IAMPrincipalARN:str,\n        AccessRole:str,\n        hours:float\n        ) -> str:\n\n    client = handle.client('accessanalyzer', region_name=region)\n    policyGenerationDict = {'principalArn': IAMPrincipalARN}\n    myTrail = {'cloudTrailArn': CloudTrailARN,\n                   'regions': [region],\n                   'allRegions': False\n              }\n    endTime = datetime.now()\n    endTime = endTime.strftime(\"%Y-%m-%dT%H:%M:%S\")\n    startTime = datetime.now()- timedelta(hours =hours)\n    startTime =startTime.strftime(\"%Y-%m-%dT%H:%M:%S\")\n    response = client.start_policy_generation(\n        policyGenerationDetails=policyGenerationDict,\n        cloudTrailDetails={\n            'trails': [myTrail],\n            'accessRole': AccessRole,\n            'startTime': startTime,\n            'endTime': endTime\n        }\n    )\n    jobId = response['jobId']\n    return jobId\n"
  },
  {
    "path": "AWS/legos/AWS_Start_IAM_Policy_Generation/README.md",
    "content": "[<img align=\"left\" src=\"https://raw.githubusercontent.com/unskript/Awesome-CloudOps-Automation/master/.github/images/runbooksh_dark.png#gh-dark-mode-only\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://raw.githubusercontent.com/unskript/Awesome-CloudOps-Automation/master/.github/images/runbooksh_dark.png)\n[<img align=\"left\" src=\"https://raw.githubusercontent.com/unskript/Awesome-CloudOps-Automation/master/.github/images/runbooksh_light.png#gh-light-mode-only\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://raw.githubusercontent.com/unskript/Awesome-CloudOps-Automation/master/.github/images/runbooksh_light.png)\n\n\n# AWS Start IAM Policy Generation\n\n\n## Description\nGiven a region, a CloudTrail ARN (where the logs are being recorded), a reference IAM ARN (whose usage we will parse), and a Service role, this will begin the generation of a IAM policy.  The output is a String of the generation Id.\n\n## Action Details\n```python\naction.start_iam_policy_generation(handle, region:str, CloudTrailARN:str, IAMPrincipalARN:str, AccessRole:str, hours:float) -> str\n```\n- `handle`: Object of type unSkript AWS Connector.\n- `region`: AWS region where CloudTrail logs are recorded.\n- `CloudTrailARN`: ARN of the logs you wish to parse.\n- `IAMPrincipalARN`: Reference ARN - we are copying the usage from this account.\n- `AccessRole`: Role that allows access to logs.\n- `hours`: Hours of data to parse.\n\n## Action Output\nThis action will return a string value representing the generation Id.\n<img src=\"./1.jpg\">\n\n## See it in Action\n\n\nYou can try out this action on the [runbooks.sh](http://runbooks.sh) open-source platform or on the [unSkript Cloud Free Trial](https://us.app.unskript.io). 
\n\nFeel free to join the community Slack at [https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation](https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation) for support, questions, and comments."
  },
  {
    "path": "AWS/legos/AWS_Start_IAM_Policy_Generation/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/__init__.py",
    "content": "#\n# unSkript (c) 2022\n#\n"
  },
  {
    "path": "AWS/legos/aws_add_lifecycle_configuration_to_s3_bucket/README.md",
    "content": "[<img align=\"left\" src=\"https://raw.githubusercontent.com/unskript/Awesome-CloudOps-Automation/master/.github/images/runbooksh_dark.png#gh-dark-mode-only\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://raw.githubusercontent.com/unskript/Awesome-CloudOps-Automation/master/.github/images/runbooksh_dark.png)\n[<img align=\"left\" src=\"https://raw.githubusercontent.com/unskript/Awesome-CloudOps-Automation/master/.github/images/runbooksh_light.png#gh-light-mode-only\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://raw.githubusercontent.com/unskript/Awesome-CloudOps-Automation/master/.github/images/runbooksh_light.png)\n<h1>Add Lifecycle Configuration to AWS S3 Bucket</h1>\n\n## Description\nCreates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration.\n\n## Action Details\n```python\naws_add_lifecycle_configuration_to_s3_bucket(handle, region: str, bucket_name:str, expiration_days:int=30, prefix:str='', noncurrent_days:int=30)\n```\n- handle: Object of type unSkript AWS Connector.\n- bucket_name: The name of the bucket for which to set the configuration.\n- expiration_days: Specifies the expiration for the lifecycle of the object in the form of days. E.g., 30 (days).\n- prefix: Prefix identifying one or more objects to which the rule applies.\n- noncurrent_days: Specifies the number of days an object is noncurrent before Amazon S3 permanently deletes the noncurrent object versions.\n\n## Action Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can try out this action by visiting the following links:\n- [Runbooks.sh](http://runbooks.sh): Open source Runbooks and Cloud Automation.\n- [unSkript Live](https://us.app.unskript.io): Cloud free trial.\n- Community Slack: Join the [Cloud Ops Community](https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation) for support, questions, and comments.\n"
  },
  {
    "path": "AWS/legos/aws_add_lifecycle_configuration_to_s3_bucket/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_add_lifecycle_configuration_to_s3_bucket/aws_add_lifecycle_configuration_to_s3_bucket.json",
    "content": "{\n  \"action_title\": \"Add Lifecycle Configuration to AWS S3 Bucket\",\n  \"action_description\": \"Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_add_lifecycle_configuration_to_s3_bucket\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\" ]\n}"
  },
  {
    "path": "AWS/legos/aws_add_lifecycle_configuration_to_s3_bucket/aws_add_lifecycle_configuration_to_s3_bucket.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Dict, Optional\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        description='AWS Region.', \n        title='Region')\n    bucket_name: str = Field(\n        description='The name of the bucket for which to set the configuration.',\n        title='Bucket Name',\n    )\n    expiration_days: Optional[float] = Field(\n        30,\n        description='Specifies the expiration for the lifecycle of the object in the form of days. Eg: 30 (days)',\n        title='Expiration Days',\n    )\n    prefix: Optional[str] = Field(\n        '',\n        description='Prefix identifying one or more objects to which the rule applies.',\n        title='Prefix',\n    )\n    noncurrent_days: Optional[float] = Field(\n        30,\n        description='Specifies the number of days an object is noncurrent before Amazon S3 permanently deletes the noncurrent object versions',\n        title='Noncurrent Days',\n    )\n\n\n\ndef aws_add_lifecycle_configuration_to_s3_bucket_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\n\ndef aws_add_lifecycle_configuration_to_s3_bucket(handle, region: str, bucket_name:str, expiration_days:int=30, prefix:str='', noncurrent_days:int=30) -> Dict:\n    \"\"\"aws_add_lifecycle_configuration_to_s3_bucket returns response of adding lifecycle configuration\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: string\n        :param region: location of the bucket\n        \n        :type bucket_name: string\n        :param bucket_name: The name of the bucket for which to set the configuration.\n\n        :type expiration_days: int\n        :param expiration_days: Specifies the expiration for the lifecycle of the object in the form of days. 
Eg: 30 (days)\n\n        :type prefix: string\n        :param prefix: Prefix identifying one or more objects to which the rule applies.\n\n        :type noncurrent_days: int\n        :param noncurrent_days: Specifies the number of days an object is noncurrent before Amazon S3 permanently deletes the noncurrent object versions.\n\n        :rtype: Dict of the response of adding lifecycle configuration\n    \"\"\"\n    s3Client = handle.client(\"s3\", region_name=region)\n    try:\n        lifecycle_config = {\n            'Rules': [\n                {\n                    'Expiration': {\n                        'Days': expiration_days,\n                    },\n                    'Filter': {\n                        'Prefix': prefix\n                    },\n                    'Status': 'Enabled',\n                    'NoncurrentVersionExpiration': {\n                        'NoncurrentDays': noncurrent_days\n                    }\n                }\n            ]\n        }\n        response = s3Client.put_bucket_lifecycle_configuration(\n            Bucket=bucket_name,\n            LifecycleConfiguration=lifecycle_config\n        )\n    except Exception as e:\n        raise e\n    return response\n\n\n"
  },
  {
    "path": "AWS/legos/aws_apply_default_encryption_for_s3_buckets/README.md",
    "content": "\r\n[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Apply AWS Default Encryption for S3 Bucket </h1>\r\n\r\n## Description\r\nThis Lego apply AWS default encryption for S3 bucket.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_put_bucket_encryption(handle: object, name: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        name: Name of the S3 bucket.\r\n        region: Location of the S3 buckets.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, name and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_apply_default_encryption_for_s3_buckets/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_apply_default_encryption_for_s3_buckets/aws_apply_default_encryption_for_s3_buckets.json",
    "content": "{\r\n    \"action_title\": \"Apply AWS Default Encryption for S3 Bucket\",\r\n    \"action_description\": \"Apply AWS Default Encryption for S3 Bucket\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_apply_default_encryption_for_s3_buckets\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"  ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_apply_default_encryption_for_s3_buckets/aws_apply_default_encryption_for_s3_buckets.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n    bucket_name: str = Field(\r\n        title='Bucket Name',\r\n        description='AWS S3 Bucket Name.')\r\n\r\n\r\ndef aws_apply_default_encryption_for_s3_buckets_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_apply_default_encryption_for_s3_buckets(handle, bucket_name: str, region: str) -> Dict:\r\n    \"\"\"aws_put_bucket_encryption Puts default encryption configuration for bucket.\r\n        \r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type bucket_name: string\r\n        :param bucket_name: Name of the S3 bucket.\r\n\r\n        :type region: string\r\n        :param region: location of the bucket\r\n\r\n        :rtype: Dict with the response info.\r\n    \"\"\"\r\n    s3Client = handle.client('s3', region_name=region)\r\n    result = {}\r\n    # Setup default encryption configuration\r\n    try:\r\n        response = s3Client.put_bucket_encryption(\r\n            Bucket=bucket_name,\r\n            ServerSideEncryptionConfiguration={\r\n                \"Rules\": [\r\n                    {\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}\r\n                ]},\r\n            )\r\n        result['Response'] = response\r\n\r\n    except Exception as e:\r\n        result['Error'] = e\r\n        \r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_attach_ebs_to_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Attach an EBS volume to an AWS EC2 Instance </h1>\r\n\r\n## Description\r\nThis Lego attach an EBS volume to an AWS EC2 Instance.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_attach_ebs_to_instances(handle: Session, region: str, instance_id: str, volume_id: str, device_name: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Location of the S3 buckets.\r\n        instance_id: ID of the EC2 instance.\r\n        volume_id: ID of the EBS volume.\r\n        device_name: Name of the Device.\r\n\r\n## Lego Input\r\nThis Lego take five inputs handle, region, instance_id, volume_id and device_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_attach_ebs_to_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_attach_ebs_to_instances/aws_attach_ebs_to_instances.json",
    "content": "{\r\n    \"action_title\": \"Attach an EBS volume to an AWS EC2 Instance\",\r\n    \"action_description\": \"Attach an EBS volume to an AWS EC2 Instance\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_attach_ebs_to_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n}"
  },
  {
    "path": "AWS/legos/aws_attach_ebs_to_instances/aws_attach_ebs_to_instances.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the EBS volume')\r\n    instance_id: str = Field(\r\n        title='Instance Id',\r\n        description='ID of the EC2 instance')\r\n    volume_id: str = Field(\r\n        title='Volume Id',\r\n        description='ID of the EBS volume')\r\n    device_name: str = Field(\r\n        title='Device Name',\r\n        description='The device name')\r\n\r\n\r\ndef aws_attach_ebs_to_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_attach_ebs_to_instances(\r\n    handle: Session,\r\n    region: str,\r\n    instance_id: str,\r\n    volume_id: str,\r\n    device_name: str\r\n    ) -> Dict:   \r\n    \"\"\"aws_attach_ebs_to_instances Attach instances under a particular Elastic Block Store (EBS).\r\n\r\n    :type region: string\r\n    :param region: AWS Region of the EBS volume\r\n\r\n    :type instance_id: string\r\n    :param instance_id: ID of the instance\r\n\r\n    :type volume_id: string\r\n    :param volume_id: The ID of the volume\r\n\r\n    :type device_name: string\r\n    :param device_name: The device name\r\n\r\n    :rtype: dict with registered instance details.\r\n    \"\"\"\r\n\r\n    ec2Client = handle.client('ec2', region_name=region)\r\n    response = ec2Client.attach_volume(\r\n        Device=device_name,\r\n        InstanceId=instance_id,\r\n        VolumeId=volume_id\r\n    )\r\n\r\n    return response\r\n"
  },
  {
    "path": "AWS/legos/aws_attach_iam_policy/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Attach New Policy to User </h1>\r\n\r\n## Description\r\nThis Lego attach new AWS Policy to User.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_attach_iam_policy(handle: object, UserName: str, PolicyName: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        UserName: IAM user whose policies need to fetched.\r\n        PolicyName: Policy name to apply the permissions to the user.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, UserName and PolicyName. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_attach_iam_policy/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_attach_iam_policy/aws_attach_iam_policy.json",
    "content": "{\r\n    \"action_title\": \"AWS Attach New Policy to User\",\r\n    \"action_description\": \"AWS Attach New Policy to User\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_attach_iam_policy\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_IAM\"  ]\r\n  }\r\n  \r\n"
  },
  {
    "path": "AWS/legos/aws_attach_iam_policy/aws_attach_iam_policy.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom botocore.exceptions import ClientError\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    user_name: str = Field(\r\n        title='User Name',\r\n        description='IAM user whose policies need to fetched.')\r\n    policy_name: str = Field(\r\n        title='Policy Name',\r\n        description='Policy name to apply the permissions to the user.')\r\n\r\n\r\ndef aws_attach_iam_policy_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_attach_iam_policy(handle, user_name: str, policy_name: str) -> Dict:\r\n    \"\"\"aws_attache_iam_policy used to provide user permissions.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type user_name: string\r\n        :param user_name: Dictionary of credentials info.\r\n\r\n        :type policy_name: string\r\n        :param policy_name: Policy name to apply the permissions to the user.\r\n\r\n        :rtype: Dict with User policy information.\r\n    \"\"\"\r\n    result = {}\r\n    iamResource = handle.resource('iam')\r\n    try:\r\n        user = iamResource.User(user_name)\r\n        response = user.attach_policy(\r\n            PolicyArn='arn:aws:iam::aws:policy/'+policy_name\r\n            )\r\n        result = response\r\n    except ClientError as error:\r\n        result = error.response\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_attach_tags_to_resources/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>AWS Add Tag to Resources</h1>\n\n## Description\nFor a list of resources, and a tag key/value pair, add the tag to each resource.\n\n## Action Details\n\tdef aws_attach_tags_to_resources(\n\t    handle,\n\t    resource_arn: list,\n\t    tag_key: str,\n\t    tag_value: str,\n\t    region: str\n\t    ) -> Dict:\n\n## Action Input\nThis Action takes a list of AWS ARNs and a tag key/value pair, and attaches the key/value to each ARN.\n\nNote: The AWS API has a limit of 20 ARNs per call, so if you supply >20 ARNs, this Action will split your list into multiple API calls.\n\n## Action Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_attach_tags_to_resources/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_attach_tags_to_resources/aws_attach_tags_to_resources.json",
    "content": "{\r\n    \"action_title\": \"AWS Attach Tags to Resources\",\r\n    \"action_description\": \"AWS Attach Tags to Resources\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_attach_tags_to_resources\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_DEVOPS\",\"CATEGORY_TYPE_AWS\" ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_attach_tags_to_resources/aws_attach_tags_to_resources.py",
    "content": "from __future__ import annotations\n\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict, List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    resource_arn: List = Field(..., description='Resource ARNs.', title='Resource ARN')\n    tag_key: str = Field(..., description='Resource Tag Key.', title='Tag Key')\n    tag_value: str = Field(..., description='Resource Tag Value.', title='Tag Value')\n\n\n# The AWS API has a limit of 20 ARNs per call,\n# so we need to break up the list into chunks of at most max_size.\ndef break_list(long_list, max_size):\n    return [long_list[i:i + max_size] for i in range(0, len(long_list), max_size)]\n\n\ndef aws_attach_tags_to_resources_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_attach_tags_to_resources(\n    handle,\n    resource_arn: list,\n    tag_key: str,\n    tag_value: str,\n    region: str\n    ) -> Dict:\n    \"\"\"aws_attach_tags_to_resources Returns a Dict of resource info.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type resource_arn: list\n        :param resource_arn: Resource ARNs.\n\n        :type tag_key: str\n        :param tag_key: Resource Tag Key.\n\n        :type tag_value: str\n        :param tag_value: Resource Tag Value.\n\n        :type region: str\n        :param region: Region to filter resources.\n\n        :rtype: Dict of resource info.\n    \"\"\"\n    taggingClient = handle.client('resourcegroupstaggingapi', region_name=region)\n    result = {}\n\n    # Break the ARN list into groups of 20 to send through the API.\n    list_of_lists = break_list(resource_arn, 20)\n\n    for index, smallerList in enumerate(list_of_lists):\n        try:\n            response = taggingClient.tag_resources(\n                ResourceARNList=smallerList,\n                Tags={tag_key: tag_value}\n                )\n            result[index] = response\n        except Exception as error:\n            result[f\"{index} error\"] = error\n\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_change_acl_permissions_of_buckets/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Change ACL Permission of public S3 Bucket</h1>\r\n\r\n## Description\r\nThis Lego changes the ACL permission of a public S3 bucket.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_change_acl_permissions_of_buckets(handle: object, bucket_name: str, acl: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        bucket_name: AWS S3 Bucket Name.\r\n        acl: Canned ACL type - 'private'|'public-read'|'public-read-write'|'authenticated-read'.\r\n        region: AWS Region of the bucket.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, bucket_name, acl and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_change_acl_permissions_of_buckets/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_change_acl_permissions_of_buckets/aws_change_acl_permissions_of_buckets.json",
    "content": "{\r\n    \"action_title\": \"AWS Change ACL Permission of public S3 Bucket\",\r\n    \"action_description\": \"AWS Change ACL Permission public S3 Bucket\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_change_acl_permissions_of_buckets\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_remediation\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_SECOPS\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"  ]\r\n  }\r\n  \r\n"
  },
  {
    "path": "AWS/legos/aws_change_acl_permissions_of_buckets/aws_change_acl_permissions_of_buckets.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.enums.aws_canned_acl_enums import CannedACLPermissions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n    bucket_name: str = Field(\r\n        title='Bucket Name',\r\n        description='AWS S3 Bucket Name.')\r\n    acl: Optional[CannedACLPermissions] = Field(\r\n        title='Canned ACL Permission',\r\n        description=(\"Canned ACL Permission type - 'private'|'public-read'|'public-read-write\"\r\n                     \"'|'authenticated-read'.\"))\r\n\r\n\r\ndef aws_change_acl_permissions_of_buckets_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_change_acl_permissions_of_buckets(\r\n    handle,\r\n    bucket_name: str,\r\n    acl: CannedACLPermissions=CannedACLPermissions.Private,\r\n    region: str = None\r\n    ) -> Dict:\r\n    \"\"\"aws_change_acl_permissions_of_buckets returns a Dict of bucket ACL change info.\r\n\r\n            :type handle: Session\r\n            :param handle: Object returned by the task.validate(...) method\r\n\r\n            :type bucket_name: string\r\n            :param bucket_name: S3 bucket name where to set ACL on.\r\n\r\n            :type acl: CannedACLPermissions\r\n            :param acl: Canned ACL Permission type - 'private'|'public-read'|'public-read-write\r\n            '|'authenticated-read'.\r\n\r\n            :type region: string\r\n            :param region: location of the bucket.\r\n\r\n            :rtype: Dict of bucket ACL change info\r\n    \"\"\"\r\n    # Fall back to 'private' when no ACL is supplied.\r\n    all_permissions = acl\r\n    if acl is None:\r\n        all_permissions = \"private\"\r\n    # connect to the S3 using client\r\n    s3Client = handle.client('s3',\r\n                             region_name=region)\r\n\r\n    # Put bucket ACL for the permissions grant\r\n    response = s3Client.put_bucket_acl(\r\n                    Bucket=bucket_name,\r\n                    ACL=all_permissions )\r\n\r\n    return response\r\n"
  },
  {
    "path": "AWS/legos/aws_check_rds_non_m5_t3_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Check if RDS instances are not M5 or T3 </h1>\r\n\r\n## Description\r\nThis Lego checks for AWS RDS instances that are not M5 or T3.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_check_rds_non_m5_t3_instances(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: \"us-west-2\"\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_check_rds_non_m5_t3_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_check_rds_non_m5_t3_instances/aws_check_rds_non_m5_t3_instances.json",
    "content": "{\r\n    \"action_title\": \"AWS Check if RDS instances are not M5 or T3\",\r\n    \"action_description\": \"AWS Check if RDS instances are not M5 or T3\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_check_rds_non_m5_t3_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_is_check\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [\"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\"],\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {}\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_check_rds_non_m5_t3_instances/aws_check_rds_non_m5_t3_instances.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.connectors.aws import aws_get_paginator\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        '',\r\n        title='AWS Region',\r\n        description='AWS Region.'\r\n    )\r\n\r\n\r\ndef aws_check_rds_non_m5_t3_instances_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_check_rds_non_m5_t3_instances(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_check_rds_non_m5_t3_instances Gets all DB instances that are not m5 or t3.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: A tuple with a status flag and a list of DB instances that are not m5 or t3.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            rdsClient = handle.client('rds', region_name=reg)\r\n            response = aws_get_paginator(rdsClient, \"describe_db_instances\", \"DBInstances\")\r\n            for db in response:\r\n                db_instance_dict = {}\r\n                # DBInstanceClass looks like 'db.m5.large'; skip the 'db.' prefix.\r\n                if db['DBInstanceClass'][3:5] not in ['m5', 't3']:\r\n                    db_instance_dict[\"region\"] = reg\r\n                    db_instance_dict[\"instance\"] = db['DBInstanceIdentifier']\r\n                    result.append(db_instance_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_check_ssl_certificate_expiry/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Check SSL Certificate Expiry</h1>\r\n\r\n## Description\r\nThis Lego returns all the ACM issued certificates which are expiring within a given threshold number of days.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_check_ssl_certificate_expiry(handle, threshold_days: int, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        threshold_days: Integer, threshold number of days to check for expiry. Eg: 30 - lists all certificates which are expiring within 30 days.\r\n        region: String, region where the certificates are present.\r\n\r\n## Lego Input\r\nhandle: Object of type unSkript AWS Connector\r\nthreshold_days: Threshold number of days to check for expiry. Eg: 30\r\nregion: AWS Region name. Eg: \"us-west-2\"\r\n\r\n## Lego Output\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_check_ssl_certificate_expiry/__init__.py",
    "content": "##\n##  Copyright (c) 2022 unSkript, Inc\n##  All rights reserved.\n##\n"
  },
  {
    "path": "AWS/legos/aws_check_ssl_certificate_expiry/aws_check_ssl_certificate_expiry.json",
    "content": "{\n    \"action_title\": \"Check SSL Certificate Expiry\",\n    \"action_description\": \"Check ACM SSL Certificate expiry date\",\n    \"action_type\": \"LEGO_TYPE_AWS\",\n    \"action_entry_function\": \"aws_check_ssl_certificate_expiry\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\": [ \"CATEGORY_TYPE_SECOPS\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_ACM\" ]\n  }"
  },
  {
    "path": "AWS/legos/aws_check_ssl_certificate_expiry/aws_check_ssl_certificate_expiry.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nimport datetime\nimport dateutil.tz\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    threshold_days: int = Field(\n        title=\"Threshold Days\",\n        description=(\"Threshold number of days to check for expiry. \"\n                     \"Eg: 30 - lists all certificates which are expiring within 30 days\")\n    )\n    region: str = Field(\n        title='Region',\n        description='Name of the AWS Region'\n    )\n\n\ndef aws_check_ssl_certificate_expiry_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_check_ssl_certificate_expiry(handle, threshold_days: int, region: str) -> Dict:\n    \"\"\"aws_check_ssl_certificate_expiry returns all the ACM issued certificates\n       which are about to expire.\n\n            :type handle: object\n            :param handle: Object returned from Task Validate\n\n            :type threshold_days: int\n            :param threshold_days: Threshold number of days to check for expiry.\n             Eg: 30 - lists all certificates which are expiring within 30 days\n\n            :type region: str\n            :param region: Region name of the AWS account\n\n            :rtype: Dict of expiring domains and the days remaining for each\n    \"\"\"\n    acmClient = handle.client('acm', region_name=region)\n    expiring_domain_list = {}\n    certificates_list = acmClient.list_certificates(CertificateStatuses=['ISSUED'])\n    for cert in certificates_list['CertificateSummaryList']:\n        details = acmClient.describe_certificate(CertificateArn=cert['CertificateArn'])\n        expiry_date = details['Certificate'].get('NotAfter')\n        if expiry_date is None:\n            continue\n        right_now = datetime.datetime.now(dateutil.tz.tzlocal())\n        days_remaining = (expiry_date - right_now).days\n        if days_remaining < threshold_days:\n            expiring_domain_list[cert['DomainName']] = days_remaining\n    return expiring_domain_list\n"
  },
  {
    "path": "AWS/legos/aws_cloudwatch_attach_webhook_notification_to_alarm/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Attach a webhook endpoint to AWS Cloudwatch alarm </h1>\r\n\r\n## Description\r\nThis Lego attaches a webhook endpoint to an AWS Cloudwatch alarm.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_cloudwatch_attach_webhook_notification_to_alarm(hdl: Session, alarm_name: str, region: str, url: str)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        alarm_name: Cloudwatch alarm name.\r\n        url: URL where the alarm notification needs to be sent.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes four inputs: hdl, alarm_name, url and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_cloudwatch_attach_webhook_notification_to_alarm/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_cloudwatch_attach_webhook_notification_to_alarm/aws_cloudwatch_attach_webhook_notification_to_alarm.json",
    "content": "{\r\n    \"action_title\": \"Attach a webhook endpoint to AWS Cloudwatch alarm\",\r\n    \"action_description\": \"Attach a webhook endpoint to one of the SNS attached to the AWS Cloudwatch alarm.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_cloudwatch_attach_webhook_notification_to_alarm\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_cloudwatch_attach_webhook_notification_to_alarm/aws_cloudwatch_attach_webhook_notification_to_alarm.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\n\r\nimport pprint\r\nfrom urllib.parse import urlparse\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\n\r\nclass InputSchema(BaseModel):\r\n    alarm_name: str = Field(\r\n        title=\"Alarm name\",\r\n        description=\"Cloudwatch alarm name.\",\r\n    )\r\n    region: str = Field(\r\n        title=\"Region\",\r\n        description=\"AWS Region of the cloudwatch.\")\r\n    url: str = Field(\r\n        title=\"URL\",\r\n        description=(\"URL where the alarm notification needs to be sent. \"\r\n                       \"URL should start with http or https.\")\r\n    )\r\n\r\ndef aws_cloudwatch_attach_webhook_notification_to_alarm_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint({\"Subscription ARN\" : output})\r\n\r\n\r\ndef aws_cloudwatch_attach_webhook_notification_to_alarm(\r\n    hdl: Session,\r\n    alarm_name: str,\r\n    region: str,\r\n    url: str\r\n) -> str:\r\n    \"\"\"aws_cloudwatch_attach_webhook_notification_to_alarm returns subscriptionArn\r\n\r\n        :type hdl: Session\r\n        :param hdl: Object returned by the task.validate(...) method.\r\n\r\n        :type alarm_name: string\r\n        :param alarm_name: Cloudwatch alarm name.\r\n\r\n        :type region: string\r\n        :param region: AWS Region of the cloudwatch.\r\n\r\n        :type url: string\r\n        :param url: URL where the alarm notification needs to be sent.\r\n\r\n        :rtype: Returns subscriptionArn\r\n    \"\"\"\r\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\r\n\r\n    # Get the configured SNS(es) to this alarm.\r\n    alarmDetail = cloudwatchClient.describe_alarms(\r\n        AlarmNames=[alarm_name]\r\n    )\r\n    # Need to get the AlarmActions from either the composite or the metric field.\r\n    if len(alarmDetail['CompositeAlarms']) > 0:\r\n        snses = alarmDetail['CompositeAlarms'][0]['AlarmActions']\r\n    elif len(alarmDetail['MetricAlarms']) > 0:\r\n        snses = alarmDetail['MetricAlarms'][0]['AlarmActions']\r\n    else:\r\n        return f'Alarm {alarm_name} not found in AWS region {region}'\r\n\r\n    # Pick any SNS to configure the url endpoint.\r\n    if len(snses) == 0:\r\n        return f'No SNS configured for alarm {alarm_name}'\r\n\r\n    snsArn = snses[0]\r\n    print(f'Configuring url endpoint on SNS {snsArn}')\r\n\r\n    snsClient = hdl.client('sns', region_name=region)\r\n    # Figure out the protocol from the url\r\n    try:\r\n        parsedURL = urlparse(url)\r\n    except Exception as e:\r\n        print(f'Invalid URL {url}, {e}')\r\n        raise e\r\n\r\n    if parsedURL.scheme not in ('http', 'https'):\r\n        return f'Invalid URL {url}'\r\n\r\n    protocol = parsedURL.scheme\r\n    try:\r\n        response = snsClient.subscribe(\r\n            TopicArn=snsArn,\r\n            Protocol=protocol,\r\n            Endpoint=url,\r\n            ReturnSubscriptionArn=True)\r\n    except Exception as e:\r\n        print(f'Subscribe to SNS topic arn {snsArn} failed, {e}')\r\n        raise e\r\n    subscriptionArn = response['SubscriptionArn']\r\n    print(f'URL {url} subscribed to SNS {snsArn}, subscription ARN {subscriptionArn}')\r\n    return subscriptionArn\r\n"
  },
  {
    "path": "AWS/legos/aws_create_IAMpolicy/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>AWS Create IAM Policy</h1>\n\n## Description\nGiven an AWS policy (as a string), and the name for the policy, this will create an IAM policy.\n\n## Lego Details\n\taws_create_IAMpolicy(handle, policyDocument: str, PolicyName: str)\n\n\thandle: Object of type unSkript AWS Connector.\n\tpolicyDocument: The stringified JSON of the policy.\n\tPolicyName: The name of your new IAM policy.\n\n## Lego Input\nThis Lego takes three inputs: handle, policyDocument and PolicyName.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.jpg\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_create_IAMpolicy/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_create_IAMpolicy/aws_create_IAMpolicy.json",
    "content": "{\n  \"action_title\": \"AWS Create IAM Policy\",\n  \"action_description\": \"Given an AWS policy (as a string), and the name for the policy, this will create an IAM policy.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_create_IAMpolicy\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_IAM\" ,\"CATEGORY_TYPE_IAM\"  ]\n}"
  },
  {
    "path": "AWS/legos/aws_create_IAMpolicy/aws_create_IAMpolicy.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef aws_create_IAMpolicy_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_create_IAMpolicy(handle, policyDocument: str, PolicyName: str) -> Dict:\n    \"\"\"aws_create_IAMpolicy creates a new IAM policy.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type policyDocument: string\n        :param policyDocument: The stringified JSON of the policy.\n\n        :type PolicyName: string\n        :param PolicyName: Name of the new IAM policy.\n\n        :rtype: Dict with the new policy info.\n    \"\"\"\n    client = handle.client('iam')\n    response = client.create_policy(\n        PolicyName=PolicyName,\n        PolicyDocument=policyDocument,\n        Description='generated Via unSkript'\n    )\n    return response\n"
  },
  {
    "path": "AWS/legos/aws_create_access_key/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Create Access Key </h1>\r\n\r\n## Description\r\nThis Lego creates an access key for a User.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_create_access_key(handle: object, aws_username: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        aws_username: Name of IAM User.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and aws_username.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_create_access_key/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_create_access_key/aws_create_access_key.json",
    "content": "{\r\n    \"action_title\": \"AWS Create Access Key\",\r\n    \"action_description\": \"Create a new Access Key for the User\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_create_access_key\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_remediation\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_IAM\" ,\"CATEGORY_TYPE_IAM\"  ]\r\n\r\n}"
  },
  {
    "path": "AWS/legos/aws_create_access_key/aws_create_access_key.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    aws_username: str = Field(\n        title=\"Username\",\n        description=\"Username of the IAM User\"\n    )\n\n\ndef aws_create_access_key_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_create_access_key(\n    handle,\n    aws_username: str\n) -> Dict:\n    \"\"\"aws_create_access_key creates a new access key for the given user.\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :type aws_username: str\n        :param aws_username: Username of the IAM user to be looked up\n\n        :rtype: Result Dictionary of result\n    \"\"\"\n    iamClient = handle.client('iam')\n    # create_access_key already returns a dict with unique keys;\n    # no further processing is needed.\n    return iamClient.create_access_key(UserName=aws_username)\n"
  },
  {
    "path": "AWS/legos/aws_create_bucket/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Create AWS Bucket</h1>\r\n\r\n## Description\r\nThis Lego creates a new AWS S3 Bucket.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_create_bucket(handle: object, name: str, acl: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        name: Name of the bucket to be created.\r\n        acl: The Canned ACL to apply to the bucket. Possible values: private, public-read, public-read-write, authenticated-read.\r\n        region: AWS Region of the bucket.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, name, acl and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_create_bucket/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_create_bucket/aws_create_bucket.json",
    "content": "{\r\n    \"action_title\": \"Create AWS Bucket\",\r\n    \"action_description\": \"Create a new AWS S3 Bucket\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_create_bucket\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"   ]\r\n\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_create_bucket/aws_create_bucket.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    name: str = Field(\r\n        title='Bucket Name',\r\n        description='Name of the bucket to be created.')\r\n    acl: str = Field(\r\n        title='ACL',\r\n        description=('The Canned ACL to apply to the bucket. Possible values: '\r\n                     'private, public-read, public-read-write, authenticated-read.'))\r\n    region: Optional[str] = Field(\r\n        title='Region',\r\n        description='AWS Region of the bucket.')\r\n\r\n\r\ndef aws_create_bucket_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_create_bucket(handle: Session, name: str, acl: str, region: str = None) -> Dict:\r\n    \"\"\"aws_create_bucket Creates a new bucket.\r\n\r\n        :type handle: Session\r\n        :param handle: Object returned by the task.validate(...) method.\r\n\r\n        :type name: string\r\n        :param name: Name of the bucket to be created.\r\n\r\n        :type acl: string\r\n        :param acl: The Canned ACL to apply to the bucket.\r\n\r\n        :type region: string\r\n        :param region: AWS Region of the bucket.\r\n\r\n        :rtype: Dict with the new bucket info.\r\n    \"\"\"\r\n    # A LocationConstraint can only be sent when a region is specified.\r\n    if region is None:\r\n        s3Client = handle.client('s3')\r\n        res = s3Client.create_bucket(\r\n            ACL=acl,\r\n            Bucket=name)\r\n    else:\r\n        s3Client = handle.client('s3', region_name=region)\r\n        res = s3Client.create_bucket(\r\n            ACL=acl,\r\n            Bucket=name,\r\n            CreateBucketConfiguration={\r\n                'LocationConstraint': region})\r\n    return res\r\n"
  },
  {
    "path": "AWS/legos/aws_create_iam_user/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Create New IAM User </h1>\r\n\r\n## Description\r\nThis Lego creates a new IAM User.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_create_iam_user(handle: object, user_name: str, tag_key: str, tag_value: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        user_name: Name of new IAM User.\r\n        tag_key: Tag Key assigned to the new User.\r\n        tag_value: Tag Value assigned to the new User.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, user_name, tag_key and tag_value.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_create_iam_user/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_create_iam_user/aws_create_iam_user.json",
    "content": "{\r\n    \"action_title\": \"Create New IAM User\",\r\n    \"action_description\": \"Create New IAM User\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_create_iam_user\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_IAM\" ,\"CATEGORY_TYPE_IAM\"  ]\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_create_iam_user/aws_create_iam_user.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom botocore.exceptions import ClientError\r\nfrom beartype import beartype\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    user_name: str = Field(\r\n        title='User Name',\r\n        description='IAM User Name.')\r\n    tag_key: str = Field(\r\n        title='Tag Key',\r\n        description='Tag Key to new IAM User.')\r\n    tag_value: str = Field(\r\n        title='Tag Value',\r\n        description='Tag Value to new IAM User.')\r\n\r\n@beartype\r\ndef aws_create_iam_user_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\n@beartype\r\ndef aws_create_iam_user(handle, user_name: str, tag_key: str, tag_value: str) -> Dict:\r\n    \"\"\"aws_create_iam_user Creates new IAM User.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned by the task.validate(...) method\r\n        \r\n        :type user_name: string\r\n        :param user_name: Name of new IAM User.\r\n\r\n        :type tag_key: string\r\n        :param tag_key: Tag Key assign to new User.\r\n\r\n        :type tag_value: string\r\n        :param tag_value: Tag Value assign to new User.\r\n\r\n        :rtype: Dict with the stopped instances state info.\r\n    \"\"\"\r\n\r\n    ec2Client = handle.client(\"iam\")\r\n    result = {}\r\n    try:\r\n        response = ec2Client.create_user(\r\n            UserName=user_name,\r\n            Tags=[\r\n                {\r\n                    'Key': tag_key,\r\n                    'Value': tag_value\r\n                }])\r\n        result = response\r\n    except ClientError as error:\r\n        if error.response['Error']['Code'] == 'EntityAlreadyExists':\r\n            result = error.response\r\n        else:\r\n            result = error.response\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_create_redshift_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Create AWS Redshift Query</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Action Creates a Query on a RedShift Database\r\n\r\n\r\n## Lego Details\r\n    def aws_create_redshift_query(handle, region: str,cluster:str, database:str, secretArn: str, query:str) -> str:\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n\t\tregion: AWS Region\r\n\t\tcluster: Name of the AWS redshift cluster\r\n\t\tdatabase: database you wish to query\r\n\t\tsecretArn: ARN used to connect tothe database\r\n\t\tquery: the SQL Query\r\n\r\n## Lego Input\r\n        handle: Object of type unSkript datadog Connector\r\n\t\tregion: AWS Region\r\n\t\tcluster: Name of the AWS redshift cluster\r\n\t\tdatabase: database you wish to query\r\n\t\tsecretArn: ARN used to connect tothe database\r\n\t\tquery: the SQL Query\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.jpg\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_create_redshift_query/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_create_redshift_query/aws_create_redshift_query.json",
    "content": "{\n  \"action_title\": \"AWS Redshift Query\",\n  \"action_description\": \"Make a SQL Query to the given AWS Redshift database\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_create_redshift_query\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_REDSHIFT\"   ]\n\n}"
  },
  {
    "path": "AWS/legos/aws_create_redshift_query/aws_create_redshift_query.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\n\n\nfrom __future__ import annotations\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    query: str = Field(\n\n        description='sql query to run',\n        title='query',\n    )\n    cluster: str = Field(\n\n        description='Name of Redshift Cluster', title='cluster'\n    )\n    database: str = Field(\n        description='Name of your Redshift database', title='database'\n    )\n    secretArn: str = Field(\n        description='Value of your Secrets Manager ARN', title='secretArn'\n    )\n\n\n\n\n@beartype\ndef aws_create_redshift_query(\n    handle,\n    region: str,\n    cluster:str,\n    database:str,\n    secretArn: str,\n    query:str\n    ) -> str:\n\n    # Input param validation.\n    #major change\n    client = handle.client('redshift-data', region_name=region)\n    # execute the query\n    response = client.execute_statement(\n        ClusterIdentifier=cluster,\n        Database=database,\n        SecretArn=secretArn,\n        Sql=query\n    )\n    resultId = response['Id']\n    print(response)\n    print(\"resultId\",resultId)\n\n\n    return resultId\n\n#make a change\n"
  },
  {
    "path": "AWS/legos/aws_create_user_login_profile/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Create Login profile for IAM User </h1>\r\n\r\n## Description\r\nThis Lego create login profile for IAM user.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_create_user_login_profile(handle: object, UserName: str, Password: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        UserName: Name of new IAM User.\r\n        Password: temporary password for new User.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, UserName and Password.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_create_user_login_profile/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_create_user_login_profile/aws_create_user_login_profile.json",
    "content": "{\r\n    \"action_title\": \"Create Login profile for IAM User\",\r\n    \"action_description\": \"Create Login profile for IAM User\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_create_user_login_profile\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_IAM\" ,\"CATEGORY_TYPE_IAM\"  ]\r\n\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_create_user_login_profile/aws_create_user_login_profile.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom botocore.exceptions import ClientError\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    user_name: str = Field(\r\n        title='User Name',\r\n        description='IAM User Name.')\r\n    password: str = Field(\r\n        title='Password',\r\n        description='Password for IAM User.')\r\n\r\n\r\ndef aws_create_user_login_profile_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_create_user_login_profile(\r\n    handle,\r\n    user_name: str,\r\n    password: str\r\n    ) -> Dict:\r\n    \"\"\"aws_create_user_login_profile Create login profile for IAM User.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned by the task.validate(...) method.\r\n\r\n        :type user_name: string\r\n        :param user_name: Name of new IAM User.\r\n\r\n        :type password: string\r\n        :param password: temporary password for new User.\r\n\r\n        :rtype: Dict with the Profile Creation status info.\r\n    \"\"\"\r\n\r\n    ec2Client = handle.client(\"iam\")\r\n    result = {}\r\n    try:\r\n        response = ec2Client.create_login_profile(\r\n            UserName=user_name,\r\n            Password=password,\r\n            PasswordResetRequired=True)\r\n\r\n        result = response\r\n    except ClientError as error:\r\n        if error.response['Error']['Code'] == 'EntityAlreadyExists':\r\n            result = error.response\r\n        else:\r\n            result = error.response\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_create_volumes_snapshot/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Create Snapshot For Volume </h1>\r\n\r\n## Description\r\nThis action create a snapshot for EBS volume of the EC2 Instance for backing up the data stored in EBS.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_create_volumes_snapshot(handle: object, volume_id: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        volume_id: Volume ID needed to create snapshot for particular volume.\r\n        region: Used to filter the volume for specific region.\r\n\r\n## Lego Input\r\n\r\nThis action take three inputs handle, volume_id and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_create_volumes_snapshot/__init__.py",
    "content": "##\n##  Copyright (c) 2022 unSkript, Inc\n##  All rights reserved.\n##\n"
  },
  {
    "path": "AWS/legos/aws_create_volumes_snapshot/aws_create_volumes_snapshot.json",
    "content": "{\r\n    \"action_title\": \"AWS Create Snapshot For Volume\",\r\n    \"action_description\": \"Create a snapshot for EBS volume of the EC2 Instance for backing up the data stored in EBS\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_create_volumes_snapshot\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\" ]\r\n\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_create_volumes_snapshot/aws_create_volumes_snapshot.py",
    "content": "##\r\n##  Copyright (c) 2022 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    volume_id: str = Field(\r\n        title='Volume ID',\r\n        description='Volume ID to create snapshot for particular volume e.g. vol-01eb21cfce30a956c')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_create_volumes_snapshot_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_create_volumes_snapshot(handle, volume_id: str, region: str) -> List:\r\n    \"\"\"aws_create_volumes_snapshot Returns an list containing SnapshotId.\r\n\r\n        :type region: string\r\n        :param region: used to filter the volume for a given region.\r\n\r\n        :type volume_id: string\r\n        :param volume_id: Volume ID to create snapshot for particular volume.\r\n\r\n        :rtype: List containing SnapshotId.\r\n    \"\"\"\r\n    result = []\r\n\r\n    ec2Client = handle.resource('ec2', region_name=region)\r\n\r\n    try:\r\n        response = ec2Client.create_snapshot(VolumeId=volume_id)\r\n        result.append(response)\r\n    except Exception as e:\r\n        raise e\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_delete_access_key/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Delete Access Key </h1>\r\n\r\n## Description\r\nThis Lego Delete an access key for a User.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_delete_access_key(handle: object, aws_username: str, aws_access_key_id: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        aws_username: Name of IAM User.\r\n        aws_access_key_id: Old Access Key ID of the User.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, aws_username and aws_access_key_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_access_key/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_access_key/aws_delete_access_key.json",
    "content": "{\r\n    \"action_title\": \"AWS Delete Access Key\",\r\n    \"action_description\": \"Delete an Access Key for a User\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_delete_access_key\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_remediation\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_IAM\" ,\"CATEGORY_TYPE_IAM\"  ]\r\n\r\n}"
  },
  {
    "path": "AWS/legos/aws_delete_access_key/aws_delete_access_key.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    aws_username: str = Field(\n        title=\"Username\",\n        description=\"Username of the IAM User\"\n    )\n    aws_access_key_id: str = Field(\n        title=\"Access Key ID\",\n        description=\"Old Access Key ID of the User\"\n    )\n\n\ndef aws_delete_access_key_printer(output):\n    if output is None:\n        return\n    pprint.pprint(\"Access Key successfully deleted\")\n    pprint.pprint(output)\n\n\ndef aws_delete_access_key(\n    handle,\n    aws_username: str,\n    aws_access_key_id: str,\n) -> Dict:\n    \"\"\"aws_delete_access_key deleted the given access key.\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :type aws_username: str\n        :param aws_username: Username of the IAM user to be looked up\n\n        :type aws_access_key_id: str\n        :param aws_access_key_id: Old Access Key ID of the user which needs to be deleted\n\n        :rtype: Result Status Dictionary of result\n    \"\"\"\n    iamClient = handle.client('iam')\n    result = iamClient.delete_access_key(UserName=aws_username, AccessKeyId=aws_access_key_id)\n    retVal = {}\n    temp_list = []\n    for key, value in result.items():\n        if key not in temp_list:\n            temp_list.append(key)\n            retVal[key] = value\n    return retVal\n"
  },
  {
    "path": "AWS/legos/aws_delete_bucket/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Delete AWS Bucket </h1>\r\n\r\n## Description\r\nThis Lego delete AWS Bucket.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_delete_bucket(handle: object, name: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        name: Name of the bucket to be deleted.\r\n        region: AWS Region of the bucket.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, name and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_bucket/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_bucket/aws_delete_bucket.json",
    "content": "{\r\n    \"action_title\": \"Delete AWS Bucket\",\r\n    \"action_description\": \"Delete an AWS S3 Bucket\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_delete_bucket\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_delete_bucket/aws_delete_bucket.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    name: str = Field(\r\n        title='Bucket Name',\r\n        description='Name of the bucket to be deleted.')\r\n    region: Optional[str] = Field(\r\n        title='Region',\r\n        description='AWS Region of the bucket.')\r\n\r\n\r\ndef aws_delete_bucket_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_delete_bucket(handle: Session, name: str, region: str = None) -> Dict:\r\n    \"\"\"aws_delete_bucket Deletes a bucket.\r\n        :type handle: object\r\n        :param handle: Object returned from Task Validate\r\n\r\n        :type name: string\r\n        :param name: Name of the bucket to be deleted.\r\n\r\n        :type region: string\r\n        :param region: AWS Region of the bucket.\r\n\r\n        :rtype: Dict with the deleted bucket info.\r\n    \"\"\"\r\n\r\n    if region is None:\r\n        s3Client = handle.client('s3')\r\n    else:\r\n        s3Client = handle.client('s3', region_name=region)\r\n\r\n    res = s3Client.delete_bucket(Bucket=name)\r\n    return res\r\n"
  },
  {
    "path": "AWS/legos/aws_delete_classic_load_balancer/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Delete Classic Load Balancer</h1>\n\n## Description\nDelete Classic Elastic Load Balancers\n\n## Lego Details\n\taws_delete_classic_load_balancer(handle, region: str, elb_name: str)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\telb_name: Classic load balancer name.\n\n## Lego Input\nThis Lego takes inputs handle, elb_name\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_classic_load_balancer/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_classic_load_balancer/aws_delete_classic_load_balancer.json",
    "content": "{\n  \"action_title\": \"AWS Delete Classic Load Balancer\",\n  \"action_description\": \"Delete Classic Elastic Load Balancers\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_delete_classic_load_balancer\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_delete_classic_load_balancer/aws_delete_classic_load_balancer.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Dict\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    elb_name: str = Field(..., description='Name of classic ELB', title='Classic Load Balancer Name')\n\n\n\ndef aws_delete_classic_load_balancer_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_delete_classic_load_balancer(handle, region: str, elb_name: str) -> Dict:\n    \"\"\"aws_delete_classic_load_balancer reponse of deleting a classic load balancer.\n\n        :type region: string\n        :param region: AWS Region.\n\n        :type elb_name: string\n        :param elb_name: Classic load balancer name.\n\n        :rtype: dict of deleted load balancers reponse.\n    \"\"\"\n    try:\n        elblient = handle.client('elb', region_name=region)\n        response = elblient.delete_load_balancer(LoadBalancerName=elb_name)\n        return response\n    except Exception as e:\n        raise Exception(e)\n\n\n"
  },
  {
    "path": "AWS/legos/aws_delete_ebs_snapshot/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Delete AWS EBS Snapshot </h1>\r\n\r\n## Description\r\nThis Lego deletes AWS EBS Volume\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_delete_ebs_snapshot(handle: object, name: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        snapshot_id: EBS snapshot ID. Eg: 'snap-34bt4bfjed9d'\r\n        region: AWS region\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, snapshot_id and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_ebs_snapshot/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_ebs_snapshot/aws_delete_ebs_snapshot.json",
    "content": "{\n    \"action_title\": \"AWS Delete EBS Snapshot\",\n    \"action_description\": \"Delete EBS Snapshot for an EC2 instance\",\n    \"action_type\": \"LEGO_TYPE_AWS\",\n    \"action_entry_function\": \"aws_delete_ebs_snapshot\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_iteration\": true,\n    \"action_supports_poll\": true,\n    \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_EBS\"]\n  }"
  },
  {
    "path": "AWS/legos/aws_delete_ebs_snapshot/aws_delete_ebs_snapshot.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        title='Region',\n        description='AWS Region.')\n\n    snapshot_id: str = Field(\n        title='Snapshot ID',\n        description='EBS snapshot ID. Eg: \"snap-34bt4bfjed9d\"')\n\n\ndef aws_delete_ebs_snapshot_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_delete_ebs_snapshot(handle, region: str, snapshot_id: str) -> Dict:\n    \"\"\"aws_delete_ebs_snapshot Returns a dict of deleted snapshot details\n\n        :type region: string\n        :param region: AWS Region.\n\n        :type snapshot_id: string\n        :param snapshot_id: EBS snapshot ID. Eg: 'snap-34bt4bfjed9d'\n\n        :rtype: Deleted snapshot details\n    \"\"\"\n    result = []\n    try:\n        ec2Client = handle.client('ec2', region_name=region)\n        result = ec2Client.delete_snapshot(SnapshotId=snapshot_id)\n    except Exception as e:\n        raise e\n    return  result\n"
  },
  {
    "path": "AWS/legos/aws_delete_ecs_cluster/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Delete ECS Cluster</h1>\n\n## Description\nDelete AWS ECS Cluster\n\n## Lego Details\n\taws_delete_ecs_cluster(handle, region: str, cluster_name: str)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tcluster_name: ECS Cluster name\n\n## Lego Input\nThis Lego takes inputs handle, cluster_name\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_ecs_cluster/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_ecs_cluster/aws_delete_ecs_cluster.json",
    "content": "{\n  \"action_title\": \"AWS Delete ECS Cluster\",\n  \"action_description\": \"Delete AWS ECS Cluster\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_delete_ecs_cluster\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_delete_ecs_cluster/aws_delete_ecs_cluster.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Dict\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    cluster_name: str = Field(\n        ...,\n        description='ECS Cluster name that needs to be deleted',\n        title='ECS Cluster Name',\n    )\n\n\n\ndef aws_delete_ecs_cluster_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_delete_ecs_cluster(handle, region: str, cluster_name: str) -> Dict:\n    \"\"\"aws_delete_ecs_cluster dict of loadbalancers info.\n\n        :type region: string\n        :param region: AWS Region.\n\n        :type cluster_name: string\n        :param cluster_name: ECS Cluster name\n\n        :rtype: dict of load balancers info.\n    \"\"\"\n    try:\n        ec2Client = handle.client('ecs', region_name=region)\n        response = ec2Client.delete_cluster(cluster=cluster_name)\n        return response\n    except Exception as e:\n        raise Exception(e)\n\n\n"
  },
  {
    "path": "AWS/legos/aws_delete_load_balancer/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Delete Load Balancer</h1>\r\n\r\n## Description\r\nThis Lego delete AWS load balancer.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_delete_load_balancer(handle, region: str, elb_arn: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        elb_arn: load balancer ARNs.\r\n        region: AWS Region.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, elb_arn and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_load_balancer/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_load_balancer/aws_delete_load_balancer.json",
    "content": "{\r\n    \"action_title\": \"AWS Delete Load Balancer\",\r\n    \"action_description\": \"AWS Delete Load Balancer\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_delete_load_balancer\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_delete_load_balancer/aws_delete_load_balancer.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\nclass InputSchema(BaseModel):\r\n    elb_arn: str = Field(\r\n        title='Load Balancer ARN (ALB/NLB type)',\r\n        description='Load Balancer ARN of the ALB/NLB type Load Balancer.'\r\n        )\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.'\r\n        )\r\n\r\n\r\ndef aws_delete_load_balancer_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_delete_load_balancer(handle, region: str, elb_arn: str) -> Dict:\r\n    \"\"\"aws_delete_load_balancer dict of loadbalancers info.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :type elb_arn: string\r\n        :param elb_arn: load balancer ARNs.\r\n\r\n        :rtype: dict of load balancers info.\r\n    \"\"\"\r\n    try:\r\n        elbv2Client = handle.client('elbv2', region_name=region)\r\n        response = elbv2Client.delete_load_balancer(LoadBalancerArn=elb_arn)\r\n        return response\r\n    except Exception as e:\r\n        raise Exception(e)\r\n    "
  },
  {
    "path": "AWS/legos/aws_delete_log_stream/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Delete Log Stream</h1>\r\n\r\n## Description\r\nThis Lego delete Log Streams.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_delete_log_stream(handle, log_group_name: str, log_stream_name: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        log_group_name: Name of the log group.\r\n        log_stream_name: Name of the log stream.\r\n        region: AWS Region.\r\n\r\n## Lego Input\r\nThis Lego take four inputs handle, log_group_name, log_stream_name and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_log_stream/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_log_stream/aws_delete_log_stream.json",
    "content": "{\r\n    \"action_title\": \"AWS Delete Log Stream\",\r\n    \"action_description\": \"AWS Delete Log Stream\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_delete_log_stream\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [\"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_delete_log_stream/aws_delete_log_stream.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    log_group_name: str = Field(\r\n        title='Log Group Name',\r\n        description='Name of the log group.')\r\n    log_stream_name: str = Field(\r\n        title='Log Stream Name',\r\n        description='Name of the log stream.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region')\r\n\r\n\r\ndef aws_delete_log_stream_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_delete_log_stream(handle, log_group_name: str, log_stream_name: str, region: str) -> Dict:\r\n    \"\"\"aws_delete_log_stream Deletes a log stream.\r\n\r\n        :type log_group_name: string\r\n        :param log_group_name: Name of the log group.\r\n\r\n        :type log_stream_name: string\r\n        :param log_stream_name: Name of the log stream.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: Dict with the deleted log stream info.\r\n    \"\"\"\r\n    try:\r\n        log_Client = handle.client('logs', region_name=region)\r\n        response = log_Client.delete_log_stream(\r\n            logGroupName=log_group_name,\r\n            logStreamName=log_stream_name)\r\n        return response\r\n    except Exception as e:\r\n        raise Exception(e) from e\r\n"
  },
  {
    "path": "AWS/legos/aws_delete_nat_gateway/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Delete NAT Gateway</h1>\r\n\r\n## Description\r\nThis Lego deletes an AWS NAT Gateway.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_delete_nat_gateway(handle, nat_gateway_id: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        nat_gateway_id: ID of the NAT Gateway.\r\n        region: AWS Region.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, nat_gateway_id and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_nat_gateway/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_nat_gateway/aws_delete_nat_gateway.json",
    "content": "{\r\n    \"action_title\": \"AWS Delete NAT Gateway\",\r\n    \"action_description\": \"AWS Delete NAT Gateway\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_delete_nat_gateway\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_delete_nat_gateway/aws_delete_nat_gateway.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    nat_gateway_id: str = Field(\r\n        title='NAT Gateway ID',\r\n        description='ID of the NAT Gateway.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_delete_nat_gateway_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_delete_nat_gateway(handle, nat_gateway_id: str, region: str) -> Dict:\r\n    \"\"\"aws_delete_nat_gateway Deletes the given NAT gateway.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :type nat_gateway_id: string\r\n        :param nat_gateway_id: ID of the NAT Gateway.\r\n\r\n        :rtype: dict of the delete NAT gateway response.\r\n    \"\"\"\r\n    try:\r\n        ec2Client = handle.client('ec2', region_name=region)\r\n        response = ec2Client.delete_nat_gateway(NatGatewayId=nat_gateway_id)\r\n        return response\r\n    except Exception as e:\r\n        raise Exception(e) from e\r\n"
  },
  {
    "path": "AWS/legos/aws_delete_rds_instance/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>AWS Delete RDS Instance</h1>\n\n## Description\nDelete AWS RDS Instance\n\n## Lego Details\n\taws_delete_rds_instance(handle, region: str, instance_id: str)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tinstance_id: The DB instance identifier for the DB instance to be deleted. This parameter isn’t case-sensitive.\n\t\tregion: AWS Region of the DB instance.\n\n## Lego Input\nThis Lego takes three inputs: handle, instance_id and region.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_rds_instance/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_rds_instance/aws_delete_rds_instance.json",
    "content": "{\n  \"action_title\": \"AWS Delete RDS Instance\",\n  \"action_description\": \"Delete AWS RDS Instance\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_delete_rds_instance\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_delete_rds_instance/aws_delete_rds_instance.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    instance_id: str = Field(\n        ...,\n        description=('The DB instance identifier for the DB instance to be deleted. '\n                     'This parameter isn’t case-sensitive.'),\n        title='RDS DB Identifier',\n    )\n    region: str = Field(\n        ..., description='AWS region of instance identifier', title='AWS Region'\n    )\n\n\n\ndef aws_delete_rds_instance_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_delete_rds_instance(handle, region: str, instance_id: str) -> Dict:\n    \"\"\"aws_delete_rds_instance Deletes the given RDS DB instance.\n\n        :type region: string\n        :param region: AWS Region.\n\n        :type instance_id: string\n        :param instance_id: The DB instance identifier for the DB instance to be deleted.\n        This parameter isn’t case-sensitive.\n\n        :rtype: dict of the response of deleting an RDS instance\n    \"\"\"\n    try:\n        rdsClient = handle.client('rds', region_name=region)\n        response = rdsClient.delete_db_instance(DBInstanceIdentifier=instance_id)\n        return response\n    except Exception as e:\n        raise Exception(e) from e\n"
  },
  {
    "path": "AWS/legos/aws_delete_redshift_cluster/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>AWS Delete Redshift Cluster</h1>\n\n## Description\nDelete AWS Redshift Cluster\n\n## Lego Details\n\taws_delete_redshift_cluster(handle, region: str, cluster_identifier: str, skip_final_cluster_snapshot:bool=False)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tregion: AWS Region.\n\t\tcluster_identifier: The identifier of the cluster to be deleted.\n\t\tskip_final_cluster_snapshot: Determines whether a final snapshot of the cluster is created before Amazon Redshift deletes the cluster. If true, a final cluster snapshot is not created. If false, a final cluster snapshot is created before the cluster is deleted.\n\n\n## Lego Input\nThis Lego takes four inputs: handle, region, cluster_identifier and skip_final_cluster_snapshot.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_redshift_cluster/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_redshift_cluster/aws_delete_redshift_cluster.json",
    "content": "{\n  \"action_title\": \"AWS Delete Redshift Cluster\",\n  \"action_description\": \"Delete AWS Redshift Cluster\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_delete_redshift_cluster\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_REDSHIFT\"]\n}"
  },
  {
    "path": "AWS/legos/aws_delete_redshift_cluster/aws_delete_redshift_cluster.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Dict\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    cluster_identifier: str = Field(\n        ...,\n        description='The identifier of the cluster to be deleted.',\n        title='Cluster Identifier',\n    )\n    skip_final_cluster_snapshot: Optional[bool] = Field(\n        False,\n        description='Determines whether a final snapshot of the cluster is created before Amazon Redshift deletes the cluster. If true, a final cluster snapshot is not created. If false, a final cluster snapshot is created before the cluster is deleted.',\n        title='Skip Final Cluster Snapshot',\n    )\n\n\n\ndef aws_delete_redshift_cluster_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_delete_redshift_cluster(handle, region: str, cluster_identifier: str, skip_final_cluster_snapshot:bool=False) -> Dict:\n    \"\"\"aws_delete_redshift_cluster Deletes the given Redshift cluster.\n\n        :type region: string\n        :param region: AWS Region.\n\n        :type cluster_identifier: string\n        :param cluster_identifier: The identifier of the cluster to be deleted.\n\n        :type skip_final_cluster_snapshot: boolean\n        :param skip_final_cluster_snapshot: Determines whether a final snapshot of the cluster is created before Amazon Redshift deletes the cluster. If true, a final cluster snapshot is not created. If false, a final cluster snapshot is created before the cluster is deleted.\n\n        :rtype: dict of the response\n    \"\"\"\n    try:\n        redshiftClient = handle.client('redshift', region_name=region)\n        response = redshiftClient.delete_cluster(\n            ClusterIdentifier=cluster_identifier,\n            SkipFinalClusterSnapshot=skip_final_cluster_snapshot\n            )\n        return response\n    except Exception as e:\n        raise Exception(e) from e\n"
  },
  {
    "path": "AWS/legos/aws_delete_route53_health_check/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Delete Route 53 HealthCheck</h1>\r\n\r\n## Description\r\nThis Lego deletes an AWS Route 53 Health Check.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_delete_route53_health_check(handle, health_check_id: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        health_check_id: The ID of the Health Check to delete.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and health_check_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_route53_health_check/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_route53_health_check/aws_delete_route53_health_check.json",
    "content": "{\r\n    \"action_title\": \"AWS Delete Route 53 HealthCheck\",\r\n    \"action_description\": \"AWS Delete Route 53 HealthCheck\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_delete_route53_health_check\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [\"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_delete_route53_health_check/aws_delete_route53_health_check.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    health_check_id: str = Field(\r\n        title='Health Check ID',\r\n        description='The ID of the Health Check to delete.')\r\n\r\n\r\ndef aws_delete_route53_health_check_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_delete_route53_health_check(handle, health_check_id: str) -> Dict:\r\n    \"\"\"aws_delete_route53_health_check Deletes a Route 53 Health Check.\r\n\r\n        :type health_check_id: string\r\n        :param health_check_id: The ID of the Health Check to delete.\r\n\r\n        :rtype: dict of health check information.\r\n    \"\"\"\r\n    try:\r\n        route_client = handle.client('route53')\r\n        response = route_client.delete_health_check(HealthCheckId=health_check_id)\r\n        return response\r\n    except Exception as e:\r\n        raise Exception(e) from e\r\n    "
  },
  {
    "path": "AWS/legos/aws_delete_s3_bucket_encryption/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Delete AWS Default Encryption for S3 Bucket </h1>\r\n\r\n## Description\r\nThis Lego deletes the AWS default encryption for an S3 bucket.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_delete_s3_bucket_encryption(handle: object, bucket_name: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        bucket_name: Name of the S3 bucket.\r\n        region: AWS Region of the S3 bucket.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, bucket_name and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_s3_bucket_encryption/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_s3_bucket_encryption/aws_delete_s3_bucket_encryption.json",
    "content": "{\r\n    \"action_title\": \"Delete AWS Default Encryption for S3 Bucket\",\r\n    \"action_description\": \"Delete AWS Default Encryption for S3 Bucket\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_delete_s3_bucket_encryption\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"  ]\r\n  }"
  },
  {
    "path": "AWS/legos/aws_delete_s3_bucket_encryption/aws_delete_s3_bucket_encryption.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n    bucket_name: str = Field(\r\n        title='Bucket Name',\r\n        description='AWS S3 Bucket Name.')\r\n\r\n\r\ndef aws_delete_s3_bucket_encryption_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_delete_s3_bucket_encryption(handle, bucket_name: str, region: str) -> Dict:\r\n    \"\"\"aws_delete_s3_bucket_encryption Deletes the default encryption configuration for the bucket.\r\n        :type handle: object\r\n        :param handle: Object returned by the task.validate(...) method.\r\n\r\n        :type bucket_name: string\r\n        :param bucket_name: Name of the S3 bucket.\r\n\r\n        :type region: string\r\n        :param region: Location of the bucket.\r\n\r\n        :rtype: Dict with the response info.\r\n    \"\"\"\r\n    s3Client = handle.client('s3', region_name=region)\r\n\r\n    result = {}\r\n\r\n    # Delete the default encryption configuration\r\n    try:\r\n        response = s3Client.delete_bucket_encryption(Bucket=bucket_name)\r\n\r\n        result['Response'] = response\r\n\r\n    except Exception as e:\r\n        result['Error'] = e\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_delete_secret/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Delete Secret</h1>\r\n\r\n## Description\r\nThis Lego deletes an AWS Secret.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_delete_secret(handle, region: str, secret_name : str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        secret_name: Name of the secret to be deleted.\r\n        region: AWS Region.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, secret_name and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_secret/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_delete_secret/aws_delete_secret.json",
    "content": "{\r\n    \"action_title\": \"AWS Delete Secret\",\r\n    \"action_description\": \"AWS Delete Secret\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_delete_secret\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_delete_secret/aws_delete_secret.py",
    "content": "##\r\n# Copyright (c) 2023 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    secret_name: str = Field(\r\n        title='Secret Name',\r\n        description='Name of the secret to be deleted.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_delete_secret_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_delete_secret(handle, region: str, secret_name: str) -> Dict:\r\n    \"\"\"aws_delete_secret Deletes the given secret.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from Task Validate\r\n\r\n        :type secret_name: string\r\n        :param secret_name: Name of the secret to be deleted.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: Dict with the secret deletion details.\r\n    \"\"\"\r\n    try:\r\n        secrets_client = handle.client('secretsmanager', region_name=region)\r\n        response = secrets_client.delete_secret(SecretId=secret_name)\r\n        return response\r\n    except Exception as e:\r\n        raise Exception(e) from e\r\n"
  },
  {
    "path": "AWS/legos/aws_delete_volume_by_id/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Delete AWS EBS Volume </h1>\r\n\r\n## Description\r\nThis Lego deletes an AWS EBS volume.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_delete_volume_by_id(handle: object, volume_id: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        volume_id: ID of the volume to delete.\r\n        region: AWS Region of the volume.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes three inputs: handle, volume_id and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_delete_volume_by_id/__init__.py",
    "content": "##\n##  Copyright (c) 2022 unSkript, Inc\n##  All rights reserved.\n##\n"
  },
  {
    "path": "AWS/legos/aws_delete_volume_by_id/aws_delete_volume_by_id.json",
    "content": "{\r\n    \"action_title\": \"Delete AWS EBS Volume by Volume ID\",\r\n    \"action_description\": \"Delete AWS Volume by Volume ID\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_delete_volume_by_id\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_is_remediation\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n  }\r\n  \r\n"
  },
  {
    "path": "AWS/legos/aws_delete_volume_by_id/aws_delete_volume_by_id.py",
    "content": "##\r\n##  Copyright (c) 2022 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    volume_id: str = Field(\r\n        title='Volume ID',\r\n        description='Volume ID.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_delete_volume_by_id_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint({\"Output\": output})\r\n\r\n\r\ndef aws_delete_volume_by_id(handle, volume_id: str, region: str) -> list:\r\n    \"\"\"aws_delete_volume_by_id Deletes the given EBS volume.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned by the task.validate(...) method.\r\n\r\n        :type region: string\r\n        :param region: AWS Region of the volume.\r\n\r\n        :type volume_id: string\r\n        :param volume_id: ID of the volume to delete.\r\n\r\n        :rtype: Result of the API in the List form.\r\n    \"\"\"\r\n    result = []\r\n\r\n    ec2Client = handle.client('ec2',region_name=region)\r\n\r\n    # Delete the volume and capture the response (or the exception)\r\n    try:\r\n        response = ec2Client.delete_volume(VolumeId=volume_id)\r\n        result.append(response)\r\n    except Exception as e:\r\n        result.append(e)\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_deregister_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Deregisters AWS Instances from a Load Balancer </h1>\r\n\r\n## Description\r\nThis Lego deregisters AWS instances from a Load Balancer.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_deregister_instances(handle: object, elb_name: str, instance_ids: List, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        elb_name: Name of the Load Balancer.\r\n        instance_ids: List of instance IDs, e.g. [\"i-foo\", \"i-bar\"]\r\n        region: AWS Region of the ELB.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes four inputs: handle, elb_name, instance_ids and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_deregister_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_deregister_instances/aws_deregister_instances.json",
    "content": "{\r\n    \"action_title\": \" Deregisters AWS Instances from a Load Balancer\",\r\n    \"action_description\": \" Deregisters AWS Instances from a Load Balancer\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_deregister_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\"  ]\r\n}"
  },
  {
    "path": "AWS/legos/aws_deregister_instances/aws_deregister_instances.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List, Dict\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    elb_name: str = Field(\n        title='ELB Name',\n        description='Name of the Load Balancer.')\n    instance_ids: List[str] = Field(\n        title='Instance IDs',\n        description='List of instance IDs. For eg. [\"i-foo\", \"i-bar\"]')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the ELB.')\n\n\ndef aws_deregister_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_deregister_instances(handle, elb_name: str, instance_ids: List, region: str) -> Dict:\n    \"\"\"aws_deregister_instances deregisters instances from a given Load Balancer.\n     \n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type elb_name: string\n        :param elb_name: Name of the Load Balancer.\n        \n        :type instance_ids: list\n        :param instance_ids: List of instance IDs. For eg. [\"i-foo\", \"i-bar\"]\n        \n        :type region: string\n        :param region: AWS Region of the ELB.\n        \n        :rtype: dict with registered instance details.\n    \"\"\"\n\n    elbClient = handle.client('elb', region_name=region)\n\n    res = elbClient.deregister_instances_from_load_balancer(\n        LoadBalancerName=elb_name,\n        Instances=[{'InstanceId': instance_id} for instance_id in instance_ids]\n    )\n\n    return res\n"
  },
  {
    "path": "AWS/legos/aws_describe_cloudtrail/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>AWS Describe Cloudtrails </h1>\n\n## Description\nGiven an AWS Region, this Action returns a Dict with all of the Cloudtrail logs being recorded\n\n## Lego Details\n\taws_describe_cloudtrail(handle, region:str)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tregion: Region you wish to get the CloudTrail trail list from.\n\n\n## Lego Input\nThis Lego takes two inputs: handle and region.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.jpg\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_describe_cloudtrail/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_describe_cloudtrail/aws_describe_cloudtrail.json",
    "content": "{\n  \"action_title\": \"AWS Describe Cloudtrails \",\n  \"action_description\": \"Given an AWS Region, this Action returns a Dict with all of the Cloudtrail logs being recorded\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_describe_cloudtrail\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"  ]\n}"
  },
  {
    "path": "AWS/legos/aws_describe_cloudtrail/aws_describe_cloudtrail.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom __future__ import annotations\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        title='Region',\n        description='AWS Region.')\n\n\ndef aws_describe_cloudtrail_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_describe_cloudtrail(handle, region:str) -> Dict:\n    \"\"\"aws_describe_cloudtrail Returns the settings of the CloudTrail trails in the given region.\n\n        :type region: string\n        :param region: AWS Region.\n\n        :rtype: Dict with the trail details.\n    \"\"\"\n    # Create a client object for CloudTrail\n    cloudtrail_client = handle.client('cloudtrail', region_name=region)\n\n    # Use the describe_trails method to get information about the available trails\n    trails = cloudtrail_client.describe_trails()\n\n    return trails\n"
  },
  {
    "path": "AWS/legos/aws_detach_ebs_to_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Detach an AWS Instance from an Elastic Block Store </h1>\r\n\r\n## Description\r\nThis Lego detaches an AWS instance from an Elastic Block Store.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_detach_ebs_to_instances(handle: object, region: str, volume_id: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        volume_id: The ID of the volume.\r\n        region: AWS Region of the EBS.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes three inputs: handle, volume_id and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_detach_ebs_to_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_detach_ebs_to_instances/aws_detach_ebs_to_instances.json",
    "content": "{\r\n    \"action_title\": \"Detach an AWS Instance from an Elastic Block Store\",\r\n    \"action_description\": \"Detach an AWS Instance from an Elastic Block Store.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_detach_ebs_to_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\" ,\"CATEGORY_TYPE_AWS_EBS\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_detach_ebs_to_instances/aws_detach_ebs_to_instances.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the EBS.')\n    volume_id: str = Field(\n        title='Volume id',\n        description='The ID of the volume.')\n\n\ndef aws_detach_ebs_to_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_detach_ebs_to_instances(handle, region: str, volume_id: str) -> Dict:\n    \"\"\"aws_detach_ebs_to_instances Detaches an EBS volume from its instance.\n\n     :type handle: object\n     :param handle: Object returned from task.validate(...).\n\n     :type volume_id: string\n     :param volume_id: The ID of the volume.\n\n     :type region: string\n     :param region: AWS Region of the EBS.\n\n     :rtype: dict with the detach volume response.\n    \"\"\"\n\n    ec2Client = handle.client('ec2', region_name=region)\n\n    response = ec2Client.detach_volume(VolumeId=volume_id)\n\n    return response\n"
  },
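The `detach_volume` call above is thin enough to exercise without live AWS credentials. Below is a minimal sketch of the Action's core logic, using hypothetical `FakeHandle`/`FakeEc2Client` stand-ins for the unSkript connector handle (the canned response only loosely mirrors boto3's `detach_volume` shape):

```python
from typing import Dict


class FakeEc2Client:
    """Stand-in for the boto3 EC2 client; response shape is an assumption."""
    def detach_volume(self, VolumeId: str) -> Dict:
        return {"VolumeId": VolumeId, "State": "detaching"}


class FakeHandle:
    """Hypothetical stand-in for the handle returned by task.validate(...)."""
    def client(self, service: str, region_name: str):
        return FakeEc2Client()


def aws_detach_ebs_to_instances(handle, region: str, volume_id: str) -> Dict:
    # Same core logic as the Lego above: get a regional EC2 client, detach.
    ec2_client = handle.client("ec2", region_name=region)
    return ec2_client.detach_volume(VolumeId=volume_id)


print(aws_detach_ebs_to_instances(FakeHandle(), "us-west-2", "vol-0abc123"))
```

With a real handle from `task.validate(...)` the same call issues the actual EC2 API request.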
  {
    "path": "AWS/legos/aws_detach_instances_from_autoscaling_group/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Detach Instances From AutoScaling Group </h1>\r\n\r\n## Description\r\nThis Lego detach AWS instances from autoscaling group.\r\n\r\n## Lego Details\r\n\r\n    aws_detach_autoscaling_instances(handle,instance_id: str,group_name: str,region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        instance_ids: Name of instances.\r\n        group_name: Name of AutoScaling Group.\r\n        region: Used to filter the volume for specific region.\r\n\r\n## Lego Input\r\nThis Lego take four inputs handle, instance_ids, group_name and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_detach_instances_from_autoscaling_group/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_detach_instances_from_autoscaling_group/aws_detach_instances_from_autoscaling_group.json",
    "content": "{\r\n    \"action_title\": \"AWS Detach Instances From AutoScaling Group\",\r\n    \"action_description\": \"Use This Action to AWS Detach Instances From AutoScaling Group\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_detach_instances_from_autoscaling_group\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_detach_instances_from_autoscaling_group/aws_detach_instances_from_autoscaling_group.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\nclass InputSchema(BaseModel):\r\n    instance_ids: str = Field(\r\n        title='Instance IDs',\r\n        description='List of instances.')\r\n    group_name: str = Field(\r\n        title='Group Name',\r\n        description='Name of AutoScaling Group.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of autoscaling group.')\r\n\r\ndef aws_detach_instances_from_autoscaling_group_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_detach_instances_from_autoscaling_group(\r\n    handle,\r\n    instance_ids: str,\r\n    group_name: str,\r\n    region: str\r\n) -> Dict:\r\n    \"\"\"aws_detach_autoscaling_instances detach instances from autoscaling group.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type instance_ids: string\r\n        :param instance_ids: Name of instances.\r\n\r\n        :type group_name: string\r\n        :param group_name: Name of AutoScaling Group.\r\n\r\n        :type region: string\r\n        :param region: AWS Region of autoscaling group.\r\n\r\n        :rtype: Dict with the detach instance info.\r\n    \"\"\"\r\n\r\n    ec2Client = handle.client(\"autoscaling\", region_name=region)\r\n    result = {}\r\n    try:\r\n        response = ec2Client.detach_instances(\r\n            InstanceIds=[instance_ids],\r\n            AutoScalingGroupName=group_name,\r\n            ShouldDecrementDesiredCapacity=True\r\n            )\r\n        result = response\r\n    except Exception as error:\r\n        result[\"error\"] = error\r\n       \r\n    return result\r\n"
  },
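This Action's error-wrapping pattern (return the response dict on success, tuck the exception under an `error` key on failure) can be sketched against a stubbed autoscaling client. `FakeAutoScalingClient` and its canned response are assumptions, not the real boto3 response shape:

```python
from typing import Dict


class FakeAutoScalingClient:
    """Stand-in for boto3's autoscaling client (response shape assumed)."""
    def detach_instances(self, InstanceIds, AutoScalingGroupName,
                         ShouldDecrementDesiredCapacity) -> Dict:
        return {"Activities": [
            {"Description": f"Detaching {InstanceIds[0]} from {AutoScalingGroupName}"}]}


class FakeHandle:
    def client(self, service: str, region_name: str):
        return FakeAutoScalingClient()


def detach(handle, instance_ids: str, group_name: str, region: str) -> Dict:
    client = handle.client("autoscaling", region_name=region)
    result = {}
    try:
        result = client.detach_instances(
            InstanceIds=[instance_ids],
            AutoScalingGroupName=group_name,
            ShouldDecrementDesiredCapacity=True)
    except Exception as error:
        result["error"] = error  # surface the failure instead of raising
    return result


print(detach(FakeHandle(), "i-0123", "my-asg", "us-west-2"))
```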
  {
    "path": "AWS/legos/aws_ebs_modify_volume/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>EBS Modify Volume</h1>\r\n\r\n## Description\r\nThis Lego Modify/Resize volume for Elastic Block Storage (EBS).\r\n\r\n## Lego Details\r\n\r\n    aws_ebs_modify_volume(hdl: Session, volume_id: str, resize_option: SizingOption, resize_value: float, region: str,)\r\n\r\n        hdl: Object of type unSkript AWS Connector\r\n        volume_id: EBS Volume ID to resize.\r\n        resize_option: Option to resize the volume.\r\n        resize_value: Based on the resize option chosen, specify the value.\r\n        region: AWS Region of the volume.\r\n\r\n## Lego Input\r\nThis Lego take five inputs hdl, volume_id, resize_option, resize_value and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_ebs_modify_volume/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_ebs_modify_volume/aws_ebs_modify_volume.json",
    "content": "{\r\n    \"action_title\": \"EBS Modify Volume\",\r\n    \"action_description\": \"Modify/Resize volume for Elastic Block Storage (EBS).\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_ebs_modify_volume\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n}"
  },
  {
    "path": "AWS/legos/aws_ebs_modify_volume/aws_ebs_modify_volume.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\n\r\nimport pprint\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\nfrom unskript.enums.aws_k8s_enums import SizingOption\r\nfrom polling2 import poll_decorator\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    volume_id: str = Field(\r\n        title=\"EBS Volume ID\",\r\n        description=\"EBS Volume ID to resize.\"\r\n    )\r\n    resize_option: SizingOption = Field(\r\n        title=\"Resize option\",\r\n        description='''\r\n            Option to resize the volume. 2 options supported:\r\n            1. Add - Use this option to resize by an amount.\r\n            2. Multiple - Use this option if you want to resize by a multiple of the current volume size.\r\n        '''\r\n    )\r\n    resize_value: int = Field(\r\n        title=\"Value\",\r\n        description='''\r\n            Based on the resize option chosen, specify the value. For eg, if you chose Add option, this\r\n            value will be a value in Gb (like 100). If you chose Multiple option, this value will be a multiplying factor\r\n            to the current volume size. 
So, if you want to double, you specify 2 here.\r\n        '''\r\n    )\r\n    region: str = Field(\r\n        title=\"Region\",\r\n        description=\"AWS Region of the volume.\"\r\n    )\r\n\r\n\r\ndef aws_ebs_modify_volume_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_ebs_modify_volume(\r\n    hdl: Session,\r\n    volume_id: str,\r\n    resize_option: SizingOption,\r\n    resize_value: int,\r\n    region: str,\r\n    ) -> str:\r\n    \"\"\"aws_ebs_modify_volume modifies the size of the EBS Volume.\r\n    You can either increase it by a fixed amount or by a multiple of the current size.\r\n\r\n    :type volume_id: string\r\n    :param volume_id: EBS volume id.\r\n\r\n    :type resize_option: SizingOption\r\n    :param resize_option: option to resize the volume, by a fixed amount\r\n    or by a multiple of the existing size.\r\n\r\n    :type resize_value: int\r\n    :param resize_value: The value by which the volume should be modified,\r\n    depending upon the resize option.\r\n\r\n    :type region: string\r\n    :param region: AWS Region of the volume.\r\n\r\n    :rtype: New volume size.\r\n    \"\"\"\r\n    ec2Client = hdl.client(\"ec2\", region_name=region)\r\n    ec2Resource = hdl.resource(\"ec2\", region_name=region)\r\n    # Get the current volume size.\r\n    Volume = ec2Resource.Volume(volume_id)\r\n    currentSize = Volume.size\r\n    newSize = None\r\n\r\n    if resize_option == SizingOption.Add:\r\n        newSize = currentSize + resize_value\r\n    elif resize_option == SizingOption.Multiple:\r\n        newSize = currentSize * resize_value\r\n    else:\r\n        raise ValueError(f\"Invalid resize option: {resize_option}\")\r\n\r\n    print(f'CurrentSize {currentSize}, NewSize {newSize}')\r\n\r\n    resp = ec2Client.modify_volume(\r\n        VolumeId=volume_id,\r\n        Size=newSize)\r\n    pprint.pprint(resp['VolumeModification'])\r\n\r\n    # Check the modification state\r\n    try:\r\n        
check_modification_status(ec2Client, volume_id)\r\n    except Exception as e:\r\n        raise Exception(f'Modify volumeID {volume_id} failed: {str(e)}') from e\r\n\r\n    return f'Volume {volume_id} size modified successfully to {newSize}'\r\n\r\n\r\n@poll_decorator(step=60, timeout=600, check_success=lambda x: x is True)\r\ndef check_modification_status(ec2Client, volumeID) -> bool:\r\n    resp = ec2Client.describe_volumes_modifications(VolumeIds=[volumeID])\r\n    state = resp['VolumesModifications'][0]['ModificationState']\r\n    progress = resp['VolumesModifications'][0]['Progress']\r\n    print(f'Volume modification state {state}, Progress {progress}')\r\n    if state in ('completed', None):\r\n        return True\r\n    if state == 'failed':\r\n        raise Exception(\"Volume modification failed\")\r\n    return False\r\n"
  },
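The resize arithmetic in `aws_ebs_modify_volume` is just the Add/Multiple branch. A self-contained sketch of that logic (the `SizingOption` member values mirror `unskript.enums.aws_k8s_enums.SizingOption` by assumption):

```python
from enum import Enum


class SizingOption(Enum):
    # Assumed to mirror unskript.enums.aws_k8s_enums.SizingOption
    Add = "Add"
    Multiple = "Multiple"


def compute_new_size(current_gb: int, option: SizingOption, value: int) -> int:
    """Return the target volume size in GiB for the chosen resize option."""
    if option == SizingOption.Add:
        return current_gb + value       # grow by a fixed amount
    if option == SizingOption.Multiple:
        return current_gb * value       # grow by a multiple of the current size
    raise ValueError(f"Invalid resize option: {option}")


print(compute_new_size(100, SizingOption.Add, 50))      # -> 150
print(compute_new_size(100, SizingOption.Multiple, 2))  # -> 200
```

The Lego then passes the computed size to `modify_volume` and polls `describe_volumes_modifications` until the modification completes.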
  {
    "path": "AWS/legos/aws_ecs_describe_task_definition/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS ECS Describe Task Definition</h1>\r\n\r\n## Description\r\nThis Lego describes AWS ECS Task Definition..\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_ecs_describe_task_definition(handle, region: str, taskDefinition: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: AWS Region of the ECS service..\r\n        taskDefinition: The family and revision (family:revision ) or full ARN of the task definition to run in service.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs handle, region and taskDefinition. \r\n\r\n## Lego Output\r\n```\r\n{\r\n    'ResponseMetadata': {\r\n        'HTTPHeaders': {\r\n            'content-length': '1145',\r\n                'content-type': 'application/x-amz-json-1.1',\r\n                    'date': 'Thu, 13 Oct 2022 21:01:39 GMT',\r\n                        'x-amzn-requestid': 'e21bb321-9051-4a5e-859c-3ca17d8b8193'\r\n        },\r\n        'HTTPStatusCode': 200,\r\n            'RequestId': 'e21bb321-9051-4a5e-859c-3ca17d8b8193',\r\n                'RetryAttempts': 0\r\n    },\r\n    'taskDefinition': {\r\n        'compatibilities': ['EC2', 'FARGATE'],\r\n            'containerDefinitions': [{\r\n                'cpu': 0,\r\n                'environment': [],\r\n                'essential': True,\r\n                'image': 'amazon/amazon-ecs-sample',\r\n                'logConfiguration': {\r\n                    'logDriver': 'awslogs',\r\n                    'options': {\r\n                        'awslogs-group': '/ecs/AWSSampleApp',\r\n                        'awslogs-region': 'us-west-2',\r\n                        'awslogs-stream-prefix': 'ecs'\r\n                    }\r\n                },\r\n                'mountPoints': [],\r\n                'name': 'AmazonSampleImage',\r\n                
'portMappings': [],\r\n                'volumesFrom': []\r\n            }],\r\n                'cpu': '256',\r\n                    'executionRoleArn': 'arn:aws:iam::100498623390:role/DevProxyRoleToBeAssumed',\r\n                        'family': 'AWSTestApp',\r\n                            'memory': '512',\r\n                                'networkMode': 'awsvpc',\r\n                                    'placementConstraints': [],\r\n                                        'registeredAt': datetime.datetime(2022, 10, 14, 2, 31, 37, 50000, tzinfo = tzlocal()),\r\n                                            'registeredBy': 'arn:aws:sts::100498623390:assumed-role/DevProxyRoleToBeAssumed/test',\r\n                                                'requiresAttributes': [{ 'name': 'com.amazonaws.ecs.capability.logging-driver.awslogs' },\r\n                                                { 'name': 'ecs.capability.execution-role-awslogs' },\r\n                                                { 'name': 'com.amazonaws.ecs.capability.docker-remote-api.1.19' },\r\n                                                { 'name': 'com.amazonaws.ecs.capability.docker-remote-api.1.18' },\r\n                                                { 'name': 'ecs.capability.task-eni' }],\r\n                                                    'requiresCompatibilities': ['FARGATE'],\r\n                                                        'revision': 85,\r\n                                                            'status': 'ACTIVE',\r\n                                                                'taskDefinitionArn': 'arn:aws:ecs:us-west-2:100498623390:task-definition/AWSTestApp:85',\r\n                                                                    'volumes': []\r\n    }\r\n}\r\n```\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_ecs_describe_task_definition/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_ecs_describe_task_definition/aws_ecs_describe_task_definition.json",
    "content": "{\r\n\"action_title\": \"AWS ECS Describe Task Definition.\",\r\n\"action_description\": \"Describe AWS ECS Task Definition.\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_ecs_describe_task_definition\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ECS\"  ]\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_ecs_describe_task_definition/aws_ecs_describe_task_definition.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##  @author: Yugal Pachpande, @email: yugal.pachpande@unskript.com\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Dict\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the ECS service.')\n    taskDefinition: str = Field(\n        title='TaskDefinition',\n        description='The family and revision (family:revision ) or full ARN of the task definition to run in service eg: srv-722a3657e6e3-TaskDefinition:2'\n    )\n\n\n\ndef aws_ecs_describe_task_definition_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_ecs_describe_task_definition(handle, region: str, taskDefinition: str) -> Dict:\n    \"\"\"aws_ecs_describe_task_definition returns Dict .\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type taskDefinition: string\n        :param taskDefinition: Full ARN of the task definition to run in service.\n\n        :type region: string\n        :param region: AWS Region of the ECS service.\n\n        :return: Dict resp of task defination.\n\n    \"\"\"\n    ecs_client = handle.client('ecs', region_name=region)\n    try:\n        data = ecs_client.describe_task_definition(taskDefinition=taskDefinition)\n    except Exception as e:\n        errString = f'\"Error to describe task definition {str(e)}\"'\n        print(errString)\n        raise Exception(errString)\n    return data\n"
  },
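The `taskDefinition` input accepts either a `family:revision` pair or a full ARN. A hypothetical helper (not part of the Lego) showing how the two forms differ:

```python
def parse_task_definition(task_definition: str):
    """Hypothetical helper: split 'family:revision'; full ARNs pass through.

    Returns (family, revision) for the short form, (arn, None) for an ARN.
    """
    if task_definition.startswith("arn:"):
        return task_definition, None
    family, _, revision = task_definition.partition(":")
    return family, (int(revision) if revision else None)


print(parse_task_definition("AWSTestApp:85"))  # -> ('AWSTestApp', 85)
```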
  {
    "path": "AWS/legos/aws_ecs_detect_failed_deployment/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>ECS detect failed deployment </h1>\r\n\r\n## Description\r\nThis Lego shows the list of stopped tasks, associated with a deployment, along with their stopped reason.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_ecs_detect_failed_deployment(handle, cluster_name: str, service_name: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        cluster_name: Cluster name that your service runs on.\r\n        service_name: ECS Service name in the specified cluster.\r\n        region: AWS Region of the ECS service..\r\n\r\n## Lego Input\r\nThis Lego takes four inputs handle, region, cluster_name  and service_name. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_ecs_detect_failed_deployment/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_ecs_detect_failed_deployment/aws_ecs_detect_failed_deployment.json",
    "content": "{\r\n\"action_title\": \"ECS detect failed deployment \",\r\n\"action_description\": \"List of stopped tasks, associated with a deployment, along with their stopped reason\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_ecs_detect_failed_deployment\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ECS\"  ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_ecs_detect_failed_deployment/aws_ecs_detect_failed_deployment.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import List\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    cluster_name: str = Field(\n        title=\"Cluster name\",\n        description=\"ECS Cluster name\"\n    )\n    service_name: str = Field(\n        title=\"Service name\",\n        description=\"ECS Service name in the specified cluster.\"\n    )\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the ECS service.')\n\n\ndef aws_ecs_detect_failed_deployment_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_ecs_detect_failed_deployment(handle, cluster_name: str, service_name: str, region: str) -> List:\n    \"\"\"aws_ecs_detect_failed_deployment returns the list .\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type cluster_name: string\n        :param cluster_name: ECS Cluster name.\n\n        :type service_name: string\n        :param service_name: ECS Service name in the specified cluster.\n\n        :type region: string\n        :param region: AWS Region of the ECS service.\n\n        :rtype: List of stopped task while deployement along with reason.\n    \"\"\"\n    ecsClient = handle.client('ecs', region_name=region)\n    try:\n        serviceStatus = ecsClient.describe_services(cluster=cluster_name, services=[service_name])\n    except Exception as e:\n        print(f'Failed to get service status for {service_name}, cluster {cluster_name}, {e}')\n        return [f'Failed to get service status for {service_name}, cluster {cluster_name}, {e}']\n    # When the deployment is in progress, there will be 2 deployment entries, one PRIMARY and one ACTIVE. 
PRIMARY will eventually replace\n    # ACTIVE, if it is successful.\n    deployments = serviceStatus.get('services')[0].get('deployments')\n    if deployments is None:\n        print(\"Empty deployment\")\n        return [\"Empty deployment\"]\n\n    deploymentInProgress = False\n    primaryDeploymentID = \"\"\n    for deployment in deployments:\n        if deployment['status'] == \"PRIMARY\":\n            primaryDeploymentID = deployment['id']\n        else:\n            deploymentInProgress = True\n\n    if deploymentInProgress is False:\n        print(\"No deployment in progress\")\n        return [\"No deployment in progress\"]\n\n    # Check if there are any stopped tasks because of this deployment\n    stoppedTasks = ecsClient.list_tasks(cluster=cluster_name, startedBy=primaryDeploymentID, desiredStatus=\"STOPPED\").get('taskArns')\n    if len(stoppedTasks) == 0:\n        print(f'No stopped tasks associated with the deploymentID {primaryDeploymentID}, service {service_name}, cluster {cluster_name}')\n        return [f'No stopped tasks associated with the deploymentID {primaryDeploymentID}, service {service_name}, cluster {cluster_name}']\n\n    # Get the reason for the stopped tasks\n    taskDetails = ecsClient.describe_tasks(cluster=cluster_name, tasks=stoppedTasks)\n    output = []\n    for taskDetail in taskDetails.get('tasks'):\n        output.append({\"TaskARN\":taskDetail['taskArn'], \"StoppedReason\":taskDetail['stoppedReason']})\n    return output\n"
  },
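The detection logic above hinges on one observation: a steady-state service reports a single PRIMARY deployment, while a rollout also shows the old deployment as a non-PRIMARY (ACTIVE) entry. That core check can be isolated and run standalone:

```python
from typing import Dict, List, Optional


def deployment_in_progress(deployments: List[Dict]) -> Optional[str]:
    """Return the PRIMARY deployment id if a rollout is underway, else None."""
    primary_id, rollout = None, False
    for d in deployments:
        if d["status"] == "PRIMARY":
            primary_id = d["id"]
        else:
            rollout = True  # any non-PRIMARY entry means the old deployment is still draining
    return primary_id if rollout else None


steady = [{"id": "ecs-svc/1", "status": "PRIMARY"}]
rolling = [{"id": "ecs-svc/2", "status": "PRIMARY"},
           {"id": "ecs-svc/1", "status": "ACTIVE"}]
print(deployment_in_progress(steady))   # -> None
print(deployment_in_progress(rolling))  # -> ecs-svc/2
```

The Lego then calls `list_tasks(startedBy=<primary id>, desiredStatus="STOPPED")` to find tasks killed by the new deployment and reports each task's `stoppedReason`.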
  {
    "path": "AWS/legos/aws_ecs_service_restart/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Restart AWS ECS Service </h1>\r\n\r\n## Description\r\nThis Lego Restart an AWS ECS Service.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_ecs_service_restart(handle, cluster_arn: str, service_name: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        cluster_arn: Full ARN of the cluster.\r\n        service: The name of the service to restart.\r\n        region: AWS Region of the ECS service.\r\n\r\n        \r\n\r\n## Lego Input\r\nThis Lego takes four inputs handle, region, service_name, and cluster_arn. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_ecs_service_restart/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_ecs_service_restart/aws_ecs_service_restart.json",
    "content": "{\r\n\"action_title\": \"Restart AWS ECS Service\",\r\n\"action_description\": \"Restart an AWS ECS Service\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_ecs_service_restart\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_BOOL\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ECS\"  ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_ecs_service_restart/aws_ecs_service_restart.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    cluster_arn: str = Field(\n        title='Cluster ARN',\n        description='Full ARN of the cluster.'\n    )\n    service_name: str = Field(\n        title='Service Name',\n        description='Service name to restart.'\n    )\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the cloudwatch.')\n\n\ndef aws_ecs_service_restart_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_ecs_service_restart(handle, cluster_arn: str, service_name: str, region: str) -> bool:\n    \"\"\"aws_ecs_service_restart returns boolean.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type cluster_arn: string\n        :param cluster_arn: Full ARN of the cluster.\n\n        :type service_name: string\n        :param service_name: ECS Service name in the specified cluster.\n\n        :type region: string\n        :param region: AWS Region of the ECS service.\n\n        :rtype: Returns True if the service was restarted successfully and an exception if not.\n\n    \"\"\"\n\n    # Input param validation.\n\n    ecsClient = handle.client('ecs', region_name=region)\n    ecsClient.update_service(\n        cluster=cluster_arn,\n        service=service_name,\n        forceNewDeployment=True\n    )\n    try:\n        waiter = ecsClient.get_waiter('services_stable')\n        waiter.wait(\n            cluster=cluster_arn,\n            services=[service_name]\n        )\n    except:\n        errString = f'\"Failed restart service: {service_name} in cluster: {cluster_arn} after 40 checks.\"'\n        print(errString)\n        raise Exception(errString)\n    return True\n"
  },
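The restart is `update_service(forceNewDeployment=True)` followed by the `services_stable` waiter. A minimal sketch against stubbed clients (the fakes only mimic the call shapes used here, not real boto3 behavior):

```python
class FakeWaiter:
    def wait(self, cluster: str, services):
        pass  # a real boto3 waiter polls describe_services until the service is stable


class FakeEcsClient:
    def update_service(self, cluster: str, service: str, forceNewDeployment: bool):
        # Forcing a new deployment restarts all tasks with the same task definition.
        return {"service": {"serviceName": service}}

    def get_waiter(self, name: str):
        return FakeWaiter()


class FakeHandle:
    def client(self, service: str, region_name: str):
        return FakeEcsClient()


def aws_ecs_service_restart(handle, cluster_arn: str, service_name: str, region: str) -> bool:
    ecs_client = handle.client("ecs", region_name=region)
    ecs_client.update_service(cluster=cluster_arn, service=service_name,
                              forceNewDeployment=True)
    ecs_client.get_waiter("services_stable").wait(cluster=cluster_arn,
                                                  services=[service_name])
    return True


print(aws_ecs_service_restart(FakeHandle(),
                              "arn:aws:ecs:us-west-2:111122223333:cluster/Test",
                              "TestECSService", "us-west-2"))  # -> True
```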
  {
    "path": "AWS/legos/aws_ecs_update_service/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Update AWS ECS Service </h1>\r\n\r\n## Description\r\nThis Lego Updates AWS ECS Service.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_ecs_update_service(handle, region: str, service: str, taskDefinition: str, cluster: str = None)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: AWS Region of the ECS service..\r\n        service: The name of the service to update.\r\n        taskDefinition: The family and revision (family:revision ) or full ARN of the task definition to run in service.\r\n        cluster: Cluster name that your service runs on.\r\n\r\n## Lego Input\r\nThis Lego takes five inputs handle, region, service, taskDefinition and cluster. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n```\r\n{'ResponseMetadata': \r\n    {'HTTPHeaders': {'content-length': '2510',\r\n                    'content-type': 'application/x-amz-json-1.1',\r\n                    'date': 'Thu, 13 Oct 2022 20:49:08 GMT',\r\n                    'x-amzn-requestid': '9e820eae-1d35-4812-bfca-37c4e1ecd8d0'},\r\n                      'HTTPStatusCode': 200,\r\n                      'RequestId': '9e820eae-1d35-4812-bfca-37c4e1ecd8d0',\r\n                      'RetryAttempts': 0},\r\n 'service': {'clusterArn': 'arn:aws:ecs:us-west-2:100498623390:cluster/TestECSCluster',\r\n             'createdAt': datetime.datetime(2022, 10, 14, 2, 19, 5, 841000, tzinfo=tzlocal()),\r\n             'createdBy': 'arn:aws:iam::100498623390:role/DevProxyRoleToBeAssumed',\r\n             'deploymentConfiguration': {'deploymentCircuitBreaker': {'enable': False,\r\n                                                                      'rollback': False},\r\n                                         'maximumPercent': 200,\r\n                                         'minimumHealthyPercent': 
100},\r\n             'deploymentController': {'type': 'ECS'},\r\n             'deployments': [{'createdAt': datetime.datetime(2022, 10, 14, 2, 19, 8, 706000, tzinfo=tzlocal()),\r\n                              'desiredCount': 0,\r\n                              'failedTasks': 0,\r\n                              'id': 'ecs-svc/8526124291437356365',\r\n                              'launchType': 'FARGATE',\r\n                              'networkConfiguration': {'awsvpcConfiguration': {'assignPublicIp': 'ENABLED',\r\n                                                                               'securityGroups': ['sg-0b7a1a8fdf5417f28'],\r\n                                                                               'subnets': ['subnet-c643f49b']}},\r\n                              'pendingCount': 0,\r\n                              'platformVersion': '1.4.0',\r\n                              'rolloutState': 'IN_PROGRESS',\r\n                              'rolloutStateReason': 'ECS deployment '\r\n                                                    'ecs-svc/8526124291437356365 '\r\n                                                    'in progress.',\r\n                              'runningCount': 0,\r\n                              'status': 'PRIMARY',\r\n                              'taskDefinition': 'arn:aws:ecs:us-west-2:100498623390:task-definition/AWSTestAppTwo:17',\r\n                              'updatedAt': datetime.datetime(2022, 10, 14, 2, 19, 8, 706000, tzinfo=tzlocal())},\r\n                             {'createdAt': datetime.datetime(2022, 10, 14, 2, 19, 5, 841000, tzinfo=tzlocal()),\r\n                              'desiredCount': 2,\r\n                              'failedTasks': 0,\r\n                              'id': 'ecs-svc/6903532899083802063',\r\n                              'launchType': 'FARGATE',\r\n                              'networkConfiguration': {'awsvpcConfiguration': {'assignPublicIp': 'ENABLED',\r\n                           
                                                    'securityGroups': ['sg-0b7a1a8fdf5417f28'],\r\n                                                                               'subnets': ['subnet-c643f49b']}},\r\n            'pendingCount': 0,\r\n            'platformVersion': '1.4.0',\r\n            'rolloutState': 'IN_PROGRESS',\r\n            'rolloutStateReason': 'ECS deployment '\r\n                                'ecs-svc/6903532899083802063 '\r\n                                'in progress.',\r\n            'runningCount': 0,\r\n            'status': 'ACTIVE',\r\n            'taskDefinition': 'arn:aws:ecs:us-west-2:100498623390:task-definition/AWSTestApp:84',\r\n            'updatedAt': datetime.datetime(2022, 10, 14, 2, 19, 5, 841000, tzinfo=tzlocal())}],\r\n             'desiredCount': 2,\r\n             'enableECSManagedTags': False,\r\n             'enableExecuteCommand': False,\r\n             'events': [],\r\n             'launchType': 'FARGATE',\r\n             'loadBalancers': [],\r\n             'networkConfiguration': {'awsvpcConfiguration': {'assignPublicIp': 'ENABLED',\r\n            'securityGroups': ['sg-0b7a1a8fdf5417f28'],\r\n            'subnets': ['subnet-c643f49b']}},\r\n             'pendingCount': 0,\r\n             'placementConstraints': [],\r\n             'placementStrategy': [],\r\n             'platformVersion': 'LATEST',\r\n             'propagateTags': 'NONE',\r\n             'roleArn': 'arn:aws:iam::100498623390:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS',\r\n             'runningCount': 0,\r\n             'schedulingStrategy': 'REPLICA',\r\n             'serviceArn': 'arn:aws:ecs:us-west-2:100498623390:service/TestECSCluster/TestECSService',\r\n             'serviceName': 'TestECSService',\r\n             'serviceRegistries': [],\r\n             'status': 'ACTIVE',\r\n             'taskDefinition': 'arn:aws:ecs:us-west-2:100498623390:task-definition/AWSTestAppTwo:17'}\r\n             }\r\n            
\r\n```\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_ecs_update_service/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_ecs_update_service/aws_ecs_update_service.json",
    "content": "{\r\n\"action_title\": \"Update AWS ECS Service\",\r\n\"action_description\": \"Update AWS ECS Service\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_ecs_update_service\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ECS\"  ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_ecs_update_service/aws_ecs_update_service.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##  @author: Yugal Pachpande, @email: yugal.pachpande@unskript.com\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Dict\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the ECS service.')\n    cluster: Optional[str] = Field(\n        title='Cluster Name',\n        description='Cluster name that your service runs on.')\n    service: str = Field(\n        title='Service Name',\n        description='The name of the service to update.')\n    taskDefinition: str = Field(\n        title='Task Definition',\n        description='The family and revision (family:revision ) or full ARN of the task definition to run in service eg: srv-722a3657e6e3-TaskDefinition:2'\n    )\n\n\ndef aws_ecs_update_service_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_ecs_update_service(handle, region: str, service: str, taskDefinition: str, cluster: str = None) -> Dict:\n    \"\"\"aws_ecs_update_service returns the Dict .\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n        \n        :type region: string\n        :param region: AWS Region of the ECS service.\n\n        :type service: string\n        :param service: ECS Service name in the specified cluster.\n\n        :type taskDefinition: string\n        :param taskDefinition: Full ARN of the task definition to run in service.\n\n        :type cluster: string\n        :param cluster: ECS Cluster name.\n\n        :rtype: Dict of updated service.\n    \"\"\"\n    ecs_client = handle.client('ecs', region_name=region)\n\n    if cluster:\n        response = ecs_client.update_service(\n            cluster=cluster,\n            service=service,\n            taskDefinition=taskDefinition,\n        )\n    else:\n        response = ecs_client.update_service(\n    
        service=service,\n            taskDefinition=taskDefinition,\n        )\n\n    return response\n"
  },
  {
    "path": "AWS/legos/aws_eks_copy_pod_logs_to_bucket/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Copy EKS Pod logs to bucket. </h1>\r\n\r\n## Description\r\nThis Lego Copy given EKS pod logs to given S3 Bucket.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_copy_pod_logs_to_bucket(handle, clusterName: str, namespaceName: str, podName: str, bucketName: str,region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        podName: Name of the pod.\r\n        bucketName: Name of the S3 bucket.\r\n        region: AWS Region of the EKS cluster. \r\n        namespaceName: EKS Cluster Namespace.\r\n\r\n## Lego Input\r\nThis Lego take six input handle, clusterName, podName, bucketName, namespaceName and region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_copy_pod_logs_to_bucket/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_copy_pod_logs_to_bucket/aws_eks_copy_pod_logs_to_bucket.json",
    "content": "{\r\n\"action_title\": \" Copy EKS Pod logs to bucket.\",\r\n\"action_description\": \" Copy given EKS pod logs to given S3 Bucket.\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_copy_pod_logs_to_bucket\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\" ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_copy_pod_logs_to_bucket/aws_eks_copy_pod_logs_to_bucket.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\nfrom typing import  Dict\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    clusterName: str = Field(\n        title='Cluster Name',\n        description='Name of cluster')\n    namespaceName: str = Field(\n        title='namespace Name',\n        description='Name of namespace')\n    podName: str = Field(\n        title='Pod Name',\n        description='Name of Pod')\n    bucketName: str = Field(\n        title='S3 Bucket Name',\n        description='Name of S3 Bucket')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the cluster')\n\n\ndef aws_eks_copy_pod_logs_to_bucket_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(output)\n\n\ndef aws_eks_copy_pod_logs_to_bucket(handle, clusterName: str, namespaceName: str, podName: str, bucketName: str,\n                                    region: str) -> Dict:\n    \"\"\"aws_eks_copy_pod_logs_to_bucket returns Dict.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type clusterName: string\n        :param clusterName: Cluster name.\n\n        :type podName: string\n        :param podName: Pod name.\n\n        :type bucketName: string\n        :param bucketName: Name of S3 Bucket.\n\n        :type namespaceName: string\n        :param namespaceName: Cluster Namespace.\n\n        :type region: string\n        :param region: AWS Region of the EKS cluster.\n\n        :rtype: Dict of name of pod and bucket with succcess message.\n    \"\"\"\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\n\n    coreApiClient = client.CoreV1Api(api_client=k8shandle)\n    try:\n        api_response = coreApiClient.read_namespaced_pod_log(name=podName,\n                              
                               namespace=namespaceName)\n    except ApiException as e:\n        print(f\"An Exception occured while reading pod log: {str(e)}\")\n        raise e\n\n    s3Client = handle.client('s3', region_name=region)\n    try:\n        s3Client.put_object(Bucket=bucketName, Key=f\"tests/{podName}_pod_logs\",\n                            Body=api_response)\n    except Exception as e:\n        print(f\"Error: {str(e)}\")\n        raise e\n    return {\"success\": f\"Successfully copied {podName} pod logs to {bucketName} bucket.\"}\n"
  },
  {
    "path": "AWS/legos/aws_eks_delete_pod/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Delete EKS POD in a given Namespace </h1>\r\n\r\n## Description\r\nThis Lego Delete a EKS POD in a given Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_delete_pod(handle, clusterName: str, namespace: str, podname: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        region: AWS Region of the EKS cluster. \r\n        namespace: EKS Cluster Namespace.\r\n        podname: Name of pod to be deleted.\r\n\r\n## Lego Input\r\nThis Lego takes five inputs handle, clusterName, podname, region and namespace. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_delete_pod/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_delete_pod/aws_eks_delete_pod.json",
    "content": "{\r\n\"action_title\": \" Delete EKS POD in a given Namespace\",\r\n\"action_description\": \" Delete a EKS POD in a given Namespace\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_delete_pod\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\" ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_delete_pod/aws_eks_delete_pod.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n# @author: Yugal Pachpande, @email: yugal.pachpande@unskript.com\n##\n\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\nimport pprint\nfrom typing import Dict\n\n\nclass InputSchema(BaseModel):\n    clusterName: str = Field(\n        title='Cluster Name',\n        description='Name of cluster')\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace')\n    podname: str = Field(\n        title='Podname',\n        description='K8S Pod Name')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the cluster')\n\n\ndef aws_eks_delete_pod_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(output)\n\n\ndef aws_eks_delete_pod(handle, clusterName: str, namespace: str, podname: str, region: str) -> Dict:\n    \"\"\"aws_eks_delete_pod returns list.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type clusterName: string\n        :param clusterName: Cluster name.\n\n        :type namespace: string\n        :param namespace: Cluster Namespace.\n\n        :type podname: string\n        :param podname: Name of pod to be deleted.\n\n        :type region: string\n        :param region: AWS Region of the EKS cluster.\n\n        :rtype: Dict of details of deleted pod.\n    \"\"\"\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\n    CoreV1Api = client.CoreV1Api(api_client=k8shandle)\n\n    try:\n        resp = CoreV1Api.delete_namespaced_pod(\n            name=podname, namespace=namespace, pretty=True)\n    except ApiException as e:\n        resp = 'An Exception occured while executing the command' + e.reason\n    return resp\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_dead_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>List of EKS dead pods</h1>\r\n\r\n## Description\r\nThis Lego Get list of all dead pods in a given EKS cluster.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_get_all_dead_pods(handle: Session,clusterName: str,region: str, namespace: str = 'all',) -> List:\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        region: AWS Region of the EKS cluster. \r\n        namespace: EKS Cluster Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs handle, clusterName, region and namespace.  \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_dead_pods/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_dead_pods/aws_eks_get_all_dead_pods.json",
    "content": "{\r\n\"action_title\": \"List of EKS dead pods\",\r\n\"action_description\": \"Get list of all dead pods in a given EKS cluster\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_get_all_dead_pods\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\" ]\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_dead_pods/aws_eks_get_all_dead_pods.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import Optional, List\r\nfrom kubernetes import client\r\nfrom kubernetes.client.rest import ApiException\r\nimport pandas as pd\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    clusterName: str = Field(\r\n        title='Cluster Name',\r\n        description='Name of EKS cluster')\r\n    namespace: Optional[str] = Field(\r\n        'all',\r\n        title='Cluster Namespace',\r\n        description='Cluster Namespace')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the EKS cluster')\r\n\r\n\r\ndef aws_eks_get_all_dead_pods_printer(output):\r\n    if output is None:\r\n        return\r\n    print(\"\\n\")\r\n    if not output:\r\n        print (\"There are no dead pods in this namespace\")\r\n        return\r\n    pprint.pprint(pd.DataFrame(output))\r\n\r\n\r\ndef aws_eks_get_all_dead_pods(handle: Session,clusterName: str,region: str,namespace: str = 'all',) -> List:\r\n    \"\"\"aws_eks_get_all_dead_podsr eturns list.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type clusterName: string\r\n        :param clusterName: Cluster name.\r\n\r\n        :type namespace: string\r\n        :param namespace: Cluster Namespace.\r\n\r\n        :type region: string\r\n        :param region: AWS Region of the EKS cluster.\r\n\r\n        :rtype: List of all dead pods in a namespace.\r\n    \"\"\"\r\n\r\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\r\n    coreApiClient = client.CoreV1Api(api_client=k8shandle)\r\n    try:\r\n        res = coreApiClient.list_namespaced_pod(\r\n            namespace=namespace, pretty=True)\r\n    except ApiException as e:\r\n        pprint.pprint(str(e))\r\n        res = 'An 
Exception occured while executing the command' + e.reason\r\n\r\n    data = []\r\n    for i in res.items:\r\n        for container_status in i.status.container_statuses:\r\n            if container_status.started is False or container_status.ready is False:\r\n                waiting_state = container_status.state.waiting\r\n                status = waiting_state.reason\r\n                if status.lower() in [\"evicted\"]:\r\n                    data.append({\"Pod Ip\": i.status.pod_ip,\r\n                                 \"Namespace\": i.metadata.namespace,\r\n                                 \"Pod Name\": i.metadata.name,\r\n                                 \"Container Name\": container_status.name,\r\n                                 \"Status\": status,\r\n                                 \"Start Time\": i.status.start_time,\r\n                                 })\r\n    pd.set_option('display.max_rows', None)\r\n    pd.set_option('display.max_columns', None)\r\n    pd.set_option('display.width', None)\r\n    pd.set_option('display.max_colwidth', None)\r\n    return data\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_namespaces/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>List of EKS Namespaces</h1>\r\n\r\n## Description\r\nThis Lego Gets list of all Namespaces in a given EKS cluster.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_get_all_namespaces(handle: Session, clusterName: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        region: AWS Region of the EKS cluster. \r\n\r\n## Lego Input\r\nThis Lego takes three inputs handle, clusterName and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_namespaces/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_namespaces/aws_eks_get_all_namespaces.json",
    "content": "{\r\n\"action_title\": \"List of EKS Namespaces\",\r\n\"action_description\": \"Get list of all Namespaces in a given EKS cluster\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_get_all_namespaces\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\"]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_namespaces/aws_eks_get_all_namespaces.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\nimport pandas as pd\nfrom typing import List\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\n\n\nclass InputSchema(BaseModel):\n    clusterName: str = Field(\n        title='Cluster Name',\n        description='Name of EKS cluster')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the EKS cluster')\n\n\ndef aws_eks_get_all_namespaces_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(pd.DataFrame(output))\n\n\ndef aws_eks_get_all_namespaces(handle: Session, clusterName: str, region: str) -> List:\n    \"\"\"aws_eks_get_all_namespaces returns list.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type clusterName: string\n        :param clusterName: Cluster name.\n\n        :type region: string\n        :param region: AWS Region of the EKS cluster.\n\n        :rtype: List of all namespaces in cluster.\n    \"\"\"\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\n    coreApiClient = client.CoreV1Api(api_client=k8shandle)\n    try:\n        res = coreApiClient.list_namespace(pretty=True)\n    except ApiException as e:\n        pprint.pprint(str(e))\n        res = 'An Exception occured while executing the command' + e.reason\n\n    data = []\n    for i in res.items:\n        data.append({\"Namespace\": i.metadata.name,\n                     \"Status\": i.status.phase,\n                     \"Start Time\": str(i.metadata.creation_timestamp),\n                     })\n    pd.set_option('display.max_rows', None)\n    pd.set_option('display.max_columns', None)\n    pd.set_option('display.width', None)\n    pd.set_option('display.max_colwidth', None)\n    return data\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>List of EKS pods </h1>\r\n\r\n## Description\r\nGet list of all pods in a given EKS cluster.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_get_all_pods(handle: Session, clusterName: str, region: str, namespace: str = 'all', )\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        region: AWS Region of the EKS cluster. \r\n        namespace: EKS Cluster Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs handle, clusterName, region and namespace. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_pods/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_pods/aws_eks_get_all_pods.json",
    "content": "{\r\n\"action_title\": \"List of EKS pods\",\r\n\"action_description\": \"Get list of all pods in a given EKS cluster\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_get_all_pods\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\"]\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_all_pods/aws_eks_get_all_pods.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import Optional, List\r\nfrom kubernetes import client\r\nfrom kubernetes.client.rest import ApiException\r\nimport pandas as pd\r\n\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    clusterName: str = Field(\r\n        title='Cluster Name',\r\n        description='Name of EKS cluster')\r\n    namespace: Optional[str] = Field(\r\n        'all',\r\n        title='Cluster Namespace',\r\n        description='Cluster Namespace')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the EKS cluster')\r\n\r\n\r\ndef aws_eks_get_all_pods_printer(output):\r\n    if output is None:\r\n        return\r\n    print(\"\\n\")\r\n    pprint.pprint(pd.DataFrame(output))\r\n\r\n\r\ndef aws_eks_get_all_pods(handle: Session, clusterName: str, region: str, namespace: str = 'all', ) -> List:\r\n    \"\"\"aws_eks_get_all_pods returns list.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type clusterName: string\r\n        :param clusterName: Cluster name.\r\n\r\n        :type namespace: string\r\n        :param namespace: Cluster Namespace.\r\n\r\n        :type region: string\r\n        :param region: AWS Region of the EKS cluster.\r\n\r\n        :rtype: List of pods with status ip and start time.\r\n    \"\"\"\r\n\r\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\r\n    coreApiClient = client.CoreV1Api(api_client=k8shandle)\r\n    try:\r\n        res = coreApiClient.list_namespaced_pod(\r\n            namespace=namespace, pretty=True)\r\n    except ApiException as e:\r\n        pprint.pprint(str(e))\r\n        res = 'An Exception occured while executing the command' + e.reason\r\n\r\n    data = []\r\n    for i in res.items:\r\n   
     data.append({\"Pod Ip\": i.status.pod_ip,\r\n                     \"Namespace\": i.metadata.namespace,\r\n                     \"Name\": i.metadata.name,\r\n                     \"Status\": i.status.phase,\r\n                     \"Start Time\": i.status.start_time,\r\n                     })\r\n    pd.set_option('display.max_rows', None)\r\n    pd.set_option('display.max_columns', None)\r\n    pd.set_option('display.width', None)\r\n    pd.set_option('display.max_colwidth', None)\r\n    return data\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_deployments_name/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>List of EKS deployment for given Namespace</h1>\r\n\r\n## Description\r\nGet list of EKS deployment names for given Namespace\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_get_deployments_name(handle, clusterName: str, namespace: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        region: AWS Region of the EKS cluster. \r\n        namespace: EKS Cluster Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs handle, clusterName, region and namespace. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_get_deployments_name/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_get_deployments_name/aws_eks_get_deployments_name.json",
    "content": "{\r\n\"action_title\": \" List of EKS deployment for given Namespace\",\r\n\"action_description\": \" Get list of EKS deployment names for given Namespace\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_get_deployments_name\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\" ]\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_deployments_name/aws_eks_get_deployments_name.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##  @author: Yugal Pachpande, @email: yugal.pachpande@unskript.com\n##\n\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\nfrom typing import List\nimport pandas as pd\n\nclass InputSchema(BaseModel):\n    clusterName: str = Field(\n        title='Cluster Name',\n        description='Name of cluster')\n    namespace: str = Field(\n        title='Cluster Namespace',\n        description='Cluster Namespace')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the cluster')\n\n\ndef aws_eks_get_deployments_name_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(pd.DataFrame(output))\n\n\ndef aws_eks_get_deployments_name(handle, clusterName: str, namespace: str, region: str) -> List:\n    \"\"\"aws_eks_get_deployments_name returns list.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type clusterName: string\n        :param clusterName: Cluster name.\n\n        :type namespace: string\n        :param namespace: Cluster Namespace.\n\n        :type region: string\n        :param region: AWS Region of the EKS cluster.\n\n        :rtype: List of deployments.\n    \"\"\"\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\n    coreApiClient = client.AppsV1Api(api_client=k8shandle)\n    deployments_list = []\n\n    try:\n        resp = coreApiClient.list_namespaced_deployment(namespace, pretty=True)\n        for deployment in resp.items:\n            res = {}\n            res[\"NAME\"] = deployment.metadata.name\n            res['READY'] = f\"Ready {deployment.status.ready_replicas}/{deployment.status.available_replicas}\"\n            res['UP-TO-DATE'] = deployment.status.updated_replicas\n            res['AVAILABLE'] = 
deployment.status.available_replicas\n            res['START_TIME'] = deployment.metadata.creation_timestamp.strftime(\"%m/%d/%Y, %H:%M:%S\")\n            deployments_list.append(res)\n\n        pd.set_option('display.max_rows', None)\n        pd.set_option('display.max_columns', None)\n        pd.set_option('display.width', None)\n        pd.set_option('display.max_colwidth', None)\n    except ApiException as e:\n        return ['An Exception occurred while executing the command' + e.reason]\n    return deployments_list\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_node_cpu_memory/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get CPU and memory utilization of node </h1>\r\n\r\n## Description\r\nThis Lego Gets CPU and memory utilization of given node.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_get_node_cpu_memory(handle: Session, clusterName: str, region: str, nodeName: str = None)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        region: AWS Region of the EKS cluster. \r\n        nodeName: Node name of EKS cluster.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs handle, clusterName, region and nodeName.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_get_node_cpu_memory/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_get_node_cpu_memory/aws_eks_get_node_cpu_memory.json",
    "content": "{\r\n\"action_title\": \"Get CPU and memory utilization of node.\",\r\n\"action_description\": \" Get CPU and memory utilization of given node.\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_get_node_cpu_memory\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\"]\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_node_cpu_memory/aws_eks_get_node_cpu_memory.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nimport pandas as pd\r\nfrom typing import Optional, List\r\nfrom pydantic import BaseModel, Field\r\nfrom kubernetes import client\r\nfrom kubernetes.client.rest import ApiException\r\n\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    clusterName: str = Field(\r\n        title='Cluster Name',\r\n        description='Name of cluster.')\r\n    nodeName: Optional[str] = Field(\r\n        title='Node Name',\r\n        description='Name of node.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the cluster.')\r\n\r\n\r\ndef aws_eks_get_node_cpu_memory_printer(output):\r\n    if output is None:\r\n        return\r\n    print(\"\\n\")\r\n    pprint.pprint(pd.DataFrame(output))\r\n\r\n\r\ndef aws_eks_get_node_cpu_memory(handle: Session, clusterName: str, region: str, nodeName: str = None) -> List:\r\n    \"\"\"aws_eks_get_node_cpu_memory returns list.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type clusterName: string\r\n        :param clusterName: ECS Cluster name.\r\n\r\n        :type region: string\r\n        :param region: AWS Region of the EKS cluster.\r\n\r\n        :type nodeName: string\r\n        :param nodeName: Name of Node.\r\n\r\n        :rtype: List of nodes with cpu and memory details.\r\n    \"\"\"\r\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\r\n    coreApiClient = client.CoreV1Api(api_client=k8shandle)\r\n    try:\r\n        if nodeName:\r\n            resp = coreApiClient.read_node(nodeName)\r\n            data = [{\"node_name\": resp.metadata.name, \"cpu\": int(resp.status.capacity.get(\"cpu\").split(\"Ki\")[0]),\r\n                     \"memory\": f\"{round(int(resp.status.capacity.get('memory').split('Ki')[0]) / 1024, 2)} 
Mi\"}]\r\n\r\n        else:\r\n            data = []\r\n            resp = coreApiClient.list_node(pretty=True)\r\n            for node in resp.items:\r\n                data.append({\"node_name\": node.metadata.name,\r\n                             \"cpu\": node.status.capacity.get(\"cpu\"),\r\n                             \"memory\": f\"{round(int(node.status.capacity.get('memory').split('Ki')[0]) / 1024, 2)} Mi\"})\r\n\r\n    except ApiException as e:\r\n        pprint.pprint(str(e))\r\n        data = [\r\n            {'error': 'An Exception occured while executing the command' + e.reason}]\r\n    pd.set_option('display.max_rows', None)\r\n    pd.set_option('display.max_columns', None)\r\n    pd.set_option('display.width', None)\r\n    pd.set_option('display.max_colwidth', None)\r\n    return data\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_nodes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get EKS Nodes</h1>\r\n\r\n## Description\r\nThis Lego Gets EKS Nodes.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_get_nodes(handle, clusterName: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        region: AWS Region of the EKS cluster. \r\n\r\n## Lego Input\r\nThis Lego takes three inputs handle, clusterName and region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_get_nodes/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_get_nodes/aws_eks_get_nodes.json",
    "content": "{\r\n\"action_title\": \" Get EKS Nodes\",\r\n\"action_description\": \" Get EKS Nodes\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_get_nodes\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\"]\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_nodes/aws_eks_get_nodes.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n# @author: Yugal Pachpande, @email: yugal.pachpande@unskript.com\n##\n\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nimport datetime\nfrom kubernetes.client.rest import ApiException\nfrom typing import List\nfrom unskript.legos.utils import print_output_in_tabular_format\n\n\nclass InputSchema(BaseModel):\n    clusterName: str = Field(\n        title='Cluster Name',\n        description='Name of cluster')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the cluster')\n\n\ndef aws_eks_get_nodes_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    print(print_output_in_tabular_format(output))\n\n\ndef aws_eks_get_nodes(handle, clusterName: str, region: str) -> List:\n    \"\"\"aws_eks_get_nodes returns the list of all eks nodes.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n        \n        :type clusterName: string\n        :param clusterName: Name of the cluster.\n\n        :type region: string\n        :param region: AWS Region of the cluster.\n\n        :rtype: List with details of nodes.\n    \"\"\"\n\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\n    coreApiClient = client.CoreV1Api(api_client=k8shandle)\n\n    try:\n        resp = coreApiClient.list_node(pretty=True)\n\n    except ApiException as e:\n        resp = 'An Exception occured while executing the command' + e.reason\n        raise e\n\n    output = []\n    for node in resp.items:\n        labels = [f\"{label}={value}\"\n                  for label, value in node.metadata.labels.items()]\n        nodeStatus = node.status.conditions\n        type = \"\"\n        for i in nodeStatus:\n            type = i.type\n\n        output.append(\n            {\"name\": node.metadata.name, \"status\": type,\n             \"age\": f\"{(datetime.datetime.now() - 
node.metadata.creation_timestamp.replace(tzinfo=None)).days}d\",\n             \"version\": node.status.node_info.kubelet_version, \"labels\": \",\".join(labels)})\n    return output\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_not_running_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>List of EKS pods not in RUNNING State </h1>\r\n\r\n## Description\r\nGet list of all pods in a given EKS cluster that are not running.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_get_not_running_pods(handle: Session, clusterName: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        region: AWS Region of the EKS cluster. \r\n\r\n## Lego Input\r\nThis Lego takes three inputs handle, clusterName and region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_get_not_running_pods/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_get_not_running_pods/aws_eks_get_not_running_pods.json",
    "content": "{\r\n\"action_title\": \" List of EKS pods not in RUNNING State\",\r\n\"action_description\": \" Get list of all pods in a given EKS cluster that are not running.\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_get_not_running_pods\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\" ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_not_running_pods/aws_eks_get_not_running_pods.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nimport pandas as pd\r\nfrom pydantic import BaseModel, Field\r\nfrom kubernetes import client\r\nfrom kubernetes.client.rest import ApiException\r\nfrom typing import List\r\n\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    clusterName: str = Field(\r\n        title='Cluster Name.',\r\n        description='Name of EKS cluster.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the EKS cluster.')\r\n\r\n\r\ndef aws_eks_get_not_running_pods_printer(output):\r\n    if output is None:\r\n        return\r\n    print(\"\\n\")\r\n    pprint.pprint(pd.DataFrame(output))\r\n\r\n\r\ndef aws_eks_get_not_running_pods(handle: Session, clusterName: str, region: str) -> List:\r\n    \"\"\"aws_eks_get_not_running_pods returns list.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type clusterName: string\r\n        :param clusterName: Cluster name.\r\n\r\n        :type region: string\r\n        :param region: AWS Region of the EKS cluster.\r\n\r\n        :rtype: List of pods not in running state .\r\n    \"\"\"\r\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\r\n    coreApiClient = client.CoreV1Api(api_client=k8shandle)\r\n    try:\r\n        resp = coreApiClient.list_pod_for_all_namespaces(pretty=True)\r\n\r\n    except ApiException as e:\r\n        pprint.pprint(str(e))\r\n        resp = 'An Exception occured while executing the command' + e.reason\r\n\r\n    res = []\r\n    for container in resp.items:\r\n        if container.status.phase not in [\"Running\"]:\r\n            res.append({\"pod_name\": container.metadata.name, \"status\": container.status.phase,\r\n                        \"namespace\": container.metadata.namespace,\r\n                        
\"node_name\": container.spec.node_name})\r\n    pd.set_option('display.max_rows', None)\r\n    pd.set_option('display.max_columns', None)\r\n    pd.set_option('display.width', None)\r\n    pd.set_option('display.max_colwidth', -1)\r\n    return res\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_pod_cpu_memory/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get pod CPU and Memory usage from given namespace</h1>\r\n\r\n## Description\r\nThis Lego Get all pod CPU and Memory usage from given namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_get_pod_cpu_memory(handle, clusterName: str, namespace: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        region: AWS Region of the EKS cluster. \r\n        namespace: EKS Cluster Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs handle, clusterName, region and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_get_pod_cpu_memory/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_get_pod_cpu_memory/aws_eks_get_pod_cpu_memory.json",
    "content": "{\r\n\"action_title\": \"Get pod CPU and Memory usage from given namespace\",\r\n\"action_description\": \"Get all pod CPU and Memory usage from given namespace\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_get_pod_cpu_memory\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\"]\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_pod_cpu_memory/aws_eks_get_pod_cpu_memory.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nimport pandas as pd\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\nfrom typing import List\n\n\nclass InputSchema(BaseModel):\n    clusterName: str = Field(\n        title='Cluster Name',\n        description='Name of cluster')\n    namespace: str = Field(\n        title='Cluster namespace',\n        description='Cluster Namespace')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the cluster')\n\n\ndef aws_eks_get_pod_cpu_memory_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(pd.DataFrame(output))\n\n\ndef aws_eks_get_pod_cpu_memory(handle, clusterName: str, namespace: str, region: str) -> List:\n    \"\"\"aws_eks_get_pod_cpu_memory returns list.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type clusterName: string\n        :param clusterName: Cluster name.\n\n        :type namespace: string\n        :param namespace: Cluster Namespace.\n\n        :type region: string\n        :param region: AWS Region of the EKS cluster.\n\n        :rtype: List of pods with cpu and memory usage details.\n    \"\"\"\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\n    CustomObjectsClient = client.CustomObjectsApi(api_client=k8shandle)\n\n    try:\n        data = []\n        resp = CustomObjectsClient.list_namespaced_custom_object(group=\"metrics.k8s.io\",\n                                                                 version=\"v1beta1\",\n                                                                 namespace=namespace,\n                                                                 plural=\"pods\")\n\n        for pod in resp.get('items', []):\n            for container in pod.get('containers', []):\n                data.append({\n      
              \"pod_name\": pod['metadata']['name'], \"container_name\": container.get('name'),\n                    \"cpu\": container['usage'][\"cpu\"],\n                    \"memory\": f\"{round(int(container['usage']['memory'].split('Ki')[0]) / 1024, 2)} Mi\"})\n\n    except ApiException as e:\n        pprint.pprint(str(e))\n        data = [\n            {'error': 'An Exception occured while executing the command' + e.reason}]\n        raise e\n    pd.set_option('display.max_rows', None)\n    pd.set_option('display.max_columns', None)\n    pd.set_option('display.width', None)\n    pd.set_option('display.max_colwidth', None)\n    return data\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_pod_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>EKS Get pod status </h1>\r\n\r\n## Description\r\nThis Lego Get a Status of given POD in a given Namespace and EKS cluster name.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_get_pod_status(handle: Session, clusterName: str, pod_name: str, region: str, namespace: str = None)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        pod_name: Name of the pod.\r\n        region: AWS Region of the EKS cluster. \r\n        namespace: EKS Cluster Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes five inputs handle, clusterName, pod_name, region and namespace. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_get_pod_status/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_get_pod_status/aws_eks_get_pod_status.json",
    "content": "{\r\n\"action_title\": \" EKS Get pod status\",\r\n\"action_description\": \" Get a Status of given POD in a given Namespace and EKS cluster name\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_get_pod_status\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\" ]\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_pod_status/aws_eks_get_pod_status.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n# @author: Yugal Pachpande, @email: yugal.pachpande@unskript.com\r\n##\r\n\r\nfrom typing import Optional, Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom kubernetes import client\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\nimport pprint\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    clusterName: str = Field(\r\n        title='Cluster Name',\r\n        description='Name of cluster')\r\n    namespace: Optional[str] = Field(\r\n        title='Cluster Namespace',\r\n        description='Cluster Namespace')\r\n    pod_name: str = Field(\r\n        title='Pod Name',\r\n        description='Name of the pod.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the cluster')\r\n\r\n\r\ndef aws_eks_get_pod_status_printer(output):\r\n    if output is None:\r\n        return\r\n    print(\"\\n\")\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_eks_get_pod_status(handle: Session, clusterName: str, pod_name: str, region: str, namespace: str = None) -> Dict:\r\n    \"\"\"aws_eks_get_pod_status returns Dict.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type clusterName: string\r\n        :param clusterName: Cluster name.\r\n\r\n        :type pod_name: string\r\n        :param pod_name: Name of the pod.\r\n\r\n        :type namespace: string\r\n        :param namespace: Cluster Namespace.\r\n\r\n        :type region: string\r\n        :param region: AWS Region of the EKS cluster.\r\n\r\n        :rtype: Dict of pods details with status.\r\n    \"\"\"\r\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\r\n    coreApiClient = client.CoreV1Api(api_client=k8shandle)\r\n    status = coreApiClient.read_namespaced_pod_status(\r\n        namespace=namespace, name=pod_name)\r\n\r\n    res = {}\r\n\r\n    ready_containers_number = 0\r\n    
containers_number = 0\r\n    restarts_number = 0\r\n\r\n    for container in status.status.container_statuses:\r\n        if container.ready:\r\n            ready_containers_number += 1\r\n        if container.restart_count:\r\n            restarts_number = restarts_number + container.restart_count\r\n        containers_number += 1\r\n    res[\"NAME\"] = pod_name\r\n    res['READY'] = f\"Ready {ready_containers_number}/{containers_number}\"\r\n    res['STATUS'] = status.status.phase\r\n    res['RESTARTS'] = restarts_number\r\n    res['START_TIME'] = status.status.start_time.strftime(\"%m/%d/%Y, %H:%M:%S\")\r\n    return res\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_running_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>EKS Get Running Pods </h1>\r\n\r\n## Description\r\nThis Lego Gets a list of running pods from given namespace and EKS cluster name.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_get_running_pods(handle, clusterName: str, namespace: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        region: AWS Region of the EKS cluster. \r\n        namespace: EKS Cluster Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs handle, clusterName, region and namespace. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_get_running_pods/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_get_running_pods/aws_eks_get_running_pods.json",
    "content": "{\r\n\"action_title\": \" EKS Get Running Pods\",\r\n\"action_description\": \" Get a list of running pods from given namespace and EKS cluster name\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_get_running_pods\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\"]\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_get_running_pods/aws_eks_get_running_pods.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n# @author: Yugal Pachpande, @email: yugal.pachpande@unskript.com\n##\n\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom typing import List\n\n\nclass InputSchema(BaseModel):\n    clusterName: str = Field(\n        title='Cluster Name',\n        description='Name of cluster')\n    namespace: str = Field(\n        title='Cluster Namespace',\n        description='Cluster Namespace')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the cluster')\n\n\ndef aws_eks_get_running_pods_printer(output):\n    if output is None:\n        return\n    print(\"\\n\") \n    pprint.pprint(output)\n\n\ndef aws_eks_get_running_pods(handle, clusterName: str, namespace: str, region: str) -> List:\n    \"\"\"aws_eks_get_running_pods returns list.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type clusterName: string\n        :param clusterName: Cluster name.\n\n        :type namespace: string\n        :param namespace: Cluster Namespace.\n\n        :type region: string\n        :param region: AWS Region of the EKS cluster.\n\n        :rtype: List of pods with status ip and start time.\n    \"\"\"\n    k8shandle = handle.unskript_get_eks_handle(clusterName, region)\n    coreApiClient = client.CoreV1Api(api_client=k8shandle)\n    ret = coreApiClient.list_namespaced_pod(namespace=namespace)\n    all_healthy_pods = []\n    for i in ret.items:\n        phase = i.status.phase\n        if phase in (\"Running\", \"Succeeded\"):\n            all_healthy_pods.append(i.metadata.name)\n    return all_healthy_pods\n"
  },
  {
    "path": "AWS/legos/aws_eks_run_kubectl_cmd/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Run Kubectl commands on EKS Cluster </h1>\r\n\r\n## Description\r\nThis lego runs a kubectl command on an AWS EKS Cluster.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_eks_run_kubectl_cmd(handle, clusterName: str, command: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        clusterName: Name of the EKS cluster.\r\n        command: Kubectl command to run on EKS Cluster.\r\n        region: AWS Region of the EKS cluster. \r\n\r\n## Lego Input\r\nThis Lego take four input handle, command, clusterName and region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_eks_run_kubectl_cmd/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_eks_run_kubectl_cmd/aws_eks_run_kubectl_cmd.json",
    "content": "{\r\n\"action_title\": \" Run Kubectl commands on EKS Cluster\",\r\n\"action_description\": \"This action runs a kubectl command on an AWS EKS Cluster\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_eks_run_kubectl_cmd\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EKS\" ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_eks_run_kubectl_cmd/aws_eks_run_kubectl_cmd.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    clusterName: str = Field(\n        title='EKS Cluster Name',\n        description='Name EKS Cluster')\n    command: str = Field(\n        title='Kubectl Command',\n        description='kubectl commands For Eg. kubectl get pods --all-namespaces')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the cluster')\n\n\ndef aws_eks_run_kubectl_cmd_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(output)\n\n\ndef aws_eks_run_kubectl_cmd(handle, clusterName: str, command: str, region: str) -> str:\n    \"\"\"aws_eks_run_kubectl_cmd returns string.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type clusterName: string\n        :param clusterName: Cluster name.\n\n        :type command: string\n        :param command: Kubectl command to run on EKS Cluster .\n\n        :type region: string\n        :param region: AWS Region of the EKS cluster.\n\n        :rtype: string of output of command result.\n    \"\"\"\n    result = handle.unskript_get_eks_handle(clusterName, region).run_native_cmd(command)\n    if result.stderr:\n        return \"The kubectl command didn't work!\"\n    return result.stdout\n"
  },
  {
    "path": "AWS/legos/aws_emr_get_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS EMR Instances </h1>\r\n\r\n## Description\r\nThis Lego get a list of EC2 Instances for an EMR cluster. Filtered by node type (MASTER|CORE|TASK)\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_emr_get_instances(handle: object, cluster_id: str, instance_group_type: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        cluster_id: Cluster ID for the EMR cluster.\r\n        instance_group_type: Group type to filter on.\r\n        region: AWS Region of the cluster\r\n\r\n## Lego Input\r\n\r\nThis Lego take four inputs handle, cluster_id, instance_group_type and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_emr_get_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_emr_get_instances/aws_emr_get_instances.json",
    "content": "{\r\n    \"action_title\": \"Get AWS EMR Instances\",\r\n    \"action_description\": \"Get a list of EC2 Instances for an EMR cluster. Filtered by node type (MASTER|CORE|TASK)\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_emr_get_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EMR\" ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_emr_get_instances/aws_emr_get_instances.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import List\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    cluster_id: str = Field(\n        title='Cluster Id',\n        description='Cluster ID for the EMR cluster. Eg j-abcd')\n    instance_group_type: str = Field(\n        title='Instance Group Type',\n        description='Group type to filter on. Possible values are MASTER|CORE|TASK'\n    )\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the cluster')\n\n\ndef aws_emr_get_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\n\ndef aws_emr_get_instances(\n        handle,\n        cluster_id: str,\n        instance_group_type: str,\n        region: str) -> List:\n\n    \"\"\"aws_get_unhealthy_instances returns array of emr instances\n\n     :type handle: object\n     :param handle: Object returned from task.validate(...).\n\n     :type cluster_id: string\n     :param cluster_id: Cluster ID for the EMR cluster.\n\n     :type instance_group_type: string\n     :param instance_group_type: Group type to filter on.\n\n     :type region: string\n     :param region: AWS Region of the cluster\n\n     :rtype: Returns array of emr instances\n    \"\"\"\n    client = handle.client('emr', region_name=region)\n    response = client.list_instances(\n        ClusterId=cluster_id,\n        InstanceGroupTypes=[instance_group_type],\n    )\n    if response.get('Instances') is None:\n        return []\n    return([x.get('Ec2InstanceId') for x in response.get('Instances')])\n"
  },
  {
    "path": "AWS/legos/aws_execute_cli_command/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Run Command via AWS CLI </h1>\r\n\r\n## Description\r\nThis Lego can be used to run any aws cli command just like `aws sts get-caller-identity` etc...\r\n\r\n## Lego Details\r\n\r\n    aws_execute_cli_command(handle, aws_command: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        aws_command: AWS command.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle, aws_command. The aws_command is the actual command\r\nstarting with the `aws` keyword.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "AWS/legos/aws_execute_cli_command/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_execute_cli_command/aws_execute_cli_command.json",
    "content": "{\r\n    \"action_title\": \"Run Command via AWS CLI\",\r\n    \"action_description\": \"Execute command using AWS CLI\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_execute_cli_command\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_CLI\" ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_execute_cli_command/aws_execute_cli_command.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n\nfrom pydantic import BaseModel, Field\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    aws_command: str = Field(\n        title='AWS Command',\n        description='AWS Command '\n                    'eg \"aws ec2 describe-instances\"'\n    )\n\n\ndef aws_execute_cli_command_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_execute_cli_command(handle, aws_command: str) -> str:\n\n    result = handle.aws_cli_command(aws_command)\n    if result is None or result.returncode != 0:\n        print(\n            f\"Error while executing command ({aws_command}): {result}\")\n        return str()\n\n    return result.stdout\n"
  },
  {
    "path": "AWS/legos/aws_execute_command_ssm/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Run Command via SSM</h1>\r\n\r\n## Description\r\nThis Lego execute command on EC2 instance(s) using SSM.\r\n\r\n## Lego Details\r\n\r\n    aws_execute_command_ssm(handle, instance_ids: list, parameters: list, region: str,\r\n                            document_name: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        instance_ids: List of instance IDs. For eg. [\"i-foo\", \"i-bar\"].\r\n        parameters: List of commands to execute on instance. For eg. [\"ifconfig\", \"pwd\"].\r\n        document_name: Document Name.\r\n        region: AWS Region of the AWS Instance.\r\n\r\n## Lego Input\r\nThis Lego take five inputs handle, instance_ids, parameters, document_name and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_execute_command_ssm/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_execute_command_ssm/aws_execute_command_ssm.json",
    "content": "{\r\n    \"action_title\": \" Run Command via SSM\",\r\n    \"action_description\": \" Execute command on EC2 instance(s) using SSM\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_execute_command_ssm\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_SSM\" ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_execute_command_ssm/aws_execute_command_ssm.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nfrom pydantic import BaseModel, Field\nfrom typing import List, Dict\nimport pprint\nimport time\n\n\nclass InputSchema(BaseModel):\n    instance_ids: List[str] = Field(\n        title='Instance IDs',\n        description='List of instance IDs. For eg. [\"i-foo\", \"i-bar\"]')\n    document_name: str = Field(\n        'AWS-RunPowerShellScript',\n        title='Document Name',\n        description='Name of the SSM document to run.')\n    parameters: List[str] = Field(\n        title='SSM Document Name',\n        description='List of commands to execute on instance. For eg. [\"ifconfig\", \"pwd\"]')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the AWS Instance.')\n\n\ndef aws_execute_command_ssm_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(output)\n\n\ndef aws_execute_command_ssm(handle, instance_ids: list, parameters: list, region: str,\n                            document_name: str = \"AWS-RunPowerShellScript\") -> Dict:\n    \"\"\"aws_execute_command_via_ssm EC2 Run Command via SSH.\n     \n     :type handle: object\n     :param handle: Object returned from task.validate(...).\n\n     :type instance_ids: list\n     :param instance_ids: List of instance IDs. For eg. [\"i-foo\", \"i-bar\"]\n\n     :type parameters: list\n     :param parameters: List of commands to execute on instance. For eg. 
[\"ifconfig\", \"pwd\"]\n\n     :type document_name: string\n     :param document_name: Document Name.\n\n     :type region: string\n     :param region: AWS Region of the AWS Instance.\n\n     :rtype: Dict of command output.\n    \"\"\"\n\n    ssm_client = handle.client('ssm', region_name=region)\n    response = ssm_client.send_command(\n        InstanceIds=instance_ids,\n        DocumentName=document_name,\n        Parameters={\n            'commands': parameters\n        })\n    command_id = response['Command']['CommandId']\n    output = {}\n    time.sleep(2)\n    for instance_id in instance_ids:\n        res = ssm_client.get_command_invocation(\n            CommandId=command_id,\n            InstanceId=instance_id,\n        )\n        output[instance_id] = res\n    return output\n"
  },
  {
    "path": "AWS/legos/aws_filter_all_manual_database_snapshots/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Filter All Manual Databse Snapshots </h1>\r\n\r\n## Description\r\nThis Lego filter AWS manual database snapshots.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_manual_database_snapshots(handle: object, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Region for database.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_all_manual_database_snapshots/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_all_manual_database_snapshots/aws_filter_all_manual_database_snapshots.json",
    "content": "{\r\n    \"action_title\": \"AWS Filter All Manual Database Snapshots\",\r\n    \"action_description\": \"Use This Action to AWS Filter All Manual Database Snapshots\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_all_manual_database_snapshots\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\":false,\r\n    \"action_verbs\": [\"filter\"],\r\n    \"action_nouns\": [\"aws\",\"database\",\"snapshots\",\"manual\"],\r\n    \"action_categories\":[],\r\n    \"action_next_hop\":[],\r\n    \"action_next_hop_parameter_mapping\":{},\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_DB\" ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_all_manual_database_snapshots/aws_filter_all_manual_database_snapshots.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom typing import List, Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nimport pprint\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of database.')\r\n\r\ndef aws_filter_all_manual_database_snapshots_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_filter_all_manual_database_snapshots(handle, region: str) -> List:\r\n    \"\"\"aws_get_manual_snapshots List all the manual database snapshots.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: Region for database.\r\n\r\n        :rtype: List of manual database snapshots.\r\n    \"\"\"\r\n\r\n    ec2Client = handle.client('rds', region_name=region)\r\n    result = []\r\n    try:\r\n        response = aws_get_paginator(ec2Client, \"describe_db_snapshots\",\"DBSnapshots\",\r\n                                     SnapshotType='manual')\r\n        for snapshot in response:\r\n            result.append(snapshot['DBSnapshotIdentifier'])\r\n    except Exception as error:\r\n        pass\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_ebs_unattached_volumes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter AWS Unattached EBS Volume </h1>\r\n\r\n## Description\r\nThis Lego filter AWS EBS volumes.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_ebs_unattached_volumes(handle: object, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_ebs_unattached_volumes/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_ebs_unattached_volumes/aws_filter_ebs_unattached_volumes.json",
    "content": "{\r\n    \"action_title\": \"Filter AWS Unattached EBS Volume\",\r\n    \"action_description\": \"Filter AWS Unattached EBS Volume\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_ebs_unattached_volumes\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_next_hop\": [\"da23633be34037f023e1c1f56220ec75eb2729d7d8eb2bca9badec15ed0fd2ca\"],\r\n    \"action_next_hop_parameter_mapping\": {\"da23633be34037f023e1c1f56220ec75eb2729d7d8eb2bca9badec15ed0fd2ca\": {\"name\": \"Delete Unattached AWS EBS Volumes\", \"region\":\".[0].region\",\"ebs_volume\":\"map(.volume_id)\"}},\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_AWS_EBC\" ]\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_filter_ebs_unattached_volumes/aws_filter_ebs_unattached_volumes.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import Optional, Tuple\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nimport pprint\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_filter_ebs_unattached_volumes_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_filter_ebs_unattached_volumes(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_filter_ebs_unattached_volumes Returns an array of ebs volumes.\r\n\r\n        :type region: string\r\n        :param region: Used to filter the volume for specific region.\r\n\r\n        :rtype: Tuple with status result and list of EBS Unattached Volume.\r\n    \"\"\"\r\n    result=[]\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            # Filtering the volume by region\r\n            ec2Client = handle.resource('ec2', region_name=reg)\r\n            volumes = ec2Client.volumes.all()\r\n\r\n            # collecting the volumes which has zero attachments\r\n            for volume in volumes:\r\n                volume_dict = {}\r\n                if len(volume.attachments) == 0:\r\n                    volume_dict[\"region\"] = reg\r\n                    volume_dict[\"volume_id\"] = volume.id\r\n                    result.append(volume_dict)\r\n        except Exception as e:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    else:\r\n        return (True, None)"
  },
  {
    "path": "AWS/legos/aws_filter_ebs_volumes_with_low_iops/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter AWS EBS Volume with Low IOPS </h1>\r\n\r\n## Description\r\nThis Lego used to measure the amount of input/output operations that an EBS volume can perform per second and gives a list of volume with low IOPS.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_ebs_volumes_with_low_iops(handle, region: str = \"\", iops_threshold: int = 100)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n        iops_threshold: Optional, IOPS's Threshold e.g 100\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, iops_threshold and region.\r\niops_threshold: If not provided the value is set to 100.\r\nregion: If not provided it will get all regions from AWS.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_ebs_volumes_with_low_iops/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_ebs_volumes_with_low_iops/aws_filter_ebs_volumes_with_low_iops.json",
    "content": "{\r\n    \"action_title\": \"Filter AWS EBS Volume with Low IOPS\",\r\n    \"action_description\": \"IOPS (Input/Output Operations Per Second) is a metric used to measure the amount of input/output operations that an EBS volume can perform per second.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_ebs_volumes_with_low_iops\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\"],\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {}\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_filter_ebs_volumes_with_low_iops/aws_filter_ebs_volumes_with_low_iops.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import Optional, Tuple\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nimport pprint\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region.')\r\n    iops_threshold: Optional[int] = Field(\r\n        default=100,\r\n        title=\"IOPS's Threshold\",\r\n        description=\"IOPS's Threshold is a metric used to measure the amount of input/output operations that an EBS volume can perform per second.\")\r\n\r\n\r\ndef aws_filter_ebs_volumes_with_low_iops_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_filter_ebs_volumes_with_low_iops(handle, region: str = \"\", iops_threshold: int = 100) -> Tuple:\r\n    \"\"\"aws_filter_ebs_unattached_volumes Returns an array of ebs volumes.\r\n\r\n        :type region: string\r\n        :param region: Used to filter the volume for specific region.\r\n\r\n        :type iops_threshold: int\r\n        :param iops_threshold: IOPS's Threshold is a metric used to measure the amount of input/output operations that an EBS volume can perform per second.\r\n\r\n        :rtype: Tuple with status result and list of low IOPS EBS Volumes.\r\n    \"\"\"\r\n    result=[]\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            # Filtering the volume by region\r\n            ec2Client = handle.resource('ec2', region_name=reg)\r\n            volumes = ec2Client.volumes.all()\r\n\r\n            # collecting the volumes which has low IOPS's\r\n            for volume in volumes:\r\n                volume_dict = {}\r\n                if volume.iops < iops_threshold:\r\n 
                   volume_dict[\"region\"] = reg\r\n                    volume_dict[\"volume_id\"] = volume.id\r\n                    volume_dict[\"volume_iops\"] = volume.iops\r\n                    result.append(volume_dict)\r\n        except Exception as e:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    else:\r\n        return (True, None)"
  },
  {
    "path": "AWS/legos/aws_filter_ec2_by_tags/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Filter AWS EC2 Instances </h1>\n\n## Description\nThis Lego filter the AWS EC2 Instance.\n\n\n## Lego Details\n\n    aws_filter_ec2_by_tags(handle: object, tag_key: str, tag_value: str, region: str)\n\n        handle: Object of type unSkript AWS Connector.\n        tag_key: Key for the EC2 instance tag.\n        tag_value: value for the EC2 instance tag.\n        region: EC2 instance region.\n\n## Lego Input\n\nThis Lego take four inputs handle, tag_key, tag_value and region. \n\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_ec2_by_tags/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_ec2_by_tags/aws_filter_ec2_by_tags.json",
    "content": "{\r\n    \"action_title\": \"Filter AWS EC2 Instance\",\r\n    \"action_description\": \"Filter AWS EC2 Instance\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_ec2_by_tags\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\" ]\r\n\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_filter_ec2_by_tags/aws_filter_ec2_by_tags.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import List\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\nfrom beartype import beartype\n\nclass InputSchema(BaseModel):\n    tag_key: str = Field(\n        title='Tag Key',\n        description='The key of the tag.')\n    tag_value: str = Field(\n        title='Tag Value',\n        description='The value of the key.')\n    region: str = Field(\n        title='Region',\n        description='AWS Region.')\n\n@beartype\ndef aws_filter_ec2_by_tags_printer(output):\n    if output is None:\n        return\n    pprint.pprint({\"Instances\": output})\n\n\n@beartype\ndef aws_filter_ec2_by_tags(handle, tag_key: str, tag_value: str, region: str) -> List:\n    \"\"\"aws_filter_ec2_by_tags Returns an array of instances matching tags.\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type tag_key: string\n        :param tag_key: Key for the EC2 instance tag.\n\n        :type tag_value: string\n        :param tag_value: value for the EC2 instance tag.\n\n        :type region: string\n        :param region: EC2 instance region.\n\n        :rtype: Array of instances matching tags.\n    \"\"\"\n\n    ec2Client = handle.client('ec2', region_name=region)\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\",\n                            Filters=[{'Name': 'tag:' + tag_key, 'Values': [tag_value]}])\n    \n    result = []\n    for reservation in res:\n        for instance in reservation['Instances']:\n            result.append(instance['InstanceId'])\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_filter_ec2_by_vpc/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter AWS EC2 instance by VPC Ids </h1>\r\n\r\n## Description\r\nThis Lego filter AWS EC2 Instance by VPC Ids.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_ec2_by_vpc(handle: object, vpc_id: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        vpc_id: VPC ID of the instances.\r\n        region: AWS Region.\r\n\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, vpc_id and region. \r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_ec2_by_vpc/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_ec2_by_vpc/aws_filter_ec2_by_vpc.json",
    "content": "{\r\n    \"action_title\": \"Filter AWS EC2 instance by VPC Ids\",\r\n    \"action_description\": \"Use this Action to Filter AWS EC2 Instance by VPC Ids\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_ec2_by_vpc\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_AWS_VPC\" ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_filter_ec2_by_vpc/aws_filter_ec2_by_vpc.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import List\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    vpc_id: str = Field(\n        title='VPC Id',\n        description='VPC ID of the instances.')\n    region: str = Field(\n        title='Region',\n        description='AWS Region.')\n\n\ndef aws_filter_ec2_by_vpc_printer(output):\n    if output is None:\n        return\n    pprint.pprint({\"Instances\": output})\n\n\n\ndef aws_filter_ec2_by_vpc(handle, vpc_id: str, region: str) -> List:\n    \"\"\"aws_filter_ec2_by_vpc_id Returns a array of instances matching the vpc id.\n\n        :type handle: object\n        :param handle: Object containing global params for the notebook.\n\n        :type vpc_id: string\n        :param vpc_id: VPC ID of the instances.\n\n        :type region: string\n        :param region: AWS Region.\n\n        :rtype: Array of the instances maching the vpc id.\n    \"\"\"\n    # Input param validation.\n\n    ec2Client = handle.client('ec2', region_name=region)\n\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\",\n                            Filters=[{'Name': 'vpc-id', 'Values': [vpc_id]}])\n\n    result = []\n    for reservation in res:\n        for instance in reservation['Instances']:\n            result.append(instance['InstanceId'])\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_filter_ec2_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter All AWS EC2 Instances </h1>\r\n\r\n## Description\r\nThis Lego filter the AWS EC2 Instance and gives a list of Instances.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_ec2_instances(handle: object, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Used to filter the volume for specific region.\r\n\r\n## Lego Input\r\nThis Lego take one input region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "AWS/legos/aws_filter_ec2_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_ec2_instances/aws_filter_ec2_instances.json",
    "content": "{\r\n    \"action_title\": \"Filter All AWS EC2 Instance\",\r\n    \"action_description\": \"Filter All AWS EC2 Instance\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_ec2_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\" ]\r\n\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_filter_ec2_instances/aws_filter_ec2_instances.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import List\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nimport pprint\r\n\r\n\r\nfrom beartype import beartype\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n@beartype\r\ndef aws_filter_ec2_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint({\"Instances\": output})\r\n\r\n\r\n@beartype\r\ndef aws_filter_ec2_instances(handle, region: str) -> List:\r\n    \"\"\"aws_filter_ec2_by_tags Returns an array of instances.\r\n\r\n        :type region: string\r\n        :param region: Used to filter the volume for specific region.\r\n\r\n        :rtype: Array of instances.\r\n    \"\"\"\r\n    # Input param validation.\r\n\r\n    ec2Client = handle.client('ec2', region_name=region)\r\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\r\n\r\n    result = []\r\n    for reservation in res:\r\n        for instance in reservation['Instances']:\r\n            result.append(instance['InstanceId'])\r\n    return result"
  },
  {
    "path": "AWS/legos/aws_filter_ec2_without_lifetime_tag/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter AWS EC2 Instances Without Lifetime Tag </h1>\r\n\r\n## Description\r\nThis Lego filter the AWS EC2 Instances which don't have Lifetime Tag.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_ec2_without_lifetime_tag(handle: object, lifetime_tag: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        lifetime_tag: Tag to filter Instances.\r\n        region: Region to filter instances.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, lifetime_tag and region.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_ec2_without_lifetime_tag/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_ec2_without_lifetime_tag/aws_filter_ec2_without_lifetime_tag.json",
    "content": "{\r\n    \"action_title\": \"Filter AWS EC2 Instances Without Lifetime Tag\",\r\n    \"action_description\": \"Filter AWS EC2 Instances Without Lifetime Tag\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_ec2_without_lifetime_tag\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\" ]\r\n\r\n  }\r\n\r\n\r\n  "
  },
  {
    "path": "AWS/legos/aws_filter_ec2_without_lifetime_tag/aws_filter_ec2_without_lifetime_tag.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import List\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nimport pprint\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    lifetime_tag: str = Field(\r\n        title='Lifetime tag',\r\n        description='Tag which indicates the lifecycle of instance.')\r\n\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_filter_ec2_without_lifetime_tag_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint({\"Instances\": output})\r\n\r\n\r\ndef aws_filter_ec2_without_lifetime_tag(handle, lifetime_tag: str, region: str) -> List:\r\n    \"\"\"aws_filter_ec2_without_lifetime_tag Returns an List of instances which not have lifetime tag.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type lifetime_tag: string\r\n        :param lifetime_tag: Tag to filter Instances.\r\n\r\n        :type region: string\r\n        :param region: Used to filter the instance for specific region.\r\n\r\n        :rtype: Array of instances which not having lifetime tag.\r\n    \"\"\"\r\n\r\n    ec2Client = handle.client('ec2', region_name=region)\r\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\r\n\r\n    result = []\r\n    for reservation in res:\r\n        for instance in reservation['Instances']:\r\n            try:\r\n                tagged_instance = instance['Tags']\r\n                tag_keys = [tags['Key'] for tags in tagged_instance]\r\n                if lifetime_tag not in tag_keys:\r\n                    result.append(instance['InstanceId'])\r\n\r\n            except Exception as e:\r\n                result.append(instance['InstanceId'])\r\n\r\n    return result\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter AWS EC2 Instances Without Termination and Lifetime Tag</h1>\r\n\r\n## Description\r\nThis Lego Filter Instances without Termination and Lifetime Tag and check of they are valid\r\n\r\n## Lego Details\r\n\r\n    aws_filter_instances_without_termination_and_lifetime_tag(handle: object,region: str, termination_tag_name:str,lifetime_tag_name:str )\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: \"us-west-2\"\r\n        termination_tag_name: Optional, Name of the Termination Date Tag given to an EC2 instance. By default \"terminationDateTag\" is considered \r\n        lifetime_tag_name: Optional, Name of the Lifetime Date Tag given to an EC2 instance. By default \"lifetimeTag\" is considered \r\n\r\n## Lego Input\r\nThis Lego take 4 inputs handle, region, termination_tag_name, lifetime_tag_name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/aws_filter_instances_without_termination_and_lifetime_tag.json",
    "content": "{\n    \"action_title\": \"Filter AWS EC2 Instances Without Termination and Lifetime Tag\",\n    \"action_description\": \"Filter AWS EC2 Instances Without Termination and Lifetime Tag and Check of they are valid\",\n    \"action_type\": \"LEGO_TYPE_AWS\",\n    \"action_entry_function\": \"aws_filter_instances_without_termination_and_lifetime_tag\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"filter\"],\n    \"action_nouns\": [\"aws\",\"instances\",\"without\",\"termination\",\"lifetime\",\"tag\"],\n    \"action_is_check\":true,\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_COST_OPT\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\" ],\n    \"action_next_hop\":[\"29ce1935204c64d816fd1f01f4fe41e8d8bd47725b899535c6acee703a7bcf0d\"],\n    \"action_next_hop_parameter_mapping\":{\"29ce1935204c64d816fd1f01f4fe41e8d8bd47725b899535c6acee703a7bcf0d\": {\"name\": \"Terminate EC2 Instances Without Valid Lifetime Tags\", \"region\": \".[0].region\", \"instance_ids\":\".[0].instances\"}}\n  }"
  },
  {
    "path": "AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/aws_filter_instances_without_termination_and_lifetime_tag.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import List, Tuple, Optional\nfrom unskript.connectors.aws import aws_get_paginator\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nimport pprint\nfrom datetime import datetime, date\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field(\n        default=\"\",\n        title='Region',\n        description='Name of the AWS Region'\n    )\n    termination_tag_name: Optional[str] = Field(\n        default=\"terminationDateTag\",\n        title='Termination Date Tag Name',\n        description='Name of the Termination Date Tag given to an EC2 instance. By default \"terminationDateTag\" is considered '\n    )\n    lifetime_tag_name: Optional[str] = Field(\n        default=\"lifetimeTag\",\n        title='Lifetime Tag Name',\n        description='Name of the Lifetime Date Tag given to an EC2 instance. 
By default \"lifetimeTag\" is considered '\n    )\n\n\ndef aws_filter_instances_without_termination_and_lifetime_tag_printer(output):\n    if output is None:\n        return\n    \n    pprint.pprint(output)\n\n\ndef fetch_instances_from_valid_region(reservations, aws_region, termination_tag_name, lifetime_tag_name):\n    result = []\n    right_now = date.today()\n    \n    for reservation in reservations:\n        for instance in reservation.get('Instances', []):\n            instance_id = instance.get('InstanceId')\n            tagged_instance = instance.get('Tags', [])\n            \n            tag_keys = {tag['Key'] for tag in tagged_instance}\n            tag_values = {tag['Key']: tag['Value'] for tag in tagged_instance}\n\n            if not (termination_tag_name in tag_keys and lifetime_tag_name in tag_keys):\n                if instance_id:\n                    result.append(instance_id)\n                continue  # Skip to next instance if tags not found\n\n            try:\n                termination_date = datetime.strptime(tag_values.get(termination_tag_name, ''), '%d-%m-%Y').date()\n                if termination_date < right_now:\n                    result.append(instance_id)\n\n                lifetime_value = tag_values.get(lifetime_tag_name)\n                launch_date = datetime.strptime(instance.get('LaunchTime').strftime(\"%d-%m-%Y\"),'%d-%m-%Y').date()\n                \n                if lifetime_value != 'INDEFINITE' and launch_date < right_now:\n                    result.append(instance_id)\n\n            except Exception as e:\n                if instance_id:\n                    result.append(instance_id)\n                print(f\"Error processing instance {instance_id}: {e}\")\n\n    return {'region': aws_region, 'instances': result} if result else {}\n\ndef aws_filter_instances_without_termination_and_lifetime_tag(handle, region: str=None, termination_tag_name:str='terminationDateTag', lifetime_tag_name:str='lifetimeTag') -> 
Tuple:\n    \"\"\"aws_filter_ec2_without_lifetime_tag Returns an List of instances which not have lifetime tag.\n\n        Assumed tag key format - terminationDateTag, lifetimeTag\n        Assumed Date format for both keys is -> dd-mm-yy\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: string\n        :param region: Optional, Name of AWS Region\n\n        :type termination_tag_name: string\n        :param termination_tag_name: Optional, Name of the Termination Date Tag given to an EC2 instance. By default \"terminationDateTag\" is considered \n\n        :type lifetime_tag_name: string\n        :param lifetime_tag_name: Optional, Name of the Lifetime Date Tag given to an EC2 instance. By default \"lifetimeTag\" is considered \n\n        :rtype: Tuple of status, instances which dont having terminationDateTag and lifetimeTag, and error\n    \"\"\"\n    final_list = []\n    all_regions = [region] if region else aws_list_all_regions(handle=handle)\n\n    for r in all_regions:\n        try:\n            ec2Client = handle.client('ec2', region_name=r)\n            all_reservations = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\n            instances_without_tags = fetch_instances_from_valid_region(all_reservations, r, termination_tag_name, lifetime_tag_name)\n            \n            if instances_without_tags:\n                final_list.append(instances_without_tags)\n        except Exception as e:\n            pass\n    \n    if final_list:\n        return (False, final_list)\n    else:\n        return (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_filter_large_ec2_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Filter Large EC2 Instances </h1>\r\n\r\n## Description\r\nThis Lego filter all instances whose instanceType contains Large or xLarge, and that DO NOT have the largetag key/value.\r\n\r\n## Lego Details\r\n\r\n    aws_filter_large_ec2_instances(handle, tag_key: str, tag_value: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        tag_key: The key for the EC2 instance tag.\r\n        tag_value: The value for the EC2 instance tag.\r\n        region: EC2 instance region.\r\n\r\n## Lego Input\r\nThis Lego take four inputs handle, tag_key, tag_value and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "AWS/legos/aws_filter_large_ec2_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_large_ec2_instances/aws_filter_large_ec2_instances.json",
    "content": "{\r\n    \"action_title\": \"AWS Filter Large EC2 Instances\",\r\n    \"action_description\": \"This Action to filter all instances whose instanceType contains Large or xLarge, and that DO NOT have the largetag key/value.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_large_ec2_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\" ]\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_large_ec2_instances/aws_filter_large_ec2_instances.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import List\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nimport pprint\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the ECS service.')\r\n    tag_key: str = Field(\r\n        title='Tag Key',\r\n        description='The key for the EC2 instance tag.')\r\n    tag_value: str = Field(\r\n        title='Tag Value',\r\n        description='The value for the EC2 instance tag.')\r\n    \r\n\r\ndef aws_filter_large_ec2_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint({\"Instances\": output})\r\n\r\n\r\ndef aws_filter_large_ec2_instances(handle, tag_key: str, tag_value: str, region: str) -> List:\r\n    \"\"\"aws_filter_large_ec2_instances Returns an array of instances with large instance type\r\n\r\n        :type handle: object\r\n        :param handle: Object returned by the task.validate(...) 
method.\r\n\r\n        :type tag_key: string\r\n        :param tag_key: The key for the EC2 instance tag.\r\n\r\n        :type tag_value: string\r\n        :param tag_value: The value for the EC2 instance tag.\r\n\r\n        :type region: string\r\n        :param region: EC2 instance region.\r\n\r\n        :rtype: Array of instances with large instance type.\r\n    \"\"\"\r\n    result = []\r\n    try:\r\n        ec2Client = handle.client('ec2', region_name=region)\r\n        res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\",\r\n                                Filters=[{'Name': 'instance-type', 'Values': ['*large']}])\r\n        for reservation in res:\r\n            for instance in reservation['Instances']:\r\n                if not any(tag['Key'] == tag_key and tag['Value'] == tag_value for tag in instance[\"Tags\"]):\r\n                    result.append(instance['InstanceId'])\r\n    except Exception as e:\r\n        result.append(e)\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_long_running_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Find Long Running EC2 Instances</h1>\r\n\r\n## Description\r\nThis Lego used to get a list a all instances that are older than the threshold.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_long_running_instances(handle, region: str, threshold: int = 10)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        threshold: (in days) The threshold to check the instances older than the threshold.\r\n        region: AWS Region.\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, threshold and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_long_running_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_long_running_instances/aws_filter_long_running_instances.json",
    "content": "{\r\n    \"action_title\": \"AWS Find Long Running EC2 Instances\",\r\n    \"action_description\": \"This action list a all instances that are older than the threshold\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_long_running_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\" ]\r\n\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_filter_long_running_instances/aws_filter_long_running_instances.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import List\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\nfrom datetime import datetime, timedelta\nimport pytz\nfrom beartype import beartype\n\n\nclass InputSchema(BaseModel):\n    threshold: int = Field(\n        default=30,\n        title=\"Threshold (in day's)\",\n        description=\"(in day's) The threshold to check the instances older than the threshold.\")\n    region: str = Field(\n        title='Region',\n        description='AWS Region')\n\n\ndef aws_filter_long_running_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint({\"Instances\": output})\n\n\ndef aws_filter_long_running_instances(handle, region: str, threshold: int = 10) -> List:\n    \"\"\"aws_filter_long_running_instances Returns an array of long running EC2 instances.\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type region: string\n        :param region: EC2 instance region.\n\n        :type threshold: string\n        :param threshold: (in days) The threshold to check the instances older than the threshold.\n\n        :rtype: Array of long running EC2 instances.\n    \"\"\"\n    result = []\n    current_time = datetime.now(pytz.UTC)\n    ec2Client = handle.client('ec2', region_name=region)\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\n    for reservation in res:\n        for instance in reservation['Instances']:\n            launch_time = instance[\"LaunchTime\"]\n            running_time = current_time - launch_time\n            if running_time > timedelta(days=int(threshold)):\n                result.append(instance['InstanceId'])\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_filter_old_ebs_snapshots/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>AWS Filter Old EBS Snapshots </h1>\n\n## Description\nThis Lego used to get a list of all snapshots details that are older than the threshold.\n\n\n## Lego Details\n\n    aws_filter_old_ebs_snapshots(handle, region: str, threshold: int = 30)\n\n        handle: Object of type unSkript AWS Connector.\n        region: EC2 instance region.\n        threshold: (in days) The threshold to check the snapshots older than the threshold.\n\n## Lego Input\n\nThis Lego take three inputs handle, threshold and region. \n\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_old_ebs_snapshots/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_old_ebs_snapshots/aws_filter_old_ebs_snapshots.json",
    "content": "{\r\n    \"action_title\": \"AWS Filter Old EBS Snapshots\",\r\n    \"action_description\": \"This action list a all snapshots details that are older than the threshold\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_old_ebs_snapshots\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_next_hop\": [\"303d6481e8cfa508d9ba11f847906c7d46f30a1c70f9b6b0e04b12409e74f704\"],\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EBS\" ],\r\n    \"action_next_hop_parameter_mapping\":{\"303d6481e8cfa508d9ba11f847906c7d46f30a1c70f9b6b0e04b12409e74f704\": {\"name\": \"Delete Old EBS Snapshots\", \"region\":\".[0].region\",\"snapshot_ids\":\"map(.snapshot_id)\"}}\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_filter_old_ebs_snapshots/aws_filter_old_ebs_snapshots.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import List, Optional, Tuple\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nimport pprint\nfrom datetime import datetime, timedelta\nimport pytz\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field(\n        title='Region',\n        description='AWS Region.')\n    threshold: Optional[int] = Field(\n        default=30,\n        title=\"Threshold (in days)\",\n        description=\"(in day's) The threshold to check the snapshots older than the threshold.\")\n\n\ndef aws_filter_old_ebs_snapshots_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_filter_old_ebs_snapshots(handle, region: str=\"\", threshold: int = 30) -> Tuple:\n    \"\"\"aws_filter_old_ebs_snapshots Returns an array of EBS snapshots details.\n\n        :type region: string\n        :param region: AWS Region.\n        \n        :type threshold: int\n        :param threshold: (in days) The threshold to check the snapshots older than the threshold.\n\n        :rtype: List of EBS snapshots details.\n    \"\"\"\n    result = []\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    for reg in all_regions:\n        try:\n            # Filtering the volume by region\n            current_time = datetime.now(pytz.UTC)\n            ec2Client = handle.resource('ec2', region_name=reg)\n            response = ec2Client.snapshots.filter(OwnerIds=['self'])\n            for snapshot in response:\n                snap_data = {}\n                running_time = current_time - snapshot.start_time\n                if running_time > timedelta(days=int(threshold)):\n                    snap_data[\"region\"] = reg\n                    snap_data[\"snapshot_id\"] = snapshot.id\n                    result.append(snap_data)\n        
except Exception as e:\n            pass\n    if len(result)!=0:\n        return (False, result)\n    else:\n        return (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_filter_public_s3_buckets_by_acl/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS public S3 Buckets by ACL</h1>\r\n\r\n## Description\r\nThis Lego Get AWS public S3 Buckets.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_public_s3_buckets(handle: object, permission:Enum, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        permission: Enum, Set of permissions that AWS S3 supports in an ACL for buckets and objects. Eg: \"READ\",\"WRITE_ACP\",\"FULL_CONTROL\"\r\n        region: region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego takes three inputs handle, permission, region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_public_s3_buckets_by_acl/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_public_s3_buckets_by_acl/aws_filter_public_s3_buckets_by_acl.json",
    "content": "{\r\n  \"action_title\": \"Get AWS public S3 Buckets using ACL\",\r\n  \"action_description\": \"Get AWS public S3 Buckets using ACL\",\r\n  \"action_type\": \"LEGO_TYPE_AWS\",\r\n  \"action_entry_function\": \"aws_filter_public_s3_buckets_by_acl\",\r\n  \"action_needs_credential\": true,\r\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n  \"action_supports_poll\": true,\r\n  \"action_supports_iteration\": true,\r\n  \"action_is_check\":true,\r\n  \"action_verbs\": [\"filter\"],\r\n  \"action_nouns\": [\"aws\",\"s3\",\"public\",\"buckets\",\"by\",\"acl\"],\r\n  \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\" ],\r\n  \"action_next_hop\":[\"750987144b20d7b5984a37e58c2e17b69fd33f799a1f027f0ff7532cee5913c6\"],\r\n  \"action_next_hop_parameter_mapping\":{\"750987144b20d7b5984a37e58c2e17b69fd33f799a1f027f0ff7532cee5913c6\": {\"name\": \"Restrict S3 Buckets with READ/WRITE Permissions to all Authenticated Users\", \"region\": \".[0].region\", \"bucket_names\":\"map(.bucket)\"}}\r\n}"
  },
  {
    "path": "AWS/legos/aws_filter_public_s3_buckets_by_acl/aws_filter_public_s3_buckets_by_acl.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.legos.aws.aws_get_s3_buckets.aws_get_s3_buckets import aws_get_s3_buckets\r\nfrom unskript.enums.aws_acl_permissions_enums import BucketACLPermissions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n    default=\"\",\r\n    title='Region',\r\n    description='Name of the AWS Region'\r\n    )\r\n    permission: Optional[BucketACLPermissions] = Field(\r\n        default=BucketACLPermissions.READ,\r\n        title=\"S3 Bucket's ACL Permission\",\r\n        description=\"Set of permissions that AWS S3 supports in an ACL for buckets and objects\"\r\n    )\r\n\r\ndef aws_filter_public_s3_buckets_by_acl_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\ndef check_publicly_accessible_buckets(s3Client,b,all_permissions):\r\n    public_check = [\"http://acs.amazonaws.com/groups/global/AuthenticatedUsers\",\r\n                   \"http://acs.amazonaws.com/groups/global/AllUsers\"]\r\n    public_buckets = False\r\n    try:\r\n        res = s3Client.get_bucket_acl(Bucket=b)\r\n        for perm in all_permissions:\r\n            for grant in res[\"Grants\"]:\r\n                if 'Permission' in grant.keys() and perm == grant[\"Permission\"]:\r\n                    if 'URI' in grant[\"Grantee\"] and grant[\"Grantee\"][\"URI\"] in public_check:\r\n                        public_buckets = True\r\n    except Exception:\r\n        pass\r\n    return public_buckets\r\n\r\ndef aws_filter_public_s3_buckets_by_acl(\r\n        handle,\r\n        permission:BucketACLPermissions=BucketACLPermissions.READ,\r\n        region: str=None\r\n        ) -> Tuple:\r\n    \"\"\"aws_filter_public_s3_buckets_by_acl get list 
of public buckets.\r\n        \r\n        Note- By default(if no permissions are given) READ and WRITE ACL Permissioned S3 buckets are\r\n        checked for public access.Other ACL Permissions are - \"READ_ACP\"|\"WRITE_ACP\"|\"FULL_CONTROL\"\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...)\r\n\r\n        :type permission: Enum\r\n        :param permission: Set of permissions that AWS S3 supports in an ACL for buckets and objects.\r\n\r\n        :type region: string\r\n        :param region: location of the bucket.\r\n        \r\n        :rtype: Object with status, list of public S3 buckets with READ/WRITE ACL Permissions, and errors\r\n    \"\"\"\r\n    all_permissions = [permission]\r\n    if permission is None or len(permission)==0:\r\n        all_permissions = [\"READ\",\"WRITE\"]\r\n    result = []\r\n    all_buckets = []\r\n    all_regions = [region]\r\n    if region is None or len(region)==0:\r\n        all_regions = aws_list_all_regions(handle=handle)\r\n    try:\r\n        for r in all_regions:\r\n            s3Client = handle.client('s3',region_name=r)\r\n            output = aws_get_s3_buckets(handle=handle, region=r)\r\n            if len(output)!= 0:\r\n                for o in output:\r\n                    all_buckets_dict = {}\r\n                    all_buckets_dict[\"region\"]=r\r\n                    all_buckets_dict[\"bucket\"]=o\r\n                    all_buckets.append(all_buckets_dict)\r\n    except Exception as e:\r\n        raise e\r\n\r\n    for bucket in all_buckets:\r\n        s3Client = handle.client('s3',region_name= bucket['region'])\r\n        flag = check_publicly_accessible_buckets(s3Client,bucket['bucket'], all_permissions)\r\n        if flag:\r\n            result.append(bucket)\r\n    if len(result)!=0:\r\n        return (False, result)\r\n    else:\r\n        return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_target_groups_by_tags/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter AWS Target groups by tag name </h1>\r\n\r\n## Description\r\nThis Lego filter AWS Target groups which have the provided tag attached to it.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_target_groups_by_tags(handle: object, tag_key: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        tag_key: Name of the tag to filter by.\r\n        region: AWS Region.\r\n\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, tag_key and region. \r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_target_groups_by_tags/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_target_groups_by_tags/aws_filter_target_groups_by_tags.json",
    "content": "{\r\n    \"action_title\": \"Filter AWS Target groups by tag name\",\r\n    \"action_description\": \"Filter AWS Target groups which have the provided tag attached to it. It also returns the value of that tag for each target group\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_target_groups_by_tags\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_VPC\",\"CATEGORY_TYPE_AWS_ELB\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_filter_target_groups_by_tags/aws_filter_target_groups_by_tags.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\n\nclass InputSchema(BaseModel):\n    tag_key: str = Field(\n        title='Tag name',\n        description='Name of the tag to filter by.')\n    region: str = Field(\n        title='Region',\n        description='AWS Region.')\n\n\ndef aws_filter_target_groups_by_tags_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_filter_target_groups_by_tags(handle, tag_key: str, region: str) -> List:\n    \"\"\"aws_filter_target_groups_by_tags Returns a array of dict with target group and tag value.\n\n        :type handle: object\n        :param handle: Object containing global params for the notebook.\n\n        :type vpc_id: string\n        :param vpc_id: VPC ID of the instances.\n\n        :type region: string\n        :param region: AWS Region.\n\n        :rtype: Returns a array of dict with target group and tag value.\n    \"\"\"\n    elbv2Client = handle.client('elbv2', region_name=region)\n    tbs = aws_get_paginator(elbv2Client, \"describe_target_groups\", \"TargetGroups\")\n    tbArnsList = []\n    output = []\n    count = 0\n    tbsLength = len(tbs)\n    for index, tb in enumerate(tbs):\n        # Need to call describe_tags to get the tags associated with these TGs,\n        # however that call can only take 20 TGs.\n        tbArnsList.append(tb.get('TargetGroupArn'))\n        count = count + 1\n        if count == 20 or index == tbsLength - 1:\n            tagDescriptions = elbv2Client.describe_tags(ResourceArns=tbArnsList).get('TagDescriptions')\n            # Check if the tag name exists in any of the TGs.\n            for tagDescription in tagDescriptions:\n                for tag in tagDescription.get('Tags'):\n                    if tag.get('Key') == tag_key:\n                        
output.append({\n                            \"ResourceARN\": tagDescription.get('ResourceArn'),\n                            \"TagValue\": tag.get('Value')\n                            })\n                        break\n            count = 0\n            tbArnsList = []\n    return output\n"
  },
  {
    "path": "AWS/legos/aws_filter_unencrypted_s3_buckets/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter AWS Unencrypted S3 Buckets </h1>\r\n\r\n## Description\r\nThis Lego filter AWS unencrypted S3 buckets.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_unencrypted_s3_buckets(handle: object, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Location of the S3 buckets.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_unencrypted_s3_buckets/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_unencrypted_s3_buckets/aws_filter_unencrypted_s3_buckets.json",
    "content": "{\r\n    \"action_title\": \"Filter AWS Unencrypted S3 Buckets\",\r\n    \"action_description\": \"Filter AWS Unencrypted S3 Buckets\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_unencrypted_s3_buckets\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\",\"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\" ],\r\n    \"action_next_hop\":[\"50d9c6abd7dce3ff9183d4135353e82859bc5a9639455b35bd229331be6048df\"],\r\n    \"action_next_hop_parameter_mapping\":{\"50d9c6abd7dce3ff9183d4135353e82859bc5a9639455b35bd229331be6048df\": {\"name\": \"Encrypt unencrypted S3 buckets\",\"region\": \".[0].region\", \"bucket_name\":\"map(.bucket)\"}}\r\n\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_unencrypted_s3_buckets/aws_filter_unencrypted_s3_buckets.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom botocore.exceptions import ClientError\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_filter_unencrypted_s3_buckets_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_filter_unencrypted_s3_buckets(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_filter_unencrypted_s3_buckets List of unencrypted S3 bucket name .\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: Filter S3 buckets.\r\n\r\n        :rtype: Tuple with status result and list of unencrypted S3 bucket name.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n    for reg in all_regions:\r\n        try:\r\n            s3Client = handle.client('s3', region_name=reg)\r\n            response = s3Client.list_buckets()\r\n            # List unencrypted S3 buckets\r\n            for bucket in response['Buckets']:\r\n                try:\r\n                    response = s3Client.get_bucket_encryption(Bucket=bucket['Name'])\r\n                    encRules = response['ServerSideEncryptionConfiguration']['Rules']\r\n                except ClientError:\r\n                    bucket_dict = {}\r\n                    bucket_dict[\"region\"] = reg\r\n                    bucket_dict[\"bucket\"] = bucket['Name']\r\n                    result.append(bucket_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if 
len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_unhealthy_instances_from_asg/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Unhealthy instances from ASG </h1>\r\n\r\n## Description\r\nThis Lego Filter AWS unhealthy instances from Auto Scaling Group.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_unhealthy_instances_from_asg(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_unhealthy_instances_from_asg/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_unhealthy_instances_from_asg/aws_filter_unhealthy_instances_from_asg.json",
    "content": "{\r\n    \"action_title\": \"Get Unhealthy instances from ASG\",\r\n    \"action_description\": \"Get Unhealthy instances from Auto Scaling Group\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_unhealthy_instances_from_asg\",\r\n    \"action_needs_credential\": true,\r\n    \"action_is_check\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ASG\",\"CATEGORY_TYPE_AWS_EC2\" ],\r\n    \"action_next_hop\": [\"680ad9d119afab5f647e1afe7826b88d89bf35304954c3328e65a2fcf470f930\"],\r\n    \"action_next_hop_parameter_mapping\": {\"680ad9d119afab5f647e1afe7826b88d89bf35304954c3328e65a2fcf470f930\": {\"name\": \"AWS Detach EC2 Instance from ASG\", \"region\": \".[0].region\", \"instance_ids\":\"map(.InstanceId)\"}}\r\n}\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_unhealthy_instances_from_asg/aws_filter_unhealthy_instances_from_asg.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region of the ASG.')\r\n\r\n\r\ndef aws_filter_unhealthy_instances_from_asg_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_filter_unhealthy_instances_from_asg(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_filter_unhealthy_instances_from_asg gives unhealthy instances from ASG\r\n\r\n        :type region: string\r\n        :param region: AWS region.\r\n\r\n        :rtype: CheckOutput with status result and list of unhealthy instances from ASG.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            asg_client = handle.client('autoscaling', region_name=reg)\r\n            response = aws_get_paginator(\r\n                asg_client,\r\n                \"describe_auto_scaling_instances\",\r\n                \"AutoScalingInstances\"\r\n                )\r\n\r\n            # filter instances to only include those that are in an \"unhealthy\" state\r\n            for instance in response:\r\n                data_dict = {}\r\n                if instance['HealthStatus'] == 'Unhealthy':\r\n                    data_dict[\"InstanceId\"] = instance[\"InstanceId\"]\r\n                    data_dict[\"AutoScalingGroupName\"] = instance[\"AutoScalingGroupName\"]\r\n                    data_dict[\"region\"] = reg\r\n                    result.append(data_dict)\r\n\r\n        except 
Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_untagged_ec2_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter AWS Untagged EC2 Instances </h1>\r\n\r\n## Description\r\nThis Lego filter the AWS Untagged EC2 Instances.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_untagged_ec2_instances(handle: object, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\n\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_untagged_ec2_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_untagged_ec2_instances/aws_filter_untagged_ec2_instances.json",
    "content": "{\r\n  \"action_title\": \"Filter AWS Untagged EC2 Instances\",\r\n  \"action_description\": \"Filter AWS Untagged EC2 Instances\",\r\n  \"action_type\": \"LEGO_TYPE_AWS\",\r\n  \"action_entry_function\": \"aws_filter_untagged_ec2_instances\",\r\n  \"action_needs_credential\": true,\r\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n  \"action_supports_poll\": true,\r\n  \"action_supports_iteration\": true,\r\n  \"action_verbs\": [\"filter\"],\r\n  \"action_nouns\": [\"aws\",\"instances\",\"untagged\"],\r\n  \"action_is_check\": true,\r\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"],\r\n  \"action_next_hop\": [\"a16703da15d9e9e2d8a56b146e730b5e4c1496721ff1dc8606a5021d521ed9e3\"],\r\n  \"action_next_hop_parameter_mapping\": {\"a16703da15d9e9e2d8a56b146e730b5e4c1496721ff1dc8606a5021d521ed9e3\": {\"name\": \"Stop all Untagged AWS EC2 Instances\", \"region\": \".[0].region\", \"instance_ids\":\".map(.instanceID)\"}}\r\n}"
  },
  {
    "path": "AWS/legos/aws_filter_untagged_ec2_instances/aws_filter_untagged_ec2_instances.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Tuple, Optional\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='Name of the AWS Region'\r\n    )\r\n\r\n\r\ndef aws_filter_untagged_ec2_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\ndef check_untagged_instance(res, r):\r\n    instance_list = []\r\n    for reservation in res:\r\n        for instance in reservation['Instances']:\r\n            instances_dict = {}\r\n            tags = instance.get('Tags', None)\r\n            if tags is None:\r\n                instances_dict['region'] = r\r\n                instances_dict['instanceID'] = instance['InstanceId']\r\n                instance_list.append(instances_dict)\r\n    return instance_list\r\n\r\n\r\ndef aws_filter_untagged_ec2_instances(handle, region: str= None) -> Tuple:\r\n    \"\"\"aws_filter_untagged_ec2_instances Returns an array of instances which has no tags.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: str\r\n        :param region: Region to filter instances.\r\n\r\n        :rtype: Tuple of status, and list of untagged EC2 Instances\r\n    \"\"\"\r\n    if not handle or (region and region not in aws_list_all_regions(handle)):\r\n        raise ValueError(\"Invalid input parameters provided.\")\r\n    result = []\r\n    all_regions = [region]\r\n    if region is None or len(region) == 0:\r\n        all_regions = aws_list_all_regions(handle=handle)\r\n    for r in all_regions:\r\n        try:\r\n            ec2Client = 
handle.client('ec2', region_name=r)\r\n            res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\r\n            untagged_instances = check_untagged_instance(res, r)\r\n            result.extend(untagged_instances)\r\n        except Exception as e:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_unused_keypairs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter AWS Unused Keypairs </h1>\r\n\r\n## Description\r\nThis Lego Filter AWS Unused Keypairs.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_unused_keypairs(handle: object, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Optional, Region to filter instances. Eg:'us-west-2'\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_unused_keypairs/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_unused_keypairs/aws_filter_unused_keypairs.json",
    "content": "{\r\n    \"action_title\": \"Filter AWS Unused Keypairs\",\r\n    \"action_description\": \"Filter AWS Unused Keypairs\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_unused_keypairs\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_is_check\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\" ],\r\n    \"action_next_hop\":[\"a28edafac5f3bac3ca34d677d9b01a4bc6f74893e50bc103e5cefb00e0f48746\"],\r\n    \"action_next_hop_parameter_mapping\":{}\r\n  }\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_unused_keypairs/aws_filter_unused_keypairs.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.connectors.aws import aws_get_paginator\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='Name of the AWS Region')\r\n\r\n\r\ndef aws_filter_unused_keypairs_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_filter_unused_keypairs(handle, region: str = None) -> Tuple:\r\n    \"\"\"aws_filter_unused_keypairs Returns an array of KeyPair.\r\n\r\n        :type region: object\r\n        :param region: Object containing global params for the notebook.\r\n\r\n        :rtype: Object with status, result of unused key pairs, and error.\r\n    \"\"\"\r\n    all_keys_dict = {}\r\n    used_keys_dict = {}\r\n    key_pairs_all = []\r\n    used_key_pairs = []\r\n    result = []\r\n    all_regions = [region]\r\n    if region is None or len(region)==0:\r\n        all_regions = aws_list_all_regions(handle=handle)\r\n    for r in all_regions:\r\n        try:\r\n            ec2Client = handle.client('ec2', region_name=r)\r\n            key_pairs_all = list(map(\r\n                lambda i: i['KeyName'],\r\n                ec2Client.describe_key_pairs()['KeyPairs']\r\n                ))\r\n            res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\r\n            for reservation in res:\r\n                for keypair in reservation['Instances']:\r\n                    if 'KeyName'in keypair and keypair['KeyName'] not in used_key_pairs:\r\n                        used_key_pairs.append(keypair['KeyName'])\r\n            used_keys_dict[\"region\"]=r\r\n            
used_keys_dict[\"key_name\"]=used_key_pairs\r\n            all_keys_dict[\"region\"]=r\r\n            all_keys_dict[\"key_name\"]=key_pairs_all\r\n            final_dict = {}\r\n            final_list=[]\r\n            for k,v in all_keys_dict.items():\r\n                if v!=[]:\r\n                    if k==\"key_name\":\r\n                        for each in v:\r\n                            if each not in used_keys_dict[\"key_name\"]:\r\n                                final_list.append(each)\r\n                if len(final_list)!=0:\r\n                    final_dict[\"region\"]=r\r\n                    final_dict[\"unused_keys\"]=final_list\r\n            if len(final_dict)!=0:\r\n                result.append(final_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_unused_log_streams/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Filter Unused Log Stream</h1>\r\n\r\n## Description\r\nThis Lego lists all log streams that are unused for all the log groups by the given threshold.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_filter_unused_log_streams(handle, region: str, time_period_in_days: int = 30)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        time_period_in_days: Optional, (in days) The threshold to filter the unused log strams.\r\n        region: Optional, AWS Region.\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, time_period_in_days and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_filter_unused_log_streams/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_unused_log_streams/aws_filter_unused_log_streams.json",
    "content": "{\r\n    \"action_title\": \"AWS Filter Unused Log Stream\",\r\n    \"action_description\": \"This action lists all log streams that are unused for all the log groups by the given threshold.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_unused_log_streams\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\": true,\r\n    \"action_next_hop\":[\"64b6e7809ddfb1094901da74924ca3386510a1cd\"],\r\n    \"action_next_hop_parameter_mapping\":{\"64b6e7809ddfb1094901da74924ca3386510a1cd\": {\"name\":\"Delete Unused AWS Log Streams\", \"region\": \".[0].region\", \"log_stream_name\": \"map(.log_stream_name)\", \"log_group_name\":\".[0].log_stream_group\"}},\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_LOGS\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_filter_unused_log_streams/aws_filter_unused_log_streams.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Tuple\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport botocore.config\nfrom unskript.connectors.aws import aws_get_paginator\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\n\nclass InputSchema(BaseModel):\n    time_period_in_days: Optional[int] = Field(\n        default=30,\n        title=\"Threshold (in days)\",\n        description=\"(in days) The threshold to filter the unused log strams.\")\n    region: Optional[str] = Field(\n        title='Region',\n        description='AWS Region')\n\n\ndef aws_filter_unused_log_streams_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\ndef aws_filter_unused_log_streams(handle, region: str = \"\", time_period_in_days: int = 30) -> Tuple:\n    \"\"\"aws_filter_unused_log_streams Returns an array of unused log strams for all log groups.\n\n        :type region: string\n        :param region: Used to filter the volume for specific region.\n        \n        :type time_period_in_days: int\n        :param time_period_in_days: (in days) The threshold to filter the unused log strams.\n\n        :rtype: Array of unused log strams for all log groups.\n    \"\"\"\n    result = []\n    now = datetime.utcnow()\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n\n    for reg in all_regions:\n        try:\n            start_time = now - timedelta(days=time_period_in_days)\n            config = botocore.config.Config(retries={'max_attempts': 10})\n            ec2Client = handle.client('logs', region_name=reg, config=config)\n            response = aws_get_paginator(ec2Client, \"describe_log_groups\", \"logGroups\")\n            for log_group in response:\n                log_group_name = log_group['logGroupName']\n                
response1 = aws_get_paginator(ec2Client, \"describe_log_streams\", \"logStreams\",\n                                            logGroupName=log_group_name,\n                                            orderBy='LastEventTime',\n                                            descending=True)\n\n                for log_stream in response1:\n                    unused_log_streams = {}\n                    last_event_time = log_stream.get('lastEventTimestamp')\n                    if last_event_time is None:\n                        # The log stream has never logged an event\n                        unused_log_streams[\"log_group_name\"] = log_group_name\n                        unused_log_streams[\"log_stream_name\"] = log_stream['logStreamName']\n                        unused_log_streams[\"region\"] = reg\n                        result.append(unused_log_streams)\n                    elif datetime.fromtimestamp(last_event_time/1000.0) < start_time:\n                        # The log stream has not logged an event in the past given days\n                        unused_log_streams[\"log_group_name\"] = log_group_name\n                        unused_log_streams[\"log_stream_name\"] = log_stream['logStreamName']\n                        unused_log_streams[\"region\"] = reg\n                        result.append(unused_log_streams)\n        except Exception:\n            pass\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_filter_unused_nat_gateway/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Find Unused NAT Gateways </h1>\r\n\r\n## Description\r\nThis Lego get all of the Nat gateways that have zero traffic over those.\r\n\r\n## Lego Details\r\n\r\n    aws_filter_unused_nat_gateway(handle, number_of_days: int, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Optional, Region to filter instances.\r\n        number_of_days: Optional, Number of days to check the Datapoints.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, number_of_days and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "AWS/legos/aws_filter_unused_nat_gateway/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_filter_unused_nat_gateway/aws_filter_unused_nat_gateway.json",
    "content": "{\r\n    \"action_title\": \"AWS Find Unused NAT Gateways\",\r\n    \"action_description\": \"This action to get all of the Nat gateways that have zero traffic over those\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_filter_unused_nat_gateway\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\": true,\r\n    \"action_next_hop\":[\"f2b1eecf9b4f727ec80fc4d4f5c7915b788cafe969552af0a26f8db9747bbcd4\"],\r\n    \"action_next_hop_parameter_mapping\":{\"f2b1eecf9b4f727ec80fc4d4f5c7915b788cafe969552af0a26f8db9747bbcd4\": {\"name\": \"Delete Unused NAT Gateways\",\"region\":\".[0].region\",\"nat_gateway_ids\":\"map(.nat_gateway_id)\"}},\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_NAT_GATEWAY\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_filter_unused_nat_gateway/aws_filter_unused_nat_gateway.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom datetime import datetime, timedelta\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n    number_of_days: Optional[int] = Field(\r\n        title=\"Number of Days\",\r\n        description='Number of days to check the Datapoints.')\r\n\r\n\r\ndef aws_filter_unused_nat_gateway_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef is_nat_gateway_used(handle, nat_gateway, start_time, end_time,number_of_days):\r\n    datapoints = []\r\n    if nat_gateway['State'] != 'deleted':\r\n        # Get the metrics data for the specified NAT Gateway over the last 7 days\r\n        try:\r\n            metrics_data = handle.get_metric_statistics(\r\n                Namespace='AWS/NATGateway',\r\n                MetricName='ActiveConnectionCount',\r\n                Dimensions=[\r\n                    {\r\n                        'Name': 'NatGatewayId',\r\n                        'Value': nat_gateway['NatGatewayId']\r\n                    },\r\n                ],\r\n                StartTime=start_time,\r\n                EndTime=end_time,\r\n                Period=86400 * number_of_days,\r\n                Statistics=['Sum']\r\n            )\r\n            datapoints += metrics_data.get('Datapoints', [])\r\n        except Exception as e:\r\n            print(f\"An error occurred while fetching metrics data for {nat_gateway['NatGatewayId']}: {e}\")\r\n            return False\r\n\r\n    return len(datapoints) != 0 and datapoints[0].get('Sum', 0) != 0\r\n\r\n\r\n\r\ndef aws_filter_unused_nat_gateway(handle, number_of_days: int = 7, region: str = \"\") 
-> Tuple:\r\n    \"\"\"aws_get_natgateway_by_vpc Returns an array of NAT gateways.\r\n\r\n        :type region: string\r\n        :param region: Region to filter NAT Gateways.\r\n\r\n        :type number_of_days: int\r\n        :param number_of_days: Number of days to check the Datapoints.\r\n\r\n        :rtype: Array of NAT gateways.\r\n    \"\"\"\r\n    result = []\r\n    if not handle or (region and region not in aws_list_all_regions(handle)):\r\n        raise ValueError(\"Invalid input parameters provided.\")\r\n    end_time = datetime.utcnow()\r\n    start_time = end_time - timedelta(days=number_of_days)\r\n    all_regions = [region] if region else aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            ec2Client = handle.client('ec2', region_name=reg)\r\n            cloudwatch = handle.client('cloudwatch', region_name=reg)\r\n            response = ec2Client.describe_nat_gateways()\r\n            for nat_gateway in response.get('NatGateways', []):\r\n                nat_gateway_info = {}\r\n                if not is_nat_gateway_used(cloudwatch, nat_gateway, start_time, end_time, number_of_days):\r\n                    nat_gateway_info[\"nat_gateway_id\"] = nat_gateway['NatGatewayId']\r\n                    nat_gateway_info[\"region\"] = reg\r\n                    result.append(nat_gateway_info)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_find_elbs_with_no_targets_or_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Find AWS ELBs with no targets or instances</h1>\n\n## Description\nFind AWS ELBs with no targets or instances attached to them.\n\n## Lego Details\n\taws_find_elb_with_no_targets_or_instances(handle, region: str = \"\")\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tregion: Optional, AWS region\n\n\n## Lego Input\nThis Lego takes inputs handle,region.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_find_elbs_with_no_targets_or_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_find_elbs_with_no_targets_or_instances/aws_find_elbs_with_no_targets_or_instances.json",
    "content": "{\n  \"action_title\": \"Find AWS ELBs with no targets or instances\",\n  \"action_description\": \"Find AWS ELBs with no targets or instances attached to them.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_find_elbs_with_no_targets_or_instances\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\"],\n  \"action_supports_poll\": true,\n  \"action_next_hop\": [\"2aba76792cb2802cae55deb60d28820522aeba93865572a1e9c7ddc5309e1312\"],\n  \"action_next_hop_parameter_mapping\": {\"2aba76792cb2802cae55deb60d28820522aeba93865572a1e9c7ddc5309e1312\": {\"name\": \"Delete AWS ELBs With No Targets Or Instances\", \"region\":\".[0].region\",\"elb_arns\":\"map(.elb_arn)\",\"elb_names\":\"map(.elb_name)\" }}\n}"
  },
  {
    "path": "AWS/legos/aws_find_elbs_with_no_targets_or_instances/aws_find_elbs_with_no_targets_or_instances.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Tuple\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field('', description='AWS Region.', title='region')\n\n\n\ndef aws_find_elbs_with_no_targets_or_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_find_elbs_with_no_targets_or_instances(handle, region: str = \"\")->Tuple:\n    \"\"\"aws_find_elbs_with_no_targets_or_instances Returns details of Elb's with no target groups or instances\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: str\n        :param region: AWS Region\n\n        :rtype: Tuple of status, and details of ELB's with no targets or instances\n    \"\"\"\n    result = []\n    all_load_balancers = []\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    for reg in all_regions:\n        try:\n            elbv2Client = handle.client('elbv2', region_name=reg)\n            elbv2_response = aws_get_paginator(elbv2Client, \"describe_load_balancers\", \"LoadBalancers\")\n            elbClient = handle.client('elb', region_name=reg)\n            elb_response = elbClient.describe_load_balancers()\n            for lb in elbv2_response:\n                elb_dict = {}\n                elb_dict[\"load_balancer_name\"] = lb['LoadBalancerName']\n                elb_dict[\"load_balancer_arn\"] = lb['LoadBalancerArn']\n                elb_dict[\"load_balancer_type\"] = lb['Type']\n                elb_dict[\"load_balancer_dns\"] = lb['DNSName']\n                elb_dict[\"region\"] = reg\n                all_load_balancers.append(elb_dict)\n            for lb in 
elb_response['LoadBalancerDescriptions']:\n                elb_dict = {}\n                elb_dict[\"load_balancer_name\"] = lb['LoadBalancerName']\n                elb_dict[\"load_balancer_type\"] = 'classic'\n                elb_dict[\"load_balancer_dns\"] = lb['DNSName']\n                elb_dict[\"region\"] = reg\n                all_load_balancers.append(elb_dict)\n        except Exception as e:\n            pass\n    for load_balancer in all_load_balancers:\n        if load_balancer['load_balancer_type']=='network' or load_balancer['load_balancer_type']=='application':\n            elbv2Client = handle.client('elbv2', region_name=load_balancer['region'])\n            target_groups = elbv2Client.describe_target_groups(\n                LoadBalancerArn=load_balancer['load_balancer_arn']\n            )\n            if len(target_groups['TargetGroups']) == 0:\n                    elb_dict = {}\n                    elb_dict[\"elb_arn\"] = load_balancer['load_balancer_arn']\n                    elb_dict[\"elb_name\"] = load_balancer['load_balancer_name']\n                    elb_dict[\"region\"] = load_balancer['region']\n                    elb_dict[\"type\"] = load_balancer['load_balancer_type']\n                    result.append(elb_dict)\n        else:\n            elbClient = handle.client('elb', region_name=load_balancer['region'])\n            res = elbClient.describe_instance_health(\n                LoadBalancerName=load_balancer['load_balancer_name'],\n            )\n            if len(res['InstanceStates'])==0:\n                elb_dict = {}\n                elb_dict[\"elb_name\"] = load_balancer['load_balancer_name']\n                elb_dict[\"region\"] = load_balancer['region']\n                elb_dict[\"type\"] = load_balancer['load_balancer_type']\n                result.append(elb_dict)\n    if len(result) != 0:\n        return (False, result)\n    else:\n        return (True, None)\n\n\n"
  },
  {
    "path": "AWS/legos/aws_find_idle_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Find Idle Instances</h1>\n\n## Description\nFind Idle EC2 instances\n\n## Lego Details\n\taws_find_idle_instances(handle, idle_cpu_threshold:int, idle_duration:int, region:str='')\n\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tidle_cpu_threshold: (in percent) Idle CPU threshold (in percent)\n\t\tidle_duration: (in hours) Idle CPU threshold (in hours)\n\t\tregion: AWS Region to get the instances from. Eg: \"us-west-2\"\n\n\n## Lego Input\nThis Lego takes inputs handle,\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_find_idle_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_find_idle_instances/aws_find_idle_instances.json",
    "content": "{\n  \"action_title\": \"AWS Find Idle Instances\",\n  \"action_description\": \"Find Idle EC2 instances\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_find_idle_instances\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\"c03babff32b83949e6ca20a49901d42a5a74ed3036de4609096390c9f6d0851a\"],\n  \"action_next_hop_parameter_mapping\": {\"c03babff32b83949e6ca20a49901d42a5a74ed3036de4609096390c9f6d0851a\": {\"name\": \"Stop Idle EC2 Instances\", \"region\": \".[0].region\", \"instance_ids\":\"map(.instance)\"}},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"]\n}"
  },
  {
    "path": "AWS/legos/aws_find_idle_instances/aws_find_idle_instances.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Tuple\nimport datetime\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\n\n\nclass InputSchema(BaseModel):\n    idle_cpu_threshold: Optional[int] = Field(\n        default=5,\n        description='Idle CPU threshold (in percent)', \n        title='Idle CPU Threshold'\n    )\n    idle_duration: Optional[int] = Field(\n       default=6,\n       description='Idle duration (in hours)',\n       title='Idle Duration'\n    )\n    region: Optional[str] = Field(\n        default='',\n        description='AWS Region to get the instances from. Eg: \"us-west-2\"',\n        title='Region',\n    )\n\n\ndef aws_find_idle_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef is_instance_idle(instance_id , idle_cpu_threshold, idle_duration, cloudwatchclient):\n    try:\n        now = datetime.datetime.utcnow()\n        start_time = now - datetime.timedelta(hours=idle_duration)\n        cpu_utilization_stats = cloudwatchclient.get_metric_statistics(\n            Namespace=\"AWS/EC2\",\n            MetricName=\"CPUUtilization\",\n            Dimensions=[{\"Name\": \"InstanceId\", \"Value\": instance_id}],\n            StartTime=start_time.isoformat(),\n            EndTime=now.isoformat(),\n            Period=3600,\n            Statistics=[\"Average\"],\n        )\n        if not cpu_utilization_stats[\"Datapoints\"]:\n            return False\n        average_cpu = sum(datapoint[\"Average\"] for datapoint in cpu_utilization_stats[\"Datapoints\"]) / len(cpu_utilization_stats[\"Datapoints\"])\n    except Exception as e:\n        raise e\n    return average_cpu < idle_cpu_threshold\n\n  \ndef aws_find_idle_instances(\n    handle,\n    idle_cpu_threshold:int = 5,\n    idle_duration:int = 6,\n    region:str=''\n    ) -> Tuple:\n    
\"\"\"aws_find_idle_instances finds idle EC2 instances\n\n    :type region: string\n    :param region: AWS Region to get the instances from. Eg: \"us-west-2\"\n\n    :type idle_cpu_threshold: int\n    :param idle_cpu_threshold: Idle CPU threshold (in percent)\n\n    :type idle_duration: int\n    :param idle_duration: Idle duration (in hours)\n\n    :rtype: Tuple with status result and list of Idle Instances.\n    \"\"\"\n    result = []\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    for reg in all_regions:\n        try:\n            ec2client = handle.client('ec2', region_name=reg)\n            cloudwatchclient = handle.client(\"cloudwatch\", region_name=reg)\n            all_instances = ec2client.describe_instances()\n            for instance in all_instances['Reservations']:\n                for i in instance['Instances']:\n                    if i['State'][\"Name\"] == \"running\" and is_instance_idle(\n                        i['InstanceId'],\n                        idle_cpu_threshold,\n                        idle_duration,\n                        cloudwatchclient\n                        ):\n                        idle_instances = {}\n                        idle_instances[\"instance\"] = i['InstanceId']\n                        idle_instances[\"region\"] = reg\n                        result.append(idle_instances)\n        except Exception:\n            pass\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_find_long_running_lambdas/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Filter Lambdas with Long Runtime</h1>\r\n\r\n## Description\r\nThis Lego retrieves a list of all Lambda functions and searches the log events of each function for runs exceeding the given runtime (duration).\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_find_long_running_lambdas(handle, days_back: int = 7, duration_threshold: int = 500, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        days_back: Optional, (in days) the number of days to search back for logs.\r\n        duration_threshold: Optional, (in milliseconds) the threshold for the minimum runtime of a Lambda function.\r\n        region: Optional, AWS Region.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes four inputs: handle, days_back, duration_threshold and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_find_long_running_lambdas/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_find_long_running_lambdas/aws_find_long_running_lambdas.json",
    "content": "{\r\n    \"action_title\": \"AWS Filter Lambdas with Long Runtime\",\r\n    \"action_description\": \"This action retrieves a list of all Lambda functions and searches the log events of each function for runs exceeding the given runtime (duration).\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_find_long_running_lambdas\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\": true,\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {},\r\n    \"action_categories\": [  \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_find_long_running_lambdas/aws_find_long_running_lambdas.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Tuple, Optional\nfrom unskript.connectors.aws import aws_get_paginator\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nimport pprint\nimport datetime\n\n\nclass InputSchema(BaseModel):\n    days_back: Optional[int] = Field(\n        default=7,\n        title=\"Days to Search\",\n        description=\"(In days) An integer specifying the number of days to search back for logs.\")\n    duration_threshold: Optional[int] = Field(\n        default=500,\n        title=\"Minimum Duration of a Lambda Function\",\n        description=\"(In milliseconds) specifying the threshold for the minimum runtime of a Lambda function.\")\n    region: Optional[str] = Field(\n        title='Region',\n        description='AWS Region')\n\n\ndef aws_find_long_running_lambdas_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_find_long_running_lambdas(handle, days_back: int = 7, duration_threshold: int = 500, region: str = \"\") -> Tuple:\n    \"\"\"aws_find_long_running_lambdas Returns an List long running lambdas.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: str\n        :param region: AWS Region.\n\n        :type days_back: int\n        :param days_back: (In days) An integer specifying the number of days to search back for logs.\n        \n        :type duration_threshold: int\n        :param duration_threshold: (In milliseconds) specifying the threshold for the minimum runtime of a Lambda function.\n\n        :rtype: List long running lambdas.\n    \"\"\"\n    result = []\n    start_time = datetime.datetime.now() - datetime.timedelta(days=days_back)\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n\n    for reg in 
all_regions:\n        try:\n            lambda_client = handle.client('lambda', region_name=reg)\n            log_client = handle.client('logs', region_name=reg)\n            response = aws_get_paginator(lambda_client, \"list_functions\", \"Functions\")\n            for function in response:\n                function_name = function['FunctionName']\n                log_group_name = f\"/aws/lambda/{function_name}\"\n                try:\n                    # Call the FilterLogEvents method to search the logs for the function\n                    log_response = aws_get_paginator(log_client, \"filter_log_events\", \"events\",\n                                                     logGroupName=log_group_name,\n                                                     startTime=int(start_time.timestamp() * 1000))\n                    for event in log_response:\n                        if 'REPORT' in event['message']:\n                            # REPORT lines contain tab-separated fields like \"Duration: 902.25 ms\"\n                            for field in event['message'].split('\\t'):\n                                if field.startswith('Duration:'):\n                                    duration = float(field.split()[1])\n                                    if duration >= duration_threshold:\n                                        result.append({'function_name': function_name, 'duration': duration, \"region\": reg})\n                                    break\n                except Exception:\n                    pass\n        except Exception:\n            pass\n\n    if len(result) != 0:\n        return (False, result)\n    else:\n        return (True, None)"
  },
  {
    "path": "AWS/legos/aws_find_low_connection_rds_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Find Low Connections RDS instances Per Day</h1>\r\n\r\n## Description\r\nThis Lego finds RDS DB instances with a number of connections below the specified minimum in the specified region.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_find_low_connection_rds_instances(handle, min_connections:int = 10, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: \"us-west-2\"\r\n        min_connections: Optional, the minimum number of connections for an instance to be considered active.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, min_connections and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_find_low_connection_rds_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_find_low_connection_rds_instances/aws_find_low_connection_rds_instances.json",
    "content": "{\r\n    \"action_title\": \"AWS Find Low Connections RDS instances Per Day\",\r\n    \"action_description\": \"This action will find RDS DB instances with a number of connections below the specified minimum in the specified region.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_find_low_connection_rds_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_is_check\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\" ],\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {}\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_find_low_connection_rds_instances/aws_find_low_connection_rds_instances.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import Optional, Tuple\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nimport datetime\r\nimport pprint\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default='',\r\n        title='Region for RDS',\r\n        description='Region of the RDS.'\r\n    )\r\n    min_connections: Optional[int] = Field(\r\n        default=10,\r\n        title='Minimum Number of Connections',\r\n        description='The minimum number of connections for an instance to be considered active.'\r\n    )\r\n\r\n\r\ndef aws_find_low_connection_rds_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_find_low_connection_rds_instances(handle, min_connections:int = 10, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_find_low_connection_rds_instances Gets information about RDS instances.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :type min_connections: int\r\n        :param min_connections: The minimum number of connections for an instance to be considered active.\r\n\r\n        :rtype: A list containing information about RDS instances.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n    for reg in all_regions:\r\n        try:\r\n            rds_Client = handle.client('rds', region_name=reg)\r\n            cloudwatch = handle.client('cloudwatch', region_name=reg)\r\n            response = aws_get_paginator(rds_Client, \"describe_db_instances\", \"DBInstances\")\r\n            for db in response:\r\n                db_instance_dict = {}\r\n                db_instance_identifier = 
db['DBInstanceIdentifier']\r\n                end_time = datetime.datetime.now()\r\n                start_time = end_time - datetime.timedelta(days=1)\r\n                response1 = cloudwatch.get_metric_statistics(\r\n                    Namespace='AWS/RDS',\r\n                    MetricName='DatabaseConnections',\r\n                    Dimensions=[\r\n                        {\r\n                            'Name': 'DBInstanceIdentifier',\r\n                            'Value': db_instance_identifier\r\n                        }\r\n                    ],\r\n                    StartTime=start_time,\r\n                    EndTime=end_time,\r\n                    Period=86400,\r\n                    Statistics=['Sum']\r\n                )\r\n                data_points = response1['Datapoints']\r\n                if data_points:\r\n                    connections = data_points[-1]['Sum']\r\n                    if connections < min_connections:\r\n                        db_instance_dict[\"region\"] = reg\r\n                        db_instance_dict[\"db_instance\"] = db_instance_identifier\r\n                        db_instance_dict[\"connections\"] = int(connections)\r\n                        result.append(db_instance_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    else:\r\n        return (True, None)"
  },
  {
    "path": "AWS/legos/aws_find_old_gen_emr_clusters/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Find EMR Clusters of Old Generation Instances</h1>\r\n\r\n## Description\r\nThis Lego lists EMR clusters running old generation instances.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_find_old_gen_emr_clusters(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: \"us-west-2\"\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_find_old_gen_emr_clusters/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_find_old_gen_emr_clusters/aws_find_old_gen_emr_clusters.json",
    "content": "{\r\n    \"action_title\": \"AWS Find EMR Clusters of Old Generation Instances\",\r\n    \"action_description\": \"This action lists EMR clusters running old generation instances.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_find_old_gen_emr_clusters\",\r\n    \"action_needs_credential\": true,\r\n    \"action_is_check\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\"],\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {}\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_find_old_gen_emr_clusters/aws_find_old_gen_emr_clusters.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import Optional, Tuple\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nimport pprint\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default='',\r\n        title='AWS Region',\r\n        description='AWS Region.'\r\n    )\r\n\r\n\r\ndef aws_find_old_gen_emr_clusters_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_find_old_gen_emr_clusters(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_find_old_gen_emr_clusters Gets list of old generation EMR clusters.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: Tuple with list of old generation EMR clusters.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    old_gen_type_prefixes = ['m1.', 'c1.', 'cc1.', 'm2.', 'cr1.', 'cg1.', 't1.']\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n    for reg in all_regions:\r\n        try:\r\n            emr_Client = handle.client('emr', region_name=reg)\r\n            response = aws_get_paginator(emr_Client, \"list_clusters\", \"Clusters\")\r\n            for cluster in response:\r\n                instance_groups_list = aws_get_paginator(emr_Client, \"list_instance_groups\", \"InstanceGroups\",\r\n                                                        ClusterId=cluster['Id'])\r\n                for instance_group in instance_groups_list:\r\n                    cluster_dict = {}\r\n                    if instance_group['InstanceType'].startswith(tuple(old_gen_type_prefixes)):\r\n                        cluster_dict[\"cluster_id\"] = cluster['Id']\r\n                        cluster_dict[\"region\"] = reg\r\n                        
result.append(cluster_dict)\r\n                        break\r\n        except Exception as error:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    else:\r\n        return (True, None)"
  },
  {
    "path": "AWS/legos/aws_find_rds_instances_with_low_cpu_utilization/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>AWS Find RDS Instances with low CPU Utilization</h1>\n\n## Description\nThis lego finds RDS instances that are not utilizing their CPU resources to their full potential.\n\n## Lego Details\n\taws_find_rds_instances_with_low_cpu_utilization(handle, utilization_threshold=10, region: str = \"\", duration_minutes=5)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tutilization_threshold: The threshold percentage of CPU utilization for an RDS Instance.\n\t\tduration_minutes: Value in minutes to get the start time of the metrics for CPU Utilization\n\t\tregion: Optional, AWS Region to get the RDS Instance.\n\n\n## Lego Input\nThis Lego takes four inputs: handle, utilization_threshold, duration_minutes and region.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_find_rds_instances_with_low_cpu_utilization/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_find_rds_instances_with_low_cpu_utilization/aws_find_rds_instances_with_low_cpu_utilization.json",
    "content": "{\n  \"action_title\": \"AWS Find RDS Instances with low CPU Utilization\",\n  \"action_description\": \"This lego finds RDS instances that are not utilizing their CPU resources to their full potential.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_find_rds_instances_with_low_cpu_utilization\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_next_hop\":[\"655835b762ba634f02074a48e4bae12f7a3e29bb8e6776eb8d657ddbfe181a59\"],\n  \"action_next_hop_parameter_mapping\":{\"655835b762ba634f02074a48e4bae12f7a3e29bb8e6776eb8d657ddbfe181a59\": {\"name\": \"Delete RDS Instances with Low CPU Utilization\", \"region\": \".[0].region\", \"db_identifiers\":\"map(.instance)\"}},\n  \"action_categories\":[ \"CATEGORY_TYPE_COST_OPT\",\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS_RDS\",\"CATEGORY_TYPE_AWS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_find_rds_instances_with_low_cpu_utilization/aws_find_rds_instances_with_low_cpu_utilization.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Tuple\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\nfrom datetime import datetime,timedelta\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field(\n        '', description='AWS Region to get the RDS Instance', title='AWS Region'\n    )\n    duration_minutes: Optional[int] = Field(\n        5,\n        description='Value in minutes to get the start time of the metrics for CPU Utilization',\n        title='Duration of Start time',\n    )\n    utilization_threshold: Optional[int] = Field(\n        10,\n        description='The threshold percentage of CPU utilization for an RDS Instance.',\n        title='CPU Utilization Threshold',\n    )\n\n\n\ndef aws_find_rds_instances_with_low_cpu_utilization_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_find_rds_instances_with_low_cpu_utilization(handle, utilization_threshold:int=10, region: str = \"\", duration_minutes:int=5) -> Tuple:\n    \"\"\"aws_find_rds_instances_with_low_cpu_utilization finds RDS instances that have a lower CPU utilization than the given threshold\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: string\n        :param region: Region of the RDS.\n\n        :type utilization_threshold: integer\n        :param utilization_threshold: The threshold percentage of CPU utilization for an RDS Instance.\n\n        :type duration_minutes: integer\n        :param duration_minutes: Value in minutes to get the start time of the metrics for CPU Utilization\n\n        :rtype: status, list of instances and their region.\n    \"\"\"\n    if not handle or utilization_threshold < 0 or utilization_threshold > 100 or 
duration_minutes <= 0:\n        raise ValueError(\"Invalid input parameters provided.\")\n    result = []\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    for reg in all_regions:\n        try:\n            rdsClient = handle.client('rds', region_name=reg)\n            cloudwatchClient = handle.client('cloudwatch', region_name=reg)\n            all_instances = aws_get_paginator(rdsClient, \"describe_db_instances\", \"DBInstances\")\n            for db in all_instances:\n                response = cloudwatchClient.get_metric_data(\n                    MetricDataQueries=[\n                        {\n                            'Id': 'cpu',\n                            'MetricStat': {\n                                'Metric': {\n                                    'Namespace': 'AWS/RDS',\n                                    'MetricName': 'CPUUtilization',\n                                    'Dimensions': [\n                                        {\n                                            'Name': 'DBInstanceIdentifier',\n                                            'Value': db['DBInstanceIdentifier']\n                                        },\n                                    ]\n                                },\n                                'Period': 60,\n                                'Stat': 'Average',\n                            },\n                            'ReturnData': True,\n                        },\n                    ],\n                    StartTime=(datetime.utcnow() - timedelta(minutes=duration_minutes)).isoformat(),\n                    EndTime=datetime.utcnow().isoformat(),\n                )\n                if response['MetricDataResults'][0]['Values']:\n                    cpu_utilization = response['MetricDataResults'][0]['Values'][0]\n                    if cpu_utilization < utilization_threshold:\n                        db_instance_dict = {}\n                        db_instance_dict[\"region\"] = reg\n                        db_instance_dict[\"instance\"] = db['DBInstanceIdentifier']\n                        result.append(db_instance_dict)\n        except Exception:\n            pass\n\n    if len(result) != 0:\n        return (False, result)\n    else:\n        return (True, None)"
  },
  {
    "path": "AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Find Redshift Cluster without Pause Resume Enabled</h1>\r\n\r\n## Description\r\nThis Lego finds AWS Redshift clusters that do not have pause/resume enabled.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_find_redshift_cluster_without_pause_resume_enabled(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Optional, AWS Region.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/aws_find_redshift_cluster_without_pause_resume_enabled.json",
    "content": "{\r\n    \"action_title\": \"AWS Find Redshift Cluster without Pause Resume Enabled\",\r\n    \"action_description\": \"Use this action to find AWS Redshift clusters that do not have pause/resume enabled\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_find_redshift_cluster_without_pause_resume_enabled\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\":true,\r\n    \"action_next_hop\":[],\r\n    \"action_next_hop_parameter_mapping\":{\"8b9c4eadb5f2fb817be0952f3ecb28c8e490ece6281286a74a95d5fe25019400\": {\"name\": \"AWS Ensure Redshift Clusters have Paused Resume Enabled\", \"region\": \".[0].region\", \"redshift_clusters\":\"map(.cluster_name)\"}},\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_DB\" ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/aws_find_redshift_cluster_without_pause_resume_enabled.py",
    "content": "##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import Optional, Tuple\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nimport pprint\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default='',\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_find_redshift_cluster_without_pause_resume_enabled_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_find_redshift_cluster_without_pause_resume_enabled(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_find_redshift_cluster_without_pause_resume_enabled Gets all Redshift clusters which don't have pause and resume enabled.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: Tuple with the status result and a list of all redshift clusters that don't have pause and resume enabled.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n    for reg in all_regions:\r\n        try:\r\n            redshift_Client = handle.client('redshift', region_name=reg)\r\n            response = aws_get_paginator(redshift_Client, \"describe_clusters\", \"Clusters\")\r\n            for cluster in response:\r\n                cluster_dict = {}\r\n                cluster_name = cluster[\"ClusterIdentifier\"]\r\n                schedule_actions = aws_get_paginator(redshift_Client, \"describe_scheduled_actions\", \"ScheduledActions\",Filters=[{'Name': 'cluster-identifier', 'Values': [cluster_name]}])\r\n\r\n                if schedule_actions:\r\n                    # Append the cluster only when none of its scheduled actions is a Pause or Resume\r\n                    for action in schedule_actions:\r\n                        if \"ResumeCluster\" in action[\"TargetAction\"] or \"PauseCluster\" in action[\"TargetAction\"]:\r\n                            break\r\n                    else:\r\n                        cluster_dict[\"cluster_name\"] = cluster_name\r\n                        cluster_dict[\"region\"] = reg\r\n                        result.append(cluster_dict)\r\n                else:\r\n                    cluster_dict[\"cluster_name\"] = cluster_name\r\n                    cluster_dict[\"region\"] = reg\r\n                    result.append(cluster_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    else:\r\n        return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_find_redshift_clusters_with_low_cpu_utilization/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>AWS Find Redshift Clusters with low CPU Utilization</h1>\n\n## Description\nFind underutilized Redshift clusters in terms of CPU utilization.\n\n## Lego Details\n\taws_find_redshift_clusters_with_low_cpu_utilization(handle, utilization_threshold:int=10, region: str = \"\", duration_minutes:int=5)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tutilization_threshold: The threshold percentage of CPU utilization for a Redshift Cluster.\n\t\tduration_minutes: Value in minutes to determine the start time of the data points.\n\t\tregion: Optional, AWS Region of the Redshift cluster.\n\n## Lego Input\nThis Lego takes four inputs: handle, utilization_threshold, duration_minutes and region.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_find_redshift_clusters_with_low_cpu_utilization/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_find_redshift_clusters_with_low_cpu_utilization/aws_find_redshift_clusters_with_low_cpu_utilization.json",
    "content": "{\n  \"action_title\": \"AWS Find Redshift Clusters with low CPU Utilization\",\n  \"action_description\": \"Find underutilized Redshift clusters in terms of CPU utilization.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_find_redshift_clusters_with_low_cpu_utilization\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_next_hop\":[\"2a51c98c5c99d132011e285546e365402351fd3d09214041aea7592367bd48bf\"],\n  \"action_next_hop_parameter_mapping\":{\"2a51c98c5c99d132011e285546e365402351fd3d09214041aea7592367bd48bf\": {\"name\": \"Delete Redshift Clusters with Low CPU Utilization\", \"region\": \".[0].region\", \"cluster_identifiers\":\"map(.cluster)\"}},\n  \"action_categories\":[\"CATEGORY_TYPE_COST_OPT\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_REDSHIFT\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"]\n}"
  },
  {
    "path": "AWS/legos/aws_find_redshift_clusters_with_low_cpu_utilization/aws_find_redshift_clusters_with_low_cpu_utilization.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Tuple\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nimport pprint\nfrom datetime import datetime,timedelta\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field(\n        '', description='AWS Region to get the Redshift Cluster', title='AWS Region'\n    )\n    duration_minutes: Optional[int] = Field(\n        5,\n        description='Value in minutes to determine the start time of the data points. ',\n        title='Duration (in minutes)',\n    )\n    utilization_threshold: Optional[int] = Field(\n        10,\n        description='The threshold value in percent of CPU utilization of the Redshift cluster',\n        title='CPU utilization threshold(in %)',\n    )\n\n\n\ndef aws_find_redshift_clusters_with_low_cpu_utilization_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_find_redshift_clusters_with_low_cpu_utilization(handle, utilization_threshold:int=10, region: str = \"\", duration_minutes:int=5) -> Tuple:\n    \"\"\"aws_find_redshift_clusters_with_low_cpu_utilization finds Redshift Clusters that have a lower CPU utilization than the given threshold\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: string\n        :param region: Region of the Cluster.\n\n        :type utilization_threshold: integer\n        :param utilization_threshold: The threshold percentage of CPU utilization for a Redshift Cluster.\n\n        :type duration_minutes: integer\n        :param duration_minutes: Value in minutes to determine the start time of the data points.\n\n        :rtype: status, list of clusters and their region.\n    \"\"\"\n    result = []\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    for reg in all_regions:\n        try:\n            redshiftClient = handle.client('redshift', region_name=reg)\n            cloudwatchClient = handle.client('cloudwatch', region_name=reg)\n            for cluster in redshiftClient.describe_clusters()['Clusters']:\n                cluster_identifier = cluster['ClusterIdentifier']\n                response = cloudwatchClient.get_metric_statistics(\n                Namespace='AWS/Redshift',\n                MetricName='CPUUtilization',\n                Dimensions=[\n                    {\n                        'Name': 'ClusterIdentifier',\n                        'Value': cluster_identifier\n                    }\n                ],\n                StartTime=(datetime.utcnow() - timedelta(minutes=duration_minutes)).isoformat(),\n                EndTime=datetime.utcnow().isoformat(),\n                Period=60,\n                Statistics=['Average']\n                )\n                if len(response['Datapoints']) != 0:\n                    # Datapoints are not guaranteed to be in time order; use the most recent one.\n                    latest_datapoint = sorted(response['Datapoints'], key=lambda datapoint: datapoint['Timestamp'])[-1]\n                    cpu_usage_percent = latest_datapoint['Average']\n                    if cpu_usage_percent < utilization_threshold:\n                        cluster_dict = {}\n                        cluster_dict[\"region\"] = reg\n                        cluster_dict[\"cluster\"] = cluster_identifier\n                        result.append(cluster_dict)\n        except Exception:\n            pass\n\n    if len(result) != 0:\n        return (False, result)\n    else:\n        return (True, None)\n\n\n"
  },
  {
    "path": "AWS/legos/aws_find_s3_buckets_without_lifecycle_policies/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>AWS Find S3 Buckets without Lifecycle Policies</h1>\n\n## Description\nS3 lifecycle policies enable you to automatically transition objects to different storage classes or delete them when they are no longer needed. This action finds all S3 buckets without lifecycle policies. \n\n## Lego Details\n\taws_find_s3_buckets_without_lifecycle_policies(handle, region: str=\"\")\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tregion: AWS Region of the bucket\n\n\n## Lego Input\nThis Lego takes inputs handle and region.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_find_s3_buckets_without_lifecycle_policies/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_find_s3_buckets_without_lifecycle_policies/aws_find_s3_buckets_without_lifecycle_policies.json",
    "content": "{\n  \"action_title\": \"AWS Find S3 Buckets without Lifecycle Policies\",\n  \"action_description\": \"S3 lifecycle policies enable you to automatically transition objects to different storage classes or delete them when they are no longer needed. This action finds all S3 buckets without lifecycle policies. \",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_find_s3_buckets_without_lifecycle_policies\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_next_hop\":[\"3d74913836e037a001f718b48f1e19010394b90afc2422d0572ab5c515521075\"],\n  \"action_next_hop_parameter_mapping\":{\"3d74913836e037a001f718b48f1e19010394b90afc2422d0572ab5c515521075\": {\"name\": \"Add Lifecycle Policy to S3 Buckets\", \"region\": \".[0].region\", \"bucket_names\":\"map(.bucket_name)\"}},\n  \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"]\n}"
  },
  {
    "path": "AWS/legos/aws_find_s3_buckets_without_lifecycle_policies/aws_find_s3_buckets_without_lifecycle_policies.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nfrom unskript.legos.aws.aws_get_s3_buckets.aws_get_s3_buckets import aws_get_s3_buckets\nfrom typing import List, Optional, Tuple\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field('', description='AWS Region of S3 buckets.', title='Region')\n\n\n\ndef aws_find_s3_buckets_without_lifecycle_policies_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_find_s3_buckets_without_lifecycle_policies(handle, region: str=\"\") -> Tuple:\n    \"\"\"aws_find_s3_buckets_without_lifecycle_policies List all the S3 buckets without lifecycle policies\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: string\n        :param region: AWS Region of the bucket\n\n        :rtype: Status, List of all the S3 buckets without lifecycle policies with regions\n    \"\"\"\n    result = []\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    for reg in all_regions:\n        try:\n            s3Session = handle.resource(\"s3\", region_name=reg)\n            response = aws_get_s3_buckets(handle, region=reg)\n            for bucket in response:\n                bucket_region = s3Session.meta.client.get_bucket_location(Bucket=bucket)['LocationConstraint']\n                if bucket_region is None:\n                    bucket_region = 'us-east-1'\n                if bucket_region != reg:\n                    continue\n                bucket_lifecycle_configuration = s3Session.BucketLifecycleConfiguration(bucket)\n                try:\n                    if bucket_lifecycle_configuration.rules:\n                        continue\n                except Exception:\n                    
bucket_details = {}\n                    bucket_details[\"bucket_name\"] = bucket\n                    bucket_details[\"region\"] = reg\n                    result.append(bucket_details)\n        except Exception:\n            pass\n    if len(result) != 0:\n        return (False, result)\n    else:\n        return (True, None)\n\n\n"
  },
  {
    "path": "AWS/legos/aws_finding_redundant_trails/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Finding Redundant Trails in AWS </h1>\r\n\r\n## Description\r\nThis Lego finds redundant CloudTrail trails in AWS: a trail with the attribute IncludeGlobalServiceEvents set to true records global service events, so multiple such trails capture duplicate data.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_finding_redundant_trails(handle: object)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n\r\n## Lego Input\r\nThis Lego takes one input, handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_finding_redundant_trails/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_finding_redundant_trails/aws_finding_redundant_trails.json",
    "content": "{\r\n    \"action_title\": \"Finding Redundant Trails in AWS\",\r\n    \"action_description\": \"This action will find a redundant cloud trail if the attribute IncludeGlobalServiceEvents is true, and then we need to find multiple duplications.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_finding_redundant_trails\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\":true,\r\n    \"action_categories\": [\"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_CLOUDTRAIL\"],\r\n    \"action_next_hop\": [\"c4d55f5dd5bb964460f4ad7335daa8bb094792b0d64149dbddca019513f05598\"],\r\n    \"action_next_hop_parameter_mapping\": {\"c4d55f5dd5bb964460f4ad7335daa8bb094792b0d64149dbddca019513f05598\": {\"name\": \"AWS Lowering CloudTrail Costs by Removing Redundant Trails\", \"region\": \".[].regions[0]\"}}\r\n}"
  },
  {
    "path": "AWS/legos/aws_finding_redundant_trails/aws_finding_redundant_trails.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Tuple\r\nfrom pydantic import BaseModel\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    pass\r\n\r\n\r\ndef aws_finding_redundant_trails_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_finding_redundant_trails(handle) -> Tuple:\r\n    \"\"\"aws_finding_redundant_trails Returns an array of redundant trails in AWS\r\n\r\n        :type handle: object\r\n        :param handle: Object returned by the task.validate(...) method.\r\n\r\n        :rtype: Tuple with check status and list of redundant trails\r\n    \"\"\"\r\n    result = []\r\n    all_regions = aws_list_all_regions(handle)\r\n    for reg in all_regions:\r\n        try:\r\n            ec2Client = handle.client('cloudtrail', region_name=reg)\r\n            response = ec2Client.describe_trails()\r\n            for glob_service in response[\"trailList\"]:\r\n                if glob_service[\"IncludeGlobalServiceEvents\"] is True:\r\n                    for i in result:\r\n                        if i[\"trail_name\"] == glob_service[\"Name\"]:\r\n                            i[\"regions\"].append(reg)\r\n                    if not any(i[\"trail_name\"] == glob_service[\"Name\"] for i in result):\r\n                        trail_dict = {}\r\n                        trail_dict[\"trail_name\"] = glob_service[\"Name\"]\r\n                        trail_dict[\"regions\"] = [reg]\r\n                        result.append(trail_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_acount_number/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>AWS Get AWS Account Number</h1>\n\n## Description\nSome AWS functions require the AWS Account number. This Action programmatically retrieves it.\n\n## Lego Details\n\taws_get_acount_number(handle)\n\t\thandle: Object of type unSkript AWS Connector.\n\n\n## Lego Input\nThis Lego takes one input, handle.\n\n## Lego Output\nThis Action returns your account number as a string.\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_acount_number/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_acount_number/aws_get_acount_number.json",
    "content": "{\n  \"action_title\": \"AWS Get AWS Account Number\",\n  \"action_description\": \"Some AWS functions require the AWS Account number. This programmatically retrieves it.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_acount_number\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\",\"CATEGORY_TYPE_AWS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_acount_number/aws_get_acount_number.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\ndef aws_get_acount_number_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_get_acount_number(handle) -> str:\n    \"\"\"aws_get_acount_number Returns the AWS account number.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :rtype: String with the AWS account ID.\n    \"\"\"\n    # Create a client object for the AWS Identity and Access Management (IAM) service\n    iam_client = handle.client('iam')\n\n    # Call the get_user() method to get information about the current user\n    response = iam_client.get_user()\n\n    # Extract the account ID from the ARN (Amazon Resource Name) of the user\n    account_id = response['User']['Arn'].split(':')[4]\n\n    # Return the account ID\n    return account_id\n"
  },
  {
    "path": "AWS/legos/aws_get_alarms_list/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Alarms List </h1>\r\n\r\n## Description\r\nThis Lego gets the AWS CloudWatch alarms list.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_alarms_list(handle: object, region: str, alarm_name: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        alarm_name: Optional. Name of the particular alarm in the cloudwatch.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes three inputs: handle, alarm_name and region. \r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_alarms_list/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_alarms_list/aws_get_alarms_list.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Alarms List\",\r\n    \"action_description\": \"Get AWS CloudWatch Alarms List\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_alarms_list\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_alarms_list/aws_get_alarms_list.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the cloudwatch.')\n    alarm_name: Optional[str] = Field(\n        title='Alarm Name',\n        description='Name of the particular alarm in the cloudwatch.')\n\n\ndef aws_get_alarms_list_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_alarms_list(handle, region: str, alarm_name: str = None) -> List:\n    \"\"\"aws_get_alarms_list gets the AWS CloudWatch alarms list\n       for a given region. If alarm_name is given, only that\n       alarm's details are returned.\n\n       :type handle: object\n       :param handle: Object returned from task.validate(...).\n\n       :type alarm_name: string\n       :param alarm_name: Name of the particular alarm in the cloudwatch.\n\n       :type region: string\n       :param region: AWS Region of the cloudwatch.\n\n       :rtype: List of alarm detail dicts.\n    \"\"\"\n    cloudwatchClient = handle.client('cloudwatch', region_name=region)\n    result = []\n    # If an alarm name is specified, return only that alarm's details\n    if alarm_name is not None:\n        res = aws_get_paginator(\n            cloudwatchClient,\n            \"describe_alarms\",\n            \"MetricAlarms\",\n            AlarmNames=[alarm_name]\n            )\n    else:\n        res = aws_get_paginator(cloudwatchClient, \"describe_alarms\", \"MetricAlarms\")\n\n    for alarm in res:\n        alarm_info = {}\n        alarm_info['AlarmName'] = alarm['AlarmName']\n        alarm_info['AlarmArn'] = alarm['AlarmArn']\n        alarm_info['Dimensions'] = alarm['Dimensions']\n        if 'AlarmDescription' in alarm:\n            alarm_info['AlarmDescription'] = alarm['AlarmDescription']\n        else:\n            alarm_info['AlarmDescription'] = \"\"\n        result.append(alarm_info)\n\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_get_alb_listeners_without_http_redirect/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS ALB Listeners Without HTTP Redirection </h1>\r\n\r\n## Description\r\nThis Lego filters AWS ALB listeners without HTTP redirection.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_alb_listeners_without_http_redirect(handle, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Optional, AWS region. Eg: \"us-west-2\"\r\n\r\n## Lego Input\r\nThis Lego takes two inputs handle and region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_alb_listeners_without_http_redirect/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_alb_listeners_without_http_redirect/aws_get_alb_listeners_without_http_redirect.json",
    "content": "{\r\n    \"action_title\": \"Get AWS ALB Listeners Without HTTP Redirection\",\r\n    \"action_description\": \"Get AWS ALB Listeners Without HTTP Redirection\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_alb_listeners_without_http_redirect\",\r\n    \"action_needs_credential\": true,\r\n    \"action_is_check\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\" ],\r\n    \"action_next_hop\": [\"7d87da036fb983f7909a22a01529790dddc5179ebbb8f95517a66314d236555c\"],\r\n    \"action_next_hop_parameter_mapping\": {\"7d87da036fb983f7909a22a01529790dddc5179ebbb8f95517a66314d236555c\": {\"name\": \"Enforce HTTP Redirection across all AWS ALB instances\",\"region\":\".[0].region\",\"alb_listener_arns\":\"map(.listener_arn)\"}}\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_alb_listeners_without_http_redirect/aws_get_alb_listeners_without_http_redirect.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.legos.aws.aws_list_application_loadbalancers.aws_list_application_loadbalancers import aws_list_application_loadbalancers\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region of the ALB listeners.')\r\n\r\n\r\ndef aws_get_alb_listeners_without_http_redirect_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_alb_listeners_without_http_redirect(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_get_alb_listeners_without_http_redirect List of ALB listeners without HTTP redirection.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: Region to filter ALB listeners.\r\n\r\n        :rtype: Tuple of status result and list of ALB listeners without HTTP redirection.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    alb_list = []\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            alb_dict = {}\r\n            loadbalancer_arn = aws_list_application_loadbalancers(handle, reg)\r\n            alb_dict[\"region\"] = reg\r\n            alb_dict[\"alb_arn\"] = loadbalancer_arn\r\n            alb_list.append(alb_dict)\r\n        except Exception:\r\n            pass\r\n        \r\n    for alb in alb_list:\r\n        try:\r\n            ec2Client = handle.client('elbv2', region_name=alb[\"region\"])\r\n            for load in 
alb[\"alb_arn\"]:\r\n                response = aws_get_paginator(ec2Client, \"describe_listeners\", \"Listeners\",\r\n                                             LoadBalancerArn=load)\r\n                for listener in response:\r\n                    if 'SslPolicy' not in listener:\r\n                        resp = aws_get_paginator(ec2Client, \"describe_rules\", \"Rules\",\r\n                                             ListenerArn=listener['ListenerArn'])\r\n                        for rule in resp:\r\n                            for action in rule['Actions']:\r\n                                listener_dict = {}\r\n                                if action['Type'] != 'redirect':\r\n                                    listener_dict[\"region\"] = alb[\"region\"]\r\n                                    listener_dict[\"listener_arn\"] = listener['ListenerArn']\r\n                                    result.append(listener_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_all_ec2_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS EC2 Instances All </h1>\r\n\r\n## Description\r\nThis Lego gets all AWS EC2 instances.\r\n\r\n## Lego Details\r\n\r\n    aws_get_all_ec2_instances(handle, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Region to filter instances.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "AWS/legos/aws_get_all_ec2_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_all_ec2_instances/aws_get_all_ec2_instances.json",
    "content": "{\r\n    \"action_title\": \"Get AWS EC2 Instances All \",\r\n    \"action_description\": \"Use This Action to Get All AWS EC2 Instances\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_all_ec2_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_all_ec2_instances/aws_get_all_ec2_instances.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='Region to filter instances.')\r\n\r\ndef aws_get_all_ec2_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint({\"Instances\": output})\r\n\r\n\r\ndef aws_get_all_ec2_instances(handle, region: str) -> List:\r\n    \"\"\"aws_get_all_ec2_instances Returns a list of instance IDs.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: Region to filter instances.\r\n\r\n        :rtype: List of instance IDs.\r\n    \"\"\"\r\n    ec2Client = handle.client('ec2', region_name=region)\r\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\r\n    result = []\r\n    for reservation in res:\r\n        for instance in reservation['Instances']:\r\n            result.append(instance['InstanceId'])\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_all_load_balancers/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get All Load Balancers</h1>\r\n\r\n## Description\r\nThis Lego lists all AWS load balancers.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_all_load_balancers(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: AWS Region.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_all_load_balancers/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_all_load_balancers/aws_get_all_load_balancers.json",
    "content": "{\r\n    \"action_title\": \"AWS Get All Load Balancers\",\r\n    \"action_description\": \"AWS Get All Load Balancers\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_all_load_balancers\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_ELB\"]\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_all_load_balancers/aws_get_all_load_balancers.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, List\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        title='AWS Region',\r\n        description='AWS Region.'\r\n    )\r\n\r\n\r\ndef aws_get_all_load_balancers_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_all_load_balancers(handle, region: str = \"\") -> List:\r\n    \"\"\"aws_get_all_load_balancers Returns a list of load balancer details.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: List of load balancer details.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            elb_Client = handle.client('elbv2', region_name=reg)\r\n            response = aws_get_paginator(elb_Client, \"describe_load_balancers\", \"LoadBalancers\")\r\n            for lb in response:\r\n                elb_dict = {}\r\n                elb_dict[\"load_balancer_name\"] = lb['LoadBalancerName']\r\n                elb_dict[\"load_balancer_arn\"] = lb['LoadBalancerArn']\r\n                elb_dict[\"load_balancer_type\"] = lb['Type']\r\n                elb_dict[\"load_balancer_dns\"] = lb['DNSName']\r\n                elb_dict[\"region\"] = reg\r\n                result.append(elb_dict)\r\n        except Exception:\r\n            pass\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_all_service_names/README.md",
    "content": "\r\n[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get All Service Names </h1>\r\n\r\n## Description\r\nFor a given region, this Action will output all the Service Names\r\n\r\n\r\n## Lego Details\r\n\r\n    def aws_get_all_service_names(handle, region:str) -> List:\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Location of the S3 buckets.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.jpg\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_get_all_service_names/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_all_service_names/aws_get_all_service_names.json",
    "content": "{\n  \"action_title\": \"AWS Get All Service Names v3\",\n  \"action_description\": \"Get a list of all service names in a region\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_all_service_names\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\" ,\"CATEGORY_TYPE_AWS\" ]\n}\n"
  },
  {
    "path": "AWS/legos/aws_get_all_service_names/aws_get_all_service_names.py",
    "content": "from __future__ import annotations\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\nfrom beartype import beartype\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='The AWS Regiob', title='region')\n\n\n@beartype\ndef aws_get_all_service_names_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\n\n@beartype\ndef aws_get_all_service_names(handle, region:str) -> List:\n    sqClient = handle.client('service-quotas',region_name=region)\n    resPaginate = aws_get_paginator(sqClient,'list_services','Services',PaginationConfig={\n        'MaxItems': 1000,\n        'PageSize': 100\n        })\n\n    #res = sqClient.list_services(MaxResults = 100)\n    return resPaginate\n"
  },
  {
    "path": "AWS/legos/aws_get_all_untagged_resources/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get Untagged Resources </h1>\r\n\r\n## Description\r\nThis Lego filters all the untagged resources of given region.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_untagged_resources(handle, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Region to filter resources.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_all_untagged_resources/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_all_untagged_resources/aws_get_all_untagged_resources.json",
    "content": "{\r\n    \"action_title\": \"AWS Get Untagged Resources\",\r\n    \"action_description\": \"AWS Get Untagged Resources\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_all_untagged_resources\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\" ,\"CATEGORY_TYPE_AWS\" ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_all_untagged_resources/aws_get_all_untagged_resources.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\ndef aws_get_all_untagged_resources_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_all_untagged_resources(handle, region: str) -> List:\r\n    \"\"\"aws_get_all_untagged_resources Returns an List of Untagged Resources.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: str\r\n        :param region: Region to filter resources.\r\n\r\n        :rtype: List of untagged resources.\r\n    \"\"\"\r\n\r\n    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\r\n    result = []\r\n    try:\r\n        response = aws_get_paginator(ec2Client, \"get_resources\", \"ResourceTagMappingList\")\r\n        for resources in response:\r\n            if not resources[\"Tags\"]:\r\n               result.append(resources[\"ResourceARN\"])\r\n    except Exception as error:\r\n        result.append({\"error\":error})\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_auto_scaling_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS Auto Scaling Group Instances </h1>\r\n\r\n## Description\r\nThis Lego filter AWS autoscaling group instances.\r\n\r\n## Lego Details\r\n\r\n    aws_get_auto_scaling_instances(handle, instance_ids: List, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        instance_ids: List of instances.\r\n        region: Region to filter instances.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, instance_ids and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_auto_scaling_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_auto_scaling_instances/aws_get_auto_scaling_instances.json",
    "content": "{\r\n    \"action_title\": \"Get AWS AutoScaling Group Instances\",\r\n    \"action_description\": \"Use This Action to Get AWS AutoScaling Group Instances\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_auto_scaling_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_AWS_ASG\"  ]\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_auto_scaling_instances/aws_get_auto_scaling_instances.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\nfrom tabulate import tabulate\r\n\r\nclass InputSchema(BaseModel):\r\n    instance_ids: list = Field(\r\n        title='Instance IDs',\r\n        description='List of instances.')\r\n\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the ECS service.')\r\n\r\n\r\ndef aws_get_auto_scaling_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    print(tabulate(output, headers='keys'))\r\n\r\n\r\ndef aws_get_auto_scaling_instances(handle, instance_ids: list, region: str) -> List:\r\n    \"\"\"aws_get_auto_scaling_instances List of Dict with instanceId and attached groups.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type instance_ids: list\r\n        :param instance_ids: List of instances.\r\n\r\n        :type region: string\r\n        :param region: Region to filter instances.\r\n\r\n        :rtype: List of Dict with instanceId and attached groups.\r\n    \"\"\"\r\n    result = []\r\n    ec2Client = handle.client('autoscaling', region_name=region)\r\n    try:\r\n        response = ec2Client.describe_auto_scaling_instances(InstanceIds=instance_ids)\r\n        for group in response[\"AutoScalingInstances\"]:\r\n            group_dict = {}\r\n            group_dict[\"InstanceId\"] = group[\"InstanceId\"]\r\n            group_dict[\"AutoScalingGroupName\"] = group[\"AutoScalingGroupName\"]\r\n            result.append(group_dict)\r\n    except Exception as error:\r\n        err = {\"Error\":error}\r\n        result.append(err)\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_bucket_size/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS Bucket Size </h1>\r\n\r\n## Description\r\nThis Lego used to get an AWS Bucket Size.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_bucket_size(handle: object, bucketName: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        bucketName: Name of the particular alarm in the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego take two inputs handle and bucketName.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_bucket_size/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_bucket_size/aws_get_bucket_size.json",
    "content": "{\r\n    \"action_title\": \"Get AWS Bucket Size\",\r\n    \"action_description\": \"Get an AWS Bucket Size\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_bucket_size\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_bucket_size/aws_get_bucket_size.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\n\nimport pprint\nimport datetime\nfrom pydantic import BaseModel, Field\nfrom boto3.session import Session\n\n\n## FIXME: make this a JSON schema rather than class\nclass InputSchema(BaseModel):\n    bucketName: str = Field(\n        title='Bucket Name',\n        description='Name of the bucket.')\n\ndef aws_get_bucket_size_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_bucket_size(handle: Session, bucketName: str) -> str:\n    \"\"\"aws_get_bucket_size Returns the size of the bucket.\n\n        :type handle: Session\n        :param handle: Handle to the boto3 session\n\n        :type bucketName: string\n        :param bucketName: Name of the bucket\n\n        :rtype: String with the size of the bucket.\n    \"\"\"\n    now = datetime.datetime.now()\n    # Need to get the region of the bucket first.\n    s3Client = handle.client('s3')\n    try:\n        bucketLocationResp = s3Client.get_bucket_location(\n            Bucket=bucketName\n        )\n        print(\"location of bucket: \", bucketLocationResp)\n    except Exception as e:\n        print(f\"Could not get location for bucket {bucketName}, error {e}\")\n        raise e\n    region = bucketLocationResp['LocationConstraint']\n\n    cw = handle.client('cloudwatch', region_name=region)\n\n    # Gets the corresponding metrics from CloudWatch for bucket\n    response = cw.get_metric_statistics(Namespace='AWS/S3',\n                                        MetricName='BucketSizeBytes',\n                                        Dimensions=[\n                                            {'Name': 'BucketName', 'Value': bucketName},\n                                            {'Name': 'StorageType', 'Value': 'StandardStorage'}\n                                        ],\n                                        Statistics=['Average'],\n                                        
Period=3600,\n                                        StartTime=(now - datetime.timedelta(days=7)).isoformat(),\n                                        EndTime=now.isoformat()\n                                        )\n    print(response)\n    for res in response[\"Datapoints\"]:\n        return str(f\"{int(res['Average'])}\").rjust(25)\n    # Note the use of \"{:,}\".format.\n    # This is a new shorthand method to format output.\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ebs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS EBS Metrics from Cloudwatch </h1>\r\n\r\n## Description\r\nThis Lego get AWS CloudWatch Statistics for EBS volumes.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_ebs(hdl: Session, metric_name: EBSMetrics, volumes: List[str], region: str, timeSince: int,statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the EBS metric.\r\n        volumes: List of EBS volumes\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego take seven inputs hdl, metric_name, volumes, timeSince, statistics, period and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ebs/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ebs/aws_get_cloudwatch_ebs.json",
    "content": "{\r\n    \"action_title\": \"Get AWS EBS Metrics from Cloudwatch\",\r\n    \"action_description\": \"Get AWS CloudWatch Statistics for EBS volumes\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_ebs\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EBS\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ebs/aws_get_cloudwatch_ebs.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_cloudwatch_enums import EBSMetrics\nfrom unskript.enums.aws_k8s_enums import StatisticsType\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    volumes: List[str] = Field(\n        title=\"Volume\",\n        description=\"List of EBS volumes\",\n    )\n    metric_name: EBSMetrics = Field(\n        title=\"Metric\",\n        description=(\"The name of the EBS metric. Eg VolumeReadBytes|VolumeWriteBytes|VolumeReadOps\"\n                     \"|VolumeWriteOps|VolumeTotalReadTime|VolumeTotalWriteTime|VolumeIdleTime\"\n                     \"|VolumeQueueLength|VolumeThroughputPercentage|VolumeConsumedReadWriteOps|BurstBalance\")\n    )\n    period: Optional[int] = Field(\n        default=60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        default=3600,\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want to get\"\n                     \" the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. 
Possible values: \"\n                     \"SampleCount, Average, Sum, Minimum, Maximum.\")\n    )\n    region: str = Field(\n        title=\"Region\",\n        description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_ebs_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\ndef aws_get_cloudwatch_ebs(\n    hdl: Session,\n    metric_name: EBSMetrics,\n    volumes: List[str],\n    region: str,\n    timeSince: int,\n    statistics: StatisticsType,\n    period: int = 60,\n) -> str:\n\n    \"\"\"aws_get_cloudwatch_ebs shows plotted AWS cloudwatch statistics for ebs.\n\n        :type metric_name: ApplicationELBMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type volumes: List[str]\n        :param volumes: List of EBS volumes\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you\n        want to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :rtype: Shows ploted statistics.\n    \"\"\"\n    metric_name = metric_name.value if metric_name else None\n    statistics = statistics.value if statistics else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n\n    name_space = \"AWS/EBS\"\n    dimensions = [{\"Name\": \"VolumeId\", \"Value\": v}\n                  for v in volumes]\n\n    # Gets metric statistics.\n    res = cloudwatchClient.get_metric_statistics(\n        Namespace=name_space,\n        MetricName=metric_name,\n        Dimensions=dimensions,\n        Period=period,\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        Statistics=[statistics],\n    )\n    data = {}\n    for datapoints in res[\"Datapoints\"]:\n        data[datapoints[\"Timestamp\"]] = datapoints[statistics]\n\n    # Sorts data.\n    data_keys = data.keys()\n    times_stamps = list(data_keys)\n    times_stamps.sort()\n    sorted_values = []\n    table_data = []\n    for value in times_stamps:\n        table_data.append([value, data[value]])\n        sorted_values.append(data[value])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(table_data, headers=head, tablefmt=\"grid\")\n    # Puts datapoints into the plot.\n    plt.plot_date(times_stamps, sorted_values, \"-o\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ec2/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS EC2 Metrics from Cloudwatch </h1>\r\n\r\n## Description\r\nThis Lego get AWS CloudWatch Metrics for EC2 instances. These could be CPU, Network, Disk based measurements.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_ec2(hdl: Session, metric_name: EC2Metrics, instance: str, region: str, timeSince: int,statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        instance: AWS EC2 instance ID.\r\n        metric_name: The name of the metric, with or without spaces.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego take seven inputs hdl, metric_name, instance, timeSince, statistics, period and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ec2/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ec2/aws_get_cloudwatch_ec2.json",
    "content": "{\r\n    \"action_title\": \"Get AWS EC2 Metrics from Cloudwatch\",\r\n    \"action_description\": \"Get AWS CloudWatch Metrics for EC2 instances. These could be CPU, Network, Disk based measurements\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_ec2\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ec2/aws_get_cloudwatch_ec2.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_k8s_enums import StatisticsType\nfrom unskript.enums.aws_cloudwatch_enums import EC2Metrics\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    instance: str = Field(\n        title=\"Instances\",\n        description=\"AWS EC2 instance ID. Eg. i-abcd\",\n    )\n    metric_name: EC2Metrics = Field(\n        title=\"Metric\",\n        description=(\"The name of the metric. Eg CPUUtilization|DiskReadOps|DiskWriteOps\"\n                     \"|DiskReadBytes|DiskWriteBytes|MetadataNoToken|NetworkIn|NetworkOut\"\n                     \"|NetworkPacketsIn|NetworkPacketsOut\")\n    )\n    period: Optional[int] = Field(\n        default=60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        default=3600,\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want to get \"\n                     \"the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. 
Possible values: SampleCount, Average, \"\n                     \"Sum, Minimum, Maximum.\")\n    )\n    region: str = Field(\n        title=\"Region\",\n        description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_ec2_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\n\ndef aws_get_cloudwatch_ec2(\n    hdl: Session,\n    instance: str,\n    metric_name: EC2Metrics,\n    region: str,\n    timeSince: int,\n    statistics: StatisticsType,\n    period: int = 60,\n) -> str:\n\n    \"\"\"aws_get_cloudwatch_ec2 shows plotted AWS cloudwatch statistics for ec2.\n\n        :type metric_name: ApplicationELBMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type instance: string\n        :param instance: AWS EC2 instance ID.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you \n        want to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :rtype: Shows ploted statistics.\n    \"\"\"\n    metric_name = metric_name.value if metric_name else None\n    statistics = statistics.value if statistics else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n\n    name_space = \"AWS/EC2\"\n    dimensions = [{\"Name\": \"InstanceId\", \"Value\": instance}]\n\n    # Gets metric statistics.\n    res = cloudwatchClient.get_metric_statistics(\n        Namespace=name_space,\n        MetricName=metric_name,\n        Dimensions=dimensions,\n        Period=period,\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        Statistics=[statistics],\n    )\n\n    data = {}\n    table_data = []\n    for datapoints in res[\"Datapoints\"]:\n        data[datapoints[\"Timestamp\"]] = datapoints[statistics]\n\n    # Sorts data.\n    data_keys = data.keys()\n    times_stamps = list(data_keys)\n    times_stamps.sort()\n    sorted_values = []\n    for value in times_stamps:\n        table_data.append([value, data[value]])\n        sorted_values.append(data[value])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(table_data, headers=head, tablefmt=\"grid\")\n    # Puts datapoints into the plot.\n    plt.plot_date(times_stamps, sorted_values, \"-o\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ec2_cpuutil/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS EC2 CPU Utilization Statistics from Cloudwatch </h1>\r\n\r\n## Description\r\nThis Lego get AWS EC2 CPU Utilization Statistics from Cloudwatch.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_ec2_cpuutil(hdl: Session, instance: str, region: str, timeSince: int, statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        instance: AWS EC2 instance ID.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego take six inputs hdl, instance, timeSince, statistics, period and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ec2_cpuutil/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ec2_cpuutil/aws_get_cloudwatch_ec2_cpuutil.json",
    "content": "{\r\n    \"action_title\": \"Get AWS EC2 CPU Utilization Statistics from Cloudwatch\",\r\n    \"action_description\": \"Get AWS CloudWatch Statistics for cpu utilization for EC2 instances\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_ec2_cpuutil\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_ec2_cpuutil/aws_get_cloudwatch_ec2_cpuutil.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom datetime import datetime, timedelta\nfrom typing import Optional\nimport matplotlib.pyplot as plt\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_k8s_enums import StatisticsType\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    instance: str = Field(\n        title=\"Instance\",\n        description=\"AWS EC2 instance ID. Eg. i-abcd\",\n    )\n    period: Optional[int] = Field(\n        default=60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        default=3600,\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want to get \"\n                     \"the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        default=StatisticsType.AVERAGE,\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. 
Possible values: SampleCount, Average, \"\n                     \"Sum, Minimum, Maximum.\")\n    )\n    region: str = Field(\n        title=\"Region\",\n        description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_ec2_cpuutil_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(output)\n\n\ndef aws_get_cloudwatch_ec2_cpuutil(\n    hdl: Session,\n    instance: str,\n    region: str,\n    timeSince: int = 3600,\n    statistics: StatisticsType = StatisticsType.AVERAGE,\n    period: int = 60,\n) -> str:\n    \"\"\"aws_get_cloudwatch_ec2_cpuutil shows plotted AWS cloudwatch statistics\n       for ec2 cpu utilization.\n\n        :type instance: string\n        :param instance: AWS EC2 instance ID.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you\n        want to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :rtype: Shows ploted statistics.\n    \"\"\"\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n\n    name_space = \"AWS/EC2\"\n    dimensions = [{\"Name\": \"InstanceId\", \"Value\": instance}]\n    metric_name = \"CPUUtilization\"\n\n    # Gets metric statistics.\n    res = cloudwatchClient.get_metric_statistics(\n        Namespace=name_space,\n        MetricName=metric_name,\n        Dimensions=dimensions,\n        Period=period,\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        Statistics=[statistics.value],\n    )\n\n    data = {}\n    table_data = []\n    for datapoints in res[\"Datapoints\"]:\n        data[datapoints[\"Timestamp\"]] = datapoints[statistics.value]\n\n    # Sorts data.\n    data_keys = data.keys()\n    times_stamps = list(data_keys)\n    times_stamps.sort()\n    sorted_values = []\n    for value in times_stamps:\n        table_data.append([value, data[value]])\n        sorted_values.append(data[value])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(table_data, headers=head, tablefmt=\"grid\")\n    # Puts datapoints into the plot.\n    plt.plot_date(times_stamps, sorted_values, \"-o\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_applicationelb/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Metrics for AWS/ApplicationELB</h1>\r\n\r\n## Description\r\nThis Lego gets AWS CloudWatch Metrics for AWS/ApplicationELB.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_metrics_applicationelb(hdl: Session, metric_name: ApplicationELBMetrics, dimensions: List[dict], region: str, timeSince: int, statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the metric, with or without spaces.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes seven inputs: hdl, metric_name, dimensions, timeSince, statistics, period and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_applicationelb/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_applicationelb/aws_get_cloudwatch_metrics_applicationelb.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Metrics for AWS/ApplicationELB\",\r\n    \"action_description\": \"Get AWS CloudWatch Metrics for AWS/ApplicationELB\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_metrics_applicationelb\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_applicationelb/aws_get_cloudwatch_metrics_applicationelb.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom tabulate import tabulate\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_cloudwatch_enums import ApplicationELBMetrics\nfrom unskript.enums.aws_k8s_enums import StatisticsType\n\n\nclass InputSchema(BaseModel):\n    metric_name: ApplicationELBMetrics = Field(\n        title=\"Metric Name\",\n        description=\"The name of the metric, with or without spaces.\",\n    )\n    dimensions: List[dict] = Field(\n        title=\"Dimensions\",\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\n    )\n    period: Optional[int] = Field(\n        60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want \"\n                     \"to get the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. 
Possible values: Average, Sum, \"\n                     \"Minimum, Maximum.\")\n    )\n    region: str = Field(\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_metrics_applicationelb_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\ndef aws_get_cloudwatch_metrics_applicationelb(\n    hdl: Session,\n    metric_name: ApplicationELBMetrics,\n    dimensions: List[dict],\n    timeSince: int,\n    statistics: StatisticsType,\n    region: str,\n    period: int = 60,\n) -> str:\n    \"\"\"aws_get_cloudwatch_metrics_applicationelb shows plotted AWS cloudwatch statistics\n       for Application ELB.\n\n        :type metric_name: ApplicationELBMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type dimensions: List[dict]\n        :param dimensions: A dimension is a name/value pair that is part of the \n        identity of a metric.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you want to \n        get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :rtype: Shows plotted statistics.\n    \"\"\"\n    metric_name = metric_name.value if metric_name else None\n    statistics = statistics.value if statistics else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n    # Gets metric data.\n    res = cloudwatchClient.get_metric_data(\n        MetricDataQueries=[\n            {\n                'Id': metric_name.lower(),\n                'MetricStat': {\n                    'Metric': {\n                        'Namespace': 'AWS/ApplicationELB',\n                        'MetricName': metric_name,\n                        'Dimensions': dimensions\n                    },\n                    'Period': period,\n                    'Stat': statistics,\n                },\n            },\n        ],\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        ScanBy='TimestampAscending'\n    )\n\n    # Sort the (timestamp, value) pairs by timestamp so each value stays\n    # paired with its own timestamp.\n    pairs = sorted(zip(res['MetricDataResults'][0]['Timestamps'],\n                       res['MetricDataResults'][0]['Values']))\n    timestamps = [pair[0] for pair in pairs]\n    values = [pair[1] for pair in pairs]\n\n    plt.plot_date(timestamps, values, \"-o\")\n\n    data = []\n    for dt, val in zip(\n        res['MetricDataResults'][0]['Timestamps'],\n        res['MetricDataResults'][0]['Values']\n        ):\n        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(data, headers=head, tablefmt=\"grid\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_classic_elb/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Metrics for AWS/ELB</h1>\r\n\r\n## Description\r\nThis Lego gets AWS CloudWatch Metrics for the Classic Load Balancer.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_metrics_classic_elb(hdl: Session, metric_name: ClassicELBMetrics, dimensions: List[dict], region: str, timeSince: int, statistics: StatisticsType, units: UnitsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the metric.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        units: Unit of measure.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes eight inputs: hdl, metric_name, dimensions, timeSince, statistics, period, units and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_classic_elb/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_classic_elb/aws_get_cloudwatch_metrics_classic_elb.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Metrics for AWS/ELB\",\r\n    \"action_description\": \"Get AWS CloudWatch Metrics for Classic Loadbalancer\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_metrics_classic_elb\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_classic_elb/aws_get_cloudwatch_metrics_classic_elb.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom tabulate import tabulate\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_k8s_enums import StatisticsType\nfrom unskript.enums.aws_cloudwatch_enums import UnitsType, ClassicELBMetrics\n\n\ndef aws_get_cloudwatch_metrics_classic_elb_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\nclass InputSchema(BaseModel):\n    metric_name: ClassicELBMetrics = Field(\n        title=\"Metric Name\",\n        description=\"The name of the metric, with or without spaces.\",\n    )\n    dimensions: List[dict] = Field(\n        title=\"Dimensions\",\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\n    )\n    period: Optional[int] = Field(\n        60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want to get \"\n                     \"the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. 
Possible values: Average, Sum, \"\n                     \"Minimum, Maximum.\")\n    )\n    units: Optional[UnitsType] = Field(\n        title=\"Units\",\n        description=\"Unit of measure.\",\n    )\n    region: str = Field(\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_metrics_classic_elb(\n    hdl: Session,\n    metric_name: ClassicELBMetrics,\n    dimensions: List[dict],\n    timeSince: int,\n    statistics: StatisticsType,\n    region: str,\n    units: UnitsType,\n    period: int = 60,\n) -> str:\n    \"\"\"aws_get_cloudwatch_metrics_ClassicELB shows plotted AWS cloudwatch\n       statistics for Classic ELB.\n\n        :type metric_name: ClassicELBMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type dimensions: List[dict]\n        :param dimensions: A dimension is a name/value pair that is part of\n        the identity of a metric.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you\n        want to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :type units: UnitsType\n        :param units: Unit of measure.\n\n        :rtype: Shows plotted statistics.\n    \"\"\"\n    metric_name = metric_name.value if metric_name else None\n    statistics = statistics.value if statistics else None\n    units = units.value if units else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n    # Gets metric data.\n    res = cloudwatchClient.get_metric_data(\n        MetricDataQueries=[\n            {\n                'Id': metric_name.lower(),\n                'MetricStat': {\n                    'Metric': {\n                        'Namespace': 'AWS/ELB',\n                        'MetricName': metric_name,\n                        'Dimensions': dimensions\n                    },\n                    'Period': period,\n                    'Stat': statistics,\n                    'Unit': units\n                },\n            },\n        ],\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        ScanBy='TimestampAscending')\n\n    # Sort the (timestamp, value) pairs by timestamp so each value stays\n    # paired with its own timestamp.\n    pairs = sorted(zip(res['MetricDataResults'][0]['Timestamps'],\n                       res['MetricDataResults'][0]['Values']))\n    timestamps = [pair[0] for pair in pairs]\n    values = [pair[1] for pair in pairs]\n\n    plt.plot_date(timestamps, values, \"-o\")\n\n    data = []\n    for dt, val in zip(\n        res['MetricDataResults'][0]['Timestamps'],\n        res['MetricDataResults'][0]['Values']\n        ):\n        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(data, headers=head, tablefmt=\"grid\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_dynamodb/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Metrics for AWS/DynamoDB</h1>\r\n\r\n## Description\r\nThis Lego gets the AWS CloudWatch Metrics for DynamoDB.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_metrics_dynamodb(hdl: Session, metric_name: DynamoDBMetrics, dimensions: List[dict], timeSince: int, statistics: StatisticsType, region: str, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the metric, with or without spaces.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics. Possible values: SampleCount, Average, Sum, Minimum, Maximum.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n\r\n## Lego Input\r\nThis Lego takes seven inputs: hdl, metric_name, dimensions, period, timeSince, statistics and region.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_dynamodb/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_dynamodb/aws_get_cloudwatch_metrics_dynamodb.json",
    "content": "{\r\n\"action_title\": \"Get AWS CloudWatch Metrics for AWS/DynamoDB\",\r\n\"action_description\": \"Get AWS CloudWatch Metrics for AWS DynamoDB\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_get_cloudwatch_metrics_dynamodb\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_DYNAMODB\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_dynamodb/aws_get_cloudwatch_metrics_dynamodb.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\n\r\nimport pprint\r\nfrom typing import Optional, List\r\nfrom datetime import datetime, timedelta\r\nfrom pydantic import BaseModel, Field\r\nimport matplotlib.pyplot as plt\r\nfrom tabulate import tabulate\r\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\r\nfrom unskript.enums.aws_cloudwatch_enums import DynamoDBMetrics\r\nfrom unskript.enums.aws_k8s_enums import StatisticsType\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    metric_name: DynamoDBMetrics = Field(\r\n        title=\"Metric Name\",\r\n        description=\"The name of the metric, with or without spaces.\",\r\n    )\r\n    dimensions: List[dict] = Field(\r\n        title=\"Dimensions\",\r\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\r\n    )\r\n    period: Optional[int] = Field(\r\n        60,\r\n        title=\"Period\",\r\n        description=\"The granularity, in seconds, of the returned data points.\",\r\n    )\r\n    timeSince: int = Field(\r\n        title=\"Time Since\",\r\n        description=(\"Starting from now, window (in seconds) for which you want \"\r\n                     \"to get the datapoints for.\")\r\n    )\r\n    statistics: StatisticsType = Field(\r\n        title=\"Statistics\",\r\n        description=(\"Cloudwatch metric statistics. 
Possible values: SampleCount, \"\r\n                     \"Average, Sum, Minimum, Maximum\")\r\n    )\r\n    region: str = Field(\r\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\r\n\r\n\r\ndef aws_get_cloudwatch_metrics_dynamodb_printer(output):\r\n    if output is None:\r\n        return\r\n    plt.show()\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_cloudwatch_metrics_dynamodb(\r\n    hdl: Session,\r\n    metric_name: DynamoDBMetrics,\r\n    dimensions: List[dict],\r\n    timeSince: int,\r\n    statistics: StatisticsType,\r\n    region: str,\r\n    period: int = 60,\r\n) -> str:\r\n    \"\"\"aws_get_cloudwatch_metrics_dynamodb shows plotted AWS cloudwatch statistics for Dynamodb.\r\n\r\n    :type metric_name: DynamoDBMetrics\r\n    :param metric_name: The name of the metric, with or without spaces.\r\n\r\n    :type dimensions: List[dict]\r\n    :param dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n\r\n    :type period: int\r\n    :param period: The granularity, in seconds, of the returned data points.\r\n\r\n    :type timeSince: int\r\n    :param timeSince: Starting from now, window (in seconds) for which you want to get\r\n    the datapoints for.\r\n\r\n    :type statistics: StatisticsType\r\n    :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount, Average,\r\n    Sum, Minimum, Maximum.\r\n\r\n    :type region: string\r\n    :param region: AWS Region of the cloudwatch.\r\n\r\n    :rtype: Shows plotted statistics.\r\n    \"\"\"\r\n    metric_name = metric_name.value if metric_name else None\r\n    statistics = statistics.value if statistics else None\r\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\r\n    # Gets metric data.\r\n    res = cloudwatchClient.get_metric_data(\r\n        MetricDataQueries=[\r\n            {\r\n                'Id': metric_name.lower(),\r\n                'MetricStat': {\r\n                    'Metric': {\r\n                        'Namespace': 'AWS/DynamoDB',\r\n                        'MetricName': metric_name,\r\n                        'Dimensions': dimensions\r\n                    },\r\n                    'Period': period,\r\n                    'Stat': statistics,\r\n                },\r\n            },\r\n        ],\r\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\r\n        EndTime=datetime.utcnow(),\r\n        ScanBy='TimestampAscending'\r\n    )\r\n\r\n    # Sort the (timestamp, value) pairs by timestamp so each value stays\r\n    # paired with its own timestamp.\r\n    pairs = sorted(zip(res['MetricDataResults'][0]['Timestamps'],\r\n                       res['MetricDataResults'][0]['Values']))\r\n    timestamps = [pair[0] for pair in pairs]\r\n    values = [pair[1] for pair in pairs]\r\n\r\n    plt.plot_date(timestamps, values, \"-o\")\r\n\r\n    data = []\r\n    for dt, val in zip(\r\n        res['MetricDataResults'][0]['Timestamps'],\r\n        res['MetricDataResults'][0]['Values']\r\n        ):\r\n        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\r\n    head = [\"Timestamp\", \"Value\"]\r\n    table = tabulate(data, headers=head, tablefmt=\"grid\")\r\n\r\n    return table\r\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Metrics for AWS/AutoScaling</h1>\r\n\r\n## Description\r\nThis Lego gets AWS CloudWatch Metrics for AWS EC2 AutoScaling groups.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_metrics_ec2autoscaling(hdl: Session, metric_name: EC2AutoscalingMetrics, dimensions: List[dict], region: str, timeSince: int, statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the metric, with or without spaces.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes seven inputs: hdl, metric_name, dimensions, timeSince, statistics, period and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/aws_get_cloudwatch_metrics_ec2autoscaling.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Metrics for AWS/AutoScaling\",\r\n    \"action_description\": \"Get AWS CloudWatch Metrics for AWS EC2 AutoScaling groups\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_metrics_ec2autoscaling\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/aws_get_cloudwatch_metrics_ec2autoscaling.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom tabulate import tabulate\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_cloudwatch_enums import EC2AutoscalingMetrics\nfrom unskript.enums.aws_k8s_enums import StatisticsType\n\n\nclass InputSchema(BaseModel):\n    metric_name: EC2AutoscalingMetrics = Field(\n        title=\"Metric Name\",\n        description=\"The name of the metric, with or without spaces.\",\n    )\n    dimensions: List[dict] = Field(\n        title=\"Dimensions\",\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\n    )\n    period: Optional[int] = Field(\n        60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want \"\n                     \"to get the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=\"Cloudwatch metric statistics\",\n    )\n    region: str = Field(\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_metrics_ec2autoscaling_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\ndef aws_get_cloudwatch_metrics_ec2autoscaling(\n    hdl: Session,\n    metric_name: EC2AutoscalingMetrics,\n    dimensions: List[dict],\n    timeSince: int,\n    statistics: StatisticsType,\n    region: str,\n    period: int = 60,\n) -> str:\n    \"\"\"aws_get_cloudwatch_metrics_ec2autoscaling shows plotted AWS cloudwatch\n       statistics for EC2 Autoscaling groups.\n\n        
:type metric_name: EC2AutoscalingMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type dimensions: List[dict]\n        :param dimensions: A dimension is a name/value pair that is part of the\n        identity of a metric.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you want\n        to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :rtype: Shows plotted statistics.\n    \"\"\"\n    metric_name = metric_name.value if metric_name else None\n    statistics = statistics.value if statistics else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n    # Gets metric data.\n    res = cloudwatchClient.get_metric_data(\n        MetricDataQueries=[\n            {\n                'Id': metric_name.lower(),\n                'MetricStat': {\n                    'Metric': {\n                        'Namespace': 'AWS/AutoScaling',\n                        'MetricName': metric_name,\n                        'Dimensions': dimensions\n                    },\n                    'Period': period,\n                    'Stat': statistics,\n                },\n            },\n        ],\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        ScanBy='TimestampAscending'\n    )\n\n    timestamps = []\n    values = []\n\n    for timestamp in res['MetricDataResults'][0]['Timestamps']:\n        timestamps.append(timestamp)\n    for value in res['MetricDataResults'][0]['Values']:\n        values.append(value)\n\n    timestamps.sort()\n    
# Re-pair values with the sorted timestamps instead of sorting the values\n    # independently of their timestamps.\n    values = [pair[1] for pair in sorted(zip(res['MetricDataResults'][0]['Timestamps'],\n                                             res['MetricDataResults'][0]['Values']))]\n\n    plt.plot_date(timestamps, values, \"-o\")\n\n    data = []\n    for dt, val in zip(\n        res['MetricDataResults'][0]['Timestamps'],\n        res['MetricDataResults'][0]['Values']\n        ):\n        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(data, headers=head, tablefmt=\"grid\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Metrics for AWS/GatewayELB</h1>\r\n\r\n## Description\r\nThis Lego gets AWS CloudWatch Metrics for AWS/GatewayELB.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_metrics_gatewayelb(hdl: Session, metric_name: GatewayELBMetrics, dimensions: List[dict], region: str, timeSince: int, statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the metric, with or without spaces.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes seven inputs: hdl, metric_name, dimensions, timeSince, statistics, period and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/aws_get_cloudwatch_metrics_gatewayelb.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Metrics for AWS/GatewayELB\",\r\n    \"action_description\": \"Get AWS CloudWatch Metrics for AWS/GatewayELB\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_metrics_gatewayelb\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/aws_get_cloudwatch_metrics_gatewayelb.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom tabulate import tabulate\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_cloudwatch_enums import GatewayELBMetrics\nfrom unskript.enums.aws_k8s_enums import StatisticsType\n\n\nclass InputSchema(BaseModel):\n    metric_name: GatewayELBMetrics = Field(\n        title=\"Metric Name\",\n        description=\"The name of the metric, with or without spaces.\",\n    )\n    dimensions: List[dict] = Field(\n        title=\"Dimensions\",\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\n    )\n    period: Optional[int] = Field(\n        60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want \"\n                     \"to get the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. 
Possible values: Average, \"\n                     \"Sum, Minimum, Maximum.\")\n    )\n    region: str = Field(\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_metrics_gatewayelb_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\ndef aws_get_cloudwatch_metrics_gatewayelb(\n    hdl: Session,\n    metric_name: GatewayELBMetrics,\n    dimensions: List[dict],\n    timeSince: int,\n    statistics: StatisticsType,\n    region: str,\n    period: int = 60,\n) -> str:\n    \"\"\"aws_get_cloudwatch_metrics_gatewayelb shows plotted AWS cloudwatch\n       statistics for Gateway ELB.\n\n        :type metric_name: GatewayELBMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type dimensions: List[dict]\n        :param dimensions: A dimension is a name/value pair that is part\n        of the identity of a metric.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you\n        want to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :rtype: Shows ploted statistics.\n    \"\"\"\n    metric_name = metric_name.value if metric_name else None\n    statistics = statistics.value if statistics else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n    # Gets metric data.\n    res = cloudwatchClient.get_metric_data(\n        MetricDataQueries=[\n            {\n                'Id': metric_name.lower(),\n                'MetricStat': {\n                    'Metric': {\n                        'Namespace': 'AWS/GatewayELB',\n                        'MetricName': metric_name,\n                        'Dimensions': dimensions\n                    },\n                    'Period': period,\n                    'Stat': statistics,\n                },\n            },\n        ],\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        ScanBy='TimestampAscending'\n    )\n\n    timestamps = []\n    values = []\n\n    for timestamp in res['MetricDataResults'][0]['Timestamps']:\n        timestamps.append(timestamp)\n    for value in res['MetricDataResults'][0]['Values']:\n        values.append(value)\n\n    timestamps.sort()\n    values.sort()\n\n    plt.plot_date(timestamps, values, \"-o\")\n\n    data = []\n    for dt, val in zip(\n        res['MetricDataResults'][0]['Timestamps'],\n        res['MetricDataResults'][0]['Values']\n        ):\n        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(data, headers=head, tablefmt=\"grid\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_lambda/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Metrics for AWS/Lambda </h1>\r\n\r\n## Description\r\nThis Lego get AWS CloudWatch Metrics for AWS/Lambda.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_metrics_lambda(hdl: Session, metric_name: LambdaMetrics, dimensions: List[dict], region: str, timeSince: int,statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the metric, with or without spaces.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego take seven inputs hdl, metric_name, dimensions, timeSince, statistics, period and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_lambda/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_lambda/aws_get_cloudwatch_metrics_lambda.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Metrics for AWS/Lambda\",\r\n    \"action_description\": \"Get AWS CloudWatch Metrics for AWS/Lambda\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_metrics_lambda\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_LAMBDA\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_lambda/aws_get_cloudwatch_metrics_lambda.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List, Dict\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom tabulate import tabulate\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_k8s_enums import StatisticsType\nfrom unskript.enums.aws_cloudwatch_enums import LambdaMetrics\n\n\nclass InputSchema(BaseModel):\n    metric_name: LambdaMetrics = Field(\n        title=\"Metric Name\",\n        description=\"The name of the metric, with or without spaces.\",\n    )\n    dimensions: List[dict] = Field(\n        title=\"Dimensions\",\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\n    )\n    period: Optional[int] = Field(\n        60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want \"\n                     \"to get the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. 
Possible values: SampleCount, \"\n                     \"Average, Sum, Minimum, Maximum.\")\n    )\n    region: str = Field(\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_metrics_lambda_printer(output) -> str:\n    if output is None:\n        return \"\"\n    if isinstance(output, Dict):\n        for key in output:\n            plt.plot_date(output[key][0], output[key][1], \"-o\")\n            pprint.pprint(output[key][2])\n            plt.show()\n    else:\n        plt.plot_date(output[0], output[1], \"-o\")\n        pprint.pprint(output[2])\n        plt.show()\n    return None\n\n\n\ndef aws_get_cloudwatch_metrics_lambda(\n    hdl: Session,\n    metric_name: LambdaMetrics,\n    dimensions: List[dict],\n    timeSince: int,\n    statistics: StatisticsType,\n    region: str,\n    period: int = 60,\n) -> List:\n    \"\"\"get_lambda_metrics shows plotted AWS cloudwatch statistics for Lambda.\n\n        :type metric_name: LambdaMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type dimensions: List[dict]\n        :param dimensions: A dimension is a name/value pair that is part of\n        the identity of a metric.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you want\n        to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :rtype: Shows plotted statistics.\n    \"\"\"\n    result = []\n    cloudwatch_client = hdl.client(\"cloudwatch\", region_name=region)\n    statistics = statistics.value if statistics else None\n    metric_name = metric_name.value if metric_name else None\n    # Gets metric data.\n    res = cloudwatch_client.get_metric_data(\n        MetricDataQueries=[\n            {\n                'Id': metric_name.lower(),\n                'MetricStat': {\n                    'Metric': {\n                        'Namespace': 'AWS/Lambda',\n                        'MetricName': metric_name,\n                        'Dimensions': dimensions\n                    },\n                    'Period': period,\n                    'Stat': statistics\n                },\n            },\n        ],\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        ScanBy='TimestampAscending'\n    )\n\n    timestamps = []\n    values = []\n\n    for timestamp in res['MetricDataResults'][0]['Timestamps']:\n        timestamps.append(timestamp)\n    for value in res['MetricDataResults'][0]['Values']:\n        values.append(value)\n\n    timestamps.sort()\n    values.sort()\n\n    data = []\n    for dt,val in zip(\n        res['MetricDataResults'][0]['Timestamps'],\n        res['MetricDataResults'][0]['Values']\n        ):\n        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(data, headers=head, tablefmt=\"grid\")\n    result.append(timestamps)\n    result.append(values)\n    result.append(table)\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_network_elb/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Metrics for AWS/NetworkELB </h1>\r\n\r\n## Description\r\nThis Lego get AWS CloudWatch Metrics for Network Loadbalancer.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_metrics_network_elb(hdl: Session, metric_name: NetworkELBMetrics, dimensions: List[dict], region: str, timeSince: int,statistics: StatisticsType, units: UnitsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the EBS metric.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        units: Unit of measure.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego take eight inputs hdl, metric_name, dimensions, timeSince, statistics, period, units and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_network_elb/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_network_elb/aws_get_cloudwatch_metrics_network_elb.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Metrics for AWS/NetworkELB\",\r\n    \"action_description\": \"Get AWS CloudWatch Metrics for Network Loadbalancer\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_metrics_network_elb\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_network_elb/aws_get_cloudwatch_metrics_network_elb.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom tabulate import tabulate\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_k8s_enums import StatisticsType\nfrom unskript.enums.aws_cloudwatch_enums import UnitsType, NetworkELBMetrics\n\n\nclass InputSchema(BaseModel):\n    metric_name: NetworkELBMetrics = Field(\n        title=\"Metric Name\",\n        description=\"The name of the metric, with or without spaces.\",\n    )\n    dimensions: List[dict] = Field(\n        title=\"Dimensions\",\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\n    )\n    period: Optional[int] = Field(\n        60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want \"\n                     \"to get the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. 
Possible values: Average, \"\n                     \"Sum, Minimum, Maximum, SampleCount.\")\n    )\n    units: Optional[UnitsType] = Field(\n        title=\"Units\",\n        description=\"Unit of measure\",\n    )\n    region: str = Field(\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_metrics_network_elb_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\ndef aws_get_cloudwatch_metrics_network_elb(\n    hdl: Session,\n    metric_name: NetworkELBMetrics,\n    dimensions: List[dict],\n    timeSince: int,\n    statistics: StatisticsType,\n    region: str,\n    units: UnitsType,\n    period: int = 60,\n) -> str:\n    \"\"\"aws_get_cloudwatch_metrics_NetworkELB shows plotted AWS cloudwatch statistics for NetworkELB.\n\n        :type metric_name: NetworkELBMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type dimensions: List[dict]\n        :param dimensions: A dimension is a name/value pair that is part of the\n        identity of a metric.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you want\n        to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :type units: UnitsType\n        :param units: Unit of measure.\n\n        :rtype: Shows ploted statistics.\n    \"\"\"\n    metric_name = metric_name.value if metric_name else None\n    statistics = statistics.value if statistics else None\n    units = units.value if units else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n    # Gets metric data.\n    res = cloudwatchClient.get_metric_data(\n        MetricDataQueries=[\n            {\n                'Id': metric_name.lower(),\n                'MetricStat': {\n                    'Metric': {\n                        'Namespace': 'AWS/NetworkELB',\n                        'MetricName': metric_name,\n                        'Dimensions': dimensions\n                    },\n                    'Period': period,\n                    'Stat': statistics,\n                    'Unit': units\n                },\n            },\n        ],\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        ScanBy='TimestampAscending'\n    )\n\n    timestamps = []\n    values = []\n\n    for i in res['MetricDataResults'][0]['Timestamps']:\n        dt = i\n        timestamps.append(dt)\n    for j in res['MetricDataResults'][0]['Values']:\n        values.append(j)\n\n    timestamps.sort()\n    values.sort()\n\n    plt.plot_date(timestamps, values, \"-o\")\n\n    data = []\n    for dt, val in zip(\n        res['MetricDataResults'][0]['Timestamps'],\n        res['MetricDataResults'][0]['Values']\n        ):\n        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(data, headers=head, tablefmt=\"grid\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_rds/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Metrics for AWS/RDS </h1>\r\n\r\n## Description\r\nThis Lego get AWS CloudWatch Metrics for AWS/RDS.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_metrics_rds(hdl: Session, metric_name: RDSMetrics, dimensions: List[dict], region: str, timeSince: int,statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the metric, with or without spaces.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego take seven inputs hdl, metric_name, dimensions, timeSince, statistics, period and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_rds/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_rds/aws_get_cloudwatch_metrics_rds.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Metrics for AWS/RDS\",\r\n    \"action_description\": \"Get AWS CloudWatch Metrics for AWS/RDS\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_metrics_rds\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_rds/aws_get_cloudwatch_metrics_rds.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom tabulate import tabulate\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_cloudwatch_enums import RDSMetrics\nfrom unskript.enums.aws_k8s_enums import StatisticsType\n\n\nclass InputSchema(BaseModel):\n    metric_name: RDSMetrics = Field(\n        title=\"Metric Name\",\n        description=\"The name of the metric, with or without spaces.\",\n    )\n    dimensions: List[dict] = Field(\n        title=\"Dimensions\",\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\n    )\n    period: Optional[int] = Field(\n        60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want \"\n                     \"to get the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. 
Possible values: Average, \"\n                     \"Sum, Minimum, Maximum.\")\n    )\n    region: str = Field(\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_metrics_rds_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\ndef aws_get_cloudwatch_metrics_rds(\n    hdl: Session,\n    metric_name: RDSMetrics,\n    dimensions: List[dict],\n    timeSince: int,\n    statistics: StatisticsType,\n    region: str,\n    period: int = 60,\n) -> str:\n    \"\"\"aws_get_cloudwatch_metrics_rds shows plotted AWS cloudwatch statistics for RDS.\n\n        :type metric_name: RDSMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type dimensions: List[dict]\n        :param dimensions: A dimension is a name/value pair that is part of\n        the identity of a metric.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you want\n        to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :rtype: Shows ploted statistics.\n    \"\"\"\n    metric_name = metric_name.value if metric_name else None\n    statistics = statistics.value if statistics else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n    # Gets metric data.\n    res = cloudwatchClient.get_metric_data(\n        MetricDataQueries=[\n            {\n                'Id': metric_name.lower(),\n                'MetricStat': {\n                    'Metric': {\n                        'Namespace': 'AWS/RDS',\n                        'MetricName': metric_name,\n                        'Dimensions': dimensions\n                    },\n                    'Period': period,\n                    'Stat': statistics,\n                },\n            },\n        ],\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        ScanBy='TimestampAscending'\n    )\n\n    timestamps = []\n    values = []\n\n    for timestamp in res['MetricDataResults'][0]['Timestamps']:\n        timestamps.append(timestamp)\n    for value in res['MetricDataResults'][0]['Values']:\n        values.append(value)\n\n    timestamps.sort()\n    values.sort()\n\n    plt.plot_date(timestamps, values, \"-o\")\n\n    data = []\n    for dt, val in zip(\n        res['MetricDataResults'][0]['Timestamps'],\n        res['MetricDataResults'][0]['Values']\n        ):\n        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(data, headers=head, tablefmt=\"grid\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_redshift/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Metrics for AWS/Redshift </h1>\r\n\r\n## Description\r\nThis Lego get AWS CloudWatch Metrics for AWS/Redshift.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_metrics_redshift(hdl: Session, metric_name: RedshiftMetrics, dimensions: List[dict], region: str, timeSince: int,statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the metric, with or without spaces.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego take seven inputs hdl, metric_name, dimensions, timeSince, statistics, period and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_redshift/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_redshift/aws_get_cloudwatch_metrics_redshift.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Metrics for AWS/Redshift\",\r\n    \"action_description\": \"Get AWS CloudWatch Metrics for AWS/Redshift\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_metrics_redshift\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_REDSHIFT\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_redshift/aws_get_cloudwatch_metrics_redshift.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom tabulate import tabulate\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_cloudwatch_enums import RedshiftMetrics\nfrom unskript.enums.aws_k8s_enums import StatisticsType\n\n\nclass InputSchema(BaseModel):\n    metric_name: RedshiftMetrics = Field(\n        title=\"Metric Name\",\n        description=\"The name of the metric, with or without spaces.\",\n    )\n    dimensions: List[dict] = Field(\n        title=\"Dimensions\",\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\n    )\n    period: Optional[int] = Field(\n        60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want \"\n                     \"to get the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. 
Possible values: Average, \"\n                     \"Sum, Minimum, Maximum.\")\n    )\n    region: str = Field(\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_metrics_redshift_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\ndef aws_get_cloudwatch_metrics_redshift(\n    hdl: Session,\n    metric_name: RedshiftMetrics,\n    dimensions: List[dict],\n    timeSince: int,\n    statistics: StatisticsType,\n    region: str,\n    period: int = 60,\n) -> str:\n    \"\"\"aws_get_cloudwatch_metrics_redshift shows plotted AWS cloudwatch statistics for Redshift.\n\n        :type metric_name: RedshiftMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type dimensions: List[dict]\n        :param dimensions: A dimension is a name/value pair that is part of the\n        identity of a metric.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you want to\n        get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. 
Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :rtype: Shows ploted statistics.\n    \"\"\"\n    metric_name = metric_name.value if metric_name else None\n    statistics = statistics.value if statistics else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n    # Gets metric data.\n    res = cloudwatchClient.get_metric_data(\n        MetricDataQueries=[\n            {\n                'Id': metric_name.lower(),\n                'MetricStat': {\n                    'Metric': {\n                        'Namespace': 'AWS/Redshift',\n                        'MetricName': metric_name,\n                        'Dimensions': dimensions\n                    },\n                    'Period': period,\n                    'Stat': statistics,\n                },\n            },\n        ],\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        ScanBy='TimestampAscending'\n    )\n\n    timestamps = []\n    values = []\n\n    for timestamp in res['MetricDataResults'][0]['Timestamps']:\n        timestamps.append(timestamp)\n    for value in res['MetricDataResults'][0]['Values']:\n        values.append(value)\n\n    timestamps.sort()\n    values.sort()\n\n    plt.plot_date(timestamps, values, \"-o\")\n\n    data = []\n    for dt, val in zip(\n        res['MetricDataResults'][0]['Timestamps'],\n        res['MetricDataResults'][0]['Values']\n        ):\n        data.append([dt.strftime('%Y-%m-%d::%H-%M'), val])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(data, headers=head, tablefmt=\"grid\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_sqs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Metrics for AWS/SQS </h1>\r\n\r\n## Description\r\nThis Lego get AWS CloudWatch Metrics for AWS/SQS.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_metrics_sqs(hdl: Session, metric_name: SQSMetrics, dimensions: List[dict], units: UnitsType, region: str, timeSince: int,statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        metric_name: The name of the metric, with or without spaces.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        units: Unit of measure.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego take eight inputs hdl, metric_name, dimensions, timeSince, statistics, period, units and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_sqs/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_sqs/aws_get_cloudwatch_metrics_sqs.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Metrics for AWS/SQS\",\r\n    \"action_description\": \"Get AWS CloudWatch Metrics for AWS/SQS\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_metrics_sqs\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_SQS\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_metrics_sqs/aws_get_cloudwatch_metrics_sqs.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom tabulate import tabulate\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_k8s_enums import StatisticsType\nfrom unskript.enums.aws_cloudwatch_enums import UnitsType, SQSMetrics\n\n\nclass InputSchema(BaseModel):\n    metric_name: SQSMetrics = Field(\n        title=\"Metric Name\",\n        description=\"The name of the metric, with or without spaces.\",\n    )\n    dimensions: List[dict] = Field(\n        title=\"Dimensions\",\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\n    )\n    period: Optional[int] = Field(\n        60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want \"\n                     \"to get the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. Possible values: SampleCount, Average, \"\n                     \"Sum, Minimum, Maximum.\")\n    )\n    units: UnitsType = Field(\n        title=\"Units\",\n        description=\"Unit of measure.\",\n    )\n    region: str = Field(\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_metrics_sqs_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\ndef aws_get_cloudwatch_metrics_sqs(\n    hdl: Session,\n    metric_name: SQSMetrics,\n    dimensions: List[dict],\n    timeSince: int,\n    statistics: StatisticsType,\n    region: str,\n    units: UnitsType,\n    period: int = 60,\n) -> str:\n    \"\"\"aws_get_cloudwatch_metrics_sqs shows plotted AWS cloudwatch statistics for SQS.\n\n        :type metric_name: SQSMetrics\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type dimensions: List[dict]\n        :param dimensions: A dimension is a name/value pair that is part of\n        the identity of a metric.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you want\n        to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :type units: UnitsType\n        :param units: Unit of measure.\n\n        :rtype: Shows plotted statistics.\n    \"\"\"\n    metric_name = metric_name.value if metric_name else None\n    statistics = statistics.value if statistics else None\n    units = units.value if units else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n    # Gets metric data.\n    res = cloudwatchClient.get_metric_data(\n        MetricDataQueries=[\n            {\n                'Id': metric_name.lower(),\n                'MetricStat': {\n                    'Metric': {\n                        'Namespace': 'AWS/SQS',\n                        'MetricName': metric_name,\n                        'Dimensions': dimensions\n                    },\n                    'Period': period,\n                    'Stat': statistics,\n                    'Unit': units\n                },\n            },\n        ],\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        ScanBy='TimestampAscending'\n    )\n\n    # Sort datapoints by timestamp, keeping each (timestamp, value) pair aligned.\n    pairs = sorted(zip(res['MetricDataResults'][0]['Timestamps'],\n                       res['MetricDataResults'][0]['Values']))\n\n    plt.plot_date([ts for ts, _ in pairs], [val for _, val in pairs], \"-o\")\n\n    data = [[ts.strftime('%Y-%m-%d::%H-%M'), val] for ts, val in pairs]\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(data, headers=head, tablefmt=\"grid\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_statistics/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS CloudWatch Statistics </h1>\r\n\r\n## Description\r\nThis Lego gets AWS CloudWatch Statistics.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_cloudwatch_statistics(hdl: Session, name_space: str, metric_name: str, dimensions: List[dict], region: str, timeSince: int, statistics: StatisticsType, period: int)\r\n\r\n        hdl: Object of type unSkript AWS Connector.\r\n        name_space: The namespace of the metric, with or without spaces. For eg: AWS/SQS, AWS/ECS\r\n        metric_name: The name of the metric, with or without spaces.\r\n        dimensions: A dimension is a name/value pair that is part of the identity of a metric.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        statistics: Cloudwatch metric statistics.\r\n        period: The granularity, in seconds, of the returned data points.\r\n        region: AWS Region of the cloudwatch.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes eight inputs: hdl, name_space, metric_name, dimensions, timeSince, statistics, period and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_statistics/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_statistics/aws_get_cloudwatch_statistics.json",
    "content": "{\r\n    \"action_title\": \"Get AWS CloudWatch Statistics\",\r\n    \"action_description\": \"Get AWS CloudWatch Statistics\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_cloudwatch_statistics\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_CLOUDWATCH\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_cloudwatch_statistics/aws_get_cloudwatch_statistics.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\nfrom unskript.legos.aws.aws_get_handle.aws_get_handle import Session\nfrom unskript.enums.aws_k8s_enums import StatisticsType\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    name_space: str = Field(\n        title=\"Namespace\",\n        description=\"The namespace of the metric, with or without spaces. For eg: AWS/SQS, AWS/ECS\",\n    )\n    metric_name: str = Field(\n        title=\"Metric Name\",\n        description=\"The name of the metric, with or without spaces.\",\n    )\n    dimensions: List[dict] = Field(\n        title=\"Dimensions\",\n        description=\"A dimension is a name/value pair that is part of the identity of a metric.\",\n    )\n    period: Optional[int] = Field(\n        60,\n        title=\"Period\",\n        description=\"The granularity, in seconds, of the returned data points.\",\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=(\"Starting from now, window (in seconds) for which you want \"\n                     \"to get the datapoints for.\")\n    )\n    statistics: StatisticsType = Field(\n        title=\"Statistics\",\n        description=(\"Cloudwatch metric statistics. Possible values: SampleCount, \"\n                     \"Average, Sum, Minimum, Maximum.\")\n    )\n    region: str = Field(\n        title=\"Region\", description=\"AWS Region of the cloudwatch.\")\n\n\ndef aws_get_cloudwatch_statistics_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\ndef aws_get_cloudwatch_statistics(\n    hdl: Session,\n    name_space: str,\n    metric_name: str,\n    dimensions: List[dict],\n    timeSince: int,\n    statistics: StatisticsType,\n    region: str,\n    period: int = 60,\n) -> str:\n    \"\"\"aws_get_cloudwatch_statistics shows plotted AWS cloudwatch statistics.\n\n        :type name_space: string\n        :param name_space: The namespace of the metric, with or without spaces.\n        For eg: AWS/SQS, AWS/ECS\n\n        :type metric_name: string\n        :param metric_name: The name of the metric, with or without spaces.\n\n        :type dimensions: List[dict]\n        :param dimensions: A dimension is a name/value pair that is part of the\n        identity of a metric.\n\n        :type period: int\n        :param period: The granularity, in seconds, of the returned data points.\n\n        :type timeSince: int\n        :param timeSince: Starting from now, window (in seconds) for which you want\n        to get the datapoints for.\n\n        :type statistics: StatisticsType\n        :param statistics: Cloudwatch metric statistics. Possible values: SampleCount,\n        Average, Sum, Minimum, Maximum.\n\n        :type region: string\n        :param region: AWS Region of the cloudwatch.\n\n        :rtype: Shows plotted statistics.\n    \"\"\"\n    statistics = statistics.value if statistics else None\n    cloudwatchClient = hdl.client(\"cloudwatch\", region_name=region)\n    # Gets metric statistics.\n    res = cloudwatchClient.get_metric_statistics(\n        Namespace=name_space,\n        MetricName=metric_name,\n        Dimensions=dimensions,\n        Period=period,\n        StartTime=datetime.utcnow() - timedelta(seconds=timeSince),\n        EndTime=datetime.utcnow(),\n        Statistics=[statistics],\n    )\n\n    data = {}\n    table_data = []\n    for datapoints in res[\"Datapoints\"]:\n        data[datapoints[\"Timestamp\"]] = datapoints[statistics]\n\n    # Sorts data by timestamp.\n    times_stamps = list(data.keys())\n    times_stamps.sort()\n    sorted_values = []\n    for value in times_stamps:\n        table_data.append([value, data[value]])\n        sorted_values.append(data[value])\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(table_data, headers=head, tablefmt=\"grid\")\n    # Puts datapoints into the plot.\n    plt.plot_date(times_stamps, sorted_values, \"-o\")\n\n    return table\n"
  },
  {
    "path": "AWS/legos/aws_get_cost_for_all_services/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Costs For All Services</h1>\n\n## Description\nGet Costs for all AWS services in a given time period\n\n## Lego Details\n\taws_get_cost_for_all_services(handle, region:str, number_of_months: int=1, start_date: str=\"\", end_date:str=\"\")\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tnumber_of_months: Optional, Number of months to fetch the daily costs for. Eg: 1 (This will fetch all the costs for the last 30 days)\n\t\tstart_date: Optional, Start date to get the daily costs from. Note: It should be given in YYYY-MM-DD format. Eg: 2023-03-11\n\t\tend_date: Optional, End date till which daily costs are to be fetched. Note: It should be given in YYYY-MM-DD format. Eg: 2023-04-11\n\t\tregion: AWS Region.\n\n\n## Lego Input\nThis Lego takes 5 inputs: handle, number_of_months, start_date, end_date, region\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cost_for_all_services/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cost_for_all_services/aws_get_cost_for_all_services.json",
    "content": "{\n  \"action_title\": \"AWS Get Costs For All Services\",\n  \"action_description\": \"Get Costs for all AWS services in a given time period.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_cost_for_all_services\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_COST_EXPLORER\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_cost_for_all_services/aws_get_cost_for_all_services.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport datetime\nfrom typing import List, Optional\nfrom pydantic import BaseModel, Field\nimport tabulate\nfrom dateutil.relativedelta import relativedelta\n\n\nclass InputSchema(BaseModel):\n    number_of_months: Optional[int] = Field(\n        1,\n        description=('Number of months to fetch the daily costs for. '\n                     'Eg: 1 (This will fetch all the costs for the last 30 days)'),\n        title='Number of Months',\n    )\n    start_date: Optional[str] = Field(\n        '',\n        description=('Start date to get the daily costs from. Note: '\n                     'It should be given in YYYY-MM-DD format. Eg: 2023-04-11'),\n        title='Start Date',\n    )\n    end_date: Optional[str] = Field(\n        '',\n        description=('End date till which daily costs are to be fetched. '\n                     'Note: It should be given in YYYY-MM-DD format. Eg: 2023-04-11'),\n        title='End Date',\n    )\n    region: str = Field(..., description='AWS region.', title='Region')\n\n\ndef aws_get_cost_for_all_services_printer(output):\n    if output is None:\n        return\n    rows = [x.values() for x in output]\n    print(tabulate.tabulate(rows, tablefmt=\"fancy_grid\", headers=['Date','Service','Cost']))\n\ndef aws_get_cost_for_all_services(\n        handle, region:str,\n        number_of_months: int=1,\n        start_date: str=\"\",\n        end_date:str=\"\"\n        ) -> List:\n    \"\"\"aws_get_cost_for_all_services returns cost for all services\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type number_of_months: int\n        :param number_of_months: Optional, Number of months to fetch the daily costs for.\n        Eg: 1 (This will fetch all the costs for the last 30 days)\n\n        :type start_date: string\n        :param start_date: Optional, Start date to get the daily costs from.\n        Note: It should be given in YYYY-MM-DD format. Eg: 2023-03-11\n\n        :type end_date: string\n        :param end_date: Optional, End date till which daily costs are to be fetched.\n        Note: It should be given in YYYY-MM-DD format. Eg: 2023-04-11\n\n        :type region: string\n        :param region: AWS Region.\n\n        :rtype: List of dicts of all costs per AWS service in a given time period\n    \"\"\"\n    if number_of_months:\n        no_of_months = int(number_of_months)\n        end = datetime.date.today().strftime('%Y-%m-%d')\n        start = (datetime.date.today() + relativedelta(months=-no_of_months)).strftime('%Y-%m-%d')\n    elif not start_date and not end_date and not number_of_months:\n        no_of_months = 1\n        end = datetime.date.today().strftime('%Y-%m-%d')\n        start = (datetime.date.today() + relativedelta(months=-no_of_months)).strftime('%Y-%m-%d')\n    else:\n        start = start_date\n        end = end_date\n    total_cost = 0\n    result = []\n    CEclient = handle.client('ce', region_name=region)\n    try:\n        response = CEclient.get_cost_and_usage(\n        TimePeriod = {\n            'Start': start,\n            'End': end\n        },\n        Granularity='DAILY',\n        Metrics = [\n            'UnblendedCost',\n                ],\n        GroupBy=[\n            {\n                'Type': 'DIMENSION',\n                'Key': 'SERVICE'\n            },\n        ],\n        )\n    except Exception as e:\n        raise e\n    for daily_cost in response['ResultsByTime']:\n        date = daily_cost['TimePeriod']['Start']\n        for group in daily_cost['Groups']:\n            cost_est = {}\n            cost_est[\"date\"] = date\n            service_name = group['Keys'][0]\n            service_cost = group['Metrics']['UnblendedCost']['Amount']\n            cost_est[\"service_name\"] = service_name\n            cost_est[\"service_cost\"] = service_cost\n            total_cost += float(service_cost)\n            result.append(cost_est)\n    print(f\"Total Cost: {total_cost}\")\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_get_cost_for_data_transfer/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Costs For Data Transfer</h1>\n\n## Description\nGet daily cost for Data Transfer in AWS\n\n## Lego Details\n\taws_get_cost_for_data_transfer(handle, region:str, number_of_months: int=None, start_date: str=\"\", end_date:str=\"\")\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tnumber_of_months: Optional, Number of months to fetch the daily costs for. Eg: 1 (This will fetch all the costs for the last 30 days)\n\t\tstart_date: Optional, Start date to get the daily costs from. Note: It should be given in YYYY-MM-DD format. Eg: 2023-03-11\n\t\tend_date: Optional, End date till which daily costs are to be fetched. Note: It should be given in YYYY-MM-DD format. Eg: 2023-04-11\n\t\tregion: AWS Region.\n\n\n## Lego Input\nThis Lego takes 5 inputs: handle, number_of_months, start_date, end_date, region\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_cost_for_data_transfer/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_cost_for_data_transfer/aws_get_cost_for_data_transfer.json",
    "content": "{\n  \"action_title\": \"AWS Get Costs For Data Transfer\",\n  \"action_description\": \"Get daily cost for Data Transfer in AWS\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_cost_for_data_transfer\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_COST_EXPLORER\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_cost_for_data_transfer/aws_get_cost_for_data_transfer.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport datetime\nfrom typing import List, Optional\nfrom pydantic import BaseModel, Field\nimport tabulate\nfrom dateutil.relativedelta import relativedelta\n\n\nclass InputSchema(BaseModel):\n    number_of_months: Optional[int] = Field(\n        None,\n        description=('Number of months to fetch the daily costs for. '\n                     'Eg: 1 (This will fetch all the costs for the last 30 days)'),\n        title='Number of Months',\n    )\n    start_date: Optional[str] = Field(\n        '',\n        description=('Start date to get the daily costs from. Note: '\n                     'It should be given in YYYY-MM-DD format. Eg: 2023-04-11'),\n        title='Start Date',\n    )\n    end_date: Optional[str] = Field(\n        '',\n        description=('End date till which daily costs are to be fetched. Note: '\n                     'It should be given in YYYY-MM-DD format. Eg: 2023-04-11'),\n        title='End Date',\n    )\n    region: str = Field(..., description='AWS region.', title='Region')\n\n\ndef aws_get_cost_for_data_transfer_printer(output):\n    if output is None:\n        return\n    rows = [x.values() for x in output]\n    print(tabulate.tabulate(\n        rows, tablefmt=\"fancy_grid\",\n        headers=['Date','Usage Type','Total Usage Qty','Total Usage Cost']\n        ))\n\ndef aws_get_cost_for_data_transfer(\n        handle,\n        region:str,\n        number_of_months: Optional[int] = None,\n        start_date: str=\"\",\n        end_date:str=\"\"\n        ) -> List:\n    \"\"\"aws_get_cost_for_data_transfer returns daily cost spendings on data transfer\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type number_of_months: int\n        :param number_of_months: Optional, Number of months to fetch the daily costs for.\n        Eg: 1 (This will fetch all the costs for the last 30 days)\n\n        :type start_date: string\n        :param start_date: Optional, Start date to get the daily costs from. Note:\n        It should be given in YYYY-MM-DD format. Eg: 2023-03-11\n\n        :type end_date: string\n        :param end_date: Optional, End date till which daily costs are to be fetched.\n        Note: It should be given in YYYY-MM-DD format. Eg: 2023-04-11\n\n        :type region: string\n        :param region: AWS Region.\n\n        :rtype: List of dicts with data transfer costs\n    \"\"\"\n    if number_of_months:\n        no_of_months = int(number_of_months)\n        end = datetime.date.today().strftime('%Y-%m-%d')\n        start = (datetime.date.today() + relativedelta(months=-no_of_months)).strftime('%Y-%m-%d')\n    elif not start_date and not end_date and not number_of_months:\n        no_of_months = 1\n        end = datetime.date.today().strftime('%Y-%m-%d')\n        start = (datetime.date.today() + relativedelta(months=-no_of_months)).strftime('%Y-%m-%d')\n    else:\n        start = start_date\n        end = end_date\n    result = []\n    CEclient = handle.client('ce', region_name=region)\n    try:\n        response = CEclient.get_cost_and_usage(\n        TimePeriod={\n            'Start': start,\n            'End': end\n        },\n        Granularity='DAILY',\n        Metrics=[\n            'UsageQuantity',\n            'BlendedCost',\n        ],\n        GroupBy=[\n            {\n                'Type': 'DIMENSION',\n                'Key': 'USAGE_TYPE'\n            },\n        ],\n        Filter={\n            'Dimensions': {\n                'Key': 'USAGE_TYPE',\n                'Values': [\n                    'DataTransfer-Out-Bytes',\n                    'DataTransfer-In-Bytes',\n                ],\n            },\n        },\n        )\n    except Exception as e:\n        raise e\n    for daily_cost in response['ResultsByTime']:\n        date = daily_cost['TimePeriod']['Start']\n        total_cost = 0\n        total_usage = 0\n        for group in daily_cost['Groups']:\n            cost_est = {}\n            usage_type = group['Keys'][0]\n            usage_quantity = float(group['Metrics']['UsageQuantity']['Amount']) / (1024 ** 4)\n            usage_cost = group['Metrics']['BlendedCost']['Amount']\n            total_usage += usage_quantity\n            total_cost += float(usage_cost)\n            cost_est[\"date\"] = date\n            cost_est[\"usage_type\"] = usage_type\n            cost_est[\"total_usage\"] = round(total_usage,3)\n            cost_est[\"total_cost\"] = total_cost\n            result.append(cost_est)\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_get_daily_total_spend/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Daily Total Spend</h1>\n\n## Description\nAWS get daily total spend from Cost Explorer\n\n## Lego Details\n\taws_get_daily_total_spend(handle, region:str, number_of_months: int=None, start_date: str=\"\", end_date:str=\"\")\n\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tnumber_of_months: Optional, Number of months to fetch the daily costs for. Eg: 1 (This will fetch all the costs for the last 30 days)\n\t\tstart_date: Optional, Start date to get the daily costs from. Note: It should be given in YYYY-MM-DD format. Eg: 2023-03-11\n\t\tend_date: Optional, End date till which daily costs are to be fetched. Note: It should be given in YYYY-MM-DD format. Eg: 2023-04-11\n\t\tregion: AWS Region.\n\n\n## Lego Input\nThis Lego takes 5 inputs: handle, number_of_months, start_date, end_date, region\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_daily_total_spend/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_daily_total_spend/aws_get_daily_total_spend.json",
    "content": "{\n  \"action_title\": \"AWS Get Daily Total Spend\",\n  \"action_description\": \"AWS get daily total spend from Cost Explorer\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_daily_total_spend\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_COST_EXPLORER\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_daily_total_spend/aws_get_daily_total_spend.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport datetime\nfrom typing import List, Optional\nfrom pydantic import BaseModel, Field\nimport tabulate\nfrom dateutil.relativedelta import relativedelta\n\n\nclass InputSchema(BaseModel):\n    number_of_months: Optional[int] = Field(\n        None,\n        description=('Number of months to fetch the daily costs for. '\n                     'Eg: 1 (This will fetch all the costs for the last 30 days)'),\n        title='Number of months',\n    )\n    start_date: Optional[str] = Field(\n        '',\n        description=('Start date to get the daily costs from. Note: '\n                     'It should be given in YYYY-MM-DD format. Eg: 2023-03-11'),\n        title='Start Date',\n    )\n    end_date: Optional[str] = Field(\n        '',\n        description=('End date till which daily costs are to be fetched. '\n                     'Note: It should be given in YYYY-MM-DD format. Eg: 2023-04-11'),\n        title='End Date',\n    )\n    region: str = Field(..., description='AWS region.', title='Region')\n\n\ndef aws_get_daily_total_spend_printer(output):\n    if output is None:\n        return\n    rows = [x.values() for x in output]\n    print(tabulate.tabulate(rows, tablefmt=\"fancy_grid\", headers=['Date', 'Cost']))\n\ndef aws_get_daily_total_spend(\n        handle, region:str,\n        number_of_months: Optional[int] = None,\n        start_date: str=\"\",\n        end_date:str=\"\"\n        ) -> List:\n    \"\"\"aws_get_daily_total_spend returns daily cost spendings\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type number_of_months: int\n        :param number_of_months: Optional, Number of months to fetch the daily costs for.\n        Eg: 1 (This will fetch all the costs for the last 30 days)\n\n        :type start_date: string\n        :param start_date: Optional, Start date to get the daily costs from.\n        Note: It should be given in YYYY-MM-DD format. Eg: 2023-03-11\n\n        :type end_date: string\n        :param end_date: Optional, End date till which daily costs are to be fetched.\n        Note: It should be given in YYYY-MM-DD format. Eg: 2023-04-11\n\n        :type region: string\n        :param region: AWS Region.\n\n        :rtype: List of dicts with costs on the respective dates\n    \"\"\"\n    if number_of_months:\n        no_of_months = int(number_of_months)\n        end = datetime.date.today().strftime('%Y-%m-%d')\n        start = (datetime.date.today() + relativedelta(months=-no_of_months)).strftime('%Y-%m-%d')\n    elif not start_date and not end_date and not number_of_months:\n        no_of_months = 1\n        end = datetime.date.today().strftime('%Y-%m-%d')\n        start = (datetime.date.today() + relativedelta(months=-no_of_months)).strftime('%Y-%m-%d')\n    else:\n        start = start_date\n        end = end_date\n    result = []\n    client = handle.client('ce', region_name=region)\n    try:\n        response = client.get_cost_and_usage(\n            TimePeriod={\n                'Start': start,\n                'End': end\n            },\n            Granularity='DAILY',\n            Metrics=[\n                'BlendedCost',\n            ]\n        )\n    except Exception as e:\n        raise e\n    for daily_cost in response['ResultsByTime']:\n        daily_cost_est = {}\n        date = daily_cost['TimePeriod']['Start']\n        cost = daily_cost['Total']['BlendedCost']['Amount']\n        daily_cost_est[\"date\"] = date\n        daily_cost_est[\"cost\"] = cost\n        result.append(daily_cost_est)\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_get_ebs_volume_for_low_usage/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get EBS Volumes for Low Usage </h1>\r\n\r\n## Description\r\nThis Lego lists low-usage EBS volumes from AWS that used less than 10% of capacity within the given threshold days.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_ebs_volume_for_low_usage(handle, region: str = \"\", threshold_days: int = 10, threshold_usage_percent: int = 10)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n        threshold_days: (in days) The threshold to check the EBS volume usage within the given days.\r\n        threshold_usage_percent: (in percent) The usage threshold below which an EBS volume\r\n        is considered low usage.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, threshold_days, threshold_usage_percent and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_ebs_volume_for_low_usage/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ebs_volume_for_low_usage/aws_get_ebs_volume_for_low_usage.json",
    "content": "{\r\n    \"action_title\": \"AWS Get EBS Volumes for Low Usage\",\r\n    \"action_description\": \"This action lists low-usage volumes from AWS that used <10% of capacity within the given threshold days.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_ebs_volume_for_low_usage\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_OBJECT\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\"],\r\n    \"action_next_hop\": [\"c9e1563d58cd6e3778a6c3fb11643498e3cdf3965a18c09214423998d62847b8\"],\r\n    \"action_next_hop_parameter_mapping\": {\"c9e1563d58cd6e3778a6c3fb11643498e3cdf3965a18c09214423998d62847b8\": {\"name\": \"Delete EBS Volume With Low Usage\", \"region\": \".[0].region\", \"volume_ids\":\"map(.volume_id)\"}}\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_get_ebs_volume_for_low_usage/aws_get_ebs_volume_for_low_usage.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom datetime import datetime, timedelta\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region.')\r\n    threshold_days: Optional[int] = Field(\r\n        default=10,\r\n        title='Threshold (In days)',\r\n        description='(in days) The threshold to check the EBS volume usage within given days.')\r\n    threshold_usage_percent: Optional[int] = Field(\r\n        default=10,\r\n        title='Minium usage percent',\r\n        description='This is the threshold usage percent, below which it will be considered a low usage.')\r\n\r\ndef aws_get_ebs_volume_for_low_usage_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_ebs_volume_for_low_usage(handle, region: str = \"\", threshold_days: int = 10, threshold_usage_percent: int = 10) -> Tuple:\r\n    \"\"\"aws_get_ebs_volume_for_low_usage Returns an array of ebs volumes.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :type threshold_days: int\r\n        :param threshold_days: (in days) The threshold to check the EBS volume usage within given days.\r\n\r\n        :type threshold_usage_percent: int\r\n        :param usage_percent: (in percent) The threshold to compaire the EBS volume usage\r\n        less than the threshold.\r\n\r\n        :rtype: Tuple with status result and list of EBS Volume.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in 
all_regions:\r\n        try:\r\n            # Filtering the volume by region\r\n            ec2Client = handle.client('ec2', region_name=reg)\r\n            response = aws_get_paginator(ec2Client, \"describe_volumes\", \"Volumes\")\r\n            now = datetime.utcnow()\r\n            days_ago = now - timedelta(days=threshold_days)\r\n            # Check each volume's read/write activity over the threshold window\r\n            for volume in response:\r\n                ebs_volume = {}\r\n                volume_id = volume[\"VolumeId\"]\r\n                volume_size = volume['Size']\r\n                cloudwatch = handle.client('cloudwatch', region_name=reg)\r\n                read_metric_data = cloudwatch.get_metric_statistics(\r\n                                    Namespace='AWS/EBS',\r\n                                    MetricName='VolumeReadBytes',\r\n                                    Dimensions=[\r\n                                        {\r\n                                            'Name': 'VolumeId',\r\n                                            'Value': volume_id\r\n                                        }\r\n                                    ],\r\n                                    StartTime=days_ago,\r\n                                    EndTime=now,\r\n                                    Period=86400,\r\n                                    Statistics=['Sum']\r\n                                )\r\n                write_metric_data = cloudwatch.get_metric_statistics(\r\n                                    Namespace='AWS/EBS',\r\n                                    MetricName='VolumeWriteBytes',\r\n                                    Dimensions=[\r\n                                        {'Name': 'VolumeId', 'Value': volume_id},\r\n                                    ],\r\n                                    StartTime=days_ago,\r\n                                    EndTime=now,\r\n                                    Period=86400,\r\n                                    Statistics=['Sum']\r\n                                )\r\n                if not read_metric_data['Datapoints'] and not write_metric_data['Datapoints']:\r\n                    continue\r\n                volume_read_bytes = read_metric_data['Datapoints'][0]['Sum'] if read_metric_data['Datapoints'] else 0\r\n                volume_write_bytes = write_metric_data['Datapoints'][0]['Sum'] if write_metric_data['Datapoints'] else 0\r\n                volume_usage_bytes = volume_read_bytes + volume_write_bytes\r\n                volume_usage_percent = volume_usage_bytes / (volume_size * 1024 * 1024 * 1024) * 100\r\n                if volume_usage_percent < threshold_usage_percent:\r\n                    ebs_volume[\"volume_id\"] = volume_id\r\n                    ebs_volume[\"region\"] = reg\r\n                    result.append(ebs_volume)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_ebs_volumes_by_type/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get EBS Volumes By Type</h1>\r\n\r\n## Description\r\nThis Lego filter AWS EBS volumes by their type.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_ebs_volumes_by_type(handle, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_ebs_volumes_by_type/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ebs_volumes_by_type/aws_get_ebs_volumes_by_type.json",
    "content": "{\r\n    \"action_title\": \"Get EBS Volumes By Type\",\r\n    \"action_description\": \"Get EBS Volumes By Type\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_ebs_volumes_by_type\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_AWS_EBC\", \"CATEGORY_TYPE_COST_OPT\" ]\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_get_ebs_volumes_by_type/aws_get_ebs_volumes_by_type.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_get_ebs_volumes_by_type_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_ebs_volumes_by_type(handle, region: str) -> Dict:\r\n    \"\"\"aws_get_ebs_volumes_by_type Returns an dict of ebs volumes with there types.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: Dict of ebs volumes with there types.\r\n    \"\"\"\r\n    result = {}\r\n    try:\r\n        ec2Client = handle.resource('ec2', region_name=region)\r\n        volumes = ec2Client.volumes.all()\r\n        # collecting the volumes by there types\r\n        for volume in volumes:\r\n            volume_id = volume.id\r\n            volume_type = volume.volume_type\r\n            if volume_type in result:\r\n                result[volume_type].append(volume_id)\r\n            else:\r\n                result[volume_type] = [volume_id]\r\n\r\n    except Exception as e:\r\n        raise Exception(e) from e\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_ebs_volumes_without_gp3_type/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS EBS Volume Without GP3 Type </h1>\r\n\r\n## Description\r\nThis Lego is used to get the EBS volume, which doesn't use the General Purpose SSD (gp3) volume type.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_ebs_volumes_without_gp3_type(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_ebs_volumes_without_gp3_type/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ebs_volumes_without_gp3_type/aws_get_ebs_volumes_without_gp3_type.json",
    "content": "{\r\n    \"action_title\": \"Get AWS EBS Volume Without GP3 Type\",\r\n    \"action_description\": \"AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_ebs_volumes_without_gp3_type\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\"],\r\n    \"action_next_hop\": [\"2475714639442a9adcd0a87f7d193f6e8a6bbb9537d1eb6b03a6befb8ef84b19\"],\r\n    \"action_next_hop_parameter_mapping\": {\"2475714639442a9adcd0a87f7d193f6e8a6bbb9537d1eb6b03a6befb8ef84b19\": {\"name\": \"Change AWS EBS Volume To GP3 Type\", \"region\":\".[0].region\",\"ebs_volume_ids\":\"map(.volume_id)\"}}\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_ebs_volumes_without_gp3_type/aws_get_ebs_volumes_without_gp3_type.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_get_ebs_volumes_without_gp3_type_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_ebs_volumes_without_gp3_type(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_get_ebs_volumes_without_gp3_type Returns an array of ebs volumes.\r\n\r\n        :type region: string\r\n        :param region: Used to filter the volume for specific region.\r\n\r\n        :rtype: Tuple with status result and list of EBS Volume without GP3 type.\r\n    \"\"\"\r\n    result=[]\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            # Filtering the volume by region\r\n            ec2Client = handle.resource('ec2', region_name=reg)\r\n            volumes = ec2Client.volumes.all()\r\n\r\n            # collecting the volumes which has zero attachments\r\n            for volume in volumes:\r\n                volume_dict = {}\r\n                if volume.volume_type != \"gp3\":\r\n                    volume_dict[\"region\"] = reg\r\n                    volume_dict[\"volume_id\"] = volume.id\r\n                    volume_dict[\"volume_type\"] = volume.volume_type\r\n                    result.append(volume_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_ec2_cpu_consumption/README.md",
    "content": "\r\n[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>get average CPU utilization over last 24 hours for EC2 instances </h1>\r\n\r\n## Description\r\nGiven a region, this will query all instances, and give you an average CPU utilization over 24 hours.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_ec2_cpu_consumption(handle, region: str) \r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Location of the EC2 instances.\r\n\r\n## Lego Input\r\nThis Lego take two inputs: handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.jpg\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Sandbox](https://us.app.unskript.io)\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_get_ec2_cpu_consumption/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ec2_cpu_consumption/aws_get_ec2_cpu_consumption.json",
    "content": "{\r\n    \"action_title\": \"Get EC2 CPU Consumption For All Instances\",\r\n    \"action_description\": \"Get EC2 CPU Consumption For All Instances\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_ec2_cpu_consumption\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_ec2_cpu_consumption/aws_get_ec2_cpu_consumption.py",
    "content": "##  Copyright (c) 2022 unSkript, Inc\r\n##  All rights reserved.\r\n## written by Doug Sillars with the aid of ChatGPT\r\n##read the blog https://unskript.com/will-ai-replace-us-using-chatgpt-to-create-python-actions-for-unskript/\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom datetime import datetime, timedelta\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom beartype import beartype\r\n\r\n@beartype\r\ndef aws_get_ec2_cpu_consumption_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the ECS service')\r\n\r\n@beartype\r\ndef aws_get_ec2_cpu_consumption(handle, region: str) -> Dict:\r\n\r\n\r\n    ec2Client = handle.client('ec2', region_name=region)\r\n    cw= handle.client('cloudwatch', region_name=region)\r\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\r\n\r\n    # Get the current time and the time 24 hours ago\r\n    now = datetime.now()\r\n    yesterday = now - timedelta(hours=24)\r\n\r\n    # Set the start and end times for the data to retrieve\r\n    start_time = yesterday.strftime('%Y-%m-%dT%H:%M:%SZ')\r\n    end_time = now.strftime('%Y-%m-%dT%H:%M:%SZ')\r\n    results={}\r\n    # Iterate through the list of instances\r\n    for reservation in res:\r\n        for instance in reservation['Instances']:\r\n                # Get the instance ID and launch time\r\n                instance_id = instance['InstanceId']\r\n                # Get the average CPU usage for the last 24 hours\r\n                response = cw.get_metric_statistics(\r\n                    Namespace='AWS/EC2',\r\n                    MetricName='CPUUtilization',\r\n                    Dimensions=[\r\n                        {\r\n                            'Name': 'InstanceId',\r\n                            
'Value': instance_id\r\n                        },\r\n                    ],\r\n                    StartTime=start_time,\r\n                    EndTime=end_time,\r\n                    Period=3600,\r\n                    Statistics=['Average']\r\n                )\r\n\r\n                # Calculate the average CPU usage for the past 24 hours\r\n                #error check for the presence of CPU  usage data\r\n                if len(response['Datapoints'])>0:               \r\n                    cpu_utilization_values = [datapoint['Average'] for\r\n                                              datapoint in response['Datapoints']]\r\n                    avg_cpu_utilization = sum(cpu_utilization_values) / len(cpu_utilization_values)\r\n                    results[instance_id] = avg_cpu_utilization\r\n                else:\r\n                    results[instance_id] = \"error\"\r\n    return results\r\n"
  },
  {
    "path": "AWS/legos/aws_get_ec2_data_traffic/README.md",
    "content": "\r\n[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>get Network traffic In and Out For Last Hour: All EC2 Instances </h1>\r\n\r\n## Description\r\nGiven a region, this will query all instances, and give you MB in and out for the last hour.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_ec2_data_traffic(handle, region: str) \r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Location of the EC2 instances.\r\n\r\n## Lego Input\r\nThis Lego take two inputs: handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.jpg\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Sandbox](https://us.app.unskript.io)\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_get_ec2_data_traffic/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ec2_data_traffic/aws_get_ec2_data_traffic.json",
    "content": "{\r\n    \"action_title\": \"Get EC2 Data Traffic In and Out For All Instances\",\r\n    \"action_description\": \"Get EC2 Data Traffic In and Out For All Instances\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_ec2_data_traffic\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_ec2_data_traffic/aws_get_ec2_data_traffic.py",
    "content": "##  Copyright (c) 2022 unSkript, Inc\r\n##  All rights reserved.\r\n## written by Doug Sillars with the aid of ChatGPT\r\n##read the blog https://unskript.com/will-ai-replace-us-using-chatgpt-to-create-python-actions-for-unskript/\r\n##\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom datetime import datetime, timedelta\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom beartype import beartype\r\n\r\n@beartype\r\ndef aws_get_ec2_data_traffic_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the ECS service')\r\n\r\n\r\n\r\n@beartype\r\ndef aws_get_ec2_data_traffic(handle, region: str) -> Dict:\r\n\r\n    ec2Client = handle.client('ec2', region_name=region)\r\n    cloudwatch= handle.client('cloudwatch', region_name=region)\r\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\r\n\r\n   # Set the desired time range for the data traffic metrics\r\n    time_range = {\r\n        'StartTime': datetime.utcnow() - timedelta(hours=1),\r\n        'EndTime': datetime.utcnow()\r\n    }\r\n    result={}\r\n        # Iterate through the list of instances\r\n    for reservation in res:\r\n        for instance in reservation['Instances']:\r\n                # Get the instance ID and launch time\r\n                instance_id = instance['InstanceId']\r\n                # Set the desired dimensions for the data traffic metrics\r\n                dimensions = [\r\n                    {\r\n                        'Name': 'InstanceId',\r\n                        'Value': instance_id\r\n                    }\r\n                ]\r\n\r\n                # Get the data traffic in and out metrics for all EC2 instances\r\n                metrics = cloudwatch.get_metric_data(\r\n                    
MetricDataQueries=[\r\n                        {\r\n                            'Id': 'm1',\r\n                            'MetricStat': {\r\n                                'Metric': {\r\n                                    'Namespace': 'AWS/EC2',\r\n                                    'MetricName': 'NetworkIn',\r\n                                    'Dimensions': dimensions\r\n                                },\r\n                                'Period': 3600,\r\n                                'Stat': 'Sum',\r\n                                'Unit': 'Bytes'\r\n                            }\r\n                        },\r\n                        {\r\n                            'Id': 'm2',\r\n                            'MetricStat': {\r\n                                'Metric': {\r\n                                    'Namespace': 'AWS/EC2',\r\n                                    'MetricName': 'NetworkOut',\r\n                                    'Dimensions': dimensions\r\n                                },\r\n                                'Period': 3600,\r\n                                'Stat': 'Sum',\r\n                                'Unit': 'Bytes'\r\n                            }\r\n                        }\r\n                    ],\r\n                    StartTime=time_range['StartTime'],\r\n                    EndTime=time_range['EndTime']\r\n                )\r\n                #bytes dont mean anything.  
Lets use MB\r\n\r\n                if len(metrics['MetricDataResults'][0]['Values'])>0:\r\n                    NetworkInMB = round(\r\n                        float(metrics['MetricDataResults'][0]['Values'][0])/1024/1024,\r\n                        2\r\n                        )\r\n                else:\r\n                    NetworkInMB = \"error\"\r\n                if len(metrics['MetricDataResults'][1]['Values'])>0:    \r\n                    NetworkOutMB = round(\r\n                        float(metrics['MetricDataResults'][1]['Values'][0])/1024/1024,\r\n                        2\r\n                        )\r\n                else:\r\n                    NetworkOutMB = \"error\"\r\n                metricsIwant = {\r\n                    metrics['MetricDataResults'][0]['Label'] : NetworkInMB,\r\n                    metrics['MetricDataResults'][1]['Label'] : NetworkOutMB\r\n                    }\r\n                result[instance_id] = metricsIwant\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_ec2_instance_age/README.md",
    "content": "\r\n[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Age of All EC2 Instances </h1>\r\n\r\n## Description\r\nGiven a region, this will query all instances, and give you the age in days of every EC2 instance.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_ec2_instance_age(handle, region: str) \r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Location of the EC2 instances.\r\n\r\n## Lego Input\r\nThis Lego take two inputs: handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.jpg\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Sandbox](https://us.app.unskript.io)\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_get_ec2_instance_age/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ec2_instance_age/aws_get_ec2_instance_age.json",
    "content": "{\r\n    \"action_title\": \"Get Age of all EC2 Instances in Days\",\r\n    \"action_description\": \"Get Age of all EC2 Instances in Days\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_ec2_instance_age\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_ec2_instance_age/aws_get_ec2_instance_age.py",
    "content": "##  Copyright (c) 2022 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\n## written by Doug Sillars with the aid of ChatGPT\r\n##read the blog https://unskript.com/will-ai-replace-us-using-chatgpt-to-create-python-actions-for-unskript/\r\n##\r\n\r\nimport pprint\r\nfrom typing import Dict\r\nfrom datetime import datetime, timezone\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom beartype import beartype\r\n\r\n@beartype\r\ndef aws_get_ec2_instance_age_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the ECS service')\r\n\r\n\r\n\r\n@beartype\r\ndef aws_get_ec2_instance_age(handle, region: str) -> Dict:\r\n\r\n\r\n    ec2Client = handle.client('ec2', region_name=region)\r\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\r\n\r\n    # Get the current time\r\n    now = datetime.now(timezone.utc)\r\n    result={}\r\n    # Iterate through the list of instances\r\n    for reservation in res:\r\n        for instance in reservation['Instances']:\r\n            # Get the instance ID and launch time\r\n            instance_id = instance['InstanceId']\r\n            launch_time = instance['LaunchTime']\r\n\r\n            # Calculate the age of the instance\r\n            age = now - launch_time\r\n\r\n            # Print the instance ID and age\r\n            ageText = f\"Instance {instance_id} is {age.days} days old\"\r\n            print(ageText)\r\n            result[instance_id] = age.days\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_ec2_instances_with_smaller_cpu_size/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get AWS EC2 with smaller CPU size</h1>\n\n## Description\nThis action finds EC2 instances with smaller CPU size than threshold. (vCPU count)\n\n## Lego Details\n\taws_get_ec2_instances_with_smaller_cpu_size(handle, instance_ids: list = [], region: str = \"\", threshold: float=2.0)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tinstance_ids: List of instance IDs to check.\n\t\tthreshold: The CPU size threshold. Example value is 2.0.\n\t\tregion: Region to get instances from. (Optional)\n\n## Lego Input\nThis Lego takes inputs handle, instance_ids, threshold, region.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_ec2_instances_with_smaller_cpu_size/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ec2_instances_with_smaller_cpu_size/aws_get_ec2_instances_with_smaller_cpu_size.json",
    "content": "{\n  \"action_title\": \"Get AWS EC2 with smaller CPU size\",\n  \"action_description\": \"This action finds EC2 instances with smaller CPU size than threshold. (vCPU count)\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_ec2_instances_with_smaller_cpu_size\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\":true,\n  \"action_supports_poll\": true,\n  \"action_supports_iteration\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\", \"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_EC2\"],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "AWS/legos/aws_get_ec2_instances_with_smaller_cpu_size/aws_get_ec2_instances_with_smaller_cpu_size.py",
    "content": "##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List, Optional\nfrom pydantic import BaseModel, Field\nimport pprint\nimport json\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nfrom unskript.legos.aws.aws_get_all_ec2_instances.aws_get_all_ec2_instances import aws_get_all_ec2_instances\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field(\n        '', description='AWS Region to get the RDS Instance', title='AWS Region'\n    )\n    instance_ids: Optional[List] = Field(\n        '', description='List of instance IDs to check.', title='List of Instance IDs'\n    )\n    threshold: Optional[float] = Field(\n        default= 2,\n        description='The CPU size threshold. Default value is 2.0. Size map is as follows-\\n    \"nano\": 2,\\n    \"micro\": 2,\\n    \"small\": 1,\\n    \"medium\": 1,\\n    \"large\": 2,\\n    \"xlarge\": 4,\\n    \"2xlarge\": 8,\\n    \"3xlarge\": 12,\\n    \"4xlarge\": 16,\\n    \"6xlarge\": 24,\\n    \"8xlarge\": 32,\\n    \"9xlarge\": 36,\\n    \"10xlarge\": 40,\\n    \"12xlarge\": 48,\\n    \"16xlarge\": 64,\\n    \"18xlarge\": 72,\\n    \"24xlarge\": 96,\\n    \"32xlarge\": 128,\\n    \"metal\": 96',\n        title='Threshold (vCPU)',\n    )\n\n\ndef aws_get_ec2_instances_with_smaller_cpu_size_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_get_ec2_instances_with_smaller_cpu_size(handle, instance_ids: list = [], region: str = \"\", threshold: float=2.0):\n    \"\"\"Check the CPU size (vCPU count) and compare with the threshold.\n\n    :type threshold: float\n    :param threshold: The CPU size threshold. 
Example value is 2.0.\n\n    :type instance_ids: list\n    :param instance_ids: List of instance IDs to check.\n\n    :type region: str\n    :param region: Region to get instances from.\n\n    :rtype: Status, List of dicts of instance IDs with CPU size less than the threshold\n    \"\"\"\n    size_to_cpu_map_str = \"\"\"\n    {\n        \"nano\": 2,\n        \"micro\": 2,\n        \"small\": 1,\n        \"medium\": 1,\n        \"large\": 2,\n        \"xlarge\": 4,\n        \"2xlarge\": 8,\n        \"3xlarge\": 12,\n        \"4xlarge\": 16,\n        \"6xlarge\": 24,\n        \"8xlarge\": 32,\n        \"9xlarge\": 36,\n        \"10xlarge\": 40,\n        \"12xlarge\": 48,\n        \"16xlarge\": 64,\n        \"18xlarge\": 72,\n        \"24xlarge\": 96,\n        \"32xlarge\": 128,\n        \"metal\": 96\n    }\n    \"\"\"\n\n    size_to_cpu_map = json.loads(size_to_cpu_map_str)\n    result = []\n    instances_with_low_cpu_size = {}\n\n    try:\n        if instance_ids and not region:\n            raise ValueError(\"Region must be specified when instance IDs are given.\")\n\n        if instance_ids and region:\n            # If instance_ids and region are given\n            regions = [region]\n            all_instance_ids = [{region: instance_ids}]\n        elif not instance_ids and region:\n            # If instance_ids are not given but region is given\n            regions = [region]\n            all_instance_ids = [{region: aws_get_all_ec2_instances(handle, region)}]\n        else:\n            # If neither instance_ids nor region are given\n            regions = aws_list_all_regions(handle)\n            all_instance_ids = []\n            for reg in regions:\n                try:\n                    all_instance_ids.append({reg:aws_get_all_ec2_instances(handle, reg)})\n                except Exception:\n                    pass\n\n        for region_instances in all_instance_ids:\n            for selected_region, inst_ids in region_instances.items():\n                
ec2 = handle.client('ec2', region_name=selected_region)\n                for instance_id in inst_ids:\n                    # Get the instance details\n                    resp = ec2.describe_instances(InstanceIds=[instance_id])\n                    # Get the instance type\n                    instance_type = resp['Reservations'][0]['Instances'][0]['InstanceType']\n                    # Get the size from the instance type\n                    instance_size = instance_type.split('.')[1]\n                    # Get the vCPU count from the size using the mapping\n                    cpu_size = size_to_cpu_map.get(instance_size, 0)\n\n                    # If the CPU size is less than the threshold, add to the list.\n                    if cpu_size < threshold:\n                        instances_with_low_cpu_size = {\"region\": selected_region, \"instance_id\": instance_id}\n                        result.append(instances_with_low_cpu_size)\n\n    except Exception as e:\n        raise e\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n\n\n"
  },
  {
    "path": "AWS/legos/aws_get_ecs_instances_without_autoscaling/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS ECS Instances without AutoScaling policy</h1>\r\n\r\n## Description\r\nThis Lego is used to get AWS ECS Instances without AutoScaling policy.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_ecs_instances_without_autoscaling(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_ecs_instances_without_autoscaling/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ecs_instances_without_autoscaling/aws_get_ecs_instances_without_autoscaling.json",
    "content": "{\r\n    \"action_title\": \"AWS ECS Instances without AutoScaling policy\",\r\n    \"action_description\": \"AWS ECS Instances without AutoScaling policy.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_ecs_instances_without_autoscaling\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_ECS\", \"CATEGORY_TYPE_SECOPS\"],\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {}\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_get_ecs_instances_without_autoscaling/aws_get_ecs_instances_without_autoscaling.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='AWS Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_get_ecs_instances_without_autoscaling_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_ecs_instances_without_autoscaling(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_get_ecs_instances_without_autoscaling Returns an array of instances.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: Array of instances.\r\n    \"\"\"\r\n    if not handle or (region and region not in aws_list_all_regions(handle)):\r\n        raise ValueError(\"Invalid input parameters provided.\")\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            if reg not in aws_list_all_regions(handle):\r\n                raise ValueError(f\"Invalid region provided: {reg}\")\r\n            ecs_Client = handle.client('ecs', region_name=reg)\r\n            autoscaling_client = handle.client('autoscaling', region_name=reg)\r\n            response = aws_get_paginator(ecs_Client, \"list_clusters\", \"clusterArns\")\r\n            cluster_names = [arn.split('/')[-1] for arn in response]\r\n            for cluster in cluster_names:\r\n                response_1 = aws_get_paginator(ecs_Client, \"list_container_instances\",\r\n             
                                  \"containerInstanceArns\", cluster=cluster)\r\n                if not response_1:\r\n                    continue\r\n                container_instances_data = ecs_Client.describe_container_instances(\r\n                    cluster=cluster,\r\n                    containerInstances=response_1\r\n                    )\r\n                for ec2_instance in container_instances_data['containerInstances']:\r\n                    cluster_dict = {}\r\n                    response = autoscaling_client.describe_auto_scaling_instances(\r\n                        InstanceIds=[ec2_instance['ec2InstanceId']]\r\n                        )\r\n                    if response['AutoScalingInstances']:\r\n                        asg_name = response['AutoScalingInstances'][0]['AutoScalingGroupName']\r\n                        asg_response = autoscaling_client.describe_auto_scaling_groups(\r\n                            AutoScalingGroupNames=[asg_name]\r\n                            )\r\n                        if not asg_response['AutoScalingGroups']:\r\n                            cluster_dict[\"instance_id\"] = ec2_instance['ec2InstanceId']\r\n                            cluster_dict[\"cluster\"] = cluster\r\n                            cluster_dict[\"region\"] = reg\r\n                            result.append(cluster_dict)\r\n                    else:\r\n                        cluster_dict[\"instance_id\"] = ec2_instance['ec2InstanceId']\r\n                        cluster_dict[\"cluster\"] = cluster\r\n                        cluster_dict[\"region\"] = reg\r\n                        result.append(cluster_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_ecs_services_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS ECS Service Status</h1>\r\n\r\n## Description\r\nThis Lego Get the Status of an AWS ECS Service.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_ecs_services_status(handle, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: AWS Region of the ECS service. \r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_ecs_services_status/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "AWS/legos/aws_get_ecs_services_status/aws_get_ecs_services_status.json",
    "content": "{\r\n    \"action_title\": \"Get AWS ECS Service Status\",\r\n    \"action_description\": \"Get the Status of an AWS ECS Service\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_ecs_services_status\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\"  ]\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_ecs_services_status/aws_get_ecs_services_status.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the ECS service.')\n\n\ndef aws_get_ecs_services_status_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_ecs_services_status(handle, region: str) -> Dict:\n    \"\"\"aws_get_ecs_services_status returns the status of all ECS services.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: string\n        :param region: AWS Region of the ECS service.\n\n        :rtype: Dict with the services status status info.\n    \"\"\"\n\n    healthClient = handle.client('ecs', region_name=region)\n\n    clusters = healthClient.list_clusters()['clusterArns']\n    output = {}\n    for cluster in clusters:\n        clusterName = cluster.split('/')[1]\n\n        services = healthClient.list_services(cluster=clusterName)['serviceArns']\n        if len(services) > 0:\n            servises_status = healthClient.describe_services(cluster=cluster, services=services)\n            for service in servises_status['services']:\n                output[service['serviceName']] = service['status']\n    return output\n"
  },
  {
    "path": "AWS/legos/aws_get_ecs_services_without_autoscaling/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS ECS Services without AutoScaling policy</h1>\r\n\r\n## Description\r\nThis Lego is used to get AWS ECS Services without AutoScaling policy.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_ecs_services_without_autoscaling(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_ecs_services_without_autoscaling/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ecs_services_without_autoscaling/aws_get_ecs_services_without_autoscaling.json",
    "content": "{\r\n    \"action_title\": \"AWS ECS Services without AutoScaling policy\",\r\n    \"action_description\": \"AWS ECS Services without AutoScaling policy.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_ecs_services_without_autoscaling\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_ECS\", \"CATEGORY_TYPE_SECOPS\"],\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {}\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_get_ecs_services_without_autoscaling/aws_get_ecs_services_without_autoscaling.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='AWS Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_get_ecs_services_without_autoscaling_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_ecs_services_without_autoscaling(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_get_ecs_services_without_autoscaling Returns an array of Sevices.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: Array of Sevices.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            ecs_Client = handle.client('ecs', region_name=reg)\r\n            autoscaling_client = handle.client('application-autoscaling', region_name=reg)\r\n            response = aws_get_paginator(ecs_Client, \"list_clusters\", \"clusterArns\")\r\n            cluster_names = [arn.split('/')[-1] for arn in response]\r\n            for cluster in cluster_names:\r\n                response_1 = aws_get_paginator(ecs_Client, \"list_services\",\r\n                                               \"serviceArns\", cluster=cluster)\r\n                for service in response_1:\r\n                    cluster_dict = {}\r\n                    response_2 = autoscaling_client.describe_scaling_policies(\r\n                                    
ServiceNamespace='ecs',\r\n                                    # Application Auto Scaling expects ResourceId as service/<cluster-name>/<service-name>\r\n                                    ResourceId=f\"service/{cluster}/{service.split('/')[-1]}\")\r\n                    scaling_policies = response_2['ScalingPolicies']\r\n                    if not scaling_policies:\r\n                        cluster_dict[\"service\"] = service\r\n                        cluster_dict[\"cluster\"] = cluster\r\n                        cluster_dict[\"region\"] = reg\r\n                        result.append(cluster_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_generated_policy/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Generated Policy</h1>\n\n## Description\nGiven a Region and the ID of a policy generation job, this Action will return the policy (once it has been completed).\n\n## Lego Details\n\taws_get_generated_policy(handle, region:str,jobId:str)\n\t\thandle: Object of type unSkript AWS Connector.\n\n\tregion: AWS region of the cloudtrail that is being used to generate the policy\n\tjobId: The JobID of the policy being generated.\n\n\n## Lego Input\nThis Lego takes inputs handle, region and JobId.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.jpg\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_generated_policy/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_generated_policy/aws_get_generated_policy.json",
    "content": "{\n  \"action_title\": \"AWS Get Generated Policy\",\n  \"action_description\": \"Given a Region and the ID of a policy generation job, this Action will return the policy (once it has been completed).\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_generated_policy\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_IAM\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_generated_policy/aws_get_generated_policy.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom __future__ import annotations\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef aws_get_generated_policy_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_get_generated_policy(handle, region:str,jobId:str) -> Dict:\n    client = handle.client('accessanalyzer', region_name=region)\n    response = client.get_generated_policy(\n        jobId=jobId,\n        includeResourcePlaceholders=True,\n        includeServiceLevelTemplate=True\n    )\n    result = {}\n    result['generatedPolicyResult'] = response['generatedPolicyResult']\n    result['generationStatus'] = response['jobDetails']['status']\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS boto3 handle </h1>\r\n\r\n## Description\r\nThis Lego Get AWS boto3 handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n\r\n## Lego Input\r\n\r\nThis Lego take input handle.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_handle/aws_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get AWS boto3 handle\",\r\n    \"action_description\": \"Get AWS boto3 handle\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_AWS\" ]\r\n\r\n  }"
  },
  {
    "path": "AWS/legos/aws_get_handle/aws_get_handle.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel\nfrom unskript.connectors.aws import Session\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef aws_get_handle(handle: Session):\n    \"\"\"aws_get_handle returns the AWS session handle.\n       :rtype: AWS handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "AWS/legos/aws_get_iam_users_without_attached_policies/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS list IAM users without attached policies</h1>\n\n## Description\nGet a list of all IAM users that do not have any user-managed or AWS-managed policies attached to them.\n\n## Lego Details\n\taws_get_iam_users_without_attached_policies(handle)\n\t\thandle: Object of type unSkript AWS Connector.\n\n## Lego Input\nThis Lego takes one input- handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_iam_users_without_attached_policies/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_iam_users_without_attached_policies/aws_get_iam_users_without_attached_policies.json",
    "content": "{\n  \"action_title\": \"AWS list IAM users without attached policies\",\n  \"action_description\": \"Get a list of all IAM users that do not have any user-managed or AWS-managed policies attached to them\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_iam_users_without_attached_policies\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_IAM\"],\n  \"action_next_hop\": [],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "AWS/legos/aws_get_iam_users_without_attached_policies/aws_get_iam_users_without_attached_policies.py",
    "content": "##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Tuple\nfrom unskript.legos.aws.aws_list_all_iam_users.aws_list_all_iam_users import aws_list_all_iam_users\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\n\ndef aws_get_iam_users_without_attached_policies_printer(output):\n    if output is None:\n        return\n    status, res = output\n    if status:\n        print(\"There are no IAM users that do not have any user-managed or AWS-managed policies attached to them\")\n    else:\n        print(\"IAM users that do not have any user-managed or AWS-managed policies attached to them: \",res)\n\n\ndef aws_get_iam_users_without_attached_policies(handle) -> Tuple:\n    \"\"\"aws_get_iam_users_without_attached_policies lists all the IAM users that do not have any user-managed or AWS-managed policies attached to them\n\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :rtype: Status, List os all IAM users that do not have any user-managed or AWS-managed policies attached to them\n    \"\"\"\n    result = []\n    iam_client = handle.client('iam')\n    paginator = iam_client.get_paginator('list_users')\n    for response in paginator.paginate():\n        for user in response['Users']:\n            user_name = user['UserName']\n            try:\n                # Check for user-managed policies attached to the user\n                user_policies = iam_client.list_user_policies(UserName=user_name)\n                # Check for AWS-managed policies attached to the user\n                attached_policies = iam_client.list_attached_user_policies(UserName=user_name)\n                # If the user has no policies, add to result\n                if not user_policies['PolicyNames'] and not attached_policies['AttachedPolicies']:\n                    result.append(user_name)\n            except Exception as e:\n                
print(f\"An error occurred while processing user {user_name}: {e}\")\n    return (False, result) if result else (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_get_idle_emr_clusters/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get Idle EMR Clusters</h1>\r\n\r\n## Description\r\nThis Lego list of EMR clusters that have been idle for more than the specified time.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_idle_emr_clusters(handle, max_idle_time: int = 30, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”.\r\n        max_idle_time: (minutes) The maximum idle time in minutes.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, max_idle_time and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_idle_emr_clusters/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_idle_emr_clusters/aws_get_idle_emr_clusters.json",
    "content": "{\r\n    \"action_title\": \"AWS Get Idle EMR Clusters\",\r\n    \"action_description\": \"This action list of EMR clusters that have been idle for more than the specified time.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_idle_emr_clusters\",\r\n    \"action_needs_credential\": true,\r\n    \"action_is_check\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\"],\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {}\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_idle_emr_clusters/aws_get_idle_emr_clusters.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom pydantic import BaseModel, Field\r\nfrom typing import Optional, Tuple\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom datetime import datetime, timedelta\r\nimport pprint\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default='',\r\n        title='AWS Region',\r\n        description='AWS Region.'\r\n    )\r\n    max_idle_time: Optional[int] = Field(\r\n        default=30,\r\n        title='Max Idle Time (minutes)',\r\n        description='The maximum idle time in minutes.'\r\n    )\r\n\r\n\r\ndef aws_get_idle_emr_clusters_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_idle_emr_clusters(handle, max_idle_time: int = 30, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_get_idle_emr_clusters Gets list of idle EMR clusters.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :type max_idle_time: int\r\n        :param max_idle_time: (minutes) The maximum idle time in minutes.\r\n\r\n        :rtype: List of idle EMR clusters.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region] if region else aws_list_all_regions(handle)\r\n    min_last_state_change_time = datetime.now() - timedelta(minutes=max_idle_time)\r\n    for reg in all_regions:\r\n        try:\r\n            emr_Client = handle.client('emr', region_name=reg)\r\n            response = aws_get_paginator(emr_Client, \"list_clusters\", \"Clusters\")\r\n            for cluster in response:\r\n                if 'Status' in cluster and 'Timeline' in cluster['Status'] and 'ReadyDateTime' in cluster['Status']['Timeline']:\r\n                    last_state_change_time = cluster['Status']['Timeline']['ReadyDateTime']\r\n                    if 
last_state_change_time < min_last_state_change_time:\r\n                        cluster_dict = {\r\n                            \"cluster_id\": cluster['Id'],\r\n                            \"region\": reg\r\n                        }\r\n                        result.append(cluster_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_instance_detail_with_private_dns_name/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS Instance Details with Matching Private DNS Name </h1>\r\n\r\n## Description\r\nThis Lego used to get details of an AWS EC2 Instance that matches a Private DNS Name.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_instance_detail_with_private_dns_name(handle: object, dns_name: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        dns_name: Private DNS Name.\r\n        region: AWS Region of the resource.\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, dns_name and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_instance_detail_with_private_dns_name/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_instance_detail_with_private_dns_name/aws_get_instance_detail_with_private_dns_name.json",
    "content": "{\r\n    \"action_title\": \"Get AWS Instance Details with Matching Private DNS Name\",\r\n    \"action_description\": \"Use this action to get details of an AWS EC2 Instance that matches a Private DNS Name\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_instance_detail_with_private_dns_name\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_instance_detail_with_private_dns_name/aws_get_instance_detail_with_private_dns_name.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\n\n\nclass InputSchema(BaseModel):\n    dns_name: str = Field(\n        title='Private DNS Name',\n        description='Private DNS Name.')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the resource.')\n\n\ndef aws_get_instance_detail_with_private_dns_name_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_instance_detail_with_private_dns_name(\n        handle,\n        dns_name: str,\n        region: str) -> List:\n    \"\"\"aws_get_instance_detail_with_private_dns_name Returns an array of private dns name.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type dns_name: string\n        :param dns_name: Private DNS Name.\n\n        :type region: string\n        :param region: AWS Region of the resource.\n\n        :rtype: Returns an array of private dns name\n    \"\"\"\n\n    ec2Client = handle.client('ec2', region_name=region)\n\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\",\n                            Filters=[{\"Name\": 'private-dns-name', \"Values\": [dns_name]}])\n    instances = []\n    for reservation in res:\n        for instance in reservation['Instances']:\n            instances.append(instance)\n\n    return instances\n"
  },
  {
    "path": "AWS/legos/aws_get_instance_details/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Get AWS Instances Details </h1>\n\n## Description\nThis Lego gives the AWS EC2 Instances details.\n\n\n## Lego Details\n\n    aws_get_instance_details(handle: object, instance_id: str, region: str,)\n\n        handle: Object of type unSkript AWS Connector.\n        instance_id : Id of instance.\n        region: Region to filter instances.\n\n## Lego Input\nThis Lego take three inputs handle, instance_ids and region.\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n<img src=\"./2.png\">\n\n\n\n## See it in Action\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_instance_details/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_instance_details/aws_get_instance_details.json",
    "content": "{\r\n    \"action_title\": \"Get AWS Instances Details\",\r\n    \"action_description\": \"Get AWS Instances Details\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_instance_details\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_get_instance_details/aws_get_instance_details.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\n\n\nclass InputSchema(BaseModel):\n    instance_id: str = Field(\n        title='Instance Id',\n        description='ID of the instance.')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the instance.')\n\n\ndef aws_get_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\n@beartype\ndef aws_get_instance_details(handle, instance_id: str, region: str) -> Dict:\n    \"\"\"aws_get_instance_details Returns instance details.\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type instance_ids: list\n        :param instance_ids: List of instance ids.\n\n        :type region: string\n        :param region: Region for instance.\n\n        :rtype: Dict with the instance details.\n    \"\"\"\n\n    ec2client = handle.client('ec2', region_name=region)\n    instances = []\n    response = ec2client.describe_instances(\n        Filters=[{\"Name\": \"instance-id\", \"Values\": [instance_id]}])\n    for reservation in response[\"Reservations\"]:\n        for instance in reservation[\"Instances\"]:\n            instances.append(instance)\n\n    return instances[0]\n"
  },
  {
    "path": "AWS/legos/aws_get_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>List All AWS EC2 Instances </h1>\r\n\r\n## Description\r\nThis Lego used to get a list of all AWS EC2 Instances from given ELB.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_instances(handle: object, elb_name: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        elb_name: Name of the Elastic Load Balancer Name\r\n        region: AWS Region of the ECS service.\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, elb_name and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_instances/aws_get_instances.json",
    "content": "{\r\n    \"action_title\": \"List All AWS EC2 Instances Under the ELB\",\r\n    \"action_description\": \" Get a list of all AWS EC2 Instances from given ELB\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_instances/aws_get_instances.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##  @author: Yugal Pachpande, @email: yugal.pachpande@unskript.com\n##\nfrom typing import List\nfrom pydantic import BaseModel, Field\nimport pandas as pd\n\n\nclass InputSchema(BaseModel):\n    elb_name: str = Field(\n        title='Elastic Load Balancer Name',\n        description='Name of the Elastic Load Balancer Name')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the ECS service')\n\n\ndef aws_get_instances_printer(output):\n\n    if output is None:\n        return\n    df = pd.DataFrame(output)\n    pd.set_option('display.max_rows', None)\n    pd.set_option('display.max_columns', None)\n    pd.set_option('display.width', None)\n    pd.set_option('display.max_colwidth', None)\n    print(\"\\n\")\n    print(df)\n\n\ndef aws_get_instances(handle, elb_name: str, region: str) -> List:\n    \"\"\"aws_get_all_instances Get a list of all AWS EC2 Instances from given ELB\n\n     :type handle: object\n     :param handle: Object returned from task.validate(...).\n\n     :type elb_name: string\n     :param elb_name: Name of the Elastic Load Balancer Name\n\n     :type region: string\n     :param region: AWS Region of the ECS service.\n\n     :rtype: list of dict with all AWS EC2 Instances from given ELB\n    \"\"\"\n\n    elbClient = handle.client('elb', region_name=region)\n    res = elbClient.describe_instance_health(\n        LoadBalancerName=elb_name,\n    )\n\n    instances = []\n    for instance in res['InstanceStates']:\n        instances.append(instance)\n\n    return instances\n"
  },
  {
    "path": "AWS/legos/aws_get_internet_gateway_by_vpc/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get Internet Gateway by VPC ID </h1>\r\n\r\n## Description\r\nThis Lego search for Internet Gateway available for given VPC id.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_internet_gateway_by_vpc(handle, vpc_id: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        vpc_id: VPC ID to find Internet Gateway.\r\n        region: Region to filter instance.\r\n\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, vpc_id and region. \r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_internet_gateway_by_vpc/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_internet_gateway_by_vpc/aws_get_internet_gateway_by_vpc.json",
    "content": "{\r\n    \"action_title\": \"AWS Get Internet Gateway by VPC ID\",\r\n    \"action_description\": \"AWS Get Internet Gateway by VPC ID\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_internet_gateway_by_vpc\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_VPC\"  ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_internet_gateway_by_vpc/aws_get_internet_gateway_by_vpc.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\n\r\nclass InputSchema(BaseModel):\r\n    vpc_id: str = Field(\r\n        title='VPC ID',\r\n        description='VPC ID of the Instance.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\ndef aws_get_internet_gateway_by_vpc_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\ndef aws_get_internet_gateway_by_vpc(handle, vpc_id: str, region: str) -> List:\r\n    \"\"\"aws_get_internet_gateway_by_vpc Returns an List of internet Gateway.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type vpc_id: str\r\n        :param vpc_id: VPC ID to find Internet Gateway.\r\n\r\n        :type region: str\r\n        :param region: Region to filter instance.\r\n\r\n        :rtype: List of Internet Gateway.\r\n    \"\"\"\r\n\r\n    ec2Client = handle.client('ec2', region_name=region)\r\n    result = []\r\n    try:\r\n        response = aws_get_paginator(ec2Client, \"describe_internet_gateways\", \"InternetGateways\",\r\n                                Filters=[{'Name': 'attachment.vpc-id','Values': [vpc_id]}])\r\n        for nat_info in response:\r\n            if \"InternetGatewayId\" in nat_info:\r\n                result.append(nat_info[\"InternetGatewayId\"])\r\n\r\n    except Exception as error:\r\n        result.append({\"error\":error})\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_lambdas_not_using_arm_graviton2_processor/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get AWS Lambdas Not Using ARM64 Graviton2 Processor</h1>\n\n## Description\nGet all AWS Lambda functions that are not using the Arm-based AWS Graviton2 processor for their runtime architecture\n\n## Lego Details\n\taws_get_lambdas_not_using_arm_graviton2_processor(handle,region:str=\"\")\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tregion: Optional, AWS Region\n\n## Lego Input\nThis Lego takes inputs handle, region\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_lambdas_not_using_arm_graviton2_processor/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_lambdas_not_using_arm_graviton2_processor/aws_get_lambdas_not_using_arm_graviton2_processor.json",
    "content": "{\n  \"action_title\": \"Find AWS Lambdas Not Using ARM64 Graviton2 Processor\",\n  \"action_description\": \"Find all AWS Lambda functions that are not using the Arm-based AWS Graviton2 processor for their runtime architecture\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_lambdas_not_using_arm_graviton2_processor\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_COST_OPT\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_LAMBDA\"],\n  \"action_next_hop\": [],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "AWS/legos/aws_get_lambdas_not_using_arm_graviton2_processor/aws_get_lambdas_not_using_arm_graviton2_processor.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Tuple\nfrom unskript.legos.aws.aws_execute_cli_command.aws_execute_cli_command import aws_execute_cli_command\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field(\n        '', \n        description='AWS region. Eg: \"us-west-2\"', \n        title='Region'\n    )\n\n\n\ndef aws_get_lambdas_not_using_arm_graviton2_processor_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_lambdas_not_using_arm_graviton2_processor(handle, region: str = \"\") -> Tuple:\n    \"\"\"aws_get_lambdas_not_using_arm_graviton2_processor finds AWS Lambda functions wnot using Graviton2 processor\n\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n\n    :type region: string\n    :param region: AWS Region to get the instances from. 
Eg: \"us-west-2\"\n\n    :rtype: Tuple with status of result and list of Lambda functions that don't use the arm-based graviton2 processor\n    \"\"\"\n\n    result = []\n    all_regions = [region] if region else aws_list_all_regions(handle)\n\n    for reg in all_regions:\n        try:\n            lambda_client = handle.client('lambda', region_name=reg)\n            response = aws_get_paginator(lambda_client, \"list_functions\", \"Functions\")\n            for res in response:\n                architectures = res.get('Architectures', [])\n                function_name = res.get('FunctionName', \"\")\n                if 'arm64' not in architectures and function_name:\n                    result.append({\"function_name\": function_name, \"region\": reg})\n        except Exception as e:\n            pass\n\n    if result:\n        return (False, result)\n    return (True, None)"
  },
  {
    "path": "AWS/legos/aws_get_lambdas_with_high_error_rate/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get AWS Lambdas With High Error Rate</h1>\n\n## Description\nGet AWS Lambda Functions that exceed a given threshold error rate.\n\n## Lego Details\n\taws_get_lambdas_with_high_error_rate(handle, error_rate_threshold:float, days_back:int, region:str=\"\")\n\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tregion: Optional, AWS Region to get the instances from. Eg: \"us-west-2\"\n\t\terror_rate_threshold: Optional, (in percent) Idle CPU threshold (in percent)\n\t\tdays_back: Optional, (in hours) Idle CPU threshold (in hours)\n\n\n\n## Lego Input\nThis Lego takes 4 inputs handle,\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_lambdas_with_high_error_rate/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_lambdas_with_high_error_rate/aws_get_lambdas_with_high_error_rate.json",
    "content": "{\n  \"action_title\": \"Get AWS Lambdas With High Error Rate\",\n  \"action_description\": \"Get AWS Lambda Functions that exceed a given threshold error rate.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_lambdas_with_high_error_rate\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\",\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ROUTE53\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_lambdas_with_high_error_rate/aws_get_lambdas_with_high_error_rate.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Tuple, Optional\nimport datetime\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\n\n\nclass InputSchema(BaseModel):\n    error_rate_threshold: Optional[float] = Field(\n        0.1,\n        description='Error rate threshold value. Eg: 0.1 (i.e. 10%)',\n        title='Error Rate Threshold',\n    )\n    days_back: Optional[int] = Field(\n        7,\n        description=('Number of days to go back. Default value ids 7 days. '\n                     'Eg: 7 (This checks for functions with high error rate in the last 7 days)'),\n        title='Days Back',\n    )\n    region: Optional[str] = Field(\n        '', \n        description='AWS region. Eg: \"us-west-2\"',\n        title='Region'\n    )\n\n\ndef aws_get_lambdas_with_high_error_rate_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_lambdas_with_high_error_rate(\n        handle,\n        error_rate_threshold:float=0.1,\n        days_back:int=7,\n        region:str=\"\"\n        ) -> Tuple:\n    \"\"\"aws_get_lambdas_with_high_error_rate finds AWS Lambda functions with high error rate\n\n    :type region: string\n    :param region: AWS Region to get the instances from. 
Eg: \"us-west-2\"\n\n    :type error_rate_threshold: float\n    :param error_rate_threshold: (in percent) Idle CPU threshold (in percent)\n\n    :type days_back: int\n    :param days_back: (in hours) Idle CPU threshold (in hours)\n\n    :rtype: Tuple with status result and list of Lambda functions with high error rate\n\n    \"\"\"\n    if not handle or (region and region not in aws_list_all_regions(handle)):\n        raise ValueError(\"Invalid input parameters provided.\")\n    result = []\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    for reg in all_regions:\n        try:\n            lambdaClient = handle.client('lambda', region_name=reg)\n            cloudwatchClient = handle.client('cloudwatch', region_name=reg)\n            # Get a list of all the Lambda functions in your account\n            response = lambdaClient.list_functions()\n            number_of_days = int(days_back)\n            start_time = datetime.datetime.now() - datetime.timedelta(days=number_of_days)\n            # Iterate through the list of functions and filter out the ones with a high error rate\n            for function in response['Functions']:\n                # Get the configuration for the function\n                config_response = lambdaClient.get_function_configuration(\n                    FunctionName=function['FunctionName']\n                    )\n                # Get the Errors metric for the function\n                errors_response = cloudwatchClient.get_metric_statistics(\n                    Namespace='AWS/Lambda',\n                    MetricName='Errors',\n                    Dimensions=[\n                        {\n                            'Name': 'FunctionName',\n                            'Value': function['FunctionName']\n                        },\n                    ],\n                    StartTime=start_time,\n                    EndTime=datetime.datetime.now(),\n                    Period=3600,\n     
               Statistics=['Sum']\n                )\n                datapoints = errors_response.get('Datapoints')\n                if datapoints and 'Sum' in datapoints[0]:\n                    errors_sum = datapoints[0]['Sum']\n                    invocations = config_response.get('NumberOfInvocations', 0)\n                    if invocations > 0:\n                        error_rate = errors_sum / invocations\n                         # Check if the error rate is greater than the threshold\n                        if error_rate > error_rate_threshold:\n                            lambda_func = {'function_name': function['FunctionName'], 'region': reg}\n                            result.append(lambda_func)\n        except Exception:\n            pass\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Long Running ElasticCache clusters Without Reserved Nodes</h1>\n\n## Description\nThis action gets information about long running ElasticCache clusters and their status, and checks if they have any reserved nodes associated with them.\n\n## Lego Details\n\taws_get_long_running_elasticcache_clusters_without_reserved_nodes(handle, region: str = \"\", threshold:int = 10)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tthreshold: Threshold(in days) to find long running ElasticCache clusters. Eg: 30, This will find all the clusters that have been created a month ago.\n\n## Lego Input\nThis Lego takes inputs handle,threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/aws_get_long_running_elasticcache_clusters_without_reserved_nodes.json",
    "content": "{\n  \"action_title\": \"AWS Get Long Running ElastiCache clusters Without Reserved Nodes\",\n  \"action_description\": \"This action gets information about long running ElastiCache clusters and their status, and checks if they have any reserved nodes associated with them.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_long_running_elasticcache_clusters_without_reserved_nodes\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELASTICACHE\"],\n  \"action_next_hop\": [],\n  \"action_next_hop_parameter_mapping\":{\"51a0b15d932dddeea9b1991fb6299577756408ff7c47acc5dec3eb114e33562b\": {\"name\": \"Purchase Reserved Nodes For Long Running AWS ElastiCache Clusters\", \"region\": \".[0].region\"}}\n}"
  },
  {
    "path": "AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/aws_get_long_running_elasticcache_clusters_without_reserved_nodes.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Tuple\nfrom datetime import datetime, timedelta, timezone\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field(\n        '', description='AWS Region to get the ElasticCache Cluster', title='AWS Region'\n    )\n    threshold: Optional[float] = Field(\n        10,\n        description='Threshold(in days) to find long running ElasticCache clusters. Eg: 30, This will find all the clusters that have been created a month ago.',\n        title='Threshold(in days)',\n    )\n\n\n\ndef aws_get_long_running_elasticcache_clusters_without_reserved_nodes_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_long_running_elasticcache_clusters_without_reserved_nodes(handle, region: str = \"\", threshold:int = 10) -> Tuple:\n    \"\"\"aws_get_long_running_elasticcache_clusters_without_reserved_nodes finds ElasticCache Clusters that are long running and have no reserved nodes\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: string\n        :param region: Region of the Cluster.\n\n        :type threshold: integer\n        :param threshold: Threshold(in days) to find long running ElasticCache clusters. Eg: 30, This will find all the clusters that have been created a month ago.\n\n        :rtype: status, list of clusters, nodetype and their region.\n    \"\"\"\n    result = []\n    reservedNodesPerRegion = {}\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    # Get the list of reserved node per region per type. We just need to maintain\n    # what type of reserved nodes are present per region. 
So, reservedNodesPerRegion\n    # would be like:\n    # <region>:{<nodeType>:True/False}\n    for reg in all_regions:\n        try:\n            elasticacheClient = handle.client('elasticache', region_name=reg)\n            response = elasticacheClient.describe_reserved_cache_nodes()\n            reservedNodesPerType = {}\n            if response['ReservedCacheNodes']:\n                for node in response['ReservedCacheNodes']:\n                    reservedNodesPerType[node['CacheNodeType']] = True\n            else:\n                continue\n            reservedNodesPerRegion[reg] = reservedNodesPerType\n        except Exception:\n            pass\n\n    for reg in all_regions:\n        try:\n            elasticacheClient = handle.client('elasticache', region_name=reg)\n            for cluster in elasticacheClient.describe_cache_clusters()['CacheClusters']:\n                cluster_age = datetime.now(timezone.utc) - cluster['CacheClusterCreateTime']\n                if cluster_age > timedelta(days=threshold):\n                    # Check if the cluster node type is present in the reservedNodesPerRegion map.\n                    reservedNodes = reservedNodesPerRegion.get(reg)\n                    if reservedNodes is not None:\n                        if reservedNodes.get(cluster['CacheNodeType']) is True:\n                            continue\n                    cluster_dict = {}\n                    cluster_dict[\"region\"] = reg\n                    cluster_dict[\"cluster\"] = cluster['CacheClusterId']\n                    cluster_dict[\"node_type\"] = cluster['CacheNodeType']\n                    result.append(cluster_dict)\n        except Exception:\n            pass\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_get_long_running_rds_instances_without_reserved_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Long Running RDS Instances Without Reserved Instances</h1>\n\n## Description\nThis action gets information about long running instances and their status, and checks if they have any reserved nodes associated with them.\n\n## Lego Details\n\taws_get_long_running_rds_instances_without_reserved_instances(handle, region: str = \"\", threshold:int=10)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tthreshold: Threshold(in days) to find long running RDS instances. Eg: 30, This will find all the instances that have been created a month ago.\n\n\n## Lego Input\nThis Lego takes inputs handle,threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_long_running_rds_instances_without_reserved_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_long_running_rds_instances_without_reserved_instances/aws_get_long_running_rds_instances_without_reserved_instances.json",
    "content": "{\n  \"action_title\": \"AWS Get Long Running RDS Instances Without Reserved Instances\",\n  \"action_description\": \"This action gets information about long running instances and their status, and checks if they have any reserved nodes associated with them.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_long_running_rds_instances_without_reserved_instances\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\",\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\"],\n  \"action_next_hop\": [\"e0ff270a41b65b1804da257ffec5fbdec7dd51bdb3da925cced7fa3391bfe70b\"],\n  \"action_next_hop_parameter_mapping\":{\"e0ff270a41b65b1804da257ffec5fbdec7dd51bdb3da925cced7fa3391bfe70b\": {\"name\": \"Purchase Reserved Instances For Long Running AWS RDS Instances\", \"region\": \".[0].region\"}}\n}"
  },
  {
    "path": "AWS/legos/aws_get_long_running_rds_instances_without_reserved_instances/aws_get_long_running_rds_instances_without_reserved_instances.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom tabulate import tabulate\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom datetime import datetime,timedelta, timezone\nfrom unskript.connectors.aws import aws_get_paginator\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field('', description='AWS Region.', title='AWS Region')\n    threshold: Optional[float] = Field(\n        10,\n        description='Threshold(in days) to find long running RDS instances. Eg: 30, This will find all the instances that have been created a month ago.',\n        title='Threshold(in days)',\n    )\n\ndef aws_get_long_running_rds_instances_without_reserved_instances_printer(output):\n    if output is None:\n        print(\"Output is None.\")\n        return\n    status, res = output\n    if status:\n        print(\"There are no DB instances that have been running for longer than the specified threshold and do not have corresponding reserved instances.\")\n    else:\n        print(\"DB instances that have been running for longer than the specified threshold and do not have corresponding reserved instances:\")\n        table_data = [[item['region'], item['instance_type'], item['instance']] for item in res]\n        headers = ['Region', 'Instance Type', 'Instance']\n        table = tabulate(table_data, headers=headers, tablefmt='grid')\n        print(table)\n\n\ndef aws_get_long_running_rds_instances_without_reserved_instances(handle, region: str = \"\", threshold: float = 10.0) -> Tuple:\n    \"\"\"aws_get_long_running_rds_instances_without_reserved_instances Gets all DB instances that have been running for longer than the specified threshold and do not have corresponding reserved instances.\n\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n\n    :type region: string\n  
  :param region: AWS Region.\n\n    :type threshold: int\n    :param threshold: Threshold(in days) to find long running RDS instances. Eg: 30, This will find all the instances that have been created a month ago.\n\n    :rtype: A tuple with a Status, and a list of DB instances that don't have reserved instances\n    \"\"\"\n    result = []\n    all_regions = [region]\n    reservedInstancesPerRegion = {}\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    for reg in all_regions:\n        try:\n            rdsClient = handle.client('rds', region_name=reg)\n            response = rdsClient.describe_reserved_db_instances()\n            reservedInstancesPerType = {}\n            if 'ReservedDBInstances' in response:\n                for ins in response['ReservedDBInstances']:\n                    reservedInstancesPerType[ins['DBInstanceClass']] = True\n            reservedInstancesPerRegion[reg] = reservedInstancesPerType\n        except Exception:\n            pass\n    for reg in all_regions:\n        try:\n            rdsClient = handle.client('rds', region_name=reg)\n            response = aws_get_paginator(rdsClient, \"describe_db_instances\", \"DBInstances\")\n            for instance in response:\n                if instance['DBInstanceStatus'] == 'available':\n                    # Check for existence of keys before using them\n                    if 'InstanceCreateTime' in instance and 'DBInstanceClass' in instance:\n                        uptime = datetime.now(timezone.utc) - instance['InstanceCreateTime']\n                        if uptime > timedelta(days=threshold):\n                            # Check if the DB instance type is present in the reservedInstancesPerRegion map.\n                            reservedInstances = reservedInstancesPerRegion.get(reg, {})\n                            if not reservedInstances.get(instance['DBInstanceClass']):\n                                db_instance_dict = {\n                                    \"region\": reg,\n                                    \"instance_type\": instance['DBInstanceClass'],\n                                    \"instance\": instance['DBInstanceIdentifier']\n                                }\n                                result.append(db_instance_dict)\n        except Exception:\n            pass\n\n    if len(result) != 0:\n        return (False, result)\n    else:\n        return (True, None)"
  },
  {
    "path": "AWS/legos/aws_get_long_running_redshift_clusters_without_reserved_nodes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Long Running Redshift Clusters Without Reserved Nodes</h1>\n\n## Description\nThis action gets information about running clusters and their status, and checks if they have any reserved nodes associated with them.\n\n## Lego Details\n\taws_get_long_running_redshift_clusters_without_reserved_nodes(handle, region: str = \"\", threshold:int = 10)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tthreshold: Threshold(in days) to find long running redshift clusters. Eg: 30, This will find all the clusters that have been created a month ago.\n\n\n## Lego Input\nThis Lego takes inputs handle,threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_long_running_redshift_clusters_without_reserved_nodes/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_long_running_redshift_clusters_without_reserved_nodes/aws_get_long_running_redshift_clusters_without_reserved_nodes.json",
    "content": "{\n  \"action_title\": \"AWS Get Long Running Redshift Clusters Without Reserved Nodes\",\n  \"action_description\": \"This action gets information about running clusters and their status, and checks if they have any reserved nodes associated with them.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_long_running_redshift_clusters_without_reserved_nodes\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\",\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_REDSHIFT\"],\n  \"action_next_hop\": [\"08d3033e428c5fa241be26cfc8787fb16c05c6aa31830075e730fefd5aaf744f\"],\n  \"action_next_hop_parameter_mapping\":{\"08d3033e428c5fa241be26cfc8787fb16c05c6aa31830075e730fefd5aaf744f\": {\"name\": \"Purchase Reserved Nodes For Long Running AWS Redshift Clusters\", \"region\": \".[0].region\"}}\n}"
  },
  {
    "path": "AWS/legos/aws_get_long_running_redshift_clusters_without_reserved_nodes/aws_get_long_running_redshift_clusters_without_reserved_nodes.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Tuple\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nimport pprint\nfrom datetime import datetime,timedelta, timezone\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field(\n        '', \n        description='AWS Region to get the Redshift Cluster', \n        title='AWS Region'\n    )\n    threshold: Optional[float] = Field(\n        10,\n        description='Threshold(in days) to find long running redshift clusters. Eg: 30, This will find all the clusters that have been created a month ago.',\n        title='Threshold(in days)',\n    )\n\n\n\ndef aws_get_long_running_redshift_clusters_without_reserved_nodes_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_long_running_redshift_clusters_without_reserved_nodes(handle, region: str = \"\", threshold:int = 10) -> Tuple:\n    \"\"\"aws_get_long_running_redshift_clusters_without_reserved_nodes finds Redshift Clusters that are long running and have no reserved nodes\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: string\n        :param region: Region of the Cluster.\n\n        :type threshold: integer\n        :param threshold: Threshold(in days) to find long running redshift clusters. 
Eg: 30, This will find all the clusters that have been created a month ago.\n\n        :rtype: status, list of clusters, nodetype and their region.\n    \"\"\"\n    if not handle or threshold < 0:\n        raise ValueError(\"Invalid input parameters provided.\")\n\n    result = []\n    reservedNodesPerRegion = {}\n    all_regions = [region] if region else aws_list_all_regions(handle)\n\n    for reg in all_regions:\n        try:\n            redshiftClient = handle.client('redshift', region_name=reg)\n            response = redshiftClient.describe_reserved_nodes()\n            reservedNodesPerType = {}\n            if response['ReservedNodes']:\n                for node in response['ReservedNodes']:\n                    reservedNodesPerType[node['NodeType']] = True\n                reservedNodesPerRegion[reg] = reservedNodesPerType\n        except Exception:\n            pass\n\n    for reg in all_regions:\n        try:\n            redshiftClient = handle.client('redshift', region_name=reg)\n            clusters = redshiftClient.describe_clusters()['Clusters']\n            for cluster in clusters:\n                cluster_age = datetime.now(timezone.utc) - cluster['ClusterCreateTime']\n                if cluster['ClusterStatus'] == 'available' and cluster_age.days > threshold:\n                    reservedNodes = reservedNodesPerRegion.get(reg, {})\n                    if not reservedNodes.get(cluster['NodeType']):\n                        cluster_dict = {\n                            \"region\": reg,\n                            \"cluster\": cluster['ClusterIdentifier'],\n                            \"node_type\": cluster['NodeType']\n                        }\n                        result.append(cluster_dict)\n        except Exception as error:\n            pass\n\n    if result:\n        return (False, result)\n    else:\n        return (True, None)"
  },
  {
    "path": "AWS/legos/aws_get_nat_gateway_by_vpc/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get NAT Gateway Info by VPC ID </h1>\r\n\r\n## Description\r\nThis Lego search for NAT Gateway available for given VPC id.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_natgateway_by_vpc(handle, vpc_id: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        vpc_id: VPC ID to find NAT Gateway.\r\n        region: Region to filter instance.\r\n\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, vpc_id and region. \r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_nat_gateway_by_vpc/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_nat_gateway_by_vpc/aws_get_nat_gateway_by_vpc.json",
    "content": "{\r\n\"action_title\": \"AWS Get NAT Gateway Info by VPC ID\",\r\n\"action_description\": \"This action is used to get the details about nat gateways configured for VPC.\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_get_nat_gateway_by_vpc\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_VPC\"  ]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_nat_gateway_by_vpc/aws_get_nat_gateway_by_vpc.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    vpc_id: str = Field(\r\n        title='VPC ID',\r\n        description='VPC ID of the Instance.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_get_nat_gateway_by_vpc_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_nat_gateway_by_vpc(handle, vpc_id: str, region: str) -> List:\r\n    \"\"\"aws_get_natgateway_by_vpc Returns an array of NAT gateways.\r\n\r\n        :type region: string\r\n        :param region: Region to filter instances.\r\n\r\n        :type vpc_id: string\r\n        :param vpc_id: ID of the Virtual Private Cloud (VPC)\r\n\r\n        :rtype: Array of NAT gateways.\r\n    \"\"\"\r\n    result = []\r\n    try:\r\n        ec2Client = handle.client('ec2', region_name=region)\r\n        response = ec2Client.describe_nat_gateways(\r\n            Filter=[{'Name': 'vpc-id','Values': [vpc_id]}])\r\n        if response['NatGateways']:\r\n            for i in response['NatGateways']:\r\n                nat_dict = {}\r\n                if \"NatGatewayId\" in i:\r\n                    nat_dict[\"nat_id\"] = i[\"NatGatewayId\"]\r\n                if \"SubnetId\" in i:\r\n                    nat_dict[\"subnet_id\"] = i[\"SubnetId\"]\r\n                if \"VpcId\" in i:\r\n                    nat_dict[\"vpc_id\"] = i[\"VpcId\"]\r\n                for address in i[\"NatGatewayAddresses\"]:\r\n                    if \"PrivateIp\" in address:\r\n                        nat_dict[\"private_ip\"] = address[\"PrivateIp\"]\r\n                    if \"PublicIp\" in address:\r\n                        nat_dict[\"public_ip\"] = address[\"PublicIp\"]\r\n                result.append(nat_dict)\r\n    except Exception:\r\n        
pass\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_nlb_targets/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get all Targets for Network Load Balancer (NLB)</h1>\r\n\r\n## Description\r\nThis Lego to get all targets for Network Load Balancer (NLB).\r\n\r\n## Lego Details\r\n\r\n    aws_get_nlb_targets(handle, region: str, nlb_arn: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: AWS Region.\r\n        nlb_arn: Network Load Balancer ARNs.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle, nlb_arn and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "AWS/legos/aws_get_nlb_targets/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_nlb_targets/aws_get_nlb_targets.json",
    "content": "{\r\n    \"action_title\": \"Get all Targets for Network Load Balancer (NLB)\",\r\n    \"action_description\": \"Use this action to get all targets for Network Load Balancer (NLB)\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_nlb_targets\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_nlb_targets/aws_get_nlb_targets.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region')\r\n    nlb_arn: str = Field(\r\n        title='Network Loadbalancer ARNs',\r\n        description='Network Load Balancer ARNs.')\r\n\r\n\r\ndef aws_get_nlb_targets_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_nlb_targets(handle, region: str, nlb_arn: str) -> List:\r\n    \"\"\"aws_get_nlb_targets lists Network loadbalancers target details.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :type nlb_arn: string\r\n        :param nlb_arn: Network Load Balancer ARNs.\r\n\r\n        :rtype: List Network load balancers target details.\r\n    \"\"\"\r\n    result = []\r\n    try:\r\n        elb_Client = handle.client('elbv2', region_name=region)\r\n        response = elb_Client.describe_target_health(TargetGroupArn=nlb_arn)\r\n        for target in response['TargetHealthDescriptions']:\r\n            target_dict = {}\r\n            target_dict[\"target_id\"] = target['Target']['Id']\r\n            target_dict[\"target_port\"] = target['Target']['Port']\r\n            target_dict[\"target_health\"] = target['TargetHealth']['State']\r\n            result.append(target_dict)\r\n    except Exception as e:\r\n        raise Exception(e) from e\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_nlbs_without_targets/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get Network Load Balancer (NLB) without Targets</h1>\r\n\r\n## Description\r\nThis Lego to get AWS Network Load Balancer (NLB) without Targets.\r\n\r\n## Lego Details\r\n\r\n    aws_get_nlbs_without_targets(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Optional, AWS Region.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "AWS/legos/aws_get_nlbs_without_targets/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_nlbs_without_targets/aws_get_nlbs_without_targets.json",
    "content": "{\r\n    \"action_title\": \"AWS Get Network Load Balancer (NLB) without Targets\",\r\n    \"action_description\": \"Use this action to get AWS Network Load Balancer (NLB) without Targets\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_nlbs_without_targets\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\": true,\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {},\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\" ,\"CATEGORY_TYPE_AWS\"]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_nlbs_without_targets/aws_get_nlbs_without_targets.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Tuple, Optional\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default='',\r\n        title='AWS Region',\r\n        description='AWS Region')\r\n\r\n\r\ndef aws_get_nlbs_without_targets_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_nlbs_without_targets(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_get_nlbs_without_targets lists Network loadbalancers ARNs without targets.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: lists Network loadbalancers ARNs without targets.\r\n    \"\"\"\r\n    if handle is None:\r\n        raise ValueError(\"Handle must not be None.\")\r\n    if region and region not in aws_list_all_regions(handle):\r\n        raise ValueError(f\"Invalid region: {region}.\")\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n    for reg in all_regions:\r\n        try:\r\n            elbv2_client = handle.client('elbv2', region_name=reg)\r\n            resp = aws_get_paginator(elbv2_client, \"describe_load_balancers\", \"LoadBalancers\")\r\n            for elb in resp:\r\n                nlb_dict = {}\r\n                if elb['Type'] == \"network\":\r\n                    target_groups = elbv2_client.describe_target_groups(\r\n                        LoadBalancerArn=elb['LoadBalancerArn']\r\n                        )\r\n                    if len(target_groups['TargetGroups']) == 0:\r\n          
              nlb_dict[\"loadBalancer_arn\"] = elb['LoadBalancerArn']\r\n                        nlb_dict[\"loadBalancer_name\"] = elb[\"LoadBalancerName\"]\r\n                        nlb_dict[\"region\"] = reg\r\n                        result.append(nlb_dict)\r\n        except Exception:\r\n            pass\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_older_generation_rds_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Older Generation RDS Instances</h1>\n\n## Description\nAWS Get Older Generation RDS Instances action retrieves information about RDS instances using older generation instance types.\n\n## Lego Details\n\taws_get_older_generation_rds_instances(handle, region: str = \"\")\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tregion: AWS Region (Optional)\n\n\n## Lego Input\nThis Lego takes inputs handle,\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_older_generation_rds_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_older_generation_rds_instances/aws_get_older_generation_rds_instances.json",
    "content": "{\n  \"action_title\": \"AWS Get Older Generation RDS Instances\",\n  \"action_description\": \"AWS Get Older Generation RDS Instances action retrieves information about RDS instances using older generation instance types.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_older_generation_rds_instances\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_next_hop\": [\"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\"],\n  \"action_next_hop_parameter_mapping\": {\"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\": {\"name\": \"AWS Update RDS Instances from Old to New Generation\", \"region\": \".[0].region\", \"rds_instance_ids\":\"map(.instance)\"}},\n  \"action_categories\":[ \"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_RDS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_older_generation_rds_instances/aws_get_older_generation_rds_instances.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Tuple\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field('', description='AWS Region.', title='AWS Region')\n\n\n\ndef aws_get_older_generation_rds_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef is_previous_gen_instance(instance_type):\n    previous_gen_instance_types = ['db.m1', 'db.m2', 'db.t1']\n    for prev_gen_type in previous_gen_instance_types:\n        if instance_type.startswith(prev_gen_type):\n            return True\n    return False\n\n\ndef aws_get_older_generation_rds_instances(handle, region: str = \"\") -> Tuple:\n    \"\"\"aws_get_older_generation_rds_instances Gets all older generation RDS DB instances\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: string\n        :param region: Optional, Region of the RDS.\n\n        :rtype: Status, List of old RDS Instances\n    \"\"\"\n    if not handle or (region and region not in aws_list_all_regions(handle)):\n        raise ValueError(\"Invalid input parameters provided.\")\n    result = []\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    for reg in all_regions:\n        try:\n            ec2Client = handle.client('rds', region_name=reg)\n            response = aws_get_paginator(ec2Client, \"describe_db_instances\", \"DBInstances\")\n            for db in response:\n                instance_type = \".\".join(db['DBInstanceClass'].split(\".\", 2)[:2])\n                response = is_previous_gen_instance(instance_type)\n                if response:\n                    db_instance_dict = {}\n 
                   db_instance_dict[\"region\"] = reg\n                    db_instance_dict[\"instance\"] = db['DBInstanceIdentifier']\n                    result.append(db_instance_dict)\n        except Exception:\n            pass\n\n    if len(result) != 0:\n        return (False, result)\n    else:\n        return (True, None)"
  },
  {
    "path": "AWS/legos/aws_get_private_address_from_nat_gateways/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get Private Address from NAT Gateways </h1>\r\n\r\n## Description\r\nThis Lego used to get private address from NAT gateways.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_private_address_from_nat_gateways(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional- Region to filter NAT Gateways.\r\n\r\n## Lego Input\r\n\r\nThis Lego take two inputs handle and region. \r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_private_address_from_nat_gateways/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_private_address_from_nat_gateways/aws_get_private_address_from_nat_gateways.json",
    "content": "{\r\n\"action_title\": \"AWS Get Private Address from NAT Gateways\",\r\n\"action_description\": \"This action is used to get private address from NAT gateways.\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_get_private_address_from_nat_gateways\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_TROUBLESHOOTING\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_NAT_GATEWAY\"],\r\n\"action_next_hop\": [\"c123bb9eff909c27f2d330792689c63110889e0b7754041e2e24ade22ca16615\"],\r\n\"action_next_hop_parameter_mapping\": {}\r\n}\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_get_private_address_from_nat_gateways/aws_get_private_address_from_nat_gateways.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Tuple, Optional\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_get_private_address_from_nat_gateways_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_private_address_from_nat_gateways(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_get_private_address_from_nat_gateways Returns an private address of NAT gateways.\r\n\r\n        :type region: string\r\n        :param region: Region to filter NAT Gateways.\r\n\r\n        :rtype: Tuple with private address of NAT gateways.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n    for reg in all_regions:\r\n        try:\r\n            ec2Client = handle.client('ec2', region_name=reg)\r\n            response = aws_get_paginator(ec2Client, \"describe_nat_gateways\", \"NatGateways\")\r\n            for i in response:\r\n                nat_dict = {}\r\n                nat_dict[\"nat_id\"] = i[\"NatGatewayId\"]\r\n                nat_dict[\"vpc_id\"] = i[\"VpcId\"]\r\n                nat_dict[\"region\"] = reg\r\n                for address in i[\"NatGatewayAddresses\"]:\r\n                    if \"PrivateIp\" in address:\r\n                        nat_dict[\"private_ip\"] = address[\"PrivateIp\"]\r\n                result.append(nat_dict)\r\n        except Exception:\r\n            pass\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_public_ec2_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS public EC2 instances </h1>\r\n\r\n## Description\r\nThis Lego gets a list of all public AWS EC2 Instances.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_public_ec2_instances(handle: object, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Used to filter the volume for specific region.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./2.jpg\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Sandbox](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_public_ec2_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_public_ec2_instances/aws_get_public_ec2_instances.json",
    "content": "{\r\n    \"action_title\": \"Get AWS EC2 Instances with a public IP\",\r\n    \"action_description\": \"lists all EC2 instances with a public IP\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_public_ec2_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"  ]\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_get_public_ec2_instances/aws_get_public_ec2_instances.py",
    "content": "##\r\n##  Copyright (c) 2022 unSkript, Inc\r\n##  All rights reserved.\r\n##  Written by Doug Sillars (and a little help from ChatGPT)\r\n\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom beartype import beartype\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n@beartype\r\ndef aws_get_public_ec2_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\n@beartype\r\ndef aws_get_public_ec2_instances(handle, region: str) -> Dict:\r\n\r\n\r\n    ec2Client = handle.client('ec2', region_name=region)\r\n\r\n    res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\r\n\r\n\r\n    result={}\r\n        # Iterate through the list of instances\r\n    for reservation in res:\r\n         for instance in reservation['Instances']:\r\n            #print(\"instance\",instance)\r\n            instance_id = instance['InstanceId']\r\n            public_DNS = instance['PublicDnsName']\r\n            if len(public_DNS)>0:\r\n                public_ip = instance['PublicIpAddress']\r\n                result[instance_id] = {\"public DNS\": public_DNS,\"public IP\":public_ip}\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_publicly_accessible_db_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS List Publicly Accessible RDS Instances </h1>\r\n\r\n## Description\r\nThis Lego filter AWS publicly accessible RDS instances.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_publicly_accessible_db_instances(handle: object, region: str,)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_publicly_accessible_db_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_publicly_accessible_db_instances/aws_get_publicly_accessible_db_instances.json",
    "content": "{\r\n    \"action_title\": \"AWS Get Publicly Accessible RDS Instances\",\r\n    \"action_description\": \"AWS Get Publicly Accessible RDS Instances\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_publicly_accessible_db_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_is_check\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\" ],\r\n    \"action_next_hop\": [\"dda26fd556dd6b59e2fac9c9ed6e81fc19e5374746049d494237bcdc6a17fae4\"],\r\n    \"action_next_hop_parameter_mapping\": {\"dda26fd556dd6b59e2fac9c9ed6e81fc19e5374746049d494237bcdc6a17fae4\": {\"name\": \"Secure Publicly Accessible Amazon RDS Instances\",\"region\":\".[0].region\",\"rds_instances\":\"map(.instance)\" }}\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_publicly_accessible_db_instances/aws_get_publicly_accessible_db_instances.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.utils import CheckOutput\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.connectors.aws import aws_get_paginator\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        '',\r\n        title='Region for RDS',\r\n        description='Region of the RDS.'\r\n    )\r\n\r\n\r\ndef aws_get_publicly_accessible_db_instances_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    if isinstance(output, CheckOutput):\r\n        print(output.json())\r\n    else:\r\n        pprint.pprint(output)\r\n\r\n\r\ndef aws_get_publicly_accessible_db_instances(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_get_publicly_accessible_db_instances Gets all publicly accessible DB instances\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: Region of the RDS.\r\n\r\n        :rtype: CheckOutput with status result and list of publicly accessible RDS instances.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n    for reg in all_regions:\r\n        try:\r\n            ec2Client = handle.client('rds', region_name=reg)\r\n            response = aws_get_paginator(ec2Client, \"describe_db_instances\", \"DBInstances\")\r\n            for db in response:\r\n                db_instance_dict = {}\r\n                if db['PubliclyAccessible']:\r\n                    db_instance_dict[\"region\"] = reg\r\n                    db_instance_dict[\"instance\"] = db['DBInstanceIdentifier']\r\n                    result.append(db_instance_dict)\r\n        except Exception:\r\n        
    pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_publicly_accessible_db_snapshots/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get Publicly Accessible DB Snapshots in RDS </h1>\r\n\r\n## Description\r\nThis Lego filter publicly accessible DB snapshots in RDS.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_publicly_accessible_db_snapshots(handle, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Region of the RDS.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs handle,region. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_publicly_accessible_db_snapshots/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_publicly_accessible_db_snapshots/aws_get_publicly_accessible_db_snapshots.json",
    "content": "{\r\n    \"action_title\": \"AWS Get Publicly Accessible DB Snapshots in RDS\",\r\n    \"action_description\": \"AWS Get Publicly Accessible DB Snapshots in RDS\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_publicly_accessible_db_snapshots\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\":true,\r\n    \"action_verbs\": [\"get\"],\r\n    \"action_nouns\": [\"aws\",\"database\",\"snapshots\",\"public\",\"accessible\"],\r\n    \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\",\"CATEGORY_TYPE_SECOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\"  ],\r\n    \"action_next_hop\":[\"7c0d143556a33b81d3fb1ff08dfdd59cebe5d58b00b55e8ae660df2e42f71bfe\"],\r\n    \"action_next_hop_parameter_mapping\":{\"7c0d143556a33b81d3fb1ff08dfdd59cebe5d58b00b55e8ae660df2e42f71bfe\": {\"name\": \"Secure Publicly accessible Amazon RDS Snapshot\",\"region\": \".[0].region\", \"public_snapshot_ids\":\"map(.open_snapshot)\"}}\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_publicly_accessible_db_snapshots/aws_get_publicly_accessible_db_snapshots.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.utils import CheckOutput\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.legos.aws.aws_filter_all_manual_database_snapshots.aws_filter_all_manual_database_snapshots import aws_filter_all_manual_database_snapshots\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='Region of the RDS'\r\n    )\r\n\r\n\r\ndef aws_get_publicly_accessible_db_snapshots_printer(output):\r\n    if output is None:\r\n        return\r\n    if isinstance(output, CheckOutput):\r\n        print(output.json())\r\n    else:\r\n        pprint.pprint(output)\r\n\r\n\r\ndef aws_get_publicly_accessible_db_snapshots(handle, region: str=None) -> Tuple:\r\n    \"\"\"aws_get_publicly_accessible_db_snapshots lists of publicly accessible\r\n       db_snapshot_identifier.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: Region of the RDS.\r\n\r\n        :rtype: Object with status, result having publicly accessible Snapshots \r\n        Identifier in RDS, error\r\n    \"\"\"\r\n    manual_snapshots_list=[]\r\n    result=[]\r\n    all_regions = [region]\r\n    if region is None or not region:\r\n        all_regions = aws_list_all_regions(handle=handle)\r\n    try:\r\n        for r in all_regions:\r\n            snapshots_dict = {}\r\n            output = aws_filter_all_manual_database_snapshots(handle=handle, region=r)\r\n            snapshots_dict[\"region\"]=r\r\n            snapshots_dict[\"snapshot\"]=output\r\n            manual_snapshots_list.append(snapshots_dict)\r\n    except Exception as e:\r\n        raise e\r\n\r\n  
  for all_snapshots in manual_snapshots_list:\r\n        try:\r\n            ec2Client = handle.client('rds', region_name=all_snapshots['region'])\r\n            for each_snapshot in all_snapshots['snapshot']:\r\n                response = ec2Client.describe_db_snapshot_attributes(\r\n                    DBSnapshotIdentifier=each_snapshot\r\n                    )\r\n                db_attribute = response[\"DBSnapshotAttributesResult\"]\r\n                for value in db_attribute['DBSnapshotAttributes']:\r\n                    p_dict={}\r\n                    if \"all\" in value[\"AttributeValues\"]:\r\n                        p_dict[\"region\"] = all_snapshots['region']\r\n                        p_dict[\"open_snapshot\"] = db_attribute['DBSnapshotIdentifier']\r\n                        result.append(p_dict)\r\n        except Exception:\r\n            pass\r\n    if len(result)!=0:\r\n        return (False, result)\r\n    return (True, [])\r\n"
  },
  {
    "path": "AWS/legos/aws_get_rds_automated_snapshots_above_retention_period/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get AWS RDS automated db snapshots above retention period</h1>\n\n## Description\nThis Action gets the snapshots above a certain retention period.\n\n## Lego Details\n\taws_get_rds_automated_snapshots_above_retention_period(handle, region: str=\"\", threshold:int=7)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tthreshold: The threshold number of days check for retention of automated snapshots. Default is 7 days\n\n\n## Lego Input\nThis Lego takes inputs handle, threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_rds_automated_snapshots_above_retention_period/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_rds_automated_snapshots_above_retention_period/aws_get_rds_automated_snapshots_above_retention_period.json",
    "content": "{\n  \"action_title\": \"Get AWS RDS automated db snapshots above retention period\",\n  \"action_description\": \"This Action gets the snapshots above a certain retention period.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_rds_automated_snapshots_above_retention_period\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[\"CATEGORY_TYPE_CLOUDOPS\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\"],\n  \"action_next_hop\":[],\n  \"action_next_hop_parameter_mapping\":{}\n}"
  },
  {
    "path": "AWS/legos/aws_get_rds_automated_snapshots_above_retention_period/aws_get_rds_automated_snapshots_above_retention_period.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom datetime import datetime, timedelta\nimport pytz\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field(\n        '', description='AWS Region of database.', title='Region'\n    )\n    threshold: Optional[int] = Field(\n        '',\n        description='The threshold number of days check for retention of automated snapshots. Default is 7 days',\n        title='Threshold(in days)',\n    )\n\n\n\ndef aws_get_rds_automated_snapshots_above_retention_period_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_rds_automated_snapshots_above_retention_period(handle, region: str=\"\", threshold:int=7) -> Tuple:\n    \"\"\"aws_get_rds_automated_snapshots_above_retention_period List all the manual database snapshots.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: string\n        :param region: Region for database.\n\n        :type threshold: int\n        :param threshold: The threshold number of days check for retention of automated snapshots. 
Default is 7 days.\n\n        :rtype: List of manual database snapshots.\n    \"\"\"\n    if not handle or threshold <= 0: # Input validation\n        raise ValueError(\"Invalid handle or threshold must be a positive integer.\")\n\n    result = []\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    min_creation_time = datetime.now(pytz.UTC) - timedelta(days=threshold)\n    for reg in all_regions:\n        try:\n            rdsClient = handle.client('rds', region_name=reg)\n            response = aws_get_paginator(rdsClient, \"describe_db_snapshots\",\"DBSnapshots\",\n                                         SnapshotType='automated')\n            for snapshot in response:\n                if 'SnapshotCreateTime' in snapshot: # Check if the key exists\n                    snapshot_time = snapshot['SnapshotCreateTime'].replace(tzinfo=pytz.UTC)\n                    if snapshot_time < min_creation_time:\n                        result.append({\"db_identifier\": snapshot['DBSnapshotIdentifier'], \"region\": reg})\n        except Exception as e:\n            pass\n    if len(result) != 0:\n        return (False, result)\n    else:\n        return (True, None)\n\n"
  },
  {
    "path": "AWS/legos/aws_get_redshift_query_details/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Redshift Query Details</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Action retrieves a Details on a Redshift Query\r\n\r\n\r\n## Lego Details\r\n   def aws_get_redshift_query_details(handle, region: str, queryId:str) -> Dict:\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n\t\tregion: AWS Region\r\n\t\tqueryId: id of the Redshift query\r\n\r\n## Lego Input\r\n        handle: Object of type unSkript datadog Connector\r\n\t\tregion: AWS Region\r\n\t\tqueryId: id of the Redshift query\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./redshiftdetails.jpg\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_redshift_query_details/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_redshift_query_details/aws_get_redshift_query_details.json",
    "content": "{\n  \"action_title\": \"AWS Get Redshift Query Details\",\n  \"action_description\": \"Given an QueryId, this Action will give you the status of the Query, along with other data like  the number of lines/\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_redshift_query_details\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_REDSHIFT\"  ]\n}"
  },
  {
    "path": "AWS/legos/aws_get_redshift_query_details/aws_get_redshift_query_details.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom __future__ import annotations\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    queryId: str = Field(\n\n         description='Id of Redshift Query', title='queryId'\n\n    )\n\n@beartype\ndef aws_get_redshift_query_details(handle, region: str, queryId:str) -> Dict:\n\n    client = handle.client('redshift-data', region_name=region)\n    response = client.describe_statement(\n    Id=queryId\n    )\n    resultReady = response['HasResultSet']\n    queryTimeNs = response['Duration']\n    ResultRows = response['ResultRows']\n    details = {\"Status\": response['Status'],\n                \"resultReady\": resultReady, \n               \"queryTimeNs\":queryTimeNs,\n               \"ResultRows\":ResultRows\n              }\n    return details\n"
  },
  {
    "path": "AWS/legos/aws_get_redshift_result/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get AWS Redshift Result</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Action retrieves a Result from a Redshift Query. Formats the query into a List for easy manipulation into a dataframe\r\n\r\n\r\n## Lego Details\r\n    def aws_get_redshift_result(handle, region:str, resultId: str) -> List:\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n\t\tregion: AWS Region\r\n\t\tresultId: QueryId of teh Redshift Query.\r\n\r\n## Lego Input\r\n    handle: Object of type unSkript datadog Connector\r\n\tregion: AWS Region\r\n\tresultId: QueryId of teh Redshift Query.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./redshift.jpg\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_redshift_result/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_redshift_result/aws_get_redshift_result.json",
    "content": "{\n  \"action_title\": \"AWS Get Redshift Result\",\n  \"action_description\": \"Given a QueryId, Get the Query Result, and format into a List\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_redshift_result\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_REDSHIFT\"  ]\n}"
  },
  {
    "path": "AWS/legos/aws_get_redshift_result/aws_get_redshift_result.py",
    "content": "from __future__ import annotations\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\n\n\nclass InputSchema(BaseModel):\n\n    resultId: str = Field(description='Redshift Query Result', title='resultId')\n    region: str = Field(..., description='AWS Region', title='region')\n\n\n@beartype\ndef aws_get_redshift_result(handle, region:str, resultId: str) -> List:\n\n\n    client = handle.client('redshift-data', region_name=region)\n    result = client.get_statement_result(\n        Id=resultId\n    )\n    #result has the Dictionary, but it is not easily queried\n    #get all the columns into an array\n    columnNames = []\n    for column in result['ColumnMetadata']:\n        columnNames.append(column['label'])\n    #print(columnNames)\n\n    #now let's make the output into a dict\n    listResult = []\n    for record in result['Records']:\n        entryCounter = 0\n        entryDict = {}\n        for entry in record:\n            for value in entry.values():\n                entryDict[columnNames[entryCounter]] = value\n            entryCounter +=1\n        #print(\"entryDict\",entryDict)\n        listResult.append(entryDict)\n\n    #print(listResult)\n    return listResult\n"
  },
  {
    "path": "AWS/legos/aws_get_reserved_instances_about_to_retired/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get EC2 Instances About To Retired</h1>\r\n\r\n## Description\r\nThis healthcheck Lego filter AWS reserved instance is scheduled to end within the threshold.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_reserved_instances_about_to_retired(handle, region: str, threshold: int = 7)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”,\r\n        threshold: The threshold for the reserved instance is scheduled to end within the threshold.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle, threshold and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_reserved_instances_about_to_retired/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_reserved_instances_about_to_retired/aws_get_reserved_instances_about_to_retired.json",
    "content": "{\r\n    \"action_title\": \"AWS Get EC2 Instances About To Retired\",\r\n    \"action_description\": \"AWS Get EC2 Instances About To Retired\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_reserved_instances_about_to_retired\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\": true,\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {},\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\", \"CATEGORY_TYPE_COST_OPT\"]\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_reserved_instances_about_to_retired/aws_get_reserved_instances_about_to_retired.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Tuple, Optional\r\nfrom datetime import datetime, timezone\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n    threshold: int = Field(\r\n        default=7,\r\n        title='Threshold(In days)',\r\n        description=('The threshold for the reserved instance is '\r\n                     'scheduled to end within the threshold.')\r\n                     )\r\n\r\n\r\ndef aws_get_reserved_instances_about_to_retired_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint({\"Instances\": output})\r\n\r\n\r\ndef aws_get_reserved_instances_about_to_retired(\r\n        handle,\r\n        region: str = \"\",\r\n        threshold: int = 7\r\n        ) -> Tuple:\r\n    \"\"\"aws_get_reserved_instances_about_to_retired Returns an array\r\n       of reserved instances.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: Region to filter instances.\r\n        \r\n        :type threshold: int\r\n        :param threshold: (in days) The threshold for the reserved \r\n        instance is scheduled to end within the threshold.\r\n\r\n        :rtype: Array of instances.\r\n    \"\"\"\r\n    now = datetime.now(timezone.utc)\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n    for reg in all_regions:\r\n        try:\r\n            ec2Client = handle.client('ec2', region_name=reg)\r\n            response = ec2Client.describe_reserved_instances()\r\n            for reserved_id in response[\"ReservedInstances\"]:\r\n 
               instance_dict = {}\r\n                # check if the Reserved Instance is scheduled to end within the threshold\r\n                if reserved_id['State'] == 'active' and (reserved_id['End'] - now).days <= threshold:\r\n                    instance_dict[\"instance_id\"] = reserved_id[\"ReservedInstancesId\"]\r\n                    instance_dict[\"region\"] = reg\r\n                    result.append(instance_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_resources_missing_tag/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Resources Missing Tag</h1>\n\n## Description\nGets a list of all AWS resources that are missing the tag in the input parameters.\n\n## Lego Details\n\taws_get_resources_missing_tag(handle, region: str, tag:str)\n\t\thandle: Object of type unSkript AWS Connector.\n\n\tPlease refer to README.md file of any existing lego and similarly add the description for your input parameters.\n\n\n## Lego Input\nThis Lego takes inputs handle,\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_resources_missing_tag/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_resources_missing_tag/aws_get_resources_missing_tag.json",
    "content": "{\n  \"action_title\": \"AWS Get Resources Missing Tag\",\n  \"action_description\": \"Gets a list of all AWS resources that are missing the tag in the input parameters.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_resources_missing_tag\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "AWS/legos/aws_get_resources_missing_tag/aws_get_resources_missing_tag.py",
    "content": "from __future__ import annotations\n\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\n\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    tag: str = Field(..., description='The Tag to search for', title='tag')\n\n\ndef aws_get_resources_missing_tag_printer(output):\n    if output is None:\n        return\n    pprint.pprint(f\"there are {len(output)} resources missing the tag. We can fix a max of 20.\" )\n\n\ndef aws_get_resources_missing_tag(handle, region: str, tag:str) -> List:\n    \"\"\"aws_get_resources_missing_tag Returns an List of Untagged Resources.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: str\n        :param region: Region to filter resources.\n\n        :rtype: List of untagged resources.\n    \"\"\"\n\n    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\n    result = []\n\n    arnKeywordsToIgnore = [\"sqlworkbench\",\n                           \"AutoScalingManagedRule\",\n                           \"sagarProxy\",\n                           \"fsap-0f4d1bbd83f172783\",\n                           \"experiment\"]\n\n    try:\n        response = aws_get_paginator(ec2Client, \"get_resources\", \"ResourceTagMappingList\")\n        for resources in response:\n            if not resources[\"Tags\"]:\n                #no tags at all!!\n                arnIgnore = False\n                for substring in arnKeywordsToIgnore:\n                    if substring in resources[\"ResourceARN\"]:\n                        arnIgnore = True\n                if not arnIgnore:\n                    # instance is missing tag\n                    result.append(resources[\"ResourceARN\"])\n            else:\n                #has tags\n     
           allTags = True\n                keyList = []\n                tagged_instance = resources['Tags']\n                #print(tagged_instance)\n                #get all the keys for the instance\n                for kv in tagged_instance:\n                    key = kv[\"Key\"]\n                    keyList.append(key)\n                #see if the required tags are represented in the keylist\n                #if they are not - the instance is not in compliance\n                if tag not in keyList:\n                    allTags = False\n                if not allTags:\n                    arnIgnore = False\n                    for substring in arnKeywordsToIgnore:\n                        if substring in resources[\"ResourceARN\"]:\n                            arnIgnore = True\n                    if not arnIgnore:\n                        # instance is missing tag\n                        result.append(resources[\"ResourceARN\"])\n\n    except Exception as error:\n        result.append({\"error\":error})\n\n    return result\n\n\n"
  },
  {
    "path": "AWS/legos/aws_get_resources_with_expiration_tag/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Resources With Expiration Tag</h1>\n\n## Description\nAWS Get all Resources with an expiration tag\n\n## Lego Details\n\taws_get_resources_with_expiration_tag(handle, region: str, tag:str)\n\t\thandle: Object of type unSkript AWS Connector.\n\n\tPlease refer to README.md file of any existing lego and similarly add the description for your input parameters.\n\n\n## Lego Input\nThis Lego takes inputs handle,\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_resources_with_expiration_tag/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_resources_with_expiration_tag/aws_get_resources_with_expiration_tag.json",
    "content": "{\n  \"action_title\": \"AWS Get Resources With Expiration Tag\",\n  \"action_description\": \"AWS Get all Resources with an expiration tag\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_resources_with_expiration_tag\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "AWS/legos/aws_get_resources_with_expiration_tag/aws_get_resources_with_expiration_tag.py",
    "content": "from __future__ import annotations\n\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\n\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    tag: str = Field(..., description='The Tag to search for', title='tag')\n\n\ndef aws_get_resources_with_expiration_tag_printer(output):\n    if output is None:\n        return\n    pprint.pprint(f\"there are {len(output)} resources with expiration tag.\" )\n\n\ndef aws_get_resources_with_expiration_tag(handle, region: str, tag:str) -> List:\n\n    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\n    result = []\n    try:\n        response = aws_get_paginator(ec2Client, \"get_resources\", \"ResourceTagMappingList\")\n        for resources in response:\n            if resources[\"Tags\"]:\n                #has tags\n                tags = resources['Tags']\n                for kv in resources['Tags']:\n                    if kv[\"Key\"] == tag:\n                        #we have found an expiration tag\n                        temp ={'arn': [resources[\"ResourceARN\"]], 'expires':kv[\"Value\"]}\n                        print(temp)\n                        result.append(temp)\n\n    except Exception as error:\n        result.append({\"error\":error})\n\n    return result\n\n\n"
  },
  {
    "path": "AWS/legos/aws_get_resources_with_tag/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get Resources With Tag</h1>\n\n## Description\nFor a given tag and region, get every AWS resource with that tag.\n\n## Lego Details\n\taws_get_resources_with_tag(handle, region: str, tag:str)\n\t\thandle: Object of type unSkript AWS Connector.\n\n\tPlease refer to README.md file of any existing lego and similarly add the description for your input parameters.\n\n\n## Lego Input\nThis Lego takes inputs handle,\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_resources_with_tag/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_resources_with_tag/aws_get_resources_with_tag.json",
    "content": "{\n  \"action_title\": \"AWS Get Resources With Tag\",\n  \"action_description\": \"For a given tag and region, get every AWS resource with that tag.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_resources_with_tag\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "AWS/legos/aws_get_resources_with_tag/aws_get_resources_with_tag.py",
    "content": "from __future__ import annotations\n\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    tag: str = Field(..., description='The Tag to search for', title='tag')\n\n\ndef aws_get_resources_with_tag_printer(output):\n    if output is None:\n        return\n    pprint.pprint(f\"there are {len(output)} resources with the desired tag.\" )\n\n\ndef aws_get_resources_with_tag(handle, region: str, tag:str) -> List:\n    \"\"\"aws_get_resources_with_tag Returns an List of Untagged Resources.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type region: str\n        :param region: Region to filter resources.\n\n        :rtype: List of untagged resources.\n    \"\"\"\n\n    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\n    result = []\n\n\n    try:\n        response = aws_get_paginator(ec2Client, \"get_resources\", \"ResourceTagMappingList\")\n        for resources in response:\n            if  resources[\"Tags\"]:\n                #has tags\n                #print(tagged_instance)\n                #get all the keys for the instance\n                for kv in resources['Tags']:\n                    key = kv[\"Key\"]\n                    if tag == key:\n                        temp = {\"arn\": resources[\"ResourceARN\"], \"value\":kv[\"Value\"]}\n                        result.append(temp)\n\n    except Exception as error:\n        result.append({\"error\":error})\n\n    return result\n\n\n"
  },
  {
    "path": "AWS/legos/aws_get_s3_buckets/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS S3 Buckets </h1>\r\n\r\n## Description\r\nThis Lego get AWS S3 buckets.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_s3_buckets(handle: object, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Used to filter the volume for specific region.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_s3_buckets/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_s3_buckets/aws_get_s3_buckets.json",
    "content": "{\r\n  \"action_title\": \"Get AWS S3 Buckets\",\r\n  \"action_description\": \"Get AWS S3 Buckets\",\r\n  \"action_type\": \"LEGO_TYPE_AWS\",\r\n  \"action_entry_function\": \"aws_get_s3_buckets\",\r\n  \"action_needs_credential\": true,\r\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n  \"action_supports_poll\": true,\r\n  \"action_supports_iteration\": true,\r\n  \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\" ,\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"  ]\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_get_s3_buckets/aws_get_s3_buckets.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_get_s3_buckets_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_s3_buckets(handle, region: str) -> List:\r\n    \"\"\"aws_get_s3_buckets List all the S3 buckets.\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n        :type region: string\r\n        :param region: location of the bucket\r\n        :rtype: List of all the S3 buckets\r\n    \"\"\"\r\n    s3Session = handle.resource(\"s3\", region_name=region)\r\n    try:\r\n        response = s3Session.buckets.all()\r\n        result = []\r\n        for bucket in response:\r\n            result.append(bucket.name)\r\n    except Exception:\r\n        pass\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_schedule_to_retire_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Schedule To Retire AWS EC2 Instance </h1>\r\n\r\n## Description\r\nThis Lego Get Schedule To Retire AWS EC2 Instance and gives a list of Instances.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_schedule_to_retire_instances(handle: object, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego take two inputs instance_ids and region. \r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "AWS/legos/aws_get_schedule_to_retire_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_schedule_to_retire_instances/aws_get_schedule_to_retire_instances.json",
    "content": "{\r\n    \"action_title\": \"Get Schedule To Retire AWS EC2 Instance\",\r\n    \"action_description\": \"Get Schedule To Retire AWS EC2 Instance\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_schedule_to_retire_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\": true,\r\n    \"action_verbs\": [\"get\"],\r\n    \"action_nouns\": [\"aws\",\"schedule\",\"retire\",\"instances\"],\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_EC2\"],\r\n    \"action_next_hop\":[\"6684091dbbcd51c416f37c3070df6efd9fcb029c06047fcab62f32ee4c2f0596\"],\r\n    \"action_next_hop_parameter_mapping\":{}\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_get_schedule_to_retire_instances/aws_get_schedule_to_retire_instances.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\n\r\nfrom typing import Tuple, Optional\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nfrom unskript.legos.aws.aws_filter_ec2_instances.aws_filter_ec2_instances import aws_filter_ec2_instances\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='Region of the RDS'\r\n    )\r\n\r\n\r\ndef aws_get_schedule_to_retire_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    status, res = output\r\n    if status:\r\n        print(\"There are no instances that are scheduled to retire.\")\r\n    else:\r\n        print(res)\r\n\r\ndef aws_get_schedule_to_retire_instances( handle, region: str=\"\") -> Tuple:\r\n    \"\"\"aws_get_schedule_to_retire_instances Returns a tuple of instances scheduled to retire.\r\n\r\n        :type region: string\r\n        :param region: Used to filter the volume for specific region.\r\n\r\n        :rtype: Object with status, list of instances scheduled to retire, and errors\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region] if region or len(region)!=0 else aws_list_all_regions(handle)\r\n\r\n    for r in all_regions:\r\n        try:\r\n            ec2client = handle.client('ec2', region_name=r)\r\n            instances = aws_filter_ec2_instances(handle=handle, region=r)\r\n            if not instances:\r\n                print(f\"No instances found in {r} region!\")\r\n                continue\r\n            try:\r\n                response = ec2client.describe_instance_status(\r\n                    Filters=[{'Name': 'event.code', 'Values': ['instance-retirement']}],\r\n                    InstanceIds=instances\r\n                )\r\n                instance_statuses = response.get('InstanceStatuses', [])\r\n                for 
res in instance_statuses:\r\n                    result.append({'instance': res['InstanceId'], 'region': r})\r\n            except Exception as e:\r\n                print(f\"An error occurred while describing instance status for instances in region {r}: {e}\")\r\n        except Exception:\r\n            pass\r\n\r\n    return (False, result) if result else (True, None)"
  },
  {
    "path": "AWS/legos/aws_get_secret_from_secretmanager/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get secrets from secretsmanager </h1>\r\n\r\n## Description\r\nThis Lego used to get secrets from secretsmanager.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_secret_from_secretmanager(handle: object, SecretId: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        SecretId: Name of the secret.\r\n        region: AWS Region.\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, SecretId and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_secret_from_secretmanager/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_secret_from_secretmanager/aws_get_secret_from_secretmanager.json",
    "content": "{\r\n    \"action_title\": \" Get secrets from secretsmanager\",\r\n    \"action_description\": \" Get secrets from AWS secretsmanager\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_secret_from_secretmanager\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_SECRET_MANAGER\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_secret_from_secretmanager/aws_get_secret_from_secretmanager.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##  @author: Yugal Pachpande, @email: yugal.pachpande@unskript.com\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom botocore.exceptions import ClientError\n\n\nclass InputSchema(BaseModel):\n    SecretId: str = Field(\n        title='Secret Name',\n        description='Name of the secret.')\n    region: str = Field(\n        title='Region',\n        description='AWS Region.')\n\n\ndef aws_get_secret_from_secretmanager_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_secret_from_secretmanager(handle, SecretId: str, region: str) -> str:\n    \"\"\"aws_get_secrets_from_secretsmanager returns The decrypted secret value\n\n     :type handle: object\n     :param handle: Object returned from task.validate(...).\n\n     :type SecretId: string\n     :param SecretId: Name of the secret.\n\n     :type region: string\n     :param region: AWS Region.\n\n     :rtype: The decrypted secret value\n    \"\"\"\n\n    secretsmanager_client = handle.client(service_name='secretsmanager', region_name=region)\n\n    try:\n        response = secretsmanager_client.get_secret_value(SecretId=SecretId)\n    except ClientError as e:\n        if e.response['Error']['Code'] == 'ResourceNotFoundException':\n            print(\"The requested secret \" + SecretId + \" was not found\")\n        elif e.response['Error']['Code'] == 'InvalidRequestException':\n            print(\"The request was invalid due to:\", e)\n        elif e.response['Error']['Code'] == 'InvalidParameterException':\n            print(\"The request had invalid params:\", e)\n        elif e.response['Error']['Code'] == 'DecryptionFailure':\n            print(\"The requested secret can't be decrypted using the provided KMS key:\", e)\n        elif e.response['Error']['Code'] == 'InternalServiceError':\n            print(\"An error occurred on service side:\", e)\n    else:\n        # 
Secrets Manager decrypts the secret value using the associated KMS CMK\n        # Depending on whether the secret was a string or binary, only one of\n        # these fields will be populated\n        if 'SecretString' in response:\n            text_secret_data = response['SecretString']\n            pprint.pprint(text_secret_data)\n            return text_secret_data\n\n        binary_secret_data = response['SecretBinary']\n        pprint.pprint(binary_secret_data)\n        return binary_secret_data\n"
  },
  {
    "path": "AWS/legos/aws_get_secrets_manager_secret/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get AWS Secret Details</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Action retrieves a Secret from AWS Secret Manager\r\n\r\n\r\n## Lego Details\r\n    def aws_get_secrets_manager_secret(handle, region: str, secret_name:str) -> str:\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n\t\tregion: AWS Region\r\n\t\tsecret_name: Name of the AWS Secret to obtain\r\n\r\n## Lego Input\r\n\t\tregion: AWS Region\r\n\t\tsecret_name: Name of the AWS Secret to obtain\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./awsgetsecret.jpg\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_secrets_manager_secret/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_secrets_manager_secret/aws_get_secrets_manager_secret.json",
    "content": "{\n  \"action_title\": \"AWS Get Secrets Manager Secret\",\n  \"action_description\": \"Get string (of JSON) containing Secret details\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_secrets_manager_secret\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_SECRET_MANAGER\"]\n}\n"
  },
  {
    "path": "AWS/legos/aws_get_secrets_manager_secret/aws_get_secrets_manager_secret.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom __future__ import annotations\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\nfrom botocore.exceptions import ClientError\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    secret_name: str = Field(\n        description='AWS Secret Name', title='secret_name'\n\n    )\n\n@beartype\ndef aws_get_secrets_manager_secret_printer(output):\n    if output is None:\n        return\n    pprint.pprint({\"secret\": output})\n\n\n@beartype\n@beartype\ndef aws_get_secrets_manager_secret(handle, region: str, secret_name:str) -> str:\n\n\n    # Create a Secrets Manager client\n\n    client = handle.client(\n        service_name='secretsmanager',\n        region_name=region\n    )\n\n    try:\n        get_secret_value_response = client.get_secret_value(\n            SecretId=secret_name\n        )\n    except ClientError as e:\n        # For a list of exceptions thrown, see\n        # https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html\n        raise e\n    #print(get_secret_value_response)\n    # Decrypts secret using the associated KMS key.\n    secret = get_secret_value_response['SecretString']\n    return secret\n"
  },
  {
    "path": "AWS/legos/aws_get_secrets_manager_secretARN/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get AWS Secrets Manager SecretARN</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Action retrieves the SecretARN from AWS Secret Manager. This can be used to make a RedShift query (amongst other things).\r\n\r\n\r\n## Lego Details\r\n    def aws_get_secrets_manager_secretARN(handle, region: str, secret_name:str) -> str:\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n\t\tregion: AWS Region\r\n\t\tsecret_name: Name of the AWS Secret to obtain\r\n\r\n## Lego Input\r\nThis Requires an AWS Region, and the name of the Secret you wish to get the ARN for.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./awssecretnarn.jpg\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_secrets_manager_secretARN/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_secrets_manager_secretARN/aws_get_secrets_manager_secretARN.json",
    "content": "{\n  \"action_title\": \"AWS Get Secrets Manager SecretARN\",\n  \"action_description\": \"Given a Secret Name - this Action returns the Secret ARN\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_secrets_manager_secretARN\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_SECRET_MANAGER\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_secrets_manager_secretARN/aws_get_secrets_manager_secretARN.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom __future__ import annotations\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\nfrom botocore.exceptions import ClientError\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        description='AWS Region.',\n        title='Region'\n        )\n    secret_name: str = Field(\n         description='AWS Secret Name',\n         title='secret_name'\n        )\n\n\n@beartype\ndef aws_get_secrets_manager_secretARN_printer(output):\n    if output is None:\n        return\n    pprint.pprint({\"secret\": output})\n\n\n@beartype\ndef aws_get_secrets_manager_secretARN(handle, region: str, secret_name:str) -> str:\n    # Create a Secrets Manager client\n    client = handle.client(\n        service_name='secretsmanager',\n        region_name=region\n    )\n\n    try:\n        get_secret_value_response = client.get_secret_value(\n            SecretId=secret_name\n        )\n    except ClientError as e:\n        # For a list of exceptions thrown, see\n        # https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html\n        raise e\n    # Decrypts secret using the associated KMS key.\n    secretArn = get_secret_value_response['ARN']\n    return secretArn\n"
  },
  {
    "path": "AWS/legos/aws_get_security_group_details/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS Security Group Details </h1>\r\n\r\n## Description\r\nThis Lego used to get details about a security group, given its ID.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_security_group_details(handle: object, group_id: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        group_id: AWS Security Group ID. For eg: sg-12345\r\n        region: AWS Region of the ECS service.\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, group_id and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_security_group_details/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_security_group_details/aws_get_security_group_details.json",
    "content": "{\r\n    \"action_title\": \"Get AWS Security Group Details\",\r\n    \"action_description\": \"Get details about a security group, given its ID.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_security_group_details\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_EC2\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_security_group_details/aws_get_security_group_details.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    group_id: str = Field(\n        title='Security Group ID',\n        description='AWS Security Group ID. For eg: sg-12345')\n    region: str = Field(\n        title='Region',\n        description='AWS Region'\n    )\n\n\ndef aws_get_security_group_details_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_security_group_details(handle, group_id: str, region: str) -> Dict:\n    \"\"\"aws_get_security_group_details returns The decrypted secret value\n\n     :type handle: object\n     :param handle: Object returned from task.validate(...).\n\n     :type group_id: string\n     :param group_id: AWS Security Group ID. For eg: sg-12345\n\n     :type region: string\n     :param region: AWS Region.\n\n     :rtype: The decrypted secret value\n    \"\"\"\n\n    ec2Client = handle.client('ec2', region_name=region)\n\n    res = ec2Client.describe_security_groups(GroupIds=[group_id])\n    return res\n"
  },
  {
    "path": "AWS/legos/aws_get_service_quota_details/README.md",
    "content": "\r\n[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Service Quota Details </h1>\r\n\r\n## Description\r\nFor a Given Service code and Quota Code - get the quota details.\r\n\r\n## Lego Details\r\n\r\n    def aws_get_service_quota_details(handle, service_code:str, quota_code:str, region:str) -> Dict:\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        service_code: Service Code name (ex: ec2)\r\n        quota_code: the quota code of the service.\r\n        region: Location of the S3 buckets.\r\n\r\n## Lego Input\r\nThis Lego take four inputs: handle, service_code, quota_code and region.\r\n\r\n## Lego Output\r\n\r\n<img src=\"./1.jpg\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_get_service_quota_details/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_service_quota_details/aws_get_service_quota_details.json",
    "content": "{\n  \"action_title\": \"AWS Get Service Quota for a Specific ServiceName\",\n  \"action_description\": \"Given an AWS Region, Service Code and Quota Code, this Action will output the quota information for the specified service.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_service_quota_details\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_service_quota_details/aws_get_service_quota_details.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom __future__ import annotations\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\n\n\nclass InputSchema(BaseModel):\n    quota_code: str = Field(\n        description='The quota code for the Service Type',\n        title='quota_code',\n    )\n    region: str = Field(..., description='AWS Region.', title='Region')\n    service_code: str = Field(\n         description='The service code to be queried', title='service_code'\n    )\n\n@beartype\ndef aws_get_service_quota_details_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n#list_service_quotas\n#list_aws_default_service_quotas\n@beartype\ndef aws_get_service_quota_details(handle, service_code:str, quota_code:str, region:str) -> Dict:\n    sqClient = handle.client('service-quotas',region_name=region)\n    res = sqClient.get_service_quota(\n        ServiceCode=service_code,\n        QuotaCode=quota_code)\n\n    #res = sqClient.list_services(MaxResults = 100)\n    return res\n"
  },
  {
    "path": "AWS/legos/aws_get_service_quotas/README.md",
    "content": "\r\n[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get Service Quotas for a Service </h1>\r\n\r\n## Description\r\nThis Action retrieves all of the Service Quotas for a service code.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_service_quotas_v1(handle, service_code:str, region:str) -> List:\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        service_code: Name of the S3 bucket.\r\n        region: Location of the S3 buckets.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, service_code and region.\r\n\r\n## Lego Output\r\nThe output is a list of every Quota for the Service Name\r\n<img src=\"./1.jpg\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_get_service_quotas/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_service_quotas/aws_get_service_quotas.json",
    "content": "{\n  \"action_title\": \"AWS Get Quotas for a Service\",\n  \"action_description\": \"Given inputs of the AWS Region, and the Service_Code for a service, this Action will output all of the Service Quotas and limits.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_service_quotas\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\n}\n"
  },
  {
    "path": "AWS/legos/aws_get_service_quotas/aws_get_service_quotas.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom __future__ import annotations\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\nfrom unskript.connectors.aws import aws_get_paginator\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS region', title='region')\n    service_code: str = Field(\n        'ec2',\n        description='The service code is used to get all quotas for the service',\n        title='service_code',\n    )\n\n\n@beartype\ndef aws_get_service_quotas_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n#list_service_quotas\n#list_aws_default_service_quotas\n@beartype\ndef aws_get_service_quotas(handle, service_code:str, region:str) -> List:\n    sqClient = handle.client('service-quotas',region_name=region)\n    resPaginate = aws_get_paginator(sqClient,'list_service_quotas','Quotas',\n        ServiceCode=service_code,\n        PaginationConfig={\n            'MaxItems': 1000,\n            'PageSize': 100\n        })\n\n    #res = sqClient.list_services(MaxResults = 100)\n    return resPaginate\n"
  },
  {
    "path": "AWS/legos/aws_get_stopped_instance_volumes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Stopped Instance Volumes</h1>\r\n\r\n## Description\r\nThis action helps to list the volumes that are attached to stopped instances.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_stopped_instance_volumes(handle, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_stopped_instance_volumes/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_stopped_instance_volumes/aws_get_stopped_instance_volumes.json",
    "content": "{\r\n    \"action_title\": \"Get Stopped Instance Volumes\",\r\n    \"action_description\": \"This action helps to list the volumes that are attached to stopped instances.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_stopped_instance_volumes\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_next_hop\": [\"a9d17f4c9feb963b6096290eedb21af43d89e803cdcb1238dc11a544a3071a1e\"],\r\n    \"action_next_hop_parameter_mapping\": {\"a9d17f4c9feb963b6096290eedb21af43d89e803cdcb1238dc11a544a3071a1e\": {\"name\": \"Delete EBS Volume Attached to Stopped Instances\", \"region\":\".[0].region\",\"volume_ids\":\"map(.volume_id)\"}},\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_AWS_EBS\" ]\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_stopped_instance_volumes/aws_get_stopped_instance_volumes.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_get_stopped_instance_volumes_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_stopped_instance_volumes(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_get_stopped_instance_volumes Returns an array of volumes.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: Region to filter instances.\r\n\r\n        :rtype: Array of volumes.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            ec2Client = handle.client('ec2', region_name=reg)\r\n            res = aws_get_paginator(ec2Client, \"describe_instances\", \"Reservations\")\r\n            for reservation in res:\r\n                for instance in reservation['Instances']:\r\n                    if instance['State']['Name'] == 'stopped':\r\n                        block_device_mappings = instance['BlockDeviceMappings']\r\n                        for mapping in block_device_mappings:\r\n                            if 'Ebs' in mapping:\r\n                                ebs_volume = {}\r\n                                volume_id = mapping['Ebs']['VolumeId']\r\n                                ebs_volume[\"volume_id\"] = volume_id\r\n                                
ebs_volume[\"region\"] = reg\r\n                                result.append(ebs_volume)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_sts_caller_identity/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get STS Caller Identity </h1>\r\n\r\n## Description\r\nThis Lego get STS caller identity.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_sts_caller_identity(handle: object)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_sts_caller_identity/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_sts_caller_identity/aws_get_sts_caller_identity.json",
    "content": "{\r\n    \"action_title\": \"Get STS Caller Identity\",\r\n    \"action_description\": \"Get STS Caller Identity\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_sts_caller_identity\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_STS\"]\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_get_sts_caller_identity/aws_get_sts_caller_identity.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    pass\r\n\r\ndef aws_get_sts_caller_identity_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_sts_caller_identity(handle) -> Dict:\r\n    \"\"\"aws_get_caller_identity Returns an dict of STS caller identity info.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :rtype: dict of STS caller identity info\r\n    \"\"\"\r\n    ec2Client = handle.client('sts')\r\n    response = ec2Client.get_caller_identity()\r\n\r\n    return response\r\n"
  },
  {
    "path": "AWS/legos/aws_get_tags_of_all_resources/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Get Tags of All Resources </h1>\r\n\r\n## Description\r\nThis Lego filters all the tags of resources for given region.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_resources_tags(handle, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Region to filter resources.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_tags_of_all_resources/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_tags_of_all_resources/aws_get_tags_of_all_resources.json",
    "content": "{\r\n    \"action_title\": \"AWS Get Tags of All Resources\",\r\n    \"action_description\": \"AWS Get Tags of All Resources\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_tags_of_all_resources\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_tags_of_all_resources/aws_get_tags_of_all_resources.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\ndef aws_get_tags_of_all_resources_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\ndef aws_get_tags_of_all_resources(handle, region: str) -> List:\r\n    \"\"\"aws_get_tags_of_all_resources Returns an List of all Resources Tags.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: str\r\n        :param region: Region to filter resources.\r\n\r\n        :rtype: List of all Resources Tags.\r\n    \"\"\"\r\n    ec2Client = handle.client('resourcegroupstaggingapi', region_name=region)\r\n    result = []\r\n    try:\r\n        response = aws_get_paginator(ec2Client, \"get_tag_keys\", \"TagKeys\")\r\n        result = response\r\n    except Exception as error:\r\n        result.append({\"error\":error})\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_get_timed_out_lambdas/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get Timed Out AWS Lambdas</h1>\n\n## Description\nGet AWS Lambda functions that have exceeded the maximum amount of time in seconds that a Lambda function can run.\n\n## Lego Details\n\taws_get_timed_out_lambdas(handle, days_back:int, region:str)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tregion: AWS region. Eg: \"us-west-2\"\n\t\tdays_back: Int, (in days) Number of days to go back. Default value is 1 day.\n\n## Lego Input\nThis Lego takes inputs handle,\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_timed_out_lambdas/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_timed_out_lambdas/aws_get_timed_out_lambdas.json",
    "content": "{\n  \"action_title\": \"Get Timed Out AWS Lambdas\",\n  \"action_description\": \"Get AWS Lambda functions that have exceeded the maximum amount of time in seconds that a Lambda function can run.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_timed_out_lambdas\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\"],\n  \"action_next_hop\": [],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "AWS/legos/aws_get_timed_out_lambdas/aws_get_timed_out_lambdas.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\n\nfrom pydantic import BaseModel, Field\nfrom typing import Tuple, Optional\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nimport pprint\nimport datetime\n\nclass InputSchema(BaseModel):\n    days_back: Optional[int] = Field(\n        1,\n        description='(in days) Number of days to go back. Default value is 1 day.',\n        title='Days Back',\n    )\n    region: Optional[str] = Field(\n        '', \n        description='AWS region. Eg: \"us-west-2\"', \n        title='Region'\n    )\n\n\ndef aws_get_timed_out_lambdas_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_timed_out_lambdas(handle, days_back:int=1, region:str=\"\") -> Tuple:\n    \"\"\"aws_get_timed_out_lambdas finds AWS Lambda functions with high error rate\n\n    :type region: string\n    :param region: AWS region. Eg: \"us-west-2\"\n\n    :type days_back: int\n    :param days_back: (in days) Number of days to go back. 
Default value is 1 day.\n\n    :rtype: Tuple with status result and list of Lambda functions that have timed out\n\n    \"\"\"\n    result = []\n    all_regions = [region]\n    if not region:\n        all_regions = aws_list_all_regions(handle)\n    for reg in all_regions:\n        try:\n            lambdaClient = handle.client('lambda', region_name=reg)\n            cloudwatchClient = handle.client('cloudwatch', region_name=reg)\n            # Get a list of all the Lambda functions in your account\n            response = lambdaClient.list_functions()\n            number_of_days = int(days_back)\n            start_time = datetime.datetime.now() - datetime.timedelta(days=number_of_days)\n            # Iterate through the list of functions and filter out the ones that have timed out\n            for function in response['Functions']:\n                # Get the configuration for the function\n                config_response = lambdaClient.get_function_configuration(FunctionName=function['FunctionName'])\n                # Check if the function has a timeout set and if it has timed out\n                if 'Timeout' in config_response and config_response['Timeout'] > 0:\n                    metrics_response = cloudwatchClient.get_metric_data(\n                        MetricDataQueries=[\n                            {\n                                'Id': 'm1',\n                                'MetricStat': {\n                                    'Metric': {\n                                        'Namespace': 'AWS/Lambda',\n                                        'MetricName': 'Duration',\n                                        'Dimensions': [\n                                            {\n                                                'Name': 'FunctionName',\n                                                'Value': function['FunctionName']\n                                            },\n                                        ]\n                                    
},\n                                    'Period': 300,\n                                    'Stat': 'p90'\n                                },\n                                'ReturnData': True\n                            },\n                        ],\n                        StartTime=start_time,\n                        EndTime=datetime.datetime.now()\n                    )\n\n                    # Check if the function has timed out\n                    if len(metrics_response['MetricDataResults'][0]['Values'])!=0:\n                        if metrics_response['MetricDataResults'][0]['Values'][0] >= config_response['Timeout'] * 1000:\n                            lambda_func = {}\n                            lambda_func['function_name'] = function['FunctionName']\n                            lambda_func['region'] = reg\n                            result.append(lambda_func)\n                    else:\n                        continue\n        except Exception:\n            pass\n    if len(result) != 0:\n        return (False, result)\n    else:\n        return (True, None)"
  },
  {
    "path": "AWS/legos/aws_get_ttl_for_route53_records/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Get TTL For Route53 Records</h1>\n\n## Description\nGet TTL for Route53 records for a hosted zone.\n\n## Lego Details\n\taws_get_ttl_for_route53_records(handle, hosted_zone_id:str)\n\n\t\thandle: Object of type unSkript AWS Connector.\n\n\t\thosted_zone_id: ID of the Hosted zone used for routing traffic.\n\n\n## Lego Input\nThis Lego takes two inputs handle, hosted_zone_id\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_ttl_for_route53_records/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ttl_for_route53_records/aws_get_ttl_for_route53_records.json",
    "content": "{\n  \"action_title\": \"AWS Get TTL For Route53 Records\",\n  \"action_description\": \"Get TTL for Route53 records for a hosted zone.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_ttl_for_route53_records\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ROUTE53\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_ttl_for_route53_records/aws_get_ttl_for_route53_records.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\n\n\nclass InputSchema(BaseModel):\n    hosted_zone_id: str = Field(\n        ...,\n        description='ID of the Hosted zone used for routing traffic.',\n        title='Hosted Zone ID',\n    )\n\n\ndef aws_get_ttl_for_route53_records_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_get_ttl_for_route53_records(handle, hosted_zone_id:str) -> List:\n    \"\"\"aws_get_ttl_for_route53_records Returns TTL for records in a hosted zone\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type hosted_zone_id: str\n        :param hosted_zone_id: ID of the Hosted zone used for routing traffic.\n\n        :rtype: List of details with the record type, record name and record TTL.\n    \"\"\"\n    route53Client = handle.client('route53')\n    response = aws_get_paginator(\n        route53Client,\n        \"list_resource_record_sets\",\n        \"ResourceRecordSets\",\n        HostedZoneId=hosted_zone_id\n        )\n    result = []\n    for record in response:\n        records = {}\n        record_name = record.get('Name')\n        record_type = record.get('Type')\n        record_ttl = record.get('TTL', 'N/A')\n        records[\"record_name\"] = record_name\n        records[\"record_type\"] = record_type\n        records[\"record_ttl\"] = record_ttl\n        result.append(records)\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_get_ttl_under_given_hours/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS: Check for short Route 53 TTL</h1>\n\n## Description\nGet Route53 records for a hosted zone under the given threshold (in hours).\n\n## Lego Details\n\taws_get_ttl_under_given_hours(handle, threshold: int = 1)\n\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tthreshold: (In hours) A threshold in hours to verify route 53 TTL is within the threshold.\n\n\n## Lego Input\nThis Lego take two inputs handle and threshold\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_ttl_under_given_hours/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_ttl_under_given_hours/aws_get_ttl_under_given_hours.json",
    "content": "{\n  \"action_title\": \"AWS: Check for short Route 53 TTL\",\n  \"action_description\": \"AWS: Check for short Route 53 TTL\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_ttl_under_given_hours\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_is_check\": true,\n  \"action_next_hop\": [\"a0773e52a3a3a8688e47a9e10eba1c680913d28a9a8c4466113181534bd1f972\"],\n  \"action_next_hop_parameter_mapping\": {\"a0773e52a3a3a8688e47a9e10eba1c680913d28a9a8c4466113181534bd1f972\": {\"name\": \"Change AWS Route53 TTL\", \"hosted_zone_id\": \"map(.hosted_zone_id)\", \"record_name\": \"map(.record_name)\", \"record_type\": \"map(.record_type)\"}},\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ROUTE53\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_ttl_under_given_hours/aws_get_ttl_under_given_hours.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\nfrom unskript.legos.aws.aws_get_ttl_for_route53_records.aws_get_ttl_for_route53_records import aws_get_ttl_for_route53_records\n\n\nclass InputSchema(BaseModel):\n    threshold: Optional[int] = Field(\n        default=1,\n        description=('(In hours) A threshold in hours to verify route '\n                     '53 TTL is within the threshold.'),\n        title='Threshold (In hours)',\n    )\n\n\ndef aws_get_ttl_under_given_hours_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_ttl_under_given_hours(handle, threshold: int = 1) -> Tuple:\n    \"\"\"aws_get_ttl_under_x_hours Returns TTL for records in a hosted zone\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) 
method.\n\n        :type threshold: str\n        :param threshold: (In hours) A threshold in hours to verify route\n        53 TTL is within the threshold.\n\n        :rtype: List of details with the record type, record name and record TTL.\n    \"\"\"\n    if handle is None:\n        raise ValueError(\"Handle must not be None.\")\n\n    result = []\n    try:\n        route_client = handle.client('route53')\n        seconds = threshold * 3600\n        hosted_zones = aws_get_paginator(route_client, \"list_hosted_zones\", \"HostedZones\")\n        for zone in hosted_zones:\n            zone_id = zone.get('Id')\n            if not zone_id:\n                continue\n            \n            record_ttl_data = aws_get_ttl_for_route53_records(handle, zone_id)\n            for record_ttl in record_ttl_data:\n                if 'record_ttl' not in record_ttl or isinstance(record_ttl['record_ttl'], str):\n                    continue\n                elif record_ttl['record_ttl'] < seconds:\n                    records = {\n                        \"hosted_zone_id\": zone_id,\n                        \"record_name\": record_ttl.get('record_name', ''),\n                        \"record_type\": record_ttl.get('record_type', ''),\n                        \"record_ttl\": record_ttl['record_ttl'],\n                    }\n                    result.append(records)\n    except Exception as e:\n        raise e\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_get_unhealthy_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get UnHealthy EC2 Instances for Classic ELB </h1>\r\n\r\n## Description\r\nThis Lego used to get UnHealthy EC2 Instances for Classic ELB.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_unhealthy_instances(handle: object, elb_name: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        elb_name: Name of the ELB. NOTE: It ONLY supports Classic.\r\n        region: Name of the AWS Region.\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, elb_name and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_unhealthy_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_unhealthy_instances/aws_get_unhealthy_instances.json",
    "content": "{\r\n    \"action_title\": \"Get UnHealthy EC2 Instances for Classic ELB\",\r\n    \"action_description\": \"Get UnHealthy EC2 Instances for Classic ELB\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_unhealthy_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_ELB\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_get_unhealthy_instances/aws_get_unhealthy_instances.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    elb_name: str = Field(\n        title='ELB Name',\n        description='Name of the ELB. NOTE: It ONLY supports Classic.')\n    region: str = Field(\n        title='Region',\n        description='Name of the AWS Region'\n    )\n\n\ndef aws_get_unhealthy_instances_printer(output):\n    if output is None:\n        return\n    if output == []:\n        print(\"All instances are healthy\")\n    else:\n        pprint.pprint(output)\n\n\ndef aws_get_unhealthy_instances(handle, elb_name: str, region: str) -> List:\n    \"\"\"aws_get_unhealthy_instances returns array of unhealthy instances\n\n     :type handle: object\n     :param handle: Object returned from task.validate(...).\n\n     :type elb_name: string\n     :param elb_name: Name of the ELB. Note: It ONLY supports Classic.\n\n     :type region: string\n     :param region: Name of the AWS Region.\n\n     :rtype: Returns array of unhealthy instances\n    \"\"\"\n\n    elbClient = handle.client('elb', region_name=region)\n    res = elbClient.describe_instance_health(\n        LoadBalancerName=elb_name,\n    )\n\n    unhealthy_instances = []\n    for instance in res['InstanceStates']:\n        if instance['State'] == \"OutOfService\":\n            unhealthy_instances.append(instance)\n\n    return unhealthy_instances\n"
  },
  {
    "path": "AWS/legos/aws_get_unhealthy_instances_from_elb/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Unhealthy instances from ELB </h1>\r\n\r\n## Description\r\nThis action filters unhealthy AWS instances from the Elastic Load Balancer.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_unhealthy_ec2_instances_for_elb(handle, elb_name: str = \"\", region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        elb_name: Optional, Name of the elastic load balancer.\r\n        region: Optional, AWS region. Eg: \"us-west-2\"\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, elb_name, and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_unhealthy_instances_from_elb/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_unhealthy_instances_from_elb/aws_get_unhealthy_instances_from_elb.json",
    "content": "{\r\n    \"action_title\": \"Get Unhealthy instances from ELB\",\r\n    \"action_description\": \"Get Unhealthy instances from Elastic Load Balancer\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_unhealthy_instances_from_elb\",\r\n    \"action_needs_credential\": true,\r\n    \"action_is_check\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" , \"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\"],\r\n    \"action_next_hop\": [\"94707558cebedbcb77aabaec5d6d2d1bf3f4664db6e9e905d6d905a11a3ef8bc\"],\r\n    \"action_next_hop_parameter_mapping\": {\"94707558cebedbcb77aabaec5d6d2d1bf3f4664db6e9e905d6d905a11a3ef8bc\": {\"name\": \"AWS Get unhealthy EC2 instances from ELB\", \"region\": \".[0].region\", \"elb_name\":\"map(.load_balancer_name)\"}}\r\n}"
  },
  {
    "path": "AWS/legos/aws_get_unhealthy_instances_from_elb/aws_get_unhealthy_instances_from_elb.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    elb_name: Optional[str] = Field(\r\n        default=\"\",\r\n        title='ELB Name',\r\n        description='Name of the elastic load balancer.')\r\n\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region of the ELB.')\r\n\r\n\r\ndef aws_get_unhealthy_instances_from_elb_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_get_unhealthy_instances_from_elb(handle, elb_name: str = \"\", region: str = \"\") -> Tuple:\r\n    \"\"\"aws_get_unhealthy_instances_from_elb gives unhealthy instances from ELB\r\n\r\n        :type elb_name: string\r\n        :param elb_name: Name of the elastic load balancer.\r\n\r\n        :type region: string\r\n        :param region: AWS region.\r\n\r\n        :rtype: A tuple with execution results and a list of unhealthy instances from ELB\r\n    \"\"\"\r\n\r\n    result = []\r\n    all_regions = [region] if region else aws_list_all_regions(handle)\r\n    elb_list = []\r\n\r\n    # Handling the logic when elb_name is not provided\r\n    if not elb_name:\r\n        for reg in all_regions:\r\n            print(reg)\r\n            try:\r\n                asg_client = handle.client('elb', region_name=reg)\r\n                response = aws_get_paginator(asg_client, \"describe_load_balancers\", \"LoadBalancerDescriptions\")\r\n                for i in response:\r\n                    elb_list.append({\"load_balancer_name\": i[\"LoadBalancerName\"], \"region\": reg})\r\n            except Exception:\r\n                pass\r\n\r\n    # 
Handling the logic when only elb_name is provided\r\n    if elb_name and not region:\r\n        for reg in all_regions:\r\n            try:\r\n                asg_client = handle.client('elb', region_name=reg)\r\n                response = aws_get_paginator(asg_client, \"describe_load_balancers\", \"LoadBalancerDescriptions\")\r\n                for i in response:\r\n                    if elb_name in i[\"LoadBalancerName\"]:\r\n                        elb_list.append({\"load_balancer_name\": i[\"LoadBalancerName\"], \"region\": reg})\r\n            except Exception:\r\n                pass\r\n\r\n    # Handling the logic when both elb_name and region are provided\r\n    if elb_name and region:\r\n        try:\r\n            elbClient = handle.client('elb', region_name=region)\r\n            res = elbClient.describe_instance_health(LoadBalancerName=elb_name)\r\n            for instance in res['InstanceStates']:\r\n                if instance['State'] == \"OutOfService\":\r\n                    result.append({\r\n                        \"instance_id\": instance[\"InstanceId\"],\r\n                        \"region\": region,\r\n                        \"load_balancer_name\": elb_name\r\n                    })\r\n        except Exception as e:\r\n            raise e\r\n\r\n    # Handling the logic when elb_list is populated\r\n    for elb in elb_list:\r\n        try:\r\n            elbClient = handle.client('elb', region_name=elb[\"region\"])\r\n            res = elbClient.describe_instance_health(LoadBalancerName=elb[\"load_balancer_name\"])\r\n            for instance in res['InstanceStates']:\r\n                if instance['State'] == \"OutOfService\":\r\n                    result.append({\r\n                        \"instance_id\": instance[\"InstanceId\"],\r\n                        \"region\": elb[\"region\"],\r\n                        \"load_balancer_name\": elb[\"load_balancer_name\"]\r\n                    })\r\n        except Exception as e:\r\n           
 raise e\r\n\r\n    return (False, result) if result else (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_get_unused_route53_health_checks/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS get Unused Route53 Health Checks</h1>\n\n## Description\nGet get Unused Route53 Health Checks for hosted zones.\n\n## Lego Details\n\taws_get_unused_route53_health_checks(handle, hosted_zone_id: str = \"\")\n\n\t\thandle: Object of type unSkript AWS Connector.\n\t\thosted_zone_id: Optional. Used to filter the health checks for a specific hosted zone.\n\n\n## Lego Input\nThis Lego take two inputs handle and hosted_zone_id\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_unused_route53_health_checks/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_get_unused_route53_health_checks/aws_get_unused_route53_health_checks.json",
    "content": "{\n  \"action_title\": \"AWS get Unused Route53 Health Checks\",\n  \"action_description\": \"AWS get Unused Route53 Health Checks\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_get_unused_route53_health_checks\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_is_check\": true,\n  \"action_next_hop\": [\"10a363abaf49098a0376eae46a6bfac421e606952369fc6ea02768ad319dd0be\"],\n  \"action_next_hop_parameter_mapping\": {\"10a363abaf49098a0376eae46a6bfac421e606952369fc6ea02768ad319dd0be\": {\"name\": \"Delete Unused Route53 HealthChecks\", \"health_check_ids\": \".\"}},\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ROUTE53\"]\n}"
  },
  {
    "path": "AWS/legos/aws_get_unused_route53_health_checks/aws_get_unused_route53_health_checks.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\n\n\nclass InputSchema(BaseModel):\n    hosted_zone_id: Optional[str] = Field(\n        default='',\n        description='Used to filter the health checks for a specific hosted zone.',\n        title='Hosted Zone ID',\n    )\n\n\ndef aws_get_unused_route53_health_checks_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\ndef aws_get_unused_route53_health_checks(handle, hosted_zone_id: str = \"\") -> Tuple:\n    \"\"\"aws_get_unused_route53_health_checks Returns a list of unused Route 53 health checks.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type hosted_zone_id: string\n        :param hosted_zone_id: Optional. Used to filter the health checks for a specific\n        hosted zone.\n\n        :rtype: A tuple containing the status and a list of unused health check IDs.\n    \"\"\"\n    result = []\n    try:\n        route_client = handle.client('route53')\n        health_checks = aws_get_paginator(route_client, \"list_health_checks\", \"HealthChecks\")\n        if hosted_zone_id:\n            hosted_zones = [{'Id': hosted_zone_id}]\n        else:\n            hosted_zones = aws_get_paginator(route_client, \"list_hosted_zones\", \"HostedZones\")\n        used_health_check_ids = set()\n        for zone in hosted_zones:\n            record_sets = aws_get_paginator(\n                route_client,\n                \"list_resource_record_sets\",\n                \"ResourceRecordSets\",\n                HostedZoneId=zone['Id']\n                )\n            for record_set in record_sets:\n                if 'HealthCheckId' in record_set:\n                    used_health_check_ids.add(record_set['HealthCheckId'])\n        for hc in health_checks:\n            if hc['Id'] not in used_health_check_ids:\n                result.append(hc['Id'])\n    except Exception as e:\n        raise e\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_get_users_with_old_access_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>AWS Get IAM Users with Old Access Keys</h1>\n\n## Description\nThis Lego collects the access keys that have never been used, or whose last use is older than the threshold.\n\n\n## Lego Details\n\n    aws_get_users_with_old_access_keys(handle, threshold_in_days: int = 120)\n\n        handle: Object of type unSkript AWS Connector.\n        threshold_in_days: (in days) The threshold to check the IAM user access keys older than the threshold.\n\n\n## Lego Input\nThis Lego takes two inputs: handle and threshold_in_days.\n\n## Lego Output\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_get_users_with_old_access_keys/__init__.py",
    "content": "##\n##  Copyright (c) 2022 unSkript, Inc\n##  All rights reserved.\n##"
  },
  {
    "path": "AWS/legos/aws_get_users_with_old_access_keys/aws_get_users_with_old_access_keys.json",
    "content": "{\r\n    \"action_title\": \"AWS Get IAM Users with Old Access Keys\",\r\n    \"action_description\": \"This Lego collects the access keys that have never been used or the access keys that have been used but are older than the threshold.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_get_users_with_old_access_keys\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_IAM\"]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_get_users_with_old_access_keys/aws_get_users_with_old_access_keys.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom datetime import datetime, timezone\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\n\n\nclass InputSchema(BaseModel):\n    threshold_in_days: int = Field(\n        default = 120,\n        title=\"Threshold (In days)\",\n        description=(\"(in days) The threshold to check the IAM user access \"\n                     \"keys older than the threshold.\")\n    )\n\n\ndef aws_get_users_with_old_access_keys_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_get_users_with_old_access_keys(handle, threshold_in_days: int = 120) -> List:\n    \"\"\"aws_get_users_with_old_access_keys lists all the IAM users with access keys\n\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :type threshold_in_days: int\n        :param threshold_in_days: (in days) The threshold to check the IAM user\n        access keys older than the threshold.\n\n        :rtype: List of all IAM users with old access keys.\n    \"\"\"\n    client = handle.client('iam')\n    result = []\n    try:\n        response = aws_get_paginator(client, \"list_users\", \"Users\")\n    except Exception as e:\n        result.append({\"error\": str(e)})\n        return result\n    for user in response:\n        try:\n            # Get a list of the user's access keys\n            access_keys = client.list_access_keys(UserName=user['UserName'])\n        except Exception:\n            continue\n        for access_key in access_keys['AccessKeyMetadata']:\n            iam_data = {}\n            try:\n                access_key_info = client.get_access_key_last_used(\n                    AccessKeyId=access_key['AccessKeyId']\n                    )\n            except Exception:\n                continue\n            if 'LastUsedDate' not in access_key_info['AccessKeyLastUsed']:\n                iam_data[\"access_key\"] = access_key['AccessKeyId']\n                iam_data[\"iam_user\"] = user['UserName']\n                iam_data[\"last_used_days_ago\"] = 'Never Used'\n                result.append(iam_data)\n            else:\n                # Get the last used date of the access key\n                last_used = access_key_info['AccessKeyLastUsed']['LastUsedDate']\n                days_since_last_used = (datetime.now(timezone.utc) - last_used).days\n                # Check if the access key was last used more than threshold_in_days ago\n                if days_since_last_used > threshold_in_days:\n                    iam_data[\"access_key\"] = access_key['AccessKeyId']\n                    iam_data[\"iam_user\"] = user['UserName']\n                    iam_data[\"last_used_days_ago\"] = days_since_last_used\n                    result.append(iam_data)\n\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_launch_instance_from_ami/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Launch AWS EC2 Instance From an AMI</h1>\r\n\r\n## Description\r\nThis Lego launches an AWS EC2 instance from an AMI.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_launch_instance_from_ami(handle, ami_id: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        ami_id: AMI Id to launch the instance from.\r\n        region: Region for the instance.\r\n\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, ami_id and region.\r\n\r\n## Lego Output\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_launch_instance_from_ami/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_launch_instance_from_ami/aws_launch_instance_from_ami.json",
    "content": "{\r\n    \"action_title\": \"Launch AWS EC2 Instance From an AMI\",\r\n    \"action_description\": \"Use this instance to Launch an AWS EC2 instance from an AMI\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_launch_instance_from_ami\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_EC2\"]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_launch_instance_from_ami/aws_launch_instance_from_ami.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    ami_id: str = Field(\n        title='AMI Id',\n        description='AMI Id.')\n    region: str = Field(\n        title='Region',\n        description='AWS Region.')\n\n\ndef aws_launch_instance_from_ami_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_launch_instance_from_ami(handle, ami_id: str, region: str) -> List:\n    \"\"\"aws_launch_instance_from_ami Launch instances from a particular image.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type ami_id: string\n        :param ami_id: AMI Id Information required to launch an instance.\n\n        :type region: string\n        :param region: Region to launch the instance in.\n\n        :rtype: List of launched instance details.\n    \"\"\"\n    ec2Client = handle.client('ec2', region_name=region)\n\n    res = ec2Client.run_instances(ImageId=ami_id, MinCount=1, MaxCount=1)\n\n    return res['Instances']\n"
  },
  {
    "path": "AWS/legos/aws_list_access_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>AWS List Access Keys</h1>\n\n## Description\nThis Lego lists all the Access Keys of a user.\n\n\n## Lego Details\n\n    aws_list_access_keys(handle, aws_username: str)\n\n        handle: Object of type unSkript AWS Connector.\n        aws_username: User name of the AWS user.\n\n\n## Lego Input\nThis Lego takes two inputs: handle and aws_username.\n\n## Lego Output\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_access_keys/__init__.py",
    "content": "##\n##  Copyright (c) 2022 unSkript, Inc\n##  All rights reserved.\n##"
  },
  {
    "path": "AWS/legos/aws_list_access_keys/aws_list_access_keys.json",
    "content": "{\r\n    \"action_title\": \"AWS List Access Key\",\r\n    \"action_description\": \"List all Access Keys for the User\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_list_access_keys\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_IAM\"]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_list_access_keys/aws_list_access_keys.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    aws_username: str = Field(\n        title=\"Username\",\n        description=\"Username of the IAM User\"\n    )\n\n\ndef aws_list_access_keys_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_list_access_keys(\n        handle,\n        aws_username: str\n) -> Dict:\n    \"\"\"aws_list_access_keys lists all the access keys for a user\n\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :type aws_username: str\n        :param aws_username: Username of the IAM user to be looked up\n\n        :rtype: Dict of the list_access_keys API response\n    \"\"\"\n    iamClient = handle.client('iam')\n    # list_access_keys already returns a dict with unique keys, so no deduplication is needed\n    result = iamClient.list_access_keys(UserName=aws_username)\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_list_all_iam_users/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>AWS List all IAM Users</h1>\n\n## Description\nThis Lego lists all the IAM Users.\n\n\n## Lego Details\n\n    aws_list_all_iam_users(handle)\n\n        handle: Object of type unSkript AWS Connector.\n\n\n## Lego Input\nThis Lego takes one input: handle.\n\n## Lego Output\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_all_iam_users/__init__.py",
    "content": "##\n##  Copyright (c) 2022 unSkript, Inc\n##  All rights reserved.\n##"
  },
  {
    "path": "AWS/legos/aws_list_all_iam_users/aws_list_all_iam_users.json",
    "content": "{\n    \"action_title\": \"AWS List All IAM Users\",\n    \"action_description\": \"List all AWS IAM Users\",\n    \"action_type\": \"LEGO_TYPE_AWS\",\n    \"action_entry_function\": \"aws_list_all_iam_users\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"list\"],\n    \"action_nouns\": [\"users\",\"iam\",\"aws\"],\n    \"action_is_check\": false,\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_IAM\"]\n}"
  },
  {
    "path": "AWS/legos/aws_list_all_iam_users/aws_list_all_iam_users.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\ndef aws_list_all_iam_users_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_list_all_iam_users(handle) -> List:\n    \"\"\"aws_list_all_iam_users lists all the IAM users\n\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :rtype: Result List of all IAM users\n    \"\"\"\n    client = handle.client('iam')\n    users_list = []\n    try:\n        response = client.list_users()\n        for x in response['Users']:\n            users_list.append(x['UserName'])\n    except Exception as e:\n        users_list.append(str(e))\n    return users_list\n"
  },
  {
    "path": "AWS/legos/aws_list_all_regions/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>AWS List All Regions</h1>\n\n## Description\nThis Lego lists all the AWS Regions.\n\n\n## Lego Details\n\n    aws_list_all_regions(handle)\n\n        handle: Object of type unSkript AWS Connector.\n\n\n## Lego Input\nThis Lego takes one input: handle.\n\n## Lego Output\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_all_regions/__init__.py",
    "content": "##\n##  Copyright (c) 2022 unSkript, Inc\n##  All rights reserved.\n##"
  },
  {
    "path": "AWS/legos/aws_list_all_regions/aws_list_all_regions.json",
    "content": "{\n    \"action_title\": \"AWS List All Regions\",\n    \"action_description\": \"List all available AWS Regions\",\n    \"action_type\": \"LEGO_TYPE_AWS\",\n    \"action_entry_function\": \"aws_list_all_regions\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"list\"],\n    \"action_nouns\": [\"regions\",\"aws\"],\n    \"action_is_check\": false,\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\n}\n"
  },
  {
    "path": "AWS/legos/aws_list_all_regions/aws_list_all_regions.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\ndef aws_list_all_regions_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_list_all_regions(handle) -> List:\n    \"\"\"aws_list_all_regions lists all the AWS regions\n\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :rtype: Result List of result\n    \"\"\"\n\n    result = handle.aws_cli_command(\n        \"aws ec2 --region us-west-2 describe-regions --all-regions --query 'Regions[].{Name:RegionName}' --output text\"\n        )\n    if result is None or result.returncode != 0:\n        print(f\"Error while executing command : {result}\")\n        return []\n    result_op = list(result.stdout.split(\"\\n\"))\n    list_region = [x for x in result_op if x != '']\n    return list_region\n"
  },
  {
    "path": "AWS/legos/aws_list_application_loadbalancers/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS List Application LoadBalancers ARNs</h1>\r\n\r\n## Description\r\nThis Lego lists the ARNs of AWS application load balancers.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_list_application_loadbalancers(handle: object, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        region: Region of the application load balancer.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_application_loadbalancers/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_list_application_loadbalancers/aws_list_application_loadbalancers.json",
    "content": "{\r\n    \"action_title\": \"AWS List Application LoadBalancers ARNs\",\r\n    \"action_description\": \"AWS List Application LoadBalancers ARNs\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_list_application_loadbalancers\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_ELB\"]\r\n}"
  },
  {
    "path": "AWS/legos/aws_list_application_loadbalancers/aws_list_application_loadbalancers.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, List\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        title='Region of the Application Load Balancer',\r\n        description='Region of the application load balancer.'\r\n    )\r\n\r\n\r\ndef aws_list_application_loadbalancers_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_list_application_loadbalancers(handle, region: str) -> List:\r\n    \"\"\"aws_list_application_loadbalancers lists application loadbalancers ARNs.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: Region of the application load balancer.\r\n\r\n        :rtype: List of application load balancer ARNs\r\n    \"\"\"\r\n    result = []\r\n    try:\r\n        elbClient = handle.client('elbv2', region_name=region)\r\n        resp = aws_get_paginator(elbClient, \"describe_load_balancers\", \"LoadBalancers\")\r\n        for elb in resp:\r\n            if elb['Type'] == \"application\":\r\n                result.append(elb['LoadBalancerArn'])\r\n    except Exception:\r\n        pass\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_list_attached_user_policies/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS List Attached User Policies</h1>\r\n\r\n## Description\r\nThis Lego returns the list of Policies attached to a User.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_list_attached_user_policies(handle: object, UserName: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        UserName: IAM user name whose policies need to be fetched.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and UserName.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_attached_user_policies/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_list_attached_user_policies/aws_list_attached_user_policies.json",
    "content": "{\r\n    \"action_title\": \"AWS List Attached User Policies\",\r\n    \"action_description\": \"AWS List Attached User Policies\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_list_attached_user_policies\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_IAM\"]\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_list_attached_user_policies/aws_list_attached_user_policies.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\nfrom botocore.exceptions import ClientError\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    user_name: str = Field(\r\n        title='User Name',\r\n        description='IAM user whose policies need to be fetched.')\r\n\r\n\r\ndef aws_list_attached_user_policies_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_list_attached_user_policies(handle, user_name: str) -> List:\r\n    \"\"\"aws_list_attached_user_policies returns the list of policies attached to the user.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type user_name: string\r\n        :param user_name: IAM user whose policies need to be fetched.\r\n\r\n        :rtype: List with the attached policy names.\r\n    \"\"\"\r\n    result = []\r\n    iamClient = handle.client('iam')\r\n    try:\r\n        response = iamClient.list_attached_user_policies(UserName=user_name)\r\n        for i in response[\"AttachedPolicies\"]:\r\n            result.append(i['PolicyName'])\r\n\r\n    except ClientError as error:\r\n        result.append(error.response)\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_list_clusters_with_low_utilization/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS List ECS Clusters with Low CPU Utilization</h1>\r\n\r\n## Description\r\nThis Lego searches for clusters that have low CPU utilization.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_list_clusters_with_low_utilization(handle, region: str = \"\", threshold: int = 10)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: \"us-west-2\"\r\n        threshold: Optional, (in percent) threshold below which CPU utilization is flagged.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, region and threshold.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_clusters_with_low_utilization/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_list_clusters_with_low_utilization/aws_list_clusters_with_low_utilization.json",
    "content": "{\r\n    \"action_title\": \"AWS List ECS Clusters with Low CPU Utilization\",\r\n    \"action_description\": \"This action searches for clusters that have low CPU utilization.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_list_clusters_with_low_utilization\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_next_hop\": [\"6ad946fb1afd19286a8e7771e0f8e5566e4fdd54e3e2473385b5ac8e206e0a49\"],\r\n    \"action_next_hop_parameter_mapping\": {\"6ad946fb1afd19286a8e7771e0f8e5566e4fdd54e3e2473385b5ac8e206e0a49\": {\"name\": \"Delete ECS Clusters with Low CPU Utilization\", \"region\": \".[0].region\", \"cluster_names\":\"map(.cluster_name)\"}},\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\",\"CATEGORY_TYPE_AWS_EBC\" ]\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_list_clusters_with_low_utilization/aws_list_clusters_with_low_utilization.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region.')\r\n    threshold: Optional[int] = Field(\r\n        default=10,\r\n        title='Threshold (In percent)',\r\n        description='Threshold (in percent) below which CPU utilization is flagged.')\r\n\r\n\r\ndef aws_list_clusters_with_low_utilization_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_list_clusters_with_low_utilization(handle, region: str = \"\", threshold: int = 10) -> Tuple:\r\n    \"\"\"aws_list_clusters_with_low_utilization Returns an array of ecs clusters.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :type threshold: int\r\n        :param threshold: Threshold (in percent) below which CPU utilization\r\n        is flagged.\r\n\r\n        :rtype: Tuple with status and list of clusters with low CPU utilization\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            ecs_Client = handle.client('ecs', region_name=reg)\r\n            response = aws_get_paginator(ecs_Client, \"list_clusters\", \"clusterArns\")\r\n            for cluster in response:\r\n                cluster_dict = {}\r\n                cluster_name = cluster.split('/')[1]\r\n                stats = ecs_Client.describe_clusters(clusters=[cluster])['clusters'][0]['statistics']\r\n                for stat in stats:\r\n                    if stat['name'] == 'CPUUtilization':\r\n                        cpu_utilization = int(stat['value'])\r\n                        if cpu_utilization < threshold:\r\n                            cluster_dict[\"cluster_name\"] = cluster_name\r\n                            cluster_dict[\"region\"] = reg\r\n                            result.append(cluster_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_list_expiring_access_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>List Expiring Access Keys</h1>\n\n## Description\nThis Lego lists all the expiring IAM Access Keys for an AWS User.\n\n\n## Lego Details\n\n    aws_list_expiring_access_keys(handle, threshold_days: int)\n\n        handle: Object of type unSkript AWS Connector.\n        threshold_days: Integer, Threshold number of days to check for expiry. Eg: 30 lists all access keys expiring within 30 days.\n\n## Lego Input\nThis Lego takes two inputs: handle and threshold_days.\n\n## Lego Output\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_expiring_access_keys/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_list_expiring_access_keys/aws_list_expiring_access_keys.json",
    "content": "{\n    \"action_title\": \"AWS List Expiring Access Keys\",\n    \"action_description\": \"List Expiring IAM User Access Keys\",\n    \"action_type\": \"LEGO_TYPE_AWS\",\n    \"action_entry_function\": \"aws_list_expiring_access_keys\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"list\"],\n    \"action_nouns\": [\"expiring\",\"access\",\"aws\"],\n    \"action_is_check\": true,\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_IAM\"],\n    \"action_next_hop\": [\"a79201f821993867e23dd9603ed7ef5123325353d717c566f902f7ca6e471f5c\"],\n    \"action_next_hop_parameter_mapping\": {}\n}\n"
  },
  {
    "path": "AWS/legos/aws_list_expiring_access_keys/aws_list_expiring_access_keys.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Tuple\nimport datetime\nimport dateutil.tz\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.aws.aws_list_all_iam_users.aws_list_all_iam_users import aws_list_all_iam_users\n\nclass InputSchema(BaseModel):\n    threshold_days: int = Field(\n        default=90,\n        title=\"Threshold Days\",\n        description=\"Threshold number(in days) to check for expiry. Eg: 30\"\n    )\n\ndef aws_list_expiring_access_keys_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_list_expiring_access_keys(handle, threshold_days: int = 90)-> Tuple:\n    \"\"\"aws_list_expiring_access_keys returns all the ACM issued certificates which are\n       about to expire given a threshold number of days\n\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :type threshold_days: int\n        :param threshold_days: Threshold number of days to check for expiry. 
Eg: 30 lists\n        all access keys which are expiring within 30 days\n\n        :rtype: Status, List of expiring access keys and Error if any\n    \"\"\"\n    result = []\n    all_users = aws_list_all_iam_users(handle=handle)\n\n    # Create the IAM client once and reuse it for every user\n    iamClient = handle.client('iam')\n    for each_user in all_users:\n        response = iamClient.list_access_keys(UserName=each_user)\n        for x in response[\"AccessKeyMetadata\"]:\n            create_date = x[\"CreateDate\"]\n            right_now = datetime.datetime.now(dateutil.tz.tzlocal())\n            diff = right_now - create_date\n            days_remaining = threshold_days - diff.days\n            if 0 <= days_remaining <= threshold_days:\n                final_result = {\n                    \"username\": x[\"UserName\"],\n                    \"access_key_id\": x[\"AccessKeyId\"]\n                }\n                result.append(final_result)\n\n    if result:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_list_expiring_acm_certificates/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>List Expiring ACM Certificate</h1>\r\n\r\n## Description\r\nThis Lego lists all the expiring ACM issued SSL certificates\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_list_expiring_acm_certificates(handle, threshold_days: int, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        threshold_days: Integer, Threshold number of days to check for expiry. Eg: 30 -lists all certificates which are expiring within 30 days.\r\n        region: Optional, Region where the Certificate is present.Eg:'us-west-2'\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, threshold_days and region.\r\n\r\n## Lego Output\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_expiring_acm_certificates/__init__.py",
    "content": "##\n##  Copyright (c) 2022 unSkript, Inc\n##  All rights reserved.\n##\n"
  },
  {
    "path": "AWS/legos/aws_list_expiring_acm_certificates/aws_list_expiring_acm_certificates.json",
    "content": "{\r\n    \"action_title\": \"List Expiring ACM Certificates\",\r\n    \"action_description\": \"List All Expiring ACM Certificates\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_list_expiring_acm_certificates\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_verbs\": [\"list\"],\r\n    \"action_nouns\": [\"expiring\",\"certificates\",\"aws\"],\r\n    \"action_is_check\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_ACM\"],\r\n    \"action_next_hop\": [\"76681732b20a69913f0d9248272271bf2f4ab6459498ec6d0ab055870e0db0bb\"],\r\n    \"action_next_hop_parameter_mapping\": {\"76681732b20a69913f0d9248272271bf2f4ab6459498ec6d0ab055870e0db0bb\": {\"name\": \"Renew AWS SSL Certificates that are close to expiration\", \"region\": \".[0].region\", \"certificate_arns\":\".[0].certificates\"}}\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_list_expiring_acm_certificates/aws_list_expiring_acm_certificates.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional,Tuple\nimport datetime\nimport dateutil\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\n\nclass InputSchema(BaseModel):\n    threshold_days: int = Field(\n        title=\"Threshold Days\",\n        description=(\"Threshold number(in days) to check for expiry. \"\n                     \"Eg: 30 -lists all certificates which are expiring within 30 days\")\n    )\n    region: Optional[str] = Field(\n        default=\"\",\n        title='Region',\n        description='Name of the AWS Region'\n    )\n\ndef aws_list_expiring_acm_certificates_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_list_expiring_acm_certificates(handle, threshold_days: int = 90, region: str=None)-> Tuple:\n    \"\"\"aws_list_expiring_acm_certificates returns all the ACM issued certificates which\n       are about to expire given a threshold number of days\n\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :type threshold_days: int\n        :param threshold_days: Threshold number of days to check for expiry.\n        Eg: 30 -lists all certificates which are expiring within 30 days\n\n        :type region: str\n        :param region: Region name of the AWS account\n\n        :rtype: Tuple containing status, expiring certificates, and error\n    \"\"\"\n    arn_list=[]\n    domain_list = []\n    expiring_certificates_list= []\n    expiring_certificates_dict={}\n    result_list=[]\n    all_regions = [region]\n    if region is None or len(region)==0:\n        all_regions = aws_list_all_regions(handle=handle)\n    for r in all_regions:\n        iamClient = handle.client('acm', region_name=r)\n        try:\n            expiring_certificates_dict={}\n            certificates_list = 
iamClient.list_certificates(CertificateStatuses=['ISSUED'])\n            for each_arn in certificates_list['CertificateSummaryList']:\n                arn_list.append(each_arn['CertificateArn'])\n                domain_list.append(each_arn['DomainName'])\n            for cert_arn in arn_list:\n                details = iamClient.describe_certificate(CertificateArn=cert_arn)\n                for key,value in details['Certificate'].items():\n                    if key == \"NotAfter\":\n                        expiry_date = value\n                        right_now = datetime.datetime.now(dateutil.tz.tzlocal())\n                        diff = expiry_date-right_now\n                        days_remaining = diff.days\n                        if 0 < days_remaining < threshold_days:\n                            expiring_certificates_list.append(cert_arn)\n            expiring_certificates_dict[\"region\"]= r\n            expiring_certificates_dict[\"certificate\"]= expiring_certificates_list\n            if len(expiring_certificates_list)!=0:\n                result_list.append(expiring_certificates_dict)\n        except Exception:\n            pass\n    if len(result_list)!=0:\n        return (False, result_list)\n    return (True, None)\n"
  },
  {
    "path": "AWS/legos/aws_list_hosted_zones/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS List Hosted Zones</h1>\n\n## Description\nList all AWS Hosted zones\n\n## Lego Details\n\taws_list_hosted_zones(handle)\n\t\thandle: Object of type unSkript AWS Connector.\n\n\n## Lego Input\nThis Lego takes one input: handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.jpeg\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_hosted_zones/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_list_hosted_zones/aws_list_hosted_zones.json",
    "content": "{\n  \"action_title\": \"AWS List Hosted Zones\",\n  \"action_description\": \"List all AWS Hosted zones\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_list_hosted_zones\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ROUTE53\"]\n}"
  },
  {
    "path": "AWS/legos/aws_list_hosted_zones/aws_list_hosted_zones.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef aws_list_hosted_zones_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_list_hosted_zones(handle) -> List:\n    \"\"\"aws_list_hosted_zones Returns all hosted zones.\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :rtype: List of all the hosted zones.\n    \"\"\"\n\n    route53Client = handle.client('route53')\n\n    response = route53Client.list_hosted_zones()\n\n    result = []\n    for hosted_zone in response['HostedZones']:\n        hosted_zone_id = hosted_zone['Id']\n        hosted_zone_name = hosted_zone['Name']\n        result.append({\n            'id': hosted_zone_id,\n            'name': hosted_zone_name\n        })\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_list_unattached_elastic_ips/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS List Unattached Elastic IPs </h1>\r\n\r\n## Description\r\nThis Lego lists Elastic IP address and check if it is associated with an instance or network interface.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_list_unattached_elastic_ips(handle, region: str = \"\")\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_unattached_elastic_ips/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_list_unattached_elastic_ips/aws_list_unattached_elastic_ips.json",
    "content": "{\r\n    \"action_title\": \"AWS List Unattached Elastic IPs\",\r\n    \"action_description\": \"This action lists Elastic IP address and check if it is associated with an instance or network interface.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_list_unattached_elastic_ips\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"],\r\n    \"action_next_hop\": [\"a9d7ea5f3d31745f1de9fb8616ab6fbc20ff11e665808bdde6a9ba9b8b32e28a\"],\r\n    \"action_next_hop_parameter_mapping\": {\"a9d7ea5f3d31745f1de9fb8616ab6fbc20ff11e665808bdde6a9ba9b8b32e28a\": {\"name\": \"Release Unattached AWS Elastic IPs\", \"region\": \".[0].region\", \"allocation_ids\":\"map(.allocation_id)\"}}\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_list_unattached_elastic_ips/aws_list_unattached_elastic_ips.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_list_unattached_elastic_ips_printer(output):\r\n    if output is None:\r\n        return\r\n\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_list_unattached_elastic_ips(handle, region: str = \"\") -> Tuple:\r\n    \"\"\"aws_list_unattached_elastic_ips Returns an array of unattached elastic IPs.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: Tuple with status result and list of unattached elastic IPs.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            # Filtering the public_ip by region\r\n            ec2Client = handle.client('ec2', region_name=reg)\r\n            all_eips = ec2Client.describe_addresses()\r\n            for eip in all_eips[\"Addresses\"]:\r\n                vpc_data = {}\r\n                if 'AssociationId' not in eip:\r\n                    vpc_data[\"public_ip\"] = eip['PublicIp']\r\n                    vpc_data[\"allocation_id\"] = eip['AllocationId']\r\n                    vpc_data[\"region\"] = reg\r\n                    result.append(vpc_data)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_list_unhealthy_instances_in_target_group/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>List Unhealthy Instances in a target group</h1>\r\n\r\n## Description\r\nThis Lego lists all unhealthy instances in a target group\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_list_unhealthy_instances_in_target_group(handle, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, Region where the Certificate is present.Eg:'us-west-2'\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and region.\r\n\r\n## Lego Output\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_unhealthy_instances_in_target_group/__init__.py",
    "content": "##\n##  Copyright (c) 2022 unSkript, Inc\n##  All rights reserved.\n##\n"
  },
  {
    "path": "AWS/legos/aws_list_unhealthy_instances_in_target_group/aws_list_unhealthy_instances_in_target_group.json",
    "content": "{\n    \"action_title\": \"AWS List Unhealthy Instances in a Target Group\",\n    \"action_description\": \"List Unhealthy Instances in a target group\",\n    \"action_type\": \"LEGO_TYPE_AWS\",\n    \"action_entry_function\": \"aws_list_unhealthy_instances_in_target_group\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"list\"],\n    \"action_nouns\": [\"unhealthy\",\"instances\",\"target\",\"group\",\"aws\"],\n    \"action_is_check\": true,\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\", \"CATEGORY_TYPE_TROUBLESHOOTING\", \"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_ELB\"],\n    \"action_next_hop\": [\"7a5cf9629c56eb979a01977330c3d2df656e965a78323be4fa49fdc3b527c9d7\"],\n    \"action_next_hop_parameter_mapping\": {\"7a5cf9629c56eb979a01977330c3d2df656e965a78323be4fa49fdc3b527c9d7\": {\"name\": \"AWS Restart unhealthy services in a Target Group\", \"region\": \".[0].regions\", \"instance_ids\": \"map(.instance)\"}}\n}\n"
  },
  {
    "path": "AWS/legos/aws_list_unhealthy_instances_in_target_group/aws_list_unhealthy_instances_in_target_group.py",
    "content": "import pprint\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\nfrom unskript.legos.utils import parseARN\n\nclass InputSchema(BaseModel):\n    region: Optional[str] = Field(\n        default=\"\",\n        title='Region',\n        description='Name of the AWS Region'\n    )\n\ndef aws_list_unhealthy_instances_in_target_group_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef get_all_target_groups(handle, r):\n    target_arns_list = []\n    elbv2Client = handle.client('elbv2', region_name=r)\n    try:\n        tbs = aws_get_paginator(elbv2Client, \"describe_target_groups\", \"TargetGroups\")\n        for index, tb in enumerate(tbs):\n            target_arns_list.append(tb.get('TargetGroupArn'))\n    except Exception:\n        pass\n    return target_arns_list\n\ndef aws_list_unhealthy_instances_in_target_group(handle, region: str=None) -> Tuple:\n    result = []\n    unhealthy_instances_list = []\n    all_target_groups = []\n    unhealhthy_instances_dict = {}\n    all_regions = [region]\n    if region is None or len(region) == 0:\n        all_regions = aws_list_all_regions(handle=handle)\n    for r in all_regions:\n        try:\n            output = get_all_target_groups(handle, r)\n            if len(output) != 0:\n                all_target_groups.append(output)\n        except Exception:\n            pass\n    for target_group in all_target_groups:\n        for o in target_group:\n            parsedArn = parseARN(o)\n            region_name = parsedArn['region']\n            elbv2Client = handle.client('elbv2', region_name=region_name)\n            try:\n                targetHealthResponse = elbv2Client.describe_target_health(TargetGroupArn=o)\n            except Exception as e:\n                print(f\"An error occurred while 
describing target health: {e}\") # Log an error message\n                continue\n            for ins in targetHealthResponse[\"TargetHealthDescriptions\"]:\n                if ins['TargetHealth']['State'] in ['unhealthy']:\n                    unhealthy_instances_list.append(ins['Target']['Id'])\n    if len(unhealthy_instances_list) != 0 and region_name is not None:\n        unhealhthy_instances_dict['instance'] = unhealthy_instances_list\n        unhealhthy_instances_dict['region'] = region_name\n        result.append(unhealhthy_instances_dict)\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)"
  },
  {
    "path": "AWS/legos/aws_list_unused_secrets/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS List Unused Secrets</h1>\r\n\r\n## Description\r\nThis Lego lists all the unused secrets from AWS by comparing the last used date with the given threshold.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_list_unused_secrets(handle, max_age_days: int = 30, region: str = \"\")\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: Optional, AWS region. Eg: “us-west-2”\r\n        max_age_days: The threshold to check the last use of the secret.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, max_age_days and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_unused_secrets/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_list_unused_secrets/aws_list_unused_secrets.json",
    "content": "{\r\n    \"action_title\": \"AWS List Unused Secrets\",\r\n    \"action_description\": \"This action lists all the unused secrets from AWS by comparing the last used date with the given threshold.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_list_unused_secrets\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_check\":true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_IAM\", \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_COST_OPT\"],\r\n    \"action_next_hop\": [\"2a9101a1cf7be1cf70a30de2199dca5b302c3096\"],\r\n    \"action_next_hop_parameter_mapping\": {\"2a9101a1cf7be1cf70a30de2199dca5b302c3096\": {\"name\": \"Delete Unused AWS Secrets\",\"region\":\".[0].region\",\"secret_names\":\"map(.secret_name)\"}}\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_list_unused_secrets/aws_list_unused_secrets.py",
    "content": "##\r\n##  Copyright (c) 2023 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Optional, Tuple\r\nfrom datetime import datetime, timedelta\r\nfrom pydantic import BaseModel, Field\r\nfrom unskript.connectors.aws import aws_get_paginator\r\nfrom unskript.legos.aws.aws_list_all_regions.aws_list_all_regions import aws_list_all_regions\r\nimport pytz\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: Optional[str] = Field(\r\n        default=\"\",\r\n        title='Region',\r\n        description='AWS Region.')\r\n    max_age_days: Optional[int] = Field(\r\n        default=30,\r\n        title=\"Max Age Day's\",\r\n        description='The threshold to check the last use of the secret.')\r\n\r\n\r\ndef aws_list_unused_secrets_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_list_unused_secrets(handle, region: str = \"\", max_age_days: int = 30) -> Tuple:\r\n    \"\"\"aws_list_unused_secrets Returns an array of unused secrets.\r\n\r\n        :type region: string\r\n        :param region: AWS region.\r\n\r\n        :type max_age_days: int\r\n        :param max_age_days: The threshold to check the last use of the secret.\r\n\r\n        :rtype: Tuple with status result and list of unused secrets.\r\n    \"\"\"\r\n    result = []\r\n    all_regions = [region]\r\n    if not region:\r\n        all_regions = aws_list_all_regions(handle)\r\n\r\n    for reg in all_regions:\r\n        try:\r\n            # Filtering the secrets by region\r\n            ec2Client = handle.client('secretsmanager', region_name=reg)\r\n            res = aws_get_paginator(ec2Client, \"list_secrets\", \"SecretList\")\r\n            for secret in res:\r\n                secret_dict = {}\r\n                secret_id = secret['Name']\r\n                last_accessed_date = ec2Client.describe_secret(SecretId=secret_id)\r\n                if 'LastAccessedDate' in last_accessed_date:\r\n      
              if last_accessed_date[\"LastAccessedDate\"] < datetime.now(pytz.UTC) - timedelta(days=int(max_age_days)):\r\n                        secret_dict[\"secret_name\"] = secret_id\r\n                        secret_dict[\"region\"] = reg\r\n                        result.append(secret_dict)\r\n                else:\r\n                    if last_accessed_date[\"CreatedDate\"] < datetime.now(pytz.UTC) - timedelta(days=int(max_age_days)):\r\n                        secret_dict[\"secret_name\"] = secret_id\r\n                        secret_dict[\"region\"] = reg\r\n                        result.append(secret_dict)\r\n        except Exception:\r\n            pass\r\n\r\n    if len(result) != 0:\r\n        return (False, result)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_list_users_with_old_passwords/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS List IAM Users With Old Passwords </h1>\r\n\r\n## Description\r\nThis Lego filter gets all the IAM users' login profiles, and if the login profile is available, checks for the last password change if the password is greater than the given threshold, and lists those users.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_list_users_with_old_passwords(handle, threshold_days: int = 120)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        threshold_days: (in days) The threshold to check the IAM user password older than the threshold.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and threshold_days.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_list_users_with_old_passwords/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_list_users_with_old_passwords/aws_list_users_with_old_passwords.json",
    "content": "{\r\n    \"action_title\": \"AWS List IAM Users With Old Passwords\",\r\n    \"action_description\": \"This Lego filter gets all the IAM users' login profiles, and if the login profile is available, checks for the last password change if the password is greater than the given threshold, and lists those users.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_list_users_with_old_passwords\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\": true,\r\n    \"action_next_hop\":[],\r\n    \"action_next_hop_parameter_mapping\":{},\r\n    \"action_categories\": [  \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_IAM\"]\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_list_users_with_old_passwords/aws_list_users_with_old_passwords.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Tuple\r\nfrom datetime import datetime, timezone, timedelta\r\nfrom pydantic import BaseModel, Field\r\nfrom dateutil.parser import parse\r\nfrom unskript.connectors.aws import aws_get_paginator\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    threshold_days: int = Field(\r\n        default = 120,\r\n        title='Threshold (In days)',\r\n        description=('(in days) The threshold to check the IAM user '\r\n                     'password older than the threshold.')\r\n                    )\r\n\r\ndef aws_list_users_with_old_passwords_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_list_users_with_old_passwords(handle, threshold_days: int = 120) -> Tuple:\r\n    \"\"\"aws_list_users_with_old_passwords lists all the IAM users with old passwords.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from Task Validate\r\n\r\n        :type threshold_days: int\r\n        :param threshold_days: (in days) The threshold to check the IAM user\r\n        password older than the threshold.\r\n\r\n        :rtype: Result List of all IAM users\r\n    \"\"\"\r\n    client = handle.client('iam')\r\n    users_list = []\r\n    now = datetime.now(timezone.utc)\r\n    response = aws_get_paginator(client, \"list_users\", \"Users\")\r\n    for user in response:\r\n        try:\r\n            login_profile = client.get_login_profile(UserName=user['UserName'])\r\n            if 'CreateDate' in login_profile['LoginProfile']:\r\n                password_last_changed = parse(\r\n                    str(login_profile['LoginProfile']['CreateDate'])\r\n                    ).replace(tzinfo=timezone.utc)\r\n                password_age = now - password_last_changed\r\n                if password_age > timedelta(days=threshold_days):\r\n                    
users_list.append(user['UserName'])\r\n        except Exception:\r\n            pass\r\n\r\n    if len(users_list) != 0:\r\n        return (False, users_list)\r\n    return (True, None)\r\n"
  },
  {
    "path": "AWS/legos/aws_loadbalancer_list_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS List Instances behind a Load Balancer </h1>\r\n\r\n## Description\r\nThis Lego list AWS Instances behind a Load Balancer..\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_loadbalancer_list_instances(handle: object, arn: str, region: str, classic: bool)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        arn: Name of the classic loadbalancer or ARN of the ALB/NLB. Classic loadbalancer dont have ARN.\r\n        classic: Check if the loadbalancer is Classic.\r\n        region: Region of the Classic loadbalancer.\r\n## Lego Input\r\n\r\nThis Lego take four inputs handle, arn, classic and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_loadbalancer_list_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_loadbalancer_list_instances/aws_loadbalancer_list_instances.json",
    "content": "{\r\n    \"action_title\": \"AWS List Instances behind a Load Balancer.\",\r\n    \"action_description\": \"List AWS Instances behind a Load Balancer\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_loadbalancer_list_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_loadbalancer_list_instances/aws_loadbalancer_list_instances.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.utils import parseARN\n\nclass InputSchema(BaseModel):\n    arn: str = Field(\n        title='Loadbalancer Name (Classic) or ARN (ALB/NLB)',\n        description=('Name of the classic loadbalancer or ARN of the ALB/NLB. '\n                     'Classic loadbalancer dont have ARN.')\n                     )\n    region: Optional[str] = Field(\n        title='Region of the Classic Loadbalancer',\n        description='Region of the Classic loadbalancer. You dont need to fill this for ALB/NLB.'\n    )\n    classic: bool = Field(\n        False,\n        title='Classic Loadbalancer',\n        description='Check if the loadbalancer is Classic. By default, its false.'\n    )\n\n\ndef aws_loadbalancer_list_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_loadbalancer_list_instances(\n        handle,\n        arn: str,\n        region: str = None,\n        classic: bool = False\n        ) -> List:\n    \"\"\"aws_get_unhealthy_instances returns array of instances\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type arn: string\n        :param arn: Name of the classic loadbalancer or ARN of the ALB/NLB.\n\n        :type classic: bool\n        :param classic: Check if the loadbalancer is Classic.\n\n        :type region: string\n        :param region: Region of the Classic loadbalancer.\n\n        :rtype: Returns array of instances\n    \"\"\"\n    instancesInfo = []\n    try:\n        if classic is False:\n            parsedArn = parseARN(arn)\n            elbv2Client = handle.client('elbv2', region_name=parsedArn['region'])\n            ec2Client = handle.client('ec2', region_name=parsedArn['region'])\n            # Get  the list of target groups behind this LB.\n   
         tgs = elbv2Client.describe_target_groups(\n                LoadBalancerArn=arn\n            )\n            for tg in tgs['TargetGroups']:\n                targetHealthResponse = elbv2Client.describe_target_health(\n                    TargetGroupArn=tg['TargetGroupArn']\n                )\n                for ins in targetHealthResponse[\"TargetHealthDescriptions\"]:\n                    try:\n                        privateIP = get_instance_private_ip(ec2Client, ins['Target']['Id'])\n                    except Exception:\n                        continue\n                    instanceInfo = {\n                        'InstanceID': ins['Target']['Id'],\n                        'PrivateIP': privateIP\n                    }\n                    instancesInfo.append(instanceInfo)\n        else:\n            elbClient = handle.client('elb', region_name=region)\n            ec2Client = handle.client('ec2', region_name=region)\n            res = elbClient.describe_instance_health(\n                LoadBalancerName=arn\n            )\n            for ins in res['InstanceStates']:\n                try:\n                    privateIP = get_instance_private_ip(ec2Client, ins['InstanceId'])\n                except Exception:\n                    continue\n                instanceInfo = {\n                    'InstanceID': ins['InstanceId'],\n                    'PrivateIP': privateIP\n                }\n                instancesInfo.append(instanceInfo)\n    except Exception as e:\n        print(f'Hit exception {str(e)}')\n        raise e\n\n    return instancesInfo\n\n\ndef get_instance_private_ip(ec2Client, instanceID: str) -> str:\n    try:\n        resp = ec2Client.describe_instances(\n            Filters=[\n                {\n                    'Name': 'instance-id',\n                    'Values': [instanceID]\n                }\n            ]\n        )\n    except Exception as e:\n        print(f'Failed to get instance details for {instanceID}, err: 
{str(e)}')\n        raise e\n    return resp['Reservations'][0]['Instances'][0]['PrivateIpAddress']\n"
  },
  {
    "path": "AWS/legos/aws_make_bucket_public/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Make AWS Bucket Public </h1>\r\n\r\n## Description\r\nThis Lego make an AWS Bucket Public.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_make_bucket_public(handle: object, name: str, enable_write: bool)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        name: Name of the bucket.\r\n        enable_write: Set this to true for bucket to be publicly writeable.\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, name and enable_write.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_make_bucket_public/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_make_bucket_public/aws_make_bucket_public.json",
    "content": "{\r\n    \"action_title\": \"Make AWS Bucket Public\",\r\n    \"action_description\": \"Make an AWS Bucket Public!\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_make_bucket_public\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_make_bucket_public/aws_make_bucket_public.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    name: str = Field(\n        title='Bucket Name',\n        description='Name of the bucket.')\n    enable_write: bool = Field(\n        title='Enable write',\n        description=('Set this to true, if you want the bucket to be publicly writeable as well. '\n                     'By default, it is made publicly readable.')\n                    )\n\n\ndef aws_make_bucket_public_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_make_bucket_public(handle, name: str, enable_write: bool) -> Dict:\n    \"\"\"aws_make_bucket_public Makes bucket public.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type name: string\n        :param name: Name of the bucket.\n\n        :type enable_write: bool\n        :param enable_write: Set this to true for bucket to be publicly writeable.\n\n        :rtype: Dict with information about the success of the request.\n    \"\"\"\n    s3Client = handle.client('s3')\n\n    acl = \"public-read\"\n    if enable_write:\n        acl = \"public-read-write\"\n    res = s3Client.put_bucket_acl(Bucket=name, ACL=acl)\n    return res\n"
  },
  {
    "path": "AWS/legos/aws_make_rds_instance_not_publicly_accessible/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Disallow AWS RDS Instance public accessibility</h1>\n\n## Description\nChange public accessibility of RDS Instances to False.\n\n## Lego Details\n\taws_make_rds_instance_not_publicly_accessible(handle, db_instance_identifier: str, region: str)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tdb_instance_identifier: Identifier of the RDS instance.\n\t\tregion: Region of the RDS instance.\n\n\n## Lego Input\nThis Lego takes inputs handle, db_instance_identifier, region.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_make_rds_instance_not_publicly_accessible/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_make_rds_instance_not_publicly_accessible/aws_make_rds_instance_not_publicly_accessible.json",
    "content": "{\n  \"action_title\": \"Disallow AWS RDS Instance public accessibility\",\n  \"action_description\": \"Change public accessibility of RDS Instances to False.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_make_rds_instance_not_publicly_accessible\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[\"CATEGORY_TYPE_CLOUDOPS\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_make_rds_instance_not_publicly_accessible/aws_make_rds_instance_not_publicly_accessible.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    db_instance_identifier: str = Field(\n        ...,\n        description='The DB instance identifier for the DB instance to be deleted. This parameter isn’t case-sensitive.',\n        title='RDS Instance Identifier',\n    )\n    region: str = Field(\n        ..., description='AWS region of instance identifier', title='AWS Region'\n    )\n\n\n\ndef aws_make_rds_instance_not_publicly_accessible_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef aws_make_rds_instance_not_publicly_accessible(handle, db_instance_identifier: str, region: str) -> str:\n    \"\"\"\n    aws_make_rds_instance_not_publicly_accessible makes the specified RDS instance not publicly accessible.\n\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n\n    :type db_instance_identifier: string\n    :param db_instance_identifier: Identifier of the RDS instance.\n\n    :type region: string\n    :param region: Region of the RDS instance.\n\n    :rtype: Response of the operation.\n    \"\"\"\n    try:\n        rdsClient = handle.client('rds', region_name=region)\n        rdsClient.modify_db_instance(\n            DBInstanceIdentifier=db_instance_identifier,\n            PubliclyAccessible=False\n        )\n    except Exception as e:\n        raise e\n    return f\"Public accessiblilty is being changed to False...\"\n\n\n"
  },
  {
    "path": "AWS/legos/aws_modify_ebs_volume_to_gp3/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Modify EBS Volume to GP3 </h1>\r\n\r\n## Description\r\nThis Lego modify AWS EBS volumes to General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_modify_ebs_volume_to_gp3(handle, region: str, volume_id: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        region: AWS region. Eg: “us-west-2”\r\n        volume_id: EBS Volume ID.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, volume_id and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_modify_ebs_volume_to_gp3/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_modify_ebs_volume_to_gp3/aws_modify_ebs_volume_to_gp3.json",
    "content": "{\r\n    \"action_title\": \"AWS Modify EBS Volume to GP3\",\r\n    \"action_description\": \"AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_modify_ebs_volume_to_gp3\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"]\r\n}\r\n  "
  },
  {
    "path": "AWS/legos/aws_modify_ebs_volume_to_gp3/aws_modify_ebs_volume_to_gp3.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n    volume_id: str = Field(\r\n        title='Volume ID',\r\n        description='EBS Volume ID.')\r\n\r\n\r\ndef aws_modify_ebs_volume_to_gp3_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_modify_ebs_volume_to_gp3(handle, region: str, volume_id: str) -> List:\r\n    \"\"\"aws_modify_ebs_volume_to_gp3 returns an array of modified details for EBS volumes.\r\n\r\n        :type region: string\r\n        :param region: Used to filter the volume for specific region.\r\n\r\n        :type volume_id: string\r\n        :param volume_id: EBS Volume ID.\r\n\r\n        :rtype: List of modified details for EBS volumes\r\n    \"\"\"\r\n    result = []\r\n    try:\r\n        ec2Client = handle.client('ec2', region_name=region)\r\n        volumes = ec2Client.modify_volume(VolumeId=volume_id, VolumeType='gp3')\r\n        result.append(volumes)\r\n    except Exception as e:\r\n        result.append({\"error\": e})\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_modify_listener_for_http_redirection/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Modify ALB Listeners HTTP Redirection </h1>\r\n\r\n## Description\r\nThis Lego modify AWS ALB listeners HTTP redirection.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_modify_listener_for_http_redirection(handle, listener_arn: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        listener_arn: List of listenerArn.\r\n        region: Region to filter ALB listeners.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, listener_arn and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_modify_listener_for_http_redirection/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_modify_listener_for_http_redirection/aws_modify_listener_for_http_redirection.json",
    "content": "{\r\n    \"action_title\": \"AWS Modify ALB Listeners HTTP Redirection\",\r\n    \"action_description\": \"AWS Modify ALB Listeners HTTP Redirection\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_modify_listener_for_http_redirection\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_is_remediation\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\", \"CATEGORY_TYPE_AWS_EC2\"]\r\n}"
  },
  {
    "path": "AWS/legos/aws_modify_listener_for_http_redirection/aws_modify_listener_for_http_redirection.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\n\r\nclass InputSchema(BaseModel):\r\n    listener_arn: str = Field(\r\n        title='ListenerArn',\r\n        description='listener ARNs.')\r\n\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the ALB listeners.')\r\n\r\ndef aws_modify_listener_for_http_redirection_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_modify_listener_for_http_redirection(handle, listener_arn: str, region: str) -> List:\r\n    \"\"\"aws_modify_listener_for_http_redirection List of Dict with modified listener info.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type listener_arn: string\r\n        :param listener_arn: List of listenerArn.\r\n\r\n        :type region: string\r\n        :param region: Region to filter ALB listeners.\r\n\r\n        :rtype: List of Dict with modified ALB listeners info.\r\n    \"\"\"\r\n    listner_config = [{\r\n                        \"Type\": \"redirect\",\r\n                        \"Order\": 1,\r\n                        \"RedirectConfig\": {\r\n                            \"Protocol\": \"HTTPS\",\r\n                            \"Host\": \"#{host}\",\r\n                            \"Query\": \"#{query}\",\r\n                            \"Path\": \"/#{path}\",\r\n                            \"Port\": \"443\",\r\n                            \"StatusCode\": \"HTTP_302\"}}]\r\n    result = []\r\n    try:\r\n        ec2Client = handle.client('elbv2', region_name=region)\r\n        response = ec2Client.modify_listener(ListenerArn=listener_arn,\r\n                                             DefaultActions=listner_config)\r\n        result.append(response)\r\n\r\n    except Exception as error:\r\n        
result.append(error)\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_modify_public_db_snapshots/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Modify Publicly Accessible RDS Snapshots </h1>\r\n\r\n## Description\r\nThis Lego modify AWS publicly accessible RDS snapshots.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_modify_public_db_snapshots(handle, db_snapshot_identifier: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        db_snapshot_identifier: DB Snapshot Idntifier of RDS.\r\n        region: Region of the RDS.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, db_snapshot_identifier and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_modify_public_db_snapshots/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_modify_public_db_snapshots/aws_modify_public_db_snapshots.json",
    "content": "{\r\n    \"action_title\": \"AWS Modify Publicly Accessible RDS Snapshots\",\r\n    \"action_description\": \"AWS Modify Publicly Accessible RDS Snapshots\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_modify_public_db_snapshots\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_remediation\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\", \"CATEGORY_TYPE_AWS_EC2\"]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_modify_public_db_snapshots/aws_modify_public_db_snapshots.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\n\r\nclass InputSchema(BaseModel):\r\n    db_snapshot_identifier: str = Field(\r\n        title='DB Snapshot Idntifier',\r\n        description='DB Snapshot Idntifier of RDS.'\r\n    )\r\n    region: str = Field(\r\n        title='Region',\r\n        description='Region of the RDS.'\r\n    )\r\n\r\ndef aws_modify_public_db_snapshots_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_modify_public_db_snapshots(handle, db_snapshot_identifier: str, region: str) -> List:\r\n    \"\"\"aws_modify_public_db_snapshots lists of publicly accessible DB Snapshot Idntifier Info.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type db_snapshot_identifier: string\r\n        :param db_snapshot_identifier: DB Snapshot Idntifier of RDS.\r\n\r\n        :type region: string\r\n        :param region: Region of the RDS.\r\n\r\n        :rtype: List with Dict of DB Snapshot Idntifier Info.\r\n    \"\"\"\r\n\r\n\r\n    ec2Client = handle.client('rds', region_name=region)\r\n    result = []\r\n    try:\r\n        response = ec2Client.modify_db_snapshot_attribute(\r\n            DBSnapshotIdentifier=db_snapshot_identifier,\r\n            AttributeName='restore', \r\n            ValuesToRemove=['all'])\r\n\r\n        result.append(response)\r\n\r\n    except Exception as error:\r\n        result.append(error)\r\n\r\n    return result\r\n"
  },
  {
    "path": "AWS/legos/aws_postgresql_get_configured_max_connections/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get AWS Postgresql Max Configured Connections </h1>\r\n\r\n## Description\r\nThis Lego used to get AWS Postgresql Max Configured Connections.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_postgresql_get_configured_max_connections(handle: object, cluster_identifier: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        cluster_identifier: RDS Cluster DB Identifier.\r\n        region: AWS Region of the Postgres DB Cluster.\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, cluster_identifier and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_postgresql_get_configured_max_connections/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_postgresql_get_configured_max_connections/aws_postgresql_get_configured_max_connections.json",
    "content": "{\r\n    \"action_title\": \"Get AWS Postgresql Max Configured Connections\",\r\n    \"action_description\": \"Get AWS Postgresql Max Configured Connections\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_postgresql_get_configured_max_connections\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\", \"CATEGORY_TYPE_AWS_POSTGRES\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_postgresql_get_configured_max_connections/aws_postgresql_get_configured_max_connections.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    cluster_identifier: str = Field(\n        title='DB Identifier',\n        description='RDS Cluster DB Identifier.')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the Postgres DB Cluster.')\n\n\ndef aws_postgresql_get_configured_max_connections_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_postgresql_get_configured_max_connections(\n        handle,\n        cluster_identifier: str,\n        region: str\n        ) -> str:\n    \"\"\"aws_postgresql_get_configured_max_connection Get the configured max connection.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type cluster_identifier: string\n          :param cluster_identifier: RDS Cluster DB Identifier.\n\n          :type region: string\n          :param region: AWS Region of the Postgres DB Cluster.\n\n          :rtype: All the results of the query.\n      \"\"\"\n    # Input param validation.\n\n    ec2_client = handle.client('ec2', region_name=region)\n\n    # Get the list of instance types and their memory info.\n    paginator = ec2_client.get_paginator('describe_instance_types')\n    page_iterator = paginator.paginate()\n\n    instance_type_memory_map = {}\n    try:\n        for page in page_iterator:\n            for instance_type in page['InstanceTypes']:\n                instance_type_memory_map[instance_type['InstanceType']] = instance_type['MemoryInfo']['SizeInMiB']\n    except Exception as e:\n        print(f'describe_instance_types hit an exception {str(e)}')\n        raise e\n\n    rds_client = handle.client('rds', region_name=region)\n    try:\n        describe_db_clusters_resp = rds_client.describe_db_clusters(\n            DBClusterIdentifier=cluster_identifier\n      
      )\n    except Exception as e:\n        print(f'describe_db_clusters for cluster {cluster_identifier} hit an exception, {str(e)}')\n        raise e\n\n    cluster_info = describe_db_clusters_resp['DBClusters'][0]\n    cluster_parameter_group_name = cluster_info['DBClusterParameterGroup']\n    cluster_instances = []\n    for info in cluster_info['DBClusterMembers']:\n        cluster_instances.append(info['DBInstanceIdentifier'])\n\n    # Now get the type of the DBInstance Identifier.\n    # ASSUMPTION: All nodes are of the same type.\n    try:\n        describe_instance_resp = rds_client.describe_db_instances(\n            DBInstanceIdentifier=cluster_instances[0]\n            )\n    except Exception as e:\n        print(f'describe_db_instance for cluster {cluster_instances[0]} failed, {str(e)}')\n        raise e\n\n    cluster_instance_type = describe_instance_resp['DBInstances'][0]['DBInstanceClass'].lstrip('db.')\n    cluster_instance_memory = instance_type_memory_map[cluster_instance_type]\n\n    # Get the max connections for this postgresql. 2 options here:\n    # 1. If the max connection is configured via parameter group, get it from there.\n    # 2. If its default, its LEAST({DBInstanceClassMemory/9531392}, 5000)\n    paginator = rds_client.get_paginator('describe_db_parameters')\n    operation_parameters = {'DBParameterGroupName': cluster_parameter_group_name}\n    page_iterator = paginator.paginate(**operation_parameters)\n    for page in page_iterator:\n        for parameter in page['Parameters']:\n            if parameter['ParameterName'] == 'max_connections':\n                if parameter['ParameterValue'].startswith('LEAST'):\n                    return str(int(min(cluster_instance_memory * 1048576 / 9531392, 5000)))\n                else:\n                    return parameter['ParameterValue']\n"
  },
  {
    "path": "AWS/legos/aws_postgresql_plot_active_connections/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Plot AWS PostgreSQL Active Connections</h1>\r\n\r\n## Description\r\nThis Lego Plot AWS PostgreSQL Active Connections\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_postgresql_plot_active_connections(handle: object, cluster_identifier: str, max_connections: int, time_since: int, region: str)\r\n        handle: Object of type unSkript AWS Connector.\r\n        cluster_identifier: RDS DB Identifier.\r\n        max_connections: Configured max connections.\r\n        time_since: Starting from now, window (in seconds) for which you want to get the datapoints for.\r\n        region: AWS Region of the Postgres DB Cluster.\r\n## Lego Input\r\n\r\nThis Lego take five inputs handle, cluster_identifier, max_connections, time_since and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_postgresql_plot_active_connections/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_postgresql_plot_active_connections/aws_postgresql_plot_active_connections.json",
    "content": "{\r\n    \"action_title\": \"Plot AWS PostgreSQL Active Connections\",\r\n    \"action_description\": \"Plot AWS PostgreSQL Action Connections\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_postgresql_plot_active_connections\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\", \"CATEGORY_TYPE_AWS_POSTGRES\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_postgresql_plot_active_connections/aws_postgresql_plot_active_connections.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nimport matplotlib.pyplot as plt\n\n\nclass InputSchema(BaseModel):\n    cluster_identifier: str = Field(\n        title='DB Identifier',\n        description='RDS DB Identifier.')\n    max_connections: int = Field(\n        title='Max Connections',\n        description='Configured max connections.')\n    time_since: int = Field(\n        title='Time Since',\n        description=('Starting from now, window (in seconds) for which you '\n                     'want to get the datapoints for.')\n                     )\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the Postgres DB Cluster.')\n\n\ndef aws_postgresql_plot_active_connections(\n        handle,\n        cluster_identifier: str,\n        max_connections: int,\n        time_since: int,\n        region: str\n        ) -> None:\n    \"\"\"aws_postgresql_plot_active_connections Plots the active connections\n       normalized by the max connections.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type cluster_identifier: string\n          :param cluster_identifier: RDS DB Identifier.\n\n          :type max_connections: string\n          :param max_connections: Configured max connections.\n\n          :type time_since: int\n          :param time_since: Starting from now, window (in seconds) for which\n          you want to get the datapoints for.\n\n          :type region: string\n          :param region: AWS Region of the Postgres DB Cluster.\n\n          :rtype: All the results of the query.\n      \"\"\"\n    # Input param validation.\n\n    # Get the list of instances in this cluster and their types.\n    rds_client = handle.client('rds', region_name=region)\n\n    try:\n        describe_db_clusters_resp = rds_client.describe_db_clusters(\n  
          DBClusterIdentifier=cluster_identifier\n            )\n    except Exception as e:\n        print(f'describe_db_clusters for cluster {cluster_identifier} hit an exception, {str(e)}')\n        raise e\n    cluster_info = describe_db_clusters_resp['DBClusters'][0]\n    cluster_instances = []\n    for value in cluster_info['DBClusterMembers']:\n        cluster_instances.append(value['DBInstanceIdentifier'])\n\n    cloud_watch_client = handle.client('cloudwatch', region_name=region)\n\n    plt.figure(figsize=(10, 10))\n    plt.ylabel('ActiveConnections/MaxConnections')\n    for cluster_instance in cluster_instances:\n        ts, data_points = get_normalized_active_connections(\n            cloud_watch_client,\n            cluster_instance,\n            time_since, max_connections\n            )\n        plt.plot(ts, data_points, label=cluster_instance)\n    plt.legend(loc=1, fontsize='medium')\n    plt.show()\n\n\ndef get_normalized_active_connections(\n        cloudWatch_client,\n        db_instance_id,\n        time_since,\n        max_connections\n        ):\n    # Gets metric statistics.\n    res = cloudWatch_client.get_metric_statistics(\n        Namespace=\"AWS/RDS\",\n        MetricName=\"DatabaseConnections\",\n        Dimensions=[{\"Name\": \"DBInstanceIdentifier\", \"Value\": db_instance_id}],\n        Period=6000,\n        StartTime=datetime.utcnow() - timedelta(seconds=time_since),\n        EndTime=datetime.utcnow(),\n        Statistics=[\n            \"Average\"\n        ]\n    )\n\n    data = {}\n    for datapoints in res['Datapoints']:\n        data[datapoints['Timestamp']] = datapoints[\"Average\"] / max_connections\n\n    # Sorts data.\n    data_keys = data.keys()\n    times_stamps = list(data_keys)\n    times_stamps.sort()\n    sorted_values = []\n    for value in times_stamps:\n        sorted_values.append(data[value])\n\n    return (times_stamps, sorted_values)\n"
  },
  {
    "path": "AWS/legos/aws_purchase_elasticcache_reserved_node/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Purchase ElastiCache Reserved Nodes</h1>\n\n## Description\nThis action purchases a reserved cache node offering.\n\n## Lego Details\n\taws_purchase_elasticcache_reserved_node(handle, region: str, reserved_node_offering_id: str, no_of_nodes:int=1)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tno_of_nodes: The number of reserved nodes that you want to purchase.\n\t\treserved_node_offering_id: The unique identifier of the reserved node offering you want to purchase. Example: '438012d3-4052-4cc7-b2e3-8d3372e0e706'\n\n\n## Lego Input\nThis Lego takes inputs handle, no_of_nodes, reserved_node_offering_id.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_purchase_elasticcache_reserved_node/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_purchase_elasticcache_reserved_node/aws_purchase_elasticcache_reserved_node.json",
    "content": "{\n  \"action_title\": \"AWS Purchase ElastiCache Reserved Nodes\",\n  \"action_description\": \"This action purchases a reserved cache node offering.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_purchase_elasticcache_reserved_node\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELASTICACHE\"]\n}"
  },
  {
    "path": "AWS/legos/aws_purchase_elasticcache_reserved_node/aws_purchase_elasticcache_reserved_node.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Dict\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        description='AWS Region.', \n        title='Region'\n    )\n    reserved_node_offering_id: str = Field(\n        description='The unique identifier of the reserved cache node offering you want to purchase.',\n        title='Reserved Cache Node Offering ID',\n    )\n    no_of_nodes: Optional[int] = Field(\n        1,\n        description='The number of reserved cache nodes that you want to purchase.',\n        title='No of nodes to purchase',\n    )\n\n\n\ndef aws_purchase_elasticcache_reserved_node_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_purchase_elasticcache_reserved_node(handle, region: str, reserved_node_offering_id: str, no_of_nodes:int=1) -> Dict:\n    \"\"\"aws_purchase_elasticcache_reserved_node returns dict of response.\n\n        :type region: string\n        :param region: AWS Region.\n\n        :type reserved_node_offering_id: string\n        :param reserved_node_offering_id: The unique identifier of the reserved node offering you want to purchase. Example: '438012d3-4052-4cc7-b2e3-8d3372e0e706'\n\n        :type no_of_nodes: int\n        :param no_of_nodes: The number of reserved nodes that you want to purchase.\n\n        :rtype: dict of response metatdata of purchasing a reserved node\n    \"\"\"\n    try:\n        elasticClient = handle.client('elasticache', region_name=region)\n        params = {\n            'ReservedCacheNodesOfferingId': reserved_node_offering_id,\n            'CacheNodeCount': no_of_nodes\n            }\n        response = elasticClient.purchase_reserved_cache_nodes_offering(**params)\n        return response\n    except Exception as e:\n        raise Exception(e)\n\n\n"
  },
  {
    "path": "AWS/legos/aws_purchase_rds_reserved_instance/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Purchase RDS Reserved Instances</h1>\n\n## Description\nThis action purchases a reserved DB instance offering.\n\n## Lego Details\n\taws_purchase_rds_reserved_instance(handle, region: str, reserved_instance_offering_id: str, db_instance_count:int=1)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\treserved_instance_offering_id: The unique identifier of the reserved instance offering you want to purchase.\n\t\tdb_instance_count: The number of reserved instances that you want to purchase.\n\n## Lego Input\nThis Lego takes inputs handle, reserved_instance_offering_id, db_instance_count.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_purchase_rds_reserved_instance/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_purchase_rds_reserved_instance/aws_purchase_rds_reserved_instance.json",
    "content": "{\n  \"action_title\": \"AWS Purchase RDS Reserved Instances\",\n  \"action_description\": \"This action purchases a reserved DB instance offering.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_purchase_rds_reserved_instance\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_RDS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_purchase_rds_reserved_instance/aws_purchase_rds_reserved_instance.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Dict\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        description='AWS Region.', \n        title='Region'\n        )\n    reserved_instance_offering_id: str = Field(\n        description='The ID of the Reserved DB instance offering to purchase. Example: 438012d3-4052-4cc7-b2e3-8d3372e0e706',\n        title='Reserved Instance Offering ID',\n    )\n    db_instance_count: Optional[int] = Field(\n        1, \n        description='The number of instances to reserve.', \n        title='Instance Count'\n    )\n\n\n\ndef aws_purchase_rds_reserved_instance_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_purchase_rds_reserved_instance(handle, region: str, reserved_instance_offering_id: str, db_instance_count:int=1) -> Dict:\n    \"\"\"aws_purchase_rds_reserved_instance returns dict of response.\n\n        :type region: string\n        :param region: AWS Region.\n\n        :type reserved_instance_offering_id: string\n        :param reserved_instance_offering_id: The unique identifier of the reserved instance offering you want to purchase.\n\n        :type db_instance_count: int\n        :param db_instance_count: The number of reserved instances that you want to purchase.\n\n        :rtype: dict of response metatdata of purchasing a reserved instance\n    \"\"\"\n    try:\n        redshiftClient = handle.client('redshift', region_name=region)\n        params = {\n            'ReservedDBInstancesOfferingId': reserved_instance_offering_id,\n            'DBInstanceCount': db_instance_count\n            }\n        response = redshiftClient.purchase_reserved_db_instances_offering(**params)\n        return response\n    except Exception as e:\n        raise Exception(e)\n\n\n"
  },
  {
    "path": "AWS/legos/aws_purchase_redshift_reserved_node/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Purchase Redshift Reserved Nodes</h1>\n\n## Description\nThis action purchases reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. You can purchase one or more of the offerings.\n\n## Lego Details\n\taws_purchase_redshift_reserved_node(handle, region: str, reserved_node_offering_id: str, no_of_nodes:int=1)\n\t\thandle: Object of type unSkript AWS Connector.\n\t\tno_of_nodes: The number of reserved nodes that you want to purchase.\n\t\treserved_node_offering_id: The unique identifier of the reserved node offering you want to purchase. Example: '438012d3-4052-4cc7-b2e3-8d3372e0e706'\n\n## Lego Input\nThis Lego takes inputs handle, no_of_nodes, reserved_node_offering_id.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_purchase_redshift_reserved_node/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_purchase_redshift_reserved_node/aws_purchase_redshift_reserved_node.json",
    "content": "{\n  \"action_title\": \"AWS Purchase Redshift Reserved Nodes\",\n  \"action_description\": \"This action purchases reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. You can purchase one or more of the offerings.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_purchase_redshift_reserved_node\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_REDSHIFT\"]\n}"
  },
  {
    "path": "AWS/legos/aws_purchase_redshift_reserved_node/aws_purchase_redshift_reserved_node.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(\n        description='AWS Region.', \n        title='Region'\n    )\n    reserved_node_offering_id: str = Field(\n        description='The unique identifier of the reserved node offering you want to purchase.',\n        title='Reserved Node Offering ID',\n    )\n    no_of_nodes: Optional[int] = Field(\n        1,\n        description='The number of reserved nodes that you want to purchase.',\n        title='No od Nodes to reserve',\n    )\n\n\n\ndef aws_purchase_redshift_reserved_node_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_purchase_redshift_reserved_node(handle, region: str, reserved_node_offering_id: str, no_of_nodes:int=1) -> Dict:\n    \"\"\"aws_purchase_redshift_reserved_node returns dict of response.\n\n        :type region: string\n        :param region: AWS Region.\n\n        :type reserved_node_offering_id: string\n        :param reserved_node_offering_id: The unique identifier of the reserved node offering you want to purchase.\n\n        :type no_of_nodes: int\n        :param no_of_nodes: The number of reserved nodes that you want to purchase.\n\n        :rtype: dict of response metatdata of purchasing a reserved node\n    \"\"\"\n    try:\n        redshiftClient = handle.client('redshift', region_name=region)\n        params = {\n            'ReservedNodeOfferingId': reserved_node_offering_id,\n            'NodeCount': no_of_nodes\n            }\n        response = redshiftClient.purchase_reserved_node_offering(**params)\n        return response\n    except Exception as e:\n        raise Exception(e)\n\n\n"
  },
  {
    "path": "AWS/legos/aws_put_bucket_cors/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Apply CORS Policy for S3 Bucket </h1>\r\n\r\n## Description\r\nThis Lego apply CORS Policy for S3 Bucket.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_put_bucket_cors(handle: object, name: str, corsRules: List, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        name: Name of the bucket.\r\n        corsRules: cross-origin access configuration in JSON format.\r\n        region: AWS region of the bucket.\r\n## Lego Input\r\n\r\nThis Lego take four inputs handle, name, corsRules and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_put_bucket_cors/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_put_bucket_cors/aws_put_bucket_cors.json",
    "content": "{\r\n    \"action_title\": \" Apply CORS Policy for S3 Bucket\",\r\n    \"action_description\": \" Apply CORS Policy for S3 Bucket\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_put_bucket_cors\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_put_bucket_cors/aws_put_bucket_cors.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n# @author: Yugal Pachpande, @email: yugal.pachpande@unskript.com\n##\nimport pprint\nfrom typing import Any, Dict, List\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    name: str = Field(\n        title='Bucket name',\n        description='Name of the bucket.'\n    )\n    corsRules: List[Dict[str, Any]] = Field(\n        title='Bucket Policy',\n        description=('cross-origin access configuration in JSON format. '\n                     'eg. [{\\\"AllowedHeaders\\\":[\"*\"],\\\"AllowedMethods\\\":[\\\"PUT\\\",\\\"POST\\\",\\\"DELETE\\\"],'\n                     '\\\"AllowedOrigins\\\":[\\\"http://www.example1.com\\\" ],\\\"ExposeHeaders\\\": []}, '\n                     '{\\\"AllowedHeaders\\\": [],\\\"AllowedMethods\\\":[\\\"GET\\\"],\\\"AllowedOrigins\\\":[\\\"*\\\"],'\n                     '\\\"ExposeHeaders\\\":[]}]')\n    )\n    region: str = Field(\n        title='Region',\n        description='AWS region of the bucket.'\n    )\n\n\ndef aws_put_bucket_cors_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_put_bucket_cors(handle, name: str, corsRules: List, region: str) -> Dict:\n    \"\"\"aws_put_bucket_cors Puts CORS policy for bucket.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type name: string\n          :param name: Name of the bucket.\n\n          :type corsRules: list\n          :param corsRules: cross-origin access configuration in JSON format.\n\n          :type region: string\n          :param region: AWS region of the bucket.\n\n          :rtype: Dict with the response info.\n      \"\"\"\n    # Input param validation.\n\n    s3Client = handle.client('s3', region_name=region)\n\n    cors_configuration = {'CORSRules': corsRules}\n    pprint.pprint(f\"Applying config to bucket: {str(cors_configuration)}\")\n\n    # Setup a 
CORS policy\n    res = s3Client.put_bucket_cors(\n        Bucket=name,\n        CORSConfiguration=cors_configuration\n    )\n    return res\n"
  },
  {
    "path": "AWS/legos/aws_put_bucket_policy/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Apply AWS New Policy for S3 Bucket </h1>\r\n\r\n## Description\r\nThis Lego apply AWS New Policy for S3 Bucket.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_put_bucket_policy(handle: object, name: str, policy: str, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        name: Name of the bucket.\r\n        policy: Bucket policy in JSON format.\r\n        region: AWS region of the bucket.\r\n## Lego Input\r\n\r\nThis Lego take four inputs handle, name, policy and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_put_bucket_policy/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_put_bucket_policy/aws_put_bucket_policy.json",
    "content": "{\r\n    \"action_title\": \"Apply AWS New Policy for S3 Bucket\",\r\n    \"action_description\": \"Apply a New AWS Policy for S3 Bucket\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_put_bucket_policy\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_put_bucket_policy/aws_put_bucket_policy.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##  @author: Amit Chandak, @email: amit@unskript.com\n##\nimport pprint\nfrom typing import Dict\nimport json\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    name: str = Field(\n        title='Bucket name',\n        description='Name of the bucket.'\n    )\n    policy: str = Field(\n        title='Bucket Policy',\n        description='Bucket policy in JSON format.'\n    )\n    region: str = Field(\n        title='Region',\n        description='AWS region of the bucket.'\n    )\n\n\ndef aws_put_bucket_policy_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_put_bucket_policy(handle, name: str, policy: str, region: str) -> Dict:\n    \"\"\"aws_put_bucket_policy Puts new policy for bucket.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type name: string\n          :param name: Name of the bucket.\n\n          :type policy: string\n          :param policy: Bucket policy in JSON format.\n\n          :type region: string\n          :param region: AWS region of the bucket.\n\n          :rtype: Dict with the response info.\n      \"\"\"\n    # Input param validation.\n\n    s3Client = handle.client('s3',\n                             region_name=region)\n\n    # Setup a policy\n    res = s3Client.put_bucket_policy(\n        Bucket=name,\n        Policy=json.dumps(policy)\n    )\n    return res['ResponseMetadata']\n"
  },
  {
    "path": "AWS/legos/aws_read_object/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Read AWS S3 Object </h1>\r\n\r\n## Description\r\nThis Lego read AWS S3 Object.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_read_object(handle: object, name: str, key: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        name: Name of the bucket of the object.\r\n        key: Name of S3 object or Prefix.\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, name and key.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_read_object/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_read_object/aws_read_object.json",
    "content": "{\r\n    \"action_title\": \"Read AWS S3 Object\",\r\n    \"action_description\": \"Read an AWS S3 Object\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_read_object\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"]\r\n}"
  },
  {
    "path": "AWS/legos/aws_read_object/aws_read_object.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport io\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    name: str = Field(\n        title='Bucket Name',\n        description='Name of the bucket of the object.')\n    key: str = Field(\n        title='Object Name',\n        description=('Name of S3 object or Prefix. Prefix should end with / '\n                     'to return the list of objects present in the bucket')\n                     )\n\n\ndef aws_read_object_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_read_object(handle, name: str, key: str) -> List:\n    \"\"\"aws_read_object Reads object in S3.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type name: string\n        :param name: Name of the bucket of the object.\n\n        :type key: string\n        :param key: Name of S3 object or Prefix.\n\n        :rtype: List with the object data.\n    \"\"\"\n    s3Client = handle.client('s3')\n    if key.endswith(\"/\"):\n        folder_list = []\n        res = s3Client.list_objects(Bucket=name, Prefix=key)\n        print(\"\\n\")\n        for content in res.get('Contents', []):\n            print(content.get(\"Key\"))\n            folder_list.append(content.get(\"Key\"))\n        return folder_list\n    else:\n        res = s3Client.get_object(Bucket=name, Key=key)\n        fileSizeLimit = 100000\n        output = str(io.BytesIO(res['Body'].read()).read(fileSizeLimit))\n        return [output]\n"
  },
  {
    "path": "AWS/legos/aws_register_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Register AWS Instances with a Load Balancer </h1>\r\n\r\n## Description\r\nThis Lego register AWS Instances with a Load Balancer.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_register_instances(handle: object, elb_name: str, instance_ids: List, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        elb_name: Name of the Load Balancer.\r\n        instance_ids: List of instance IDs.\r\n        region: AWS Region of the ELB.\r\n## Lego Input\r\n\r\nThis Lego take four inputs handle, elb_name, instance_ids and region.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_register_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_register_instances/aws_register_instances.json",
    "content": "{\r\n    \"action_title\": \" Register AWS Instances with a Load Balancer\",\r\n    \"action_description\": \" Register AWS Instances with a Load Balancer\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_register_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ELB\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_register_instances/aws_register_instances.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List, Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    elb_name: str = Field(\n        title='ELB Name',\n        description='Name of the Load Balancer.')\n    instance_ids: List[str] = Field(\n        title='Instance IDs',\n        description='List of instance IDs. For eg. [\"i-foo\", \"i-bar\"]')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the ELB.')\n\n\ndef aws_register_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_register_instances(handle, elb_name: str, instance_ids: List, region: str) -> Dict:\n    \"\"\"aws_register_instances returns dict of register info\n\n     :type handle: object\n     :param handle: Object returned from task.validate(...).\n\n     :type elb_name: string\n     :param elb_name: Name of the Load Balancer.\n\n     :type instance_ids: string\n     :param instance_ids: List of instance IDs.\n\n     :type region: string\n     :param region: AWS Region of the ELB.\n\n     :rtype: Dict of register info\n    \"\"\"\n    elbClient = handle.client('elb', region_name=region)\n\n    res = elbClient.register_instances_with_load_balancer(\n        LoadBalancerName=elb_name,\n        Instances=[{'InstanceId': instance_id} for instance_id in instance_ids]\n    )\n\n    return res\n"
  },
  {
    "path": "AWS/legos/aws_release_elastic_ip/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Release Elastic IP</h1>\r\n\r\n## Description\r\nThis Lego release AWS elastic IP for both VPC and Standard.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_release_elastic_ip(handle, region: str, public_ip: str, allocation_id: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        allocation_id: Allocation ID of the Elastic IP to release.\r\n        region: AWS Region.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, allocation_id and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_release_elastic_ip/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_release_elastic_ip/aws_release_elastic_ip.json",
    "content": "{\r\n    \"action_title\": \"AWS Release Elastic IP\",\r\n    \"action_description\": \"AWS Release Elastic IP for both VPC and Standard\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_release_elastic_ip\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_release_elastic_ip/aws_release_elastic_ip.py",
    "content": "##\r\n# Copyright (c) 2023 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    allocation_id: str = Field(\r\n        title='Allocation ID',\r\n        description='Allocation ID of the Elastic IP to release.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region.')\r\n\r\n\r\ndef aws_release_elastic_ip_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_release_elastic_ip(handle, region: str, allocation_id: str) -> Dict:\r\n    \"\"\"aws_release_elastic_ip release elastic ip.\r\n    \r\n        :type allocation_id: string\r\n        :param allocation_id: Allocation ID of the Elastic IP to release.\r\n\r\n        :type region: string\r\n        :param region: AWS Region.\r\n\r\n        :rtype: Dict with the release elastic ip info.\r\n    \"\"\"\r\n    try:\r\n        ec2_Client = handle.client('ec2', region_name=region)\r\n        response = ec2_Client.release_address(AllocationId=allocation_id)\r\n        return response\r\n    except Exception as e:\r\n        raise Exception(e) from e\r\n"
  },
  {
    "path": "AWS/legos/aws_renew_expiring_acm_certificates/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Renew Expiring ACM Certificate</h1>\n\n## Description\nThis Lego renews all eligible expiring ACM issued SSL certificates\n\n\n## Lego Details\n\n    aws_renew_expiring_acm_certificates(handle, aws_certificate_arn: List, region: str)\n\n        handle: Object of type unSkript AWS Connector.\n        aws_certificate_arn: List, ARN of the Certificate. Eg: arn:aws:acm:us-west-2:100498623390:certificate/f18891a2-892c-4d3b-aad0-28da2b5069a5\n        region: String, Region where the Certificate is present. Eg: us-west-2\n\n## Lego Input\nThis Lego take three inputs handle, aws_certificate_arn and region.\n\n## Lego Output\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_renew_expiring_acm_certificates/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_renew_expiring_acm_certificates/aws_renew_expiring_acm_certificates.json",
    "content": "{\n    \"action_title\": \"Renew Expiring ACM Certificates\",\n    \"action_description\": \"Renew Expiring ACM Certificates\",\n    \"action_type\": \"LEGO_TYPE_AWS\",\n    \"action_entry_function\": \"aws_renew_expiring_acm_certificates\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"renew\"],\n    \"action_nouns\": [\"certificates\",\"acm\",\"aws\"],\n    \"action_is_check\": false,\n    \"action_is_remediation\": true,\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ACM\"]\n  }\n"
  },
  {
    "path": "AWS/legos/aws_renew_expiring_acm_certificates/aws_renew_expiring_acm_certificates.py",
    "content": "# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict, List\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    aws_certificate_arn: List = Field(\n        title=\"Certificate ARN\",\n        description=\"ARN of the Certificate\"\n    )\n    region: str = Field(\n        title='Region',\n        description='Name of the AWS Region'\n    )\n\ndef aws_renew_expiring_acm_certificates_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_renew_expiring_acm_certificates(handle, aws_certificate_arn: List, region: str='') -> Dict:\n    \"\"\"aws_renew_expiring_acm_certificates returns all the ACM issued certificates\n       which are about to expire given a threshold number of days\n\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :type aws_certificate_arn: List\n        :param aws_certificate_arn: ARN of the Certificate\n\n        :type region: str\n        :param region: Region name of the AWS account\n\n        :rtype: Result Dictionary of result\n    \"\"\"\n    result = {}\n    try:\n        acmClient = handle.client('acm', region_name=region)\n        for arn in aws_certificate_arn:\n            acmClient.renew_certificate(CertificateArn=arn)\n            result[arn] = \"Successfully renewed\"\n    except Exception as e:\n        result[\"error\"] = e\n    return result\n"
  },
  {
    "path": "AWS/legos/aws_request_service_quota_increase/README.md",
    "content": "\r\n[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Request Service Quota Increase </h1>\r\n\r\n## Description\r\nThis Action takes a Service and the quota code, along with a requested new quota value, and submits it to AWS for an increase.\r\n\r\n\r\n## Lego Details\r\n\r\n  aws_request_service_quota_increase(handle, service_code:str, quota_code:str, new_quota:float,region:str) -> Dict:\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        service_code: the Service Code (for example EC2)\r\n        Quota_Code: Each quota has a unique code of the form L-XXXXXX.\r\n        region: Location of the S3 buckets.\r\n\r\n## Lego Input\r\nThis Lego take 4 inputs: handle, service_code, quota_code and region.\r\n\r\n## Lego Output\r\nThe output shows a HTTP 200 response indicating submission of the request.\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_request_service_quota_increase/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_request_service_quota_increase/aws_request_service_quota_increase.json",
    "content": "{\n  \"action_title\": \"AWS_Request_Service_Quota_Increase\",\n  \"action_description\": \"Given an AWS Region, Service Code, quota code and a new value for the quota, this Action sends a request to AWS for a new value. Your Connector must have servicequotas:RequestServiceQuotaIncrease enabled for this to work.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_request_service_quota_increase\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_request_service_quota_increase/aws_request_service_quota_increase.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom __future__ import annotations\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\n\n\nclass InputSchema(BaseModel):\n    new_quota: float = Field(\n        '', description='The new quota value', title='new_quota'\n    )\n    quota_code: str = Field(\n        '', description='Quota Code that increase is requested for', title='quota_code'\n    )\n    region: str = Field(..., description='AWS Region.', title='Region')\n    service_code: str = Field(\n        '\"ec2\"',\n        description='Service Code whose quota you are requesting a change on.',\n        title='service_code',\n    )\n\n\n@beartype\ndef aws_request_service_quota_increase_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n#list_service_quotas\n#list_aws_default_service_quotas\n@beartype\ndef aws_request_service_quota_increase(\n    handle,\n    service_code:str,\n    quota_code:str,\n    new_quota:float,region:str\n    ) -> Dict:\n    sqClient = handle.client('service-quotas',region_name=region)\n    res = sqClient.request_service_quota_increase(\n        ServiceCode=service_code,\n        QuotaCode=quota_code,\n\n        DesiredValue=new_quota)\n\n    #res = sqClient.list_services(MaxResults = 100)\n    return res\n"
  },
  {
    "path": "AWS/legos/aws_restart_ec2_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Restart AWS EC2 Instances </h1>\n\n## Description\nThis Lego Restart the AWS EC2 Instance.\n\n\n## Lego Details\n\n    aws_restart_ec2_instances(handle: object, instance_ids: List, region: str)\n\n        handle: Object of type unSkript AWS Connector\n        instance_ids: List of instance ids\n        region: Region for instance.\n\n## Lego Input\nThis Lego take three inputs handle, region and instance_ids.\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_restart_ec2_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_restart_ec2_instances/aws_restart_ec2_instances.json",
    "content": "{\r\n    \"action_title\": \"Restart AWS EC2 Instances\",\r\n    \"action_description\": \"Restart AWS EC2 Instances\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_restart_ec2_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"]\r\n  }\r\n  "
  },
  {
    "path": "AWS/legos/aws_restart_ec2_instances/aws_restart_ec2_instances.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List, Dict\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\n\nclass InputSchema(BaseModel):\n    instance_ids: List[str] = Field(\n        title='Instance IDs',\n        description='List of instance IDs. For eg. [\"i-foo\", \"i-bar\"]')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the instances.')\n\n@beartype\ndef aws_restart_ec2_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\n@beartype\ndef aws_restart_ec2_instances(handle, instance_ids: List, region: str) -> Dict:\n    \"\"\"aws_restart_ec2_instances Restarts instances.\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type instance_ids: list\n        :param instance_ids: List of instance ids.\n\n        :type region: string\n        :param region: Region for instance.\n\n        :rtype: Dict with the restarted instances info.\n    \"\"\"\n\n    ec2Client = handle.client('ec2', region_name=region)\n    res = ec2Client.reboot_instances(InstanceIds=instance_ids)\n    return res\n"
  },
  {
    "path": "AWS/legos/aws_revoke_policy_from_iam_user/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Revoke Policy from IAM User</h1>\r\n\r\n## Description\r\nThis Lego revoke policy from IAM User.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_revoke_policy_from_iam_user(handle, user_name: str, policy_arn: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        policy_arn: The Amazon Resource Name (ARN) of the policy.\r\n        user_name: The name of the IAM user from whom to revoke the policy.\r\n\r\n## Lego Input\r\nThis Lego takes 3 inputs, handle, policy_arn and user_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "AWS/legos/aws_revoke_policy_from_iam_user/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_revoke_policy_from_iam_user/aws_revoke_policy_from_iam_user.json",
    "content": "{\r\n    \"action_title\": \"AWS Revoke Policy from IAM User\",\r\n    \"action_description\": \"AWS Revoke Policy from IAM User\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_revoke_policy_from_iam_user\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [\"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_revoke_policy_from_iam_user/aws_revoke_policy_from_iam_user.py",
    "content": "##\r\n# Copyright (c) 2023 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\nclass InputSchema(BaseModel):\r\n    user_name: str = Field(\r\n        title='User Name',\r\n        description='The name of the IAM user from whom to revoke the policy.')\r\n    policy_arn: str = Field(\r\n        title='Policy ARNs',\r\n        description='The Amazon Resource Name (ARN) of the policy.')\r\n\r\n\r\ndef aws_revoke_policy_from_iam_user_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_revoke_policy_from_iam_user(handle, user_name: str, policy_arn: str) -> Dict:\r\n    \"\"\"aws_revoke_policy_from_iam_user revoke policy from iam user.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from Task Validate\r\n\r\n        :type policy_arn: str\r\n        :param policy_arn: The Amazon Resource Name (ARN) of the policy.\r\n\r\n        :type user_name: str\r\n        :param user_name: The name of the IAM user from whom to revoke the policy.\r\n\r\n        :rtype: Dict\r\n    \"\"\"\r\n    try:\r\n        client = handle.client('iam')\r\n        response = client.detach_user_policy(\r\n                            UserName=user_name,\r\n                            PolicyArn=policy_arn)\r\n        return response\r\n    except Exception as e:\r\n        raise Exception(e) from e\r\n"
  },
  {
    "path": "AWS/legos/aws_run_instances/README.md",
    "content": "\nTBD\n"
  },
  {
    "path": "AWS/legos/aws_run_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_run_instances/aws_run_instances.json",
    "content": "{\r\n\"action_title\": \"Start AWS Instances\",\r\n\"action_description\": \"Start an AWS EC2 Instances\",\r\n\"action_type\": \"LEGO_TYPE_AWS\",\r\n\"action_entry_function\": \"aws_run_instances\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n\"action_supports_iteration\": true,\r\n\"action_verbs\": [\r\n\"start\"\r\n],\r\n\"action_nouns\": [\r\n\"aws\",\r\n\"instances\"\r\n],\r\n\"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"]\r\n}\r\n"
  },
  {
    "path": "AWS/legos/aws_run_instances/aws_run_instances.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    instance_id: str = Field(\n        title='Instance Id',\n        description='ID of the instance to be run.')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the instance.')\n\n\ndef aws_run_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_run_instances(handle, instance_id: str, region: str) -> Dict:\n    \"\"\"def aws_run_instances Runs instances.\n\n        :type instance_id: string\n        :param instance_id: String containing the name of AWS EC2 instance\n\n        :type region: string\n        :param region: AWS region for instance\n\n        :rtype: Dict with the runing instances state info.\n    \"\"\"\n    ec2Client = handle.client('ec2', region_name=region)\n\n    output = {}\n    res = ec2Client.start_instances(InstanceIds=[instance_id])\n    for instances in res['StartingInstances']:\n        output[instances['InstanceId']] = instances['CurrentState']\n\n    return output\n"
  },
  {
    "path": "AWS/legos/aws_schedule_pause_resume_enabled/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Schedule Redshift Cluster Pause Resume Enabled</h1>\r\n\r\n## Description\r\nThis Lego Schedule Redshift Cluster Pause Resume Enabled.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_schedule_pause_resume_enabled(handle,\r\n                                      iam_role_arn: str,\r\n                                      cluster_name: str,\r\n                                      region: str,\r\n                                      pause_schedule_expression: str,\r\n                                      resume_schedule_expression: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        iam_role_arn: The ARN of the IAM role.\r\n        cluster_name: The name of the Redshift cluster.\r\n        region: AWS Region.\r\n        pause_schedule_expression: The cron expression for the pause schedule.\r\n        resume_schedule_expression: The cron expression for the resume schedule.\r\n\r\n## Lego Input\r\nThis Lego take six inputs handle, region, cluster_name, iam_role_arn, pause_schedule_expression and resume_schedule_expression.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_schedule_pause_resume_enabled/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_schedule_pause_resume_enabled/aws_schedule_pause_resume_enabled.json",
    "content": "{\r\n    \"action_title\": \"AWS Schedule Redshift Cluster Pause Resume Enabled\",\r\n    \"action_description\": \"AWS Schedule Redshift Cluster Pause Resume Enabled\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_schedule_pause_resume_enabled\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\r\n}"
  },
  {
    "path": "AWS/legos/aws_schedule_pause_resume_enabled/aws_schedule_pause_resume_enabled.py",
    "content": "# Copyright (c) 2023 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the EBS volume')\r\n    iam_role_arn: str = Field(\r\n        title='IAM Role',\r\n        description='The ARN of the IAM role.')\r\n    cluster_name: str = Field(\r\n        title='Redshift Cluster Name',\r\n        description='The name of the Redshift cluster.')\r\n    pause_schedule_expression: str = Field(\r\n        title='Cron Expression for Pause',\r\n        description='The cron expression for the pause schedule.')\r\n    resume_schedule_expression: str = Field(\r\n        title='Cron Expression for Resume',\r\n        description='The cron expression for the resume schedule.')\r\n\r\n\r\ndef aws_schedule_pause_resume_enabled_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_schedule_pause_resume_enabled(handle,\r\n                                      iam_role_arn: str,\r\n                                      cluster_name: str,\r\n                                      region: str,\r\n                                      pause_schedule_expression: str,\r\n                                      resume_schedule_expression: str) -> List:\r\n    \"\"\"aws_schedule_pause_resume_enabled schedule pause and resume enabled.\r\n\r\n    :type iam_role_arn: str\r\n    :param iam_role_arn: The ARN of the IAM role.\r\n\r\n    :type cluster_name: str\r\n    :param cluster_name: The name of the Redshift cluster.\r\n\r\n    :type region: str\r\n    :param region: AWS Region.\r\n\r\n    :type pause_schedule_expression: str\r\n    :param pause_schedule_expression: The cron expression for the pause schedule.\r\n\r\n    :type resume_schedule_expression: str\r\n    :param resume_schedule_expression: The cron expression 
for the resume schedule.\r\n\r\n    :rtype: List\r\n    :return: A list of pause and resume enabled status.\r\n    \"\"\"\r\n    result = []\r\n    pause_action_name = f\"{cluster_name}-scheduled-pause\"\r\n    resume_action_name = f\"{cluster_name}-scheduled-resume\"\r\n\r\n    try:\r\n        redshift_client = handle.client('redshift', region_name=region)\r\n        # Schedule pause action\r\n        response_pause = redshift_client.create_scheduled_action(\r\n            ScheduledActionName=pause_action_name,\r\n            TargetAction={\r\n                'PauseCluster': {'ClusterIdentifier': cluster_name}\r\n            },\r\n            Schedule=pause_schedule_expression,\r\n            IamRole=iam_role_arn,\r\n            Enable=True\r\n        )\r\n        result.append(response_pause)\r\n        # Schedule resume action\r\n        response_resume = redshift_client.create_scheduled_action(\r\n            ScheduledActionName=resume_action_name,\r\n            TargetAction={\r\n                'ResumeCluster': {'ClusterIdentifier': cluster_name}\r\n            },\r\n            Schedule=resume_schedule_expression,\r\n            IamRole=iam_role_arn,\r\n            Enable=True\r\n        )\r\n        result.append(response_resume)\r\n\r\n    except Exception as error:\r\n        raise Exception(error)\r\n\r\n    return result"
  },
  {
    "path": "AWS/legos/aws_send_email/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Send Email with SES</h1>\n\n## Description\nThis Action sends an Email with AWS Simple Email Service.  Input the sender and recipient addresses, a subject and the body of the message (and the AWS region for SES), and your message will be sent.\n\n## Action Details\n\taws_send_email(handle, Region:str, Sender:str, Receiver:str, Subject:str, Message:str)\n\t\thandle: Object of type unSkript AWS Connector.\n\n\t\t* Region: The AWS Region SES is provisioned in.\n\t\t* Sender: The verified email address to send the message (verification in SES).\n\t\t* Receiver: The email address to receive the message (note that new SES senders can only send to verified receivers for 7 days).\n\t\t* Subject: Email Subject\n\t\t* Message: The Body of the email\n \n## Action Output\nHere is a sample output.\n<img src=\"./1.jpg\">\n\n## Try it Out\n\nYou Try this Action in the unSkript [Free Trial](https://us.app.unskript.io/), or using the [open source Docker image](http://runbooks.sh)."
  },
  {
    "path": "AWS/legos/aws_send_email/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_send_email/aws_send_email.json",
    "content": "{\n  \"action_title\": \"AWS Send Email with SES\",\n  \"action_description\": \"This Action sends an Email with AWS Simple Email Service.  Input the sender and recipient addresses, a subject and the body of the message (and the AWS region for SES), and your message will be sent.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_send_email\",\n  \"action_needs_credential\": \"true\",\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": \"false\",\n  \"action_supports_iteration\": \"true\",\n  \"action_supports_poll\": \"true\",\n  \"action_categories\":[ \"CATEGORY_TYPE_CLOUDOPS\",\"CATEGORY_TYPE_AWS\" ]\n}"
  },
  {
    "path": "AWS/legos/aws_send_email/aws_send_email.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\n# You must have the Sender email set up and\n# verified in AWS SES for this actio to work.\nfrom pydantic import BaseModel, Field\nfrom typing import Dict\nimport pprint\nfrom botocore.exceptions import ClientError\n\n\nclass InputSchema(BaseModel):\n    Message: str = Field(\n        ..., description='The body of the message to be sent.', title='Message'\n    )\n    Receiver: str = Field(\n        ..., description='Email address to receive the message.', title='Receiver'\n    )\n    Region: str = Field(..., description='AWS Region', title='Region')\n    Sender: str = Field(\n        ..., description='Email address sending the message.', title='Sender'\n    )\n    Subject: str = Field(...,\n                         description='Subject line of the email.', title='Subject')\n\n\ndef aws_send_email_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_send_email(handle, Region: str, Sender: str, Receiver: str, Subject: str, Message: str) -> Dict:\n    client = handle.client('ses', region_name=Region)\n    # Create the email message\n    message = {\n        'Subject': {\n            'Data': Subject\n        },\n        'Body': {\n            'Text': {\n                'Data': Message\n            }\n        }\n    }\n    try:\n        # Send the email\n        response = client.send_email(\n            Source=Sender,\n            Destination={\n                'ToAddresses': [Receiver]\n            },\n            Message=message\n        )\n    except ClientError as e:\n        response = e\n        raise e\n\n    # Print the response\n    print(response)\n    return response\n"
  },
  {
    "path": "AWS/legos/aws_service_quota_limits/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>AWS Service Quota Limits </h1>\n\n## Description\nThis Action compares usage for all input service quotas vs. your account's limit.  If any are above warning percentage - they will be output.\n\n\n## Lego Details\n\n    def aws_service_quota_limits_vpc(handle, region: str, warning_percentage: float, quota_input: List) -> List:\n\n        handle: Object of type unSkript AWS Connector\n        warning_percentage: If % is above this value - the service will be output.\n        region: Region for instance.\n        quota_input: List of Quota data. The format is described in this blog post: https://unskript.com/aws-service-quotas-discovering-where-you-stand/\n\n       Sample quota input:\n       [{'QuotaName':'VPCs Per Region','ServiceCode':'vpc',\n            'QuotaCode': 'L-F678F1CE', 'ApiName': 'describe_vpcs', \n            'ApiFilter' : '[]','ApiParam': 'Vpcs', 'initialQuery': ''}]\n\n## Lego Input\nThis Lego take fout inputs handle, region, quota_input and warning_percentage.\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.jpg\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_service_quota_limits/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_service_quota_limits/aws_service_quota_limits.json",
    "content": "{\n  \"action_title\": \"AWS Service Quota Limits\",\n  \"action_description\": \"Input a List of Service Quotas, and get back which of your instances are above the warning percentage of the quota\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_service_quota_limits\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\"]\n}"
  },
  {
    "path": "AWS/legos/aws_service_quota_limits/aws_service_quota_limits.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom __future__ import annotations\nimport pprint\nimport json\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\nfrom beartype import beartype\n\n#here is a sample quota input:\n#[{'QuotaName':'VPCs Per Region','ServiceCode':'vpc',\n#    'QuotaCode': 'L-F678F1CE', 'ApiName': 'describe_vpcs',\n#      'ApiFilter' : '[]','ApiParam': 'Vpcs', 'initialQuery': ''}]\n# the values are described in a blog post:\n# https://unskript.com/aws-service-quotas-discovering-where-you-stand/\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region of the instances.', title='Region')\n    warning_percentage: float = Field(\n        50,\n        description=('Threshold for alerting Service Quota. If set to 50, '\n                     'any service at 50% of quota usage will be reported.'),\n        title='warning_percentage',\n    )\n    quota_input: List = Field(\n        '', description='Array of inputs - see readme for format', title='quota_input'\n    )\n\n@beartype\ndef aws_service_quota_limits_printer(output):\n    if output is None:\n        return\n    pprint.pprint({\"Instances\": output})\n\n\n@beartype\n@beartype\ndef aws_service_quota_limits(\n    handle,\n    region: str,\n    warning_percentage: float,\n    quota_input: List\n    ) -> List:\n\n    sqClient = handle.client('service-quotas',region_name=region)\n    ec2Client = handle.client('ec2', region_name=region)\n\n    result = []\n\n    for i in quota_input:\n        #convert the ApiFilter to a list\n        #'[{\"Name\": \"vpc-endpoint-type\",\"Values\": [\"Gateway\"]}]'\n        filterList=''\n        if len(i.get('ApiFilter')) > 0:\n            filterList = json.loads(i.get('ApiFilter'))\n        #print(\"filter\", filterList)\n\n        #get quota\n        sq = sqClient.get_service_quota(\n            
ServiceCode=i.get('ServiceCode'),\n            QuotaCode=i.get('QuotaCode'))\n        quotaValue = sq['Quota']['Value']\n\n        #simple queries (Only one call to get the details)\n        if i.get('initialQuery') == '':\n            #find usage\n            res = aws_get_paginator(\n                ec2Client,\n                i.get('ApiName'),\n                i.get('ApiParam'),\n                Filters=filterList\n                )\n\n            #most of the time, all we need is the length (else)\n            if i.get('QuotaName')==\"NAT gateways per Availability Zone\":\n                #sample exception to the else rule\n                #count the NAT gateways per subnet (each subnet lives in one Availability Zone)\n                az_nat_gateway_count = {}\n                for nat_gateway in res:\n                    az = nat_gateway['SubnetId']\n                    if az in az_nat_gateway_count:\n                        az_nat_gateway_count[az] += 1\n                    else:\n                        az_nat_gateway_count[az] = 1\n\n                for gw, value in az_nat_gateway_count.items():\n                    percentage = value/quotaValue\n                    combinedData = {\n                        'Quota Name': i.get('QuotaName') + \": \" + gw,\n                        'Limit':quotaValue,\n                        'used': value,\n                        'percentage':percentage\n                        }\n                    result.append(combinedData)\n\n            else:\n                #most common default case\n                count = len(res)\n                percentage = count/quotaValue\n                combinedData = {\n                    'Quota Name': i.get('QuotaName'),\n                    'Limit':quotaValue,\n                    'used': count,\n  
                  'percentage':percentage\n                    }\n                result.append(combinedData)\n\n        #nested queries (get X per VPC or get y per network interface)\n        else:\n            #nested query for quota\n            #for example 'initialQuery': ['describe_vpcs','Vpcs', 'VpcId'] gets the list of VPCs,\n            #that we can then ask about each VPC\n            #turn the initialQuery string into a list\n            #'initialQuery': ['describe_vpcs','Vpcs', 'VpcId']\n            initialQuery = json.loads(i.get('initialQuery'))\n            initialQueryName = initialQuery[0]\n            initialQueryParam = initialQuery[1]\n            initialQueryFilter = initialQuery[2]\n\n            #initial Query\n            res = aws_get_paginator(ec2Client, initialQueryName, initialQueryParam)\n            #nested query\n            for j in res:\n\n                #most of the time, there will be a 2nd query, and the table will have\n                #an 'ApiName' value\n\n                #rebuild the filter, substituting this item's id for VARIABLE\n                variableReplace = j[initialQueryFilter]\n                filterList = i.get('ApiFilter')\n                filterList = filterList.replace(\"VARIABLE\", variableReplace)\n                filterList = json.loads(filterList)\n\n                res2 = aws_get_paginator(\n                    ec2Client,\n                    i.get('ApiName'),\n                    i.get('ApiParam'),\n                    Filters=filterList\n                    )\n\n                #most of the time we can just count the length of the response\n                count = len(res2)\n                if i.get('QuotaName') == \"Participant accounts per VPC\":\n                    print(\"this is an exception, and you'll have to write custom code here\")\n\n                percentage = count/quotaValue\n                quotaName = f\"{i.get('QuotaName')} for {j[initialQueryFilter]}\"\n                combinedData = {\n                    'Quota Name': quotaName,\n                    'Limit':quotaValue,\n                    'used': count,\n                    'percentage':percentage\n                    }\n                result.append(combinedData)\n\n\n    # all the data is now in a list called result\n    warning_result = []\n    threshold = warning_percentage/100\n    for quota in result:\n        if quota['percentage'] >= threshold:\n            #these two quotas are reported as sums and throw errors, so skip them\n            if quota['Quota Name'] != 'Inbound or outbound rules per security group':\n                if quota['Quota Name'] != 'Security groups per network interface':\n                    warning_result.append(quota)\n    return warning_result\n"
  },
  {
    "path": "AWS/legos/aws_service_quota_limits_vpc/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>AWS Service Quotas for VPC </h1>\n\n## Description\nThis Action compares usage for all VPC service quotas vs. your account's limit.  If any are above warning percentage - they will be output.\n\n\n## Lego Details\n\n    def aws_service_quota_limits_vpc(handle, region: str, warning_percentage: float) -> List:\n\n        handle: Object of type unSkript AWS Connector\n        warning_percentage: If % is above this value - the service will be output.\n        region: Region for instance.\n\n## Lego Input\nThis Lego take three inputs handle, region and warning_percentage.\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.jpg\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_service_quota_limits_vpc/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_service_quota_limits_vpc/aws_service_quota_limits_vpc.json",
    "content": "{\n  \"action_title\": \"AWS VPC service quota limit\",\n  \"action_description\": \"This Action queries all VPC Storage quotas, and returns all usage over warning_percentage.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_service_quota_limits_vpc\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_VPC\"]\n}"
  },
  {
    "path": "AWS/legos/aws_service_quota_limits_vpc/aws_service_quota_limits_vpc.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom __future__ import annotations\nimport pprint\nfrom typing import List\nimport json\nimport datetime\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\nfrom beartype import beartype\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    warning_percentage: float = Field(\n        50,\n        description='Percentage threshold for a warning.  For a complete list of quotas, use 0.',\n        title='warning_percentage',\n    )\n\n@beartype\ndef aws_service_quota_limits_vpc_printer(output):\n    if output is None:\n        return\n    pprint.pprint({\"Instances\": output})\n\n\n@beartype\n@beartype\ndef aws_service_quota_limits_vpc(handle, region: str, warning_percentage: float) -> List:\n\n\n    ## EC@ and VPCs\n\n    ec2Client = handle.client('ec2', region_name=region)\n    # List all VPCs in the specified region\n\n    q_table = [\n        #per region stats\n                #per region stats\n        {\n            'QuotaName':'VPCs Per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode':'L-F678F1CE',\n            'ApiName': 'describe_vpcs',\n            'ApiFilter' : '[]',\n            'ApiParam': 'Vpcs',\n            'initialQuery': ''\n        },\n        {\n            'QuotaName':'VPC security groups per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-E79EC296',\n            'ApiName': 'describe_security_groups',\n            'ApiFilter' :'[]',\n            'ApiParam': 'SecurityGroups',\n            'initialQuery': ''\n        },\n        {\n            'QuotaName':'Egress-only internet gateways per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-45FE3B85',\n            'ApiName': 'describe_egress_only_internet_gateways',\n            'ApiFilter' : '[]',\n            'ApiParam': 
'EgressOnlyInternetGateways',\n            'initialQuery': ''\n        },\n        {\n            'QuotaName':'Gateway VPC endpoints per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-1B52E74A',\n            'ApiName': 'describe_vpc_endpoints',\n            'ApiFilter' : '[{\"Name\": \"vpc-endpoint-type\",\"Values\": [\"Gateway\"]}]',\n            'ApiParam': 'VpcEndpoints',\n            'initialQuery': ''\n        },\n        {\n            'QuotaName':'Internet gateways per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-A4707A72',\n            'ApiName': 'describe_internet_gateways',\n            'ApiFilter' : '[]',\n            'ApiParam': 'InternetGateways',\n            'initialQuery': ''\n        },\n        {\n            'QuotaName':'Network interfaces per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-DF5E4CA3',\n            'ApiName': 'describe_network_interfaces',\n            'ApiFilter' : '[]',\n            'ApiParam': 'NetworkInterfaces',\n            'initialQuery': ''\n        },\n        #per VPC stats\n        {\n        'QuotaName':'Active VPC peering connections per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-7E9ECCDB',\n         'ApiName': 'describe_vpc_peering_connections', \n         'ApiFilter' : '[{\"Name\": \"status-code\",\"Values\": [\"active\"]}, {\"Name\": \"requester-vpc-info.vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'VpcPeeringConnections', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'Interface VPC endpoints per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-29B6F2EB',\n         'ApiName': 'describe_vpc_endpoints', \n         'ApiFilter' : '[{\"Name\": \"vpc-endpoint-type\",\"Values\": [\"Interface\"]}, {\"Name\": \"vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'VpcEndpoints', \n         'initialQuery': 
'[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'IPv4 CIDR blocks per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-83CA0A9D',\n         'ApiName': '', \n         'ApiFilter': '',\n         'ApiParam': 'CidrBlockAssociationSet', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':' Network ACLs per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-B4A6D682',\n         'ApiName': 'describe_network_acls', \n         'ApiFilter': '[{\"Name\": \"vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'NetworkAcls', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'Participant accounts per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-2C462E13',\n         'ApiName': 'describe_vpc_peering_connections', \n         'ApiFilter': '[{\"Name\": \"requester-vpc-info.vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'VpcPeeringConnections', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'Route tables per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-589F43AA',\n         'ApiName': 'describe_route_tables', \n         'ApiFilter': '[{\"Name\": \"vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'RouteTables', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'Subnets per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-407747CB',\n         'ApiName': 'describe_subnets', \n         'ApiFilter': '[{\"Name\": \"vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'Subnets', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'NAT gateways per Availability Zone',\n          'ServiceCode':'vpc',\n          'QuotaCode': 
'L-FE5A380F',\n          'ApiName': 'describe_nat_gateways', \n          'ApiFilter': '[]',\n          'ApiParam': 'NatGateways', \n          'initialQuery': ''\n        },\n        {\n        'QuotaName':'Inbound or outbound rules per security group',\n          'ServiceCode':'vpc',\n          'QuotaCode': 'L-0EA8095F',\n          'ApiName': 'describe_security_groups', \n          'ApiFilter': '[]',\n          'ApiParam': 'SecurityGroups', \n          'initialQuery': ''\n        },\n        {\n        'QuotaName':'Outstanding VPC peering connection requests',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-DC9F7029',\n         'ApiName': 'describe_vpc_peering_connections', \n         'ApiFilter': '[{\"Name\": \"status-code\", \"Values\": [\"pending-acceptance\"]}]',\n         'ApiParam': 'VpcPeeringConnections', \n         'initialQuery': ''\n        },\n        {\n        'QuotaName':'Routes per route table',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-93826ACB',\n         'ApiName': 'describe_route_tables', \n         'ApiFilter': '[]',\n         'ApiParam': 'RouteTables', \n         'initialQuery': ''\n        },\n        {\n        'QuotaName':'Rules per network ACL',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-2AEEBF1A',\n         'ApiName': 'describe_network_acls', \n         'ApiFilter': '[]',\n         'ApiParam': 'NetworkAcls', \n         'initialQuery': ''\n        },\n        {\n        'QuotaName':'Security groups per network interface',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-2AFB9258',\n         'ApiName': 'describe_network_interfaces', \n         'ApiFilter': '[]',\n         'ApiParam': 'NetworkInterfaces', \n         'initialQuery': ''\n        },\n        {\n        'QuotaName':'VPC peering connection request expiry hours',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-8312C5BB',\n         'ApiName': 'describe_vpc_peering_connections', \n         'ApiFilter': '[{\"Name\": 
\"expiration-time\"}]',\n         'ApiParam': 'VpcPeeringConnections', \n         'initialQuery': ''\n        }\n    ]\n    #print(q_table)\n    result = []\n\n    sqClient = handle.client('service-quotas',region_name=region)\n    for i in q_table:\n        #convert the ApiFilter to a list\n        #'[{\"Name\": \"vpc-endpoint-type\",\"Values\": [\"Gateway\"]}]'\n        filterList=''\n        if len(i.get('ApiFilter')) > 0:\n            filterList = json.loads(i.get('ApiFilter'))\n        #print(\"filter\", filterList)\n\n        #get quota\n        sq = sqClient.get_service_quota(\n            ServiceCode=i.get('ServiceCode'),\n            QuotaCode=i.get('QuotaCode'))\n        quotaValue =sq['Quota']['Value']\n\n        #simple queries (Only one call to get the details)\n        if i.get('initialQuery') == '':\n            #find usage\n            res = aws_get_paginator(\n                ec2Client,\n                i.get('ApiName'),\n                i.get('ApiParam'),\n                Filters=filterList\n                )\n\n            #most of the time, all we need is the length (else)\n            if i.get('QuotaName')==\"NAT gateways per Availability Zone\":\n                #count the subets per nat gateway\n                # Create a dictionary to store the count of NAT gateways for each Availability Zone\n                az_nat_gateway_count = {}\n                # Loop through each NAT gateway and count the number for each Availability Zone\n                for nat_gateway in res:\n                    az = nat_gateway['SubnetId']\n                    if az in az_nat_gateway_count:\n                        az_nat_gateway_count[az] += 1\n                    else:\n                        az_nat_gateway_count[az] = 1\n\n                for gw, value in az_nat_gateway_count.items():\n                    percentage = value/quotaValue\n                    combinedData = {\n                        'Quota Name': i.get('QuotaName') + \": \"+ gw ,\n               
         'Limit':quotaValue,\n                        'used': value,\n                        'percentage':percentage\n                        }\n                    result.append( combinedData)\n                    #print(combinedData)\n            if i.get('QuotaName')==\"Inbound or outbound rules per security group\":\n                for security_group in res:\n                    ruleCount = len(security_group['IpPermissions']) +len(security_group['IpPermissionsEgress'])\n                    percentage = ruleCount/quotaValue\n                    if len(i.get('QuotaName'))>0:\n                        combinedData = {\n                            'Quota Name': i.get('QuotaName') +\": \"+ security_group['GroupName'],\n                            'Limit':quotaValue,\n                            'used': ruleCount,\n                            'percentage':percentage\n                            }\n                        result.append(combinedData)\n                        #print(combinedData)\n            if i.get('QuotaName')==\"Routes per route table\":\n                for route_table in res:\n                    route_count = len(route_table['Routes'])\n                    route_table_id = route_table['RouteTableId']\n                    percentage = route_count/quotaValue\n                    combinedData = {\n                        'Quota Name': i.get('QuotaName') +\": \"+ route_table_id ,\n                        'Limit':quotaValue,\n                        'used': route_count,\n                        'percentage':percentage\n                        }\n                    result.append(  combinedData)\n                    #print(combinedData)\n            if i.get('QuotaName')==\"Rules per network ACL\":\n                for network_acl in res:\n                    rule_count = len(network_acl['Entries'])\n                    network_acl_id = network_acl['NetworkAclId']\n                    percentage = rule_count/quotaValue\n                    
combinedData = {\n                        'Quota Name': i.get('QuotaName') +\": \"+ network_acl_id ,\n                        'Limit':quotaValue,\n                        'used': rule_count,\n                        'percentage':percentage\n                        }\n                    result.append(  combinedData)\n                    #print(combinedData)\n            if i.get('QuotaName')==\"Security groups per network interface\":\n                for network_interface in res:\n                    security_group_count = len(network_interface['Groups'])\n                    network_interface_id = network_interface['NetworkInterfaceId']\n                    percentage = security_group_count/quotaValue\n                    if len(i.get('QuotaName'))>0:\n                        combinedData = {\n                            'Quota Name': i.get('QuotaName') +\": \"+ network_interface_id ,\n                            'Limit':quotaValue,\n                            'used': security_group_count,\n                            'percentage':percentage\n                            }\n                        result.append(combinedData)\n                        #print(combinedData)\n            if i.get('QuotaName')==\"VPC peering connection request expiry hours\":\n                if len(res)>0:\n                    for peering_connection in res:\n                        expiration_time = peering_connection['ExpirationTime']\n                        current_time = datetime.now(datetime.timezone.utc)\n                        time_remaining = expiration_time - current_time\n                        peering_connection_id = peering_connection['VpcPeeringConnectionId']\n                        percentage = time_remaining/quotaValue\n                        combinedData = {\n                            'Quota Name': i.get('QuotaName') +\": \"+ peering_connection_id ,\n                            'Limit':quotaValue,\n                            'used': time_remaining,\n             
               'percentage':percentage\n                            }\n                        result.append(combinedData)\n                        #print(combinedData)\n            else:\n                #most common default case\n                count = len(res)\n                percentage = count/quotaValue\n                combinedData = {\n                    'Quota Name': i.get('QuotaName'),\n                    'Limit':quotaValue,\n                    'used': count,\n                    'percentage':percentage\n                    }\n                result.append(  combinedData)\n                #print(combinedData)\n\n        #nested queries (get X per VPC or get y per network interface)\n        else:\n            #nested query for quota\n            #for example 'initialQuery': ['describe_vpcs','Vpcs', 'VpcId'] gets the list of VPCs,\n            # that we can then ask abour each VPC\n            #turn initalQuery string into a list\n            #'initialQuery': ['describe_vpcs','Vpcs', 'VpcId']\n            initialQuery = json.loads(i.get('initialQuery'))\n            initialQueryName = initialQuery[0]\n            initialQueryParam  = initialQuery[1]\n            initialQueryFilter = initialQuery[2]\n\n            #inital Query\n            res = aws_get_paginator(ec2Client, initialQueryName, initialQueryParam)\n            #print(res)\n            #nested query\n            for j in res:\n\n                #most of the time, there will be a 2nd query, and the table will have \n                #an 'ApiName' value\n                if len(i.get('ApiName')) >0:\n                    #rebuild filter\n                    #print(\"test\", j[initialQueryFilter])\n                    variableReplace = j[initialQueryFilter]\n                    filterList = i.get('ApiFilter')\n                    filterList = filterList.replace(\"VARIABLE\", variableReplace)\n                    filterList = json.loads(filterList)\n\n                    res2 = 
aws_get_paginator(\n                        ec2Client,\n                        i.get('ApiName'),\n                        i.get('ApiParam'),\n                        Filters=filterList\n                        )\n\n                    #most of the time we can just count the length of the response (else)\n                    if i.get('QuotaName') ==\"Participant accounts per VPC\":\n                        count =0\n                        #there can be zero peering conncetions....\n                        if len(res2) >0:\n                            for connection in res2:\n                                if len(connection['AccepterVpcInfo']['OwnerId']) >0:\n                                    count += 1\n                    else:\n                        count = len(res2)\n                else:\n                    #the value is in the first query, but we need to loop through it\n                    apiParam = i.get('ApiParam')    \n                    #print(apiParam, j[apiParam])\n                    count = len(j[apiParam])\n                percentage = count/quotaValue\n                #print(objectResult)\n                quotaName = f\"{i.get('QuotaName')} for {j[initialQueryFilter]}\"\n                combinedData = {\n                    'Quota Name': quotaName,\n                    'Limit':quotaValue,\n                    'used': count,\n                    'percentage':percentage\n                    }\n                result.append(combinedData)\n                #print(combinedData)\n\n\n    # all the data is now in a list called result\n    warning_result =[]\n    threshold = warning_percentage/100\n    for quota in result:\n        if quota['percentage'] >= threshold:\n            #there are two sums that appear, and throw errors.\n            if quota['Quota Name'] != 'Inbound or outbound rules per security group':\n                if quota['Quota Name'] != 'Security groups per network interface':\n                    warning_result.append(quota)\n 
   return warning_result\n"
  },
  {
    "path": "AWS/legos/aws_stop_instances/README.md",
    "content": "\nTBD\n"
  },
  {
    "path": "AWS/legos/aws_stop_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_stop_instances/aws_stop_instances.json",
    "content": "{\r\n    \"action_title\": \"Stop AWS Instances\",\r\n    \"action_description\": \"Stop an AWS Instance\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_stop_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_verbs\": [\r\n    \"stop\"\r\n    ],\r\n    \"action_nouns\": [\r\n    \"aws\",\r\n    \"instances\"\r\n    ],\r\n    \"action_is_remediation\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"]\r\n}"
  },
  {
    "path": "AWS/legos/aws_stop_instances/aws_stop_instances.py",
    "content": "##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    instance_id: str = Field(\n        title='Instance Id',\n        description='ID of the instance to be stopped.')\n    region: str = Field(\n        title='Region',\n        description='AWS Region of the instance.')\n\n\ndef aws_stop_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_stop_instances(handle, instance_id: str, region: str) -> Dict:\n    \"\"\"aws_stop_instances Stops instances.\n\n        :type instance_id: string\n        :param instance_id: String containing the name of AWS EC2 instance\n\n        :type region: string\n        :param region: AWS region for instance\n\n        :rtype: Dict with the stopped instances state info.\n    \"\"\"\n\n    ec2Client = handle.client('ec2', region_name=region)\n    output = {}\n    res = ec2Client.stop_instances(InstanceIds=[instance_id])\n    for instances in res['StoppingInstances']:\n        output[instances['InstanceId']] = instances['CurrentState']\n\n    return output\n"
  },
  {
    "path": "AWS/legos/aws_tag_ec2_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Tag AWS Instances</h1>\r\n\r\n## Description\r\nThis Lego Tags AWS Instances with a specific Key:Value pair.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_tag_ec2_instance(handle, instance: str, tag_key: str, tag_value: str, region:str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        instance: EC2 instance ids.\r\n        tag_key:  Key to attach to instance.\r\n        tag_value: Value to attach to instance.\r\n        region: Region to filter instances.\r\n\r\n\r\n## Lego Input\r\nThis Action takes in 4 inputs: handle, instance, region, tag_key and tag_value.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_tag_ec2_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_tag_ec2_instances/aws_tag_ec2_instances.json",
    "content": "{\r\n    \"action_title\": \"Tag AWS Instances\",\r\n    \"action_description\": \"Tag AWS Instances\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_tag_ec2_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_verbs\": [\"list\"],\r\n    \"action_nouns\": [\"instances\",\"AWS\"],\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"]\r\n  }\r\n\r\n\r\n  "
  },
  {
    "path": "AWS/legos/aws_tag_ec2_instances/aws_tag_ec2_instances.py",
    "content": "##  Copyright (c) 2023 unSkript, Inc\r\n## Written by Doug Sillars and ChatGPT\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import Dict\r\nfrom pydantic import BaseModel, Field\r\nfrom beartype import beartype\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    instance: str = Field(\r\n        title='EC2 Instance',\r\n        description='Name of the EC2 Instance.')\r\n    tag_key: str = Field(\r\n        title='Tag Key',\r\n        description='Key of the tag to be added.')\r\n    tag_value: str = Field(\r\n        title='Tag Value',\r\n        description='Value of the tag to be added.')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='Name of the AWS where the instance is located.')\r\n\r\n\r\n@beartype\r\ndef aws_tag_ec2_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\n@beartype\r\ndef aws_tag_ec2_instances(handle, instance: str, tag_key: str, tag_value: str, region:str) -> Dict:\r\n    ec2 = handle.client('ec2', region_name=region)\r\n    res = ec2.create_tags(Resources=[instance], Tags=[{'Key': tag_key, 'Value': tag_value}])\r\n    return res\r\n"
  },
  {
    "path": "AWS/legos/aws_target_group_list_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS List Instances in a ELBV2 Target Group </h1>\r\n\r\n## Description\r\nThis Lego list AWS Instances in a ELBV2 Target Group.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_target_group_list_instances(handle: object, arn: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        arn: ARN of the Target Group.\r\n\r\n## Lego Input\r\n\r\nThis Lego take two inputs handle and arn.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_target_group_list_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_target_group_list_instances/aws_target_group_list_instances.json",
    "content": "{\r\n    \"action_title\": \"AWS List Instances in a ELBV2 Target Group\",\r\n    \"action_description\": \"List AWS Instance in a ELBv2 Target Group\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_target_group_list_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\", \"CATEGORY_TYPE_AWS_ELB\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_target_group_list_instances/aws_target_group_list_instances.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.utils import parseARN\n\n\nclass InputSchema(BaseModel):\n    arn: str = Field(\n        title='Target Group ARN',\n        description='ARN of the Target Group.')\n\n\ndef aws_target_group_list_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_target_group_list_instances(handle, arn: str) -> List:\n    \"\"\"aws_target_group_list_instances List instances in a target group.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type arn: string\n        :param arn: ARN of the Target Group.\n\n        :rtype: List of instances with their IPs.\n    \"\"\"\n    # Input param validation.\n    # Get the region for the target group.\n    parsedArn = parseARN(arn)\n    elbv2Client = handle.client('elbv2', region_name=parsedArn['region'])\n    ec2Client = handle.client('ec2', region_name=parsedArn['region'])\n    try:\n        targetHealthResponse = elbv2Client.describe_target_health(\n            TargetGroupArn=arn\n        )\n    except Exception as e:\n        print(f'Hit exception getting the instance list: {str(e)}')\n        raise e\n\n    instancesInfo = []\n    for ins in targetHealthResponse[\"TargetHealthDescriptions\"]:\n        try:\n            privateIP = get_instance_private_ip(ec2Client, ins['Target']['Id'])\n        except Exception:\n            continue\n        instanceInfo = {\n            'InstanceID': ins['Target']['Id'],\n            'PrivateIP': privateIP\n        }\n        instancesInfo.append(instanceInfo)\n\n    return instancesInfo\n\n\ndef get_instance_private_ip(ec2Client, instanceID: str) -> str:\n    try:\n        resp = ec2Client.describe_instances(\n            Filters=[\n                {\n                    'Name': 'instance-id',\n            
        'Values': [instanceID]\n                }\n            ]\n        )\n    except Exception as e:\n        print(f'Failed to get instance details for {instanceID}, err: {str(e)}')\n        raise e\n\n    return resp['Reservations'][0]['Instances'][0]['PrivateIpAddress']\n"
  },
  {
    "path": "AWS/legos/aws_target_group_list_unhealthy_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS List Unhealthy Instances in a ELBV2 Target Group </h1>\r\n\r\n## Description\r\nThis Lego List AWS Unhealthy Instances in a ELBV2 Target Group.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_get_secret_from_secretmanager(handle: object, arn: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        arn: ARN of the Target Group.\r\n\r\n## Lego Input\r\n\r\nThis Lego take two inputs handle and arn.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_target_group_list_unhealthy_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_target_group_list_unhealthy_instances/aws_target_group_list_unhealthy_instances.json",
    "content": "{\r\n    \"action_title\": \" AWS List Unhealthy Instances in a ELBV2 Target Group\",\r\n    \"action_description\": \" List AWS Unhealthy Instance in a ELBv2 Target Group\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_target_group_list_unhealthy_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_ELB\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_target_group_list_unhealthy_instances/aws_target_group_list_unhealthy_instances.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.utils import parseARN\n\n\nclass InputSchema(BaseModel):\n    arn: str = Field(\n        title='Target Group ARN',\n        description='ARN of the Target Group.')\n\n\ndef aws_target_group_list_unhealthy_instances_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_target_group_list_unhealthy_instances(handle, arn: str) -> List:\n\n    \"\"\"aws_target_group_list_unhealthy_instances returns array of unhealthy instances\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type arn: string\n        :param arn: ARN of the Target Group.\n\n        :rtype: Returns array of unhealthy instances\n    \"\"\"\n    # Get the region for the target group.\n    parsedArn = parseARN(arn)\n    elbv2Client = handle.client('elbv2', region_name=parsedArn['region'])\n    try:\n        targetHealthResponse = elbv2Client.describe_target_health(\n            TargetGroupArn=arn\n        )\n    except Exception as e:\n        print(f'Hit exception getting the instance list: {str(e)}')\n        raise e\n\n    instancesInfo = []\n    for ins in targetHealthResponse[\"TargetHealthDescriptions\"]:\n        if ins['TargetHealth']['State'] in ['unhealthy']:\n            instancesInfo.append(ins)\n\n    return instancesInfo\n"
  },
  {
    "path": "AWS/legos/aws_target_group_register_unregister_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS Register/Unregister Instances from a Target Group </h1>\r\n\r\n## Description\r\nThis Lego Register/Unregister AWS Instance from a Target Group.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_target_group_register_unregister_instances(handle: object, arn: str, instance_ids: List, port: int,\r\n                                                   unregister: bool)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        arn: ARN of the Target Group.\r\n        instance_ids: List of instance IDs.\r\n        port: The port on which the instances are listening.\r\n        unregister: Check this if the instances need to be unregistered.\r\n\r\n## Lego Input\r\n\r\nThis Lego take five inputs handle, arn, instance_ids, port and unregister.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_target_group_register_unregister_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_target_group_register_unregister_instances/aws_target_group_register_unregister_instances.json",
    "content": "{\r\n    \"action_title\": \"AWS Register/Unregister Instances from a Target Group.\",\r\n    \"action_description\": \"Register/Unregister AWS Instance from a Target Group\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_target_group_register_unregister_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_ELB\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_target_group_register_unregister_instances/aws_target_group_register_unregister_instances.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.utils import parseARN\n\n\nclass InputSchema(BaseModel):\n    arn: str = Field(\n        title='Target Group ARN',\n        description='ARN of the Target Group.')\n    instance_ids: List[str] = Field(\n        title='Instance IDs',\n        description='List of instance IDs. For eg. [\"i-foo\", \"i-bar\"]')\n    port: int = Field(\n        title='Port',\n        description='The port on which the instances are listening.'\n    )\n    unregister: bool = Field(\n        False,\n        title='Unregister',\n        description='Check this if the instances need to be unregistered. By default, it is false.'\n    )\n\n\n#All legos should take inputParamsJson as the input.\n#They should assume the handle variable is defined already.\n\n\ndef aws_target_group_register_unregister_instances(handle, arn: str, instance_ids: List, port: int,\n                                                   unregister: bool = False) -> None:\n    \"\"\"aws_target_group_register_unregister_instances Allows register/unregister instances to a\n       target group.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type arn: string\n        :param arn: ARN of the Target Group.\n\n        :type instance_ids: list\n        :param instance_ids: List of instance IDs.\n\n        :type port: int\n        :param port: The port on which the instances are listening.\n\n        :type unregister: bool\n        :param unregister: Check this if the instances need to be unregistered.\n\n        :rtype: None\n    \"\"\"\n    # Input param validation.\n    # Get the region for the target group.\n    parsedArn = parseARN(arn)\n    elbv2Client = handle.client('elbv2', region_name=parsedArn['region'])\n    # Create the targets\n    targets = []\n    for i in instance_ids:\n        
targets.append({\n            'Id': i,\n            'Port': port,\n        })\n    try:\n        if unregister is True:\n            elbv2Client.deregister_targets(\n                TargetGroupArn=arn,\n                Targets=targets\n            )\n        else:\n            elbv2Client.register_targets(\n                TargetGroupArn=arn,\n                Targets=targets\n            )\n    except Exception as e:\n        print(f'Unable to register/unregister: {str(e)}')\n        raise e\n"
  },
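The boto3 `elbv2` calls above take keyword arguments, and `Targets` is a list of `{'Id', 'Port'}` dicts built from the instance IDs. A minimal sketch of that list construction (`build_targets` is a hypothetical helper, not part of the Action):

```python
from typing import Dict, List

def build_targets(instance_ids: List[str], port: int) -> List[Dict]:
    # Each target pairs an instance ID with the port it listens on:
    # the shape elbv2 register_targets/deregister_targets expect for Targets.
    return [{'Id': i, 'Port': port} for i in instance_ids]

targets = build_targets(["i-foo", "i-bar"], 8080)
# → [{'Id': 'i-foo', 'Port': 8080}, {'Id': 'i-bar', 'Port': 8080}]
```

The client call itself then uses keyword arguments, e.g. `elbv2Client.register_targets(TargetGroupArn=arn, Targets=targets)`.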
  {
    "path": "AWS/legos/aws_terminate_ec2_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Terminate AWS Instances </h1>\r\n\r\n## Description\r\nThis Lego Terminate AWS Instances.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_terminate_instance(handle: object, instance_ids: List, region: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        instance_ids: List contains instance ids.\r\n        region: Region to filter instances.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, instance_ids and region.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_terminate_ec2_instances/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_terminate_ec2_instances/aws_terminate_ec2_instances.json",
    "content": "{\r\n    \"action_title\": \"Terminate AWS EC2 Instances\",\r\n    \"action_description\": \"This Action will Terminate AWS EC2 Instances\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_terminate_ec2_instances\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_remediation\":true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_EC2\"]\r\n  }\r\n\r\n\r\n  \r\n"
  },
  {
    "path": "AWS/legos/aws_terminate_ec2_instances/aws_terminate_ec2_instances.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\nimport pprint\r\nfrom typing import List, Dict\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    instance_ids: List[str] = Field(\r\n        title='Instance IDs',\r\n        description='List of instance IDs. For eg. [\"i-foo\", \"i-bar\"]')\r\n    region: str = Field(\r\n        title='Region',\r\n        description='AWS Region of the instance.')\r\n\r\n\r\ndef aws_terminate_ec2_instances_printer(output):\r\n    if output is None:\r\n        return\r\n    pprint.pprint(output)\r\n\r\n\r\ndef aws_terminate_ec2_instances(handle, instance_ids: List, region: str) -> Dict:\r\n    \"\"\"aws_terminate_ec2_instances Returns an Dict of info terminated instance.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type instance_ids: List\r\n        :param instance_ids: Tag to filter Instances.\r\n\r\n        :type region: string\r\n        :param region: Used to filter the instance for specific region.\r\n\r\n        :rtype: Dict of info terminated instance.\r\n    \"\"\"\r\n    ec2Client = handle.client('ec2', region_name=region)\r\n    res = ec2Client.terminate_instances(InstanceIds=instance_ids)\r\n\r\n    return res\r\n"
  },
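`terminate_instances` returns a dict whose `TerminatingInstances` list reports each instance's state transition. A sketch of pulling the new states out of that response (the response data here is illustrative sample data, not live output):

```python
# Illustrative shape of an ec2.terminate_instances response (sample data).
sample_response = {
    "TerminatingInstances": [
        {"InstanceId": "i-foo",
         "CurrentState": {"Code": 32, "Name": "shutting-down"},
         "PreviousState": {"Code": 16, "Name": "running"}},
    ]
}

def current_states(res):
    # Map each instance ID to the state it transitioned into.
    return {t["InstanceId"]: t["CurrentState"]["Name"]
            for t in res.get("TerminatingInstances", [])}

states = current_states(sample_response)
# → {'i-foo': 'shutting-down'}
```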
  {
    "path": "AWS/legos/aws_update_access_key/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>AWS Delete Access Key</h1>\n\n## Description\nThis Lego updates the status of an old Access Key to Inactive.\n\n\n## Lego Details\n\n    aws_update_access_key(handle,aws_username: str)\n\n        handle: Object of type unSkript AWS Connector.\n        aws_access_key_id: String, Old Access Key ID of the User.\n        status: AccessKeyStatus, Status to set for the Access Key Eg:Active or Inactive\n\n\n## Lego Input\nThis Lego take three inputs handle, aws_access_key_id, and status.\n\n## Lego Output\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_update_access_key/__init__.py",
    "content": "##\n##  Copyright (c) 2022 unSkript, Inc\n##  All rights reserved.\n##"
  },
  {
    "path": "AWS/legos/aws_update_access_key/aws_update_access_key.json",
    "content": "{\r\n    \"action_title\": \"AWS Update Access Key\",\r\n    \"action_description\": \"Update status of the Access Key\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_update_access_key\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\": false,\r\n    \"action_is_remediation\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_IAM\"]\r\n  }\r\n"
  },
  {
    "path": "AWS/legos/aws_update_access_key/aws_update_access_key.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.enums.aws_access_key_enums import AccessKeyStatus\n\n\nclass InputSchema(BaseModel):\n    aws_username: str = Field(\n        title=\"Username\",\n        description=\"Username of the IAM User\"\n    )\n    aws_access_key_id: str = Field(\n        title=\"Access Key ID\",\n        description=\"Old Access Key ID of the User\"\n    )\n    status: AccessKeyStatus = Field(\n        title=\"Status\",\n        description=\"Status to set for the Access Key\"\n    )\n\n\ndef aws_update_access_key_printer(output):\n    if output is None:\n        return\n    pprint.pprint(\"Access Key status successfully changed\")\n    pprint.pprint(output)\n\n\ndef aws_update_access_key(\n    handle,\n    aws_username: str,\n    aws_access_key_id: str,\n    status: AccessKeyStatus\n) -> Dict:\n    \"\"\"aws_update_access_key updates the status of an access key to Inactive/Active\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :type aws_username: str\n        :param aws_username: Username of the IAM user to be looked up\n\n        :type aws_access_key_id: str\n        :param aws_access_key_id: Old Access Key ID of the user of which the status\n        needs to be updated\n\n        :type status: AccessKeyStatus\n        :param status: Status to set for the Access Key\n\n        :rtype: Result Dictionary of result\n    \"\"\"\n    iamClient = handle.client('iam')\n    result = iamClient.update_access_key(\n        UserName=aws_username,\n        AccessKeyId=aws_access_key_id,\n        Status=status\n        )\n    retVal = {}\n    temp_list = []\n    for key, value in result.items():\n        if key not in temp_list:\n            temp_list.append(key)\n            retVal[key] = value\n    return retVal\n"
  },
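IAM's `update_access_key` accepts exactly two status values, `Active` and `Inactive`. A small sketch of validating that input before making the call (`normalize_status` and `VALID_STATUSES` are hypothetical helpers; the Action itself relies on the `AccessKeyStatus` enum for this):

```python
VALID_STATUSES = ("Active", "Inactive")

def normalize_status(status) -> str:
    # Accept either a plain string or an enum-like object with a .value,
    # and check it against the statuses IAM update_access_key accepts.
    value = getattr(status, "value", status)
    if value not in VALID_STATUSES:
        raise ValueError(f"status must be one of {VALID_STATUSES}, got {value!r}")
    return value

status = normalize_status("Inactive")
# → 'Inactive'
```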
  {
    "path": "AWS/legos/aws_update_ttl_for_route53_records/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>AWS Update TTL for Route53 Record</h1>\n\n## Description\nUpdate TTL for an existing record in a hosted zone.\n\n## Lego Details\n\taws_update_ttl_for_route53_records(handle, hosted_zone_id: str, record_name: str, record_type:str, new_ttl:int )\n\n\t\thandle: Object of type unSkript AWS Connector.\n\t\thosted_zone_id: ID of the hosted zone in Route53\n\t\trecord_name: Name of record in a hosted zone. Eg: example.com\n\t\trecord_type: Record Type of the record.\n\t\tnew_ttl: New TTL value for a record. Eg: 300\n\n## Lego Input\nThis Lego takes inputs handle, hosted_zone_id, record_name, record_type, new_ttl\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_update_ttl_for_route53_records/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_update_ttl_for_route53_records/aws_update_ttl_for_route53_records.json",
    "content": "{\n  \"action_title\": \"AWS Update TTL for Route53 Record\",\n  \"action_description\": \"Update TTL for an existing record in a hosted zone.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_update_ttl_for_route53_records\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_COST_OPT\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_ROUTE53\"]\n}"
  },
  {
    "path": "AWS/legos/aws_update_ttl_for_route53_records/aws_update_ttl_for_route53_records.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.enums.aws_route53_record_type_enums import Route53RecordType\n\n\nclass InputSchema(BaseModel):\n    hosted_zone_id: str = Field(\n        ..., description='ID of the hosted zone in Route53', title='Hosted Zone ID'\n    )\n    new_ttl: int = Field(\n        ..., description='New TTL value for a record. Eg: 300', title='New TTL'\n    )\n    record_name: str = Field(\n        ...,\n        description='Name of record in a hosted zone. Eg: example.com',\n        title='Record Name',\n    )\n    record_type: Route53RecordType = Field(\n        ..., description='Record Type of the record.', title='Record Type'\n    )\n\n\ndef aws_update_ttl_for_route53_records_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef aws_update_ttl_for_route53_records(\n        handle,\n        hosted_zone_id: str,\n        record_name: str,\n        record_type:Route53RecordType,\n        new_ttl:int\n        ) -> Dict:\n    \"\"\"aws_update_ttl_for_route53_records updates the TTL for a Route53 record in a hosted zone.\n\n        :type handle: object\n        :param handle: Object returned by the task.validate(...) method.\n\n        :type hosted_zone_id: string\n        :param hosted_zone_id: ID of the hosted zone in Route53\n\n        :type record_name: string\n        :param record_name: Name of record in a hosted zone. Eg: example.com\n\n        :type record_type: string\n        :param record_type: Record Type of the record.\n\n        :type new_ttl: int\n        :param new_ttl: New TTL value for a record. 
Eg: 300\n\n        :rtype: Dict with the response of the TTL update\n    \"\"\"\n\n    route53Client = handle.client('route53')\n    new_ttl_value = int(new_ttl)\n\n    response = route53Client.change_resource_record_sets(\n        HostedZoneId=hosted_zone_id,\n        ChangeBatch={\n            'Changes': [\n                {\n                    'Action': 'UPSERT',\n                    'ResourceRecordSet': {\n                        'Name': record_name,\n                        'Type': record_type,\n                        'TTL': new_ttl_value\n                    }\n                }\n            ]\n        }\n    )\n    return response\n"
  },
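One caveat with Route53 UPSERTs: for non-alias records the `ResourceRecordSet` typically must resend the existing `ResourceRecords` alongside the new TTL, or the call is rejected. A sketch of building such a ChangeBatch (`build_ttl_change_batch` and the values are hypothetical, not part of the Action):

```python
def build_ttl_change_batch(record_name, record_type, ttl, values):
    # ChangeBatch shape for change_resource_record_sets; an UPSERT on a
    # non-alias record resends the record values alongside the new TTL.
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": record_type,
                "TTL": ttl,
                "ResourceRecords": [{"Value": v} for v in values],
            },
        }]
    }

batch = build_ttl_change_batch("example.com", "A", 300, ["192.0.2.1"])
```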
  {
    "path": "AWS/legos/aws_upload_file_to_s3/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Upload file to S3 </h1>\r\n\r\n## Description\r\nThis Lego Upload a local file to S3.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_upload_file_to_s3(handle: object, bucketName: str, file: __file__, prefix: str)\r\n\r\n        handle: Object of type unSkript AWS Connector.\r\n        bucketName: Name of the bucket to upload into.\r\n        file: Name of the local file to upload into bucket.\r\n        prefix: Prefix to attach to get the final object name to be used in the bucket.\r\n\r\n## Lego Input\r\n\r\nThis Lego take four inputs handle, bucketName, file and prefix.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "AWS/legos/aws_upload_file_to_s3/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_upload_file_to_s3/aws_upload_file_to_s3.json",
    "content": "{\r\n    \"action_title\": \"Upload file to S3\",\r\n    \"action_description\": \"Upload a local file to S3\",\r\n    \"action_type\": \"LEGO_TYPE_AWS\",\r\n    \"action_entry_function\": \"aws_upload_file_to_s3\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_AWS_S3\"]\r\n}\r\n    "
  },
  {
    "path": "AWS/legos/aws_upload_file_to_s3/aws_upload_file_to_s3.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    bucketName: str = Field(\n        title='Bucket',\n        description='Name of the bucket to upload into.')\n    file: str = Field(\n        title='File',\n        description='Name of the local file to upload into bucket. Eg /tmp/file-to-upload')\n    prefix: str = Field(\n        default=\"\",\n        title='Prefix',\n        description='Prefix to attach to get the final object name to be used in the bucket.')\n\n\ndef aws_upload_file_to_s3_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef aws_upload_file_to_s3(handle, bucketName: str, file: __file__, prefix: str = \"\") -> str:\n    \"\"\"aws_get_unhealthy_instances returns array of unhealthy instances\n\n     :type handle: object\n     :param handle: Object returned from task.validate(...).\n\n     :type bucketName: string\n     :param bucketName: Name of the bucket to upload into.\n\n     :type file: __file__\n     :param file: Name of the local file to upload into bucket.\n\n     :type prefix: string\n     :param prefix: Prefix to attach to get the final object name to be used in the bucket.\n\n     :rtype: Returns array of unhealthy instances\n    \"\"\"\n    s3 = handle.client('s3')\n    objName = prefix + file.split(\"/\")[-1]\n    try:\n        with open(file, \"rb\") as f:\n            s3.upload_fileobj(f, bucketName, objName)\n    except Exception as e:\n        raise e\n    return f\"Successfully copied {file} to bucket:{bucketName} object:{objName}\"\n"
  },
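The object key this Action writes is the basename of the local file with the optional prefix prepended. A sketch of that derivation in isolation (`s3_object_name` is a hypothetical helper mirroring the one-liner in the Action):

```python
def s3_object_name(local_path: str, prefix: str = "") -> str:
    # Keep only the basename of the local file and prepend the
    # optional prefix to form the S3 object key.
    return prefix + local_path.split("/")[-1]

key = s3_object_name("/tmp/file-to-upload", "backups/")
# → 'backups/file-to-upload'
```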
  {
    "path": "AWS/legos/aws_vpc_service_quota_warning/README.md",
    "content": "\r\n[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>AWS VPC Service Quota Warning </h1>\r\n\r\n## Description\r\nThis Action compares the AWS service quota for all VPC Services against the usage.  If the alert is over the warning threshold, the data is output as a LIst.\r\n\r\n\r\n## Lego Details\r\n\r\n    aws_vpc_service_quota_warning_v1(handle, region: str, warning_percentage: float) -> List\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        warning_percentage: Percentage. If usage/quota is over this value - it will be added to the output.\r\n        region: Location of the S3 buckets.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, warning_percentage and region.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.jpg\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n\r\n"
  },
  {
    "path": "AWS/legos/aws_vpc_service_quota_warning/__init__.py",
    "content": ""
  },
  {
    "path": "AWS/legos/aws_vpc_service_quota_warning/aws_vpc_service_quota_warning.json",
    "content": "{\n  \"action_title\": \"AWS_VPC_service_quota_warning\",\n  \"action_description\": \"Given an AWS Region and a warning percentage, this Action queries all VPC quota limits, and returns any of Quotas that are over the alert value.\",\n  \"action_type\": \"LEGO_TYPE_AWS\",\n  \"action_entry_function\": \"aws_vpc_service_quota_warning\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_VPC\"]\n}"
  },
  {
    "path": "AWS/legos/aws_vpc_service_quota_warning/aws_vpc_service_quota_warning.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom __future__ import annotations\nimport pprint\nimport json\nimport datetime\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.aws import aws_get_paginator\nfrom beartype import beartype\n\n\nclass InputSchema(BaseModel):\n    region: str = Field(..., description='AWS Region.', title='Region')\n    warning_percentage: float = Field(\n        50,\n        description='Percentage threshold for a warning.  For a complete list of quotas, use 0.',\n        title='warning_percentage',\n    )\n\n@beartype\ndef aws_vpc_service_quota_warning_printer(output):\n    if output is None:\n        return\n    pprint.pprint({\"Instances\": output})\n\n\n@beartype\n@beartype\ndef aws_vpc_service_quota_warning(handle, region: str, warning_percentage: float) -> List:\n\n\n    ## EC@ and VPCs\n\n    ec2Client = handle.client('ec2', region_name=region)\n    # List all VPCs in the specified region\n\n    q_table = [ \n        #per region stats\n                #per region stats\n        {\n            'QuotaName':'VPCs Per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-F678F1CE',\n            'ApiName': 'describe_vpcs',\n            'ApiFilter' : '[]',\n            'ApiParam': 'Vpcs',\n            'initialQuery': ''\n        },\n        {\n            'QuotaName':'VPC security groups per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-E79EC296',\n            'ApiName': 'describe_security_groups',\n            'ApiFilter' :'[]',\n            'ApiParam': 'SecurityGroups',\n            'initialQuery': ''\n        },\n        {\n            'QuotaName':'Egress-only internet gateways per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-45FE3B85',\n            'ApiName': 'describe_egress_only_internet_gateways',\n            'ApiFilter' : '[]',\n            'ApiParam': 
'EgressOnlyInternetGateways',\n            'initialQuery': ''\n        },\n        {\n            'QuotaName':'Gateway VPC endpoints per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-1B52E74A',\n            'ApiName': 'describe_vpc_endpoints',\n            'ApiFilter' : '[{\"Name\": \"vpc-endpoint-type\",\"Values\": [\"Gateway\"]}]',\n            'ApiParam': 'VpcEndpoints',\n            'initialQuery': ''\n        },\n        {\n            'QuotaName':'Internet gateways per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-A4707A72',\n            'ApiName': 'describe_internet_gateways',\n            'ApiFilter' : '[]',\n            'ApiParam': 'InternetGateways',\n            'initialQuery': ''\n        },\n        {\n            'QuotaName':'Network interfaces per Region',\n            'ServiceCode':'vpc',\n            'QuotaCode': 'L-DF5E4CA3',\n            'ApiName': 'describe_network_interfaces',\n            'ApiFilter' : '[]',\n            'ApiParam': 'NetworkInterfaces',\n            'initialQuery': ''\n        },\n        #per VPC stats\n        {\n        'QuotaName':'Active VPC peering connections per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-7E9ECCDB',\n         'ApiName': 'describe_vpc_peering_connections', \n         'ApiFilter' : '[{\"Name\": \"status-code\",\"Values\": [\"active\"]}, {\"Name\": \"requester-vpc-info.vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'VpcPeeringConnections', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'Interface VPC endpoints per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-29B6F2EB',\n         'ApiName': 'describe_vpc_endpoints', \n         'ApiFilter' : '[{\"Name\": \"vpc-endpoint-type\",\"Values\": [\"Interface\"]}, {\"Name\": \"vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'VpcEndpoints', \n         'initialQuery': 
'[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'IPv4 CIDR blocks per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-83CA0A9D',\n         'ApiName': '', \n         'ApiFilter': '',\n         'ApiParam': 'CidrBlockAssociationSet', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':' Network ACLs per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-B4A6D682',\n         'ApiName': 'describe_network_acls', \n         'ApiFilter': '[{\"Name\": \"vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'NetworkAcls', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'Participant accounts per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-2C462E13',\n         'ApiName': 'describe_vpc_peering_connections', \n         'ApiFilter': '[{\"Name\": \"requester-vpc-info.vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'VpcPeeringConnections', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'Route tables per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-589F43AA',\n         'ApiName': 'describe_route_tables', \n         'ApiFilter': '[{\"Name\": \"vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'RouteTables', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'Subnets per VPC',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-407747CB',\n         'ApiName': 'describe_subnets', \n         'ApiFilter': '[{\"Name\": \"vpc-id\",\"Values\": [\"VARIABLE\"]}]',\n         'ApiParam': 'Subnets', \n         'initialQuery': '[\"describe_vpcs\",\"Vpcs\", \"VpcId\"]'\n        },\n        {\n        'QuotaName':'NAT gateways per Availability Zone',\n          'ServiceCode':'vpc',\n          'QuotaCode': 
'L-FE5A380F',\n          'ApiName': 'describe_nat_gateways', \n          'ApiFilter': '[]',\n          'ApiParam': 'NatGateways', \n          'initialQuery': ''\n        },\n        {\n        'QuotaName':'Inbound or outbound rules per security group',\n          'ServiceCode':'vpc',\n          'QuotaCode': 'L-0EA8095F',\n          'ApiName': 'describe_security_groups', \n          'ApiFilter': '[]',\n          'ApiParam': 'SecurityGroups', \n          'initialQuery': ''\n        },\n        {\n        'QuotaName':'Outstanding VPC peering connection requests',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-DC9F7029',\n         'ApiName': 'describe_vpc_peering_connections', \n         'ApiFilter': '[{\"Name\": \"status-code\", \"Values\": [\"pending-acceptance\"]}]',\n         'ApiParam': 'VpcPeeringConnections', \n         'initialQuery': ''\n        },\n        {\n        'QuotaName':'Routes per route table',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-93826ACB',\n         'ApiName': 'describe_route_tables', \n         'ApiFilter': '[]',\n         'ApiParam': 'RouteTables', \n         'initialQuery': ''\n        },\n        {\n        'QuotaName':'Rules per network ACL',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-2AEEBF1A',\n         'ApiName': 'describe_network_acls', \n         'ApiFilter':'[]',\n         'ApiParam': 'NetworkAcls', \n         'initialQuery': ''\n        },\n        {\n        'QuotaName':'Security groups per network interface',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-2AFB9258',\n         'ApiName': 'describe_network_interfaces', \n         'ApiFilter': '[]',\n         'ApiParam': 'NetworkInterfaces', \n         'initialQuery': ''\n        },\n        {\n            \n        'QuotaName':'VPC peering connection request expiry hours',\n         'ServiceCode':'vpc',\n         'QuotaCode': 'L-8312C5BB',\n         'ApiName': 'describe_vpc_peering_connections', \n         'ApiFilter': '[{\"Name\": 
\"expiration-time\"}]',\n         'ApiParam': 'VpcPeeringConnections', \n         'initialQuery': ''\n        }\n    ]\n    #print(q_table)\n    result = []\n\n    sqClient = handle.client(\n        'service-quotas',\n        region_name=region\n        )\n    for i in q_table:   \n        #convert the ApiFilter to a list\n        #'[{\"Name\": \"vpc-endpoint-type\",\"Values\": [\"Gateway\"]}]'\n        filterList=''\n        if len(i.get('ApiFilter')) > 0:\n            filterList = json.loads(i.get('ApiFilter'))\n        #print(\"filter\", filterList)\n\n        #get quota\n        sq = sqClient.get_service_quota(\n            ServiceCode=i.get('ServiceCode'),\n            QuotaCode=i.get('QuotaCode'))\n        quotaValue =sq['Quota']['Value']\n\n        #simple queries (Only one call to get the details)\n        if i.get('initialQuery') == '':\n            #find usage\n            res = aws_get_paginator(\n                ec2Client,\n                i.get('ApiName'),\n                i.get('ApiParam'),\n                Filters=filterList\n                )\n\n            #most of the time, all we need is the length (else)\n            if i.get('QuotaName')==\"NAT gateways per Availability Zone\":\n                #count the subets per nat gateway\n                # Create a dictionary to store the count of NAT gateways for each Availability Zone\n                az_nat_gateway_count = {}\n                # Loop through each NAT gateway and count the number for each Availability Zone\n                for nat_gateway in res:\n                    az = nat_gateway['SubnetId']\n                    if az in az_nat_gateway_count:\n                        az_nat_gateway_count[az] += 1\n                    else:\n                        az_nat_gateway_count[az] = 1\n\n                for gw, value in az_nat_gateway_count.items():\n                    percentage = value/quotaValue\n                    combinedData = {\n                        'Quota Name': 
i.get('QuotaName') + \": \"+ gw ,\n                        'Limit':quotaValue,\n                        'used': value,\n                        'percentage':percentage\n                        }\n                    result.append( combinedData)\n                    #print(combinedData)\n            elif i.get('QuotaName')==\"Inbound or outbound rules per security group\":\n                for security_group in res:\n                    ruleCount = len(security_group['IpPermissions']) +len(security_group['IpPermissionsEgress'])\n                    percentage = ruleCount/quotaValue\n                    if len(i.get('QuotaName'))>0:\n                        combinedData = {\n                            'Quota Name': i.get('QuotaName') +\": \"+ security_group['GroupName'] ,\n                            'Limit':quotaValue,\n                            'used': ruleCount,\n                            'percentage':percentage\n                            }\n                        result.append(combinedData)\n                        #print(combinedData)\n            elif i.get('QuotaName')==\"Routes per route table\":\n                for route_table in res:\n                    route_count = len(route_table['Routes'])\n                    route_table_id = route_table['RouteTableId']\n                    percentage = route_count/quotaValue\n                    combinedData = {\n                        'Quota Name': i.get('QuotaName') +\": \"+ route_table_id ,\n                        'Limit':quotaValue,\n                        'used': route_count,\n                        'percentage':percentage\n                        }\n                    result.append(  combinedData)\n                    #print(combinedData)\n            elif i.get('QuotaName')==\"Rules per network ACL\":\n                for network_acl in res:\n                    rule_count = len(network_acl['Entries'])\n                    network_acl_id = network_acl['NetworkAclId']\n                    percentage = rule_count/quotaValue\n                    combinedData = {\n                        'Quota Name': i.get('QuotaName') +\": \"+ network_acl_id ,\n                        'Limit':quotaValue,\n                        'used': rule_count,\n                        'percentage':percentage\n                        }\n                    result.append(  combinedData)\n                    #print(combinedData)\n            elif i.get('QuotaName')==\"Security groups per network interface\":\n                for network_interface in res:\n                    security_group_count = len(network_interface['Groups'])\n                    network_interface_id = network_interface['NetworkInterfaceId']\n                    percentage = security_group_count/quotaValue\n                    if len(i.get('QuotaName'))>0:\n                        combinedData = {\n                            'Quota Name': i.get('QuotaName') +\": \"+ network_interface_id ,\n                            'Limit':quotaValue,\n                            'used': security_group_count,\n                            'percentage':percentage\n                            }\n                        result.append(combinedData)\n                        #print(combinedData)\n            elif i.get('QuotaName')==\"VPC peering connection request expiry hours\":\n                if len(res)>0:\n                    for peering_connection in res:\n                        expiration_time = peering_connection['ExpirationTime']\n                        current_time = datetime.datetime.now(datetime.timezone.utc)\n                        time_remaining = expiration_time - current_time\n                        peering_connection_id = peering_connection['VpcPeeringConnectionId']\n                        percentage = (time_remaining.total_seconds() / 3600) / quotaValue\n                        combinedData = {\n                            'Quota Name': i.get('QuotaName') +\": \"+ peering_connection_id ,\n                            'Limit':quotaValue,\n                       
     'used': time_remaining,\n                            'percentage':percentage\n                            }\n                        result.append(combinedData)\n            else:\n                #most common default case\n                count = len(res)\n                percentage = count/quotaValue\n                combinedData = {\n                    'Quota Name': i.get('QuotaName'),\n                    'Limit':quotaValue,\n                    'used': count,\n                    'percentage':percentage\n                    }\n                result.append(combinedData)\n\n        #nested queries (get X per VPC or get Y per network interface)\n        else:\n            #nested query for quota\n            #for example 'initialQuery': ['describe_vpcs','Vpcs', 'VpcId'] gets\n            #the list of VPCs, that we can then ask about each VPC\n            #turn the initialQuery string into a list\n            #'initialQuery': ['describe_vpcs','Vpcs', 'VpcId']\n            initialQuery = json.loads(i.get('initialQuery'))\n            initialQueryName = initialQuery[0]\n            initialQueryParam = initialQuery[1]\n            initialQueryFilter = initialQuery[2]\n\n            #initial query\n            res = aws_get_paginator(ec2Client, initialQueryName, initialQueryParam)\n            #nested query\n            for j in res:\n\n                #most of the time, there will be a 2nd query, and the table will\n                # have an 'ApiName' value\n                if len(i.get('ApiName')) > 0:\n                    #rebuild filter\n                    variableReplace = j[initialQueryFilter]\n                    filterList = i.get('ApiFilter')\n                    filterList = filterList.replace(\"VARIABLE\", variableReplace)\n                    filterList = json.loads(filterList)\n\n                    res2 = aws_get_paginator(\n                        ec2Client,\n                        i.get('ApiName'),\n                        i.get('ApiParam'),\n                        Filters=filterList\n                        )\n\n                    #most of the time we can just count the length of the response (else)\n                    if i.get('QuotaName') == \"Participant accounts per VPC\":\n                        count = 0\n                        #there can be zero peering connections\n                        if len(res2) > 0:\n                            for connection in res2:\n                                if len(connection['AccepterVpcInfo']['OwnerId']) > 0:\n                                    count += 1\n                    else:\n                        count = len(res2)\n                else:\n                    #the value is in the first query, but we need to loop through it\n                    apiParam = i.get('ApiParam')\n                    count = len(j[apiParam])\n                percentage = count/quotaValue\n                quotaName = f\"{i.get('QuotaName')} for {j[initialQueryFilter]}\"\n                combinedData = {\n                    'Quota Name': quotaName,\n                    'Limit':quotaValue,\n                    'used': count,\n                    'percentage':percentage\n                    }\n                result.append(combinedData)\n\n\n    # all the data is now in a list called result\n    warning_result = []\n    threshold = warning_percentage/100\n    for quota in result:\n        if quota['percentage'] >= threshold:\n            #two quota names report summed values and throw errors, so skip them\n            if quota['Quota Name'] != 'Inbound or outbound rules per security group':\n                if quota['Quota Name'] != 'Security groups per network interface':\n                    warning_result.append(quota)\n    return warning_result\n"
  },
  {
    "path": "Airflow/README.md",
    "content": "\n# Airflow Actions\n* [Get Status for given DAG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_check_dag_status/README.md): Get Status for given DAG\n* [Get Airflow handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_get_handle/README.md): Get Airflow handle\n* [List DAG runs for given DagID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_list_DAG_runs/README.md): List DAG runs for given DagID\n* [Airflow trigger DAG run](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_trigger_dag_run/README.md): Airflow trigger DAG run\n"
  },
  {
    "path": "Airflow/__init__.py",
    "content": ""
  },
  {
    "path": "Airflow/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Airflow/legos/airflow_check_dag_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Status for given DAG</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the status of a given DAG.\r\n\r\n\r\n## Lego Details\r\n    airflow_check_dag_status(handle: object, dag_id: str)\r\n\r\n        handle: Object of type unSkript AirFlow Connector\r\n        dag_id: The DAG ID.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and dag_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Airflow/legos/airflow_check_dag_status/__init__.py",
    "content": ""
  },
  {
    "path": "Airflow/legos/airflow_check_dag_status/airflow_check_dag_status.json",
    "content": "{\r\n    \"action_title\": \"Get Status for given DAG\",\r\n    \"action_description\": \"Get Status for given DAG\",\r\n    \"action_type\": \"LEGO_TYPE_AIRFLOW\",\r\n    \"action_entry_function\": \"airflow_check_dag_status\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AIRFLOW\" ]\r\n}"
  },
  {
    "path": "Airflow/legos/airflow_check_dag_status/airflow_check_dag_status.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\n\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    dag_id: Optional[str] = Field(\n        None,\n        title='Dag ID',\n        description='The DAG ID')\n\n\ndef airflow_check_dag_status_printer(output):\n    if output is None:\n        return\n    pp.pprint(output)\n\n\ndef airflow_check_dag_status(handle, dag_id: str = \"\") -> Dict:\n    \"\"\"airflow_check_dag_status checks the status of the given DAG.\n\n        :type handle: object\n        :param handle: Object of type unSkript AirFlow Connector.\n\n        :type dag_id: str\n        :param dag_id: The DAG ID.\n\n        :rtype: Dict of DAG status\n    \"\"\"\n    dag = handle.check_dag_status(dag_id=dag_id if dag_id else None)\n    return dag\n"
  },
  {
    "path": "Airflow/legos/airflow_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Airflow handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the Airflow handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    airflow_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript AirFlow Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Airflow/legos/airflow_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Airflow/legos/airflow_get_handle/airflow_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Airflow handle\",\r\n    \"action_description\": \"Get Airflow handle\",\r\n    \"action_type\": \"LEGO_TYPE_AIRFLOW\",\r\n    \"action_entry_function\": \"airflow_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AIRFLOW\" ]\r\n\r\n}\r\n    "
  },
  {
    "path": "Airflow/legos/airflow_get_handle/airflow_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef airflow_get_handle(handle):\n    \"\"\"airflow_get_handle returns the airflow handle.\n\n       :rtype: airflow Handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Airflow/legos/airflow_list_DAG_runs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>List DAG runs</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists DAG runs for a given DagID.\r\n\r\n\r\n## Lego Details\r\n    airflow_list_DAG_runs(handle: object, dag_id: str)\r\n\r\n        handle: Object of type unSkript AirFlow Connector\r\n        dag_id: The DAG ID.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and dag_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Airflow/legos/airflow_list_DAG_runs/__init__.py",
    "content": ""
  },
  {
    "path": "Airflow/legos/airflow_list_DAG_runs/airflow_list_DAG_runs.json",
    "content": "{\r\n    \"action_title\": \"List DAG runs for given DagID\",\r\n    \"action_description\": \"List DAG runs for given DagID\",\r\n    \"action_type\": \"LEGO_TYPE_AIRFLOW\",\r\n    \"action_entry_function\": \"airflow_list_DAG_runs\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AIRFLOW\" ]\r\n\r\n}\r\n    "
  },
  {
    "path": "Airflow/legos/airflow_list_DAG_runs/airflow_list_DAG_runs.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=4)\n\nclass InputSchema(BaseModel):\n    dag_id: Optional[str] = Field(\n        default=None,\n        title='Dag ID',\n        description='The DAG ID')\n\n\ndef airflow_list_DAG_runs_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pp.pprint(output)\n\n\ndef airflow_list_DAG_runs(handle,  dag_id: str = \"\") -> Dict:\n    \"\"\"airflow_list_DAG_runs list dag runs\n\n        :type dag_id: str\n        :param dag_id: The DAG ID.\n\n        :rtype: Dict of Dag runs\n    \"\"\"\n    return handle.list_DAG_runs(dag_id=dag_id if dag_id else None)\n"
  },
  {
    "path": "Airflow/legos/airflow_trigger_dag_run/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Airflow trigger DAG run</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego triggers an Airflow DAG run.\r\n\r\n\r\n## Lego Details\r\n    airflow_trigger_dag_run(handle: object, dag_id: str, conf: dict,\r\n                            dag_run_id: str, logical_date: str)\r\n\r\n        handle: Object of type unSkript AirFlow Connector\r\n        dag_id: The DAG ID.\r\n        conf: JSON object describing additional configuration parameters.\r\n        dag_run_id: The value of this field can be set only when creating the object.\r\n        logical_date: The logical date (previously called execution date).\r\n\r\n## Lego Input\r\nThis Lego takes five inputs: handle, conf, dag_run_id, logical_date and dag_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Airflow/legos/airflow_trigger_dag_run/__init__.py",
    "content": ""
  },
  {
    "path": "Airflow/legos/airflow_trigger_dag_run/airflow_trigger_dag_run.json",
    "content": "{\r\n    \"action_title\": \"Airflow trigger DAG run\",\r\n    \"action_description\": \"Airflow trigger DAG run\",\r\n    \"action_type\": \"LEGO_TYPE_AIRFLOW\",\r\n    \"action_entry_function\": \"airflow_trigger_dag_run\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AIRFLOW\" ]\r\n\r\n}\r\n    "
  },
  {
    "path": "Airflow/legos/airflow_trigger_dag_run/airflow_trigger_dag_run.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\n\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=4)\n\nclass InputSchema(BaseModel):\n    dag_id: str = Field(\n        title='Dag ID',\n        description='The DAG ID')\n\n    dag_run_id: Optional[str] = Field(\n        None,\n        title='Run ID',\n        description='''The value of this field can be set only when creating the object.\n        If you try to modify the field of an existing object, the request fails with a\n        BAD_REQUEST error''')\n\n    logical_date: Optional[str] = Field(\n        None,\n        title='logical date',\n        description='''The logical date (previously called execution date). This is the time\n        or interval covered by this DAG run, according to the DAG definition\n        eg: 2019-08-24T14:15:22Z''')\n\n    conf: Optional[dict] = Field(\n        None,\n        title='conf',\n        description='JSON object describing additional configuration parameters')\n\n\ndef airflow_trigger_dag_run_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pp.pprint(output)\n\n\ndef airflow_trigger_dag_run(handle,\n                            dag_id: str,\n                            conf: dict = None,\n                            dag_run_id: str = \"\",\n                            logical_date: str = \"\") -> Dict:\n\n    \"\"\"airflow_trigger_dag_run triggers a DAG run.\n\n        :type dag_id: str\n        :param dag_id: The DAG ID.\n\n        :type conf: dict\n        :param conf: JSON object describing additional configuration parameters.\n\n        :type dag_run_id: str\n        :param dag_run_id: The value of this field can be set only when creating the object.\n\n        :type logical_date: str\n        :param logical_date: The logical date (previously called execution date).\n\n        :rtype: Dict of DAG run info\n    \"\"\"\n    return handle.trigger_dag_run(dag_id,\n                                  conf,\n                                  dag_run_id=dag_run_id if dag_run_id else None,\n                                  logical_date=logical_date if logical_date else None)\n"
  },
  {
    "path": "Azure/README.md",
    "content": "\n# Azure Actions\n* [Get Azure Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Azure/legos/azure_get_handle/README.md): Get Azure Handle\n"
  },
  {
    "path": "Azure/__init__.py",
    "content": ""
  },
  {
    "path": "Azure/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Azure/legos/azure_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Azure Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the Azure Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    azure_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Azure Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Azure/legos/azure_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Azure/legos/azure_get_handle/azure_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Azure Handle\",\r\n    \"action_description\": \"Get Azure Handle\",\r\n    \"action_type\": \"LEGO_TYPE_AZURE\",\r\n    \"action_entry_function\": \"azure_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AZURE\" ]\r\n\r\n}"
  },
  {
    "path": "Azure/legos/azure_get_handle/azure_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef azure_get_handle(handle):\n    \"\"\"azure_get_handle returns the azure handle.\n\n       :rtype: Azure Handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Datadog/README.md",
    "content": "\n# Datadog Actions\n* [Datadog delete incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_delete_incident/README.md): Delete an incident given its id\n* [Datadog get event](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_event/README.md): Get an event given its id\n* [Get Datadog Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_handle/README.md): Get Datadog Handle\n* [Datadog get incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_incident/README.md): Get an incident given its id\n* [Datadog get metric metadata](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_metric_metadata/README.md): Get the metadata of a metric.\n* [Datadog get monitor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitor/README.md): Get details about a monitor\n* [Datadog get monitorID given the name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitorid/README.md): Get monitorID given the name\n* [Datadog list active metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_active_metrics/README.md): Get the list of actively reporting metrics from a given time until now.\n* [Datadog list all monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_all_monitors/README.md): List all monitors\n* [Datadog list metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_metrics/README.md): Lists metrics from the last 24 hours in Datadog.\n* [Datadog mute/unmute monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_mute_or_unmute_alerts/README.md): Mute/unmute monitors\n* [Datadog query metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_query_metrics/README.md): Query timeseries points for a metric.\n* [Schedule downtime](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_schedule_downtime/README.md): Schedule downtime\n* [Datadog search monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_search_monitors/README.md): Search monitors in Datadog based on filters\n"
  },
  {
    "path": "Datadog/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_delete_incident/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog delete incident</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego deletes an incident given its Id.\r\n\r\n\r\n## Lego Details\r\n    datadog_delete_incident(handle: object, incident_id: str)\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n        incident_id: Id of the incident.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and incident_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_delete_incident/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_delete_incident/datadog_delete_incident.json",
    "content": "{\r\n    \"action_title\": \"Datadog delete incident\",\r\n    \"action_description\": \"Delete an incident given its id\",\r\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\r\n    \"action_entry_function\": \"datadog_delete_incident\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_AWS\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_INCIDENT\"]\r\n}"
  },
  {
    "path": "Datadog/legos/datadog_delete_incident/datadog_delete_incident.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom datadog_api_client.v2.api.incidents_api import IncidentsApi\nfrom datadog_api_client import ApiClient\n\nclass InputSchema(BaseModel):\n    incident_id: str = Field(\n        title='Incident Id',\n        description='Id of the incident to delete.')\n\ndef datadog_delete_incident_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef datadog_delete_incident(handle, incident_id: str):\n    \"\"\"datadog_delete_incident deletes an incident given its id.\n\n        :type incident_id: str\n        :param incident_id: Id of the incident to delete.\n\n        :rtype: Response of the delete operation\n    \"\"\"\n    try:\n        handle.handle_v2.unstable_operations[\"delete_incident\"] = True\n        with ApiClient(handle.handle_v2) as api_client:\n            api_instance = IncidentsApi(api_client)\n            deleted_incident = api_instance.delete_incident(incident_id=incident_id)\n    except Exception as e:\n        raise e\n    return deleted_incident\n"
  },
  {
    "path": "Datadog/legos/datadog_get_event/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog get event</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets an event given its Id.\r\n\r\n\r\n## Lego Details\r\n    datadog_get_event(handle: object, event_id: int)\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n        event_id: Id of the event.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and event_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_get_event/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_get_event/datadog_get_event.json",
    "content": "{\r\n    \"action_title\": \"Datadog get event\",\r\n    \"action_description\": \"Get an event given its id\",\r\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\r\n    \"action_entry_function\": \"datadog_get_event\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_EVENT\"]\r\n}"
  },
  {
    "path": "Datadog/legos/datadog_get_event/datadog_get_event.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom datadog_api_client.v1.api.events_api import EventsApi\nfrom datadog_api_client import ApiClient\n\n\nclass InputSchema(BaseModel):\n    event_id: int = Field(\n        title='event Id',\n        description='Id of the event to retrieve.')\n\n\ndef datadog_get_event_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef datadog_get_event(handle, event_id: int) -> Dict:\n    \"\"\"datadog_get_event gets an event given its id.\n\n        :type event_id: int\n        :param event_id: Id of the event to retrieve.\n\n        :rtype: A Dict containing the event\n    \"\"\"\n    try:\n        with ApiClient(handle.handle_v2) as api_client:\n            api_instance = EventsApi(api_client)\n            event = api_instance.get_event(event_id=int(event_id))\n    except Exception as e:\n        raise e\n    return event.to_dict()\n"
  },
  {
    "path": "Datadog/legos/datadog_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Datadog Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the Datadog Handle.\r\n\r\n\r\n## Lego Details\r\n    datadog_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_get_handle/datadog_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Datadog Handle\",\r\n    \"action_description\": \"Get Datadog Handle\",\r\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\r\n    \"action_entry_function\": \"datadog_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false\r\n}"
  },
  {
    "path": "Datadog/legos/datadog_get_handle/datadog_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef datadog_get_handle(handle):\n    \"\"\"datadog_get_handle returns the Datadog handle.\n\n       :rtype: Datadog Handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Datadog/legos/datadog_get_incident/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog get incident</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets an incident given its Id.\r\n\r\n\r\n## Lego Details\r\n    datadog_get_incident(handle: object, incident_id: str)\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n        incident_id: Id of the incident.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and incident_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_get_incident/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_get_incident/datadog_get_incident.json",
    "content": "{\r\n    \"action_title\": \"Datadog get incident\",\r\n    \"action_description\": \"Get an incident given its id\",\r\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\r\n    \"action_entry_function\": \"datadog_get_incident\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_INCIDENT\"]\r\n}"
  },
  {
    "path": "Datadog/legos/datadog_get_incident/datadog_get_incident.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom datadog_api_client.v2.api.incidents_api import IncidentsApi\nfrom datadog_api_client import ApiClient\n\nclass InputSchema(BaseModel):\n    incident_id: str = Field(\n        title='Incident Id',\n        description='Id of the incident to retrieve.')\n\ndef datadog_get_incident_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef datadog_get_incident(handle, incident_id: str) -> Dict:\n    \"\"\"datadog_get_incident gets an incident given its id.\n\n        :type incident_id: str\n        :param incident_id: Id of the incident to retrieve.\n\n        :rtype: A Dict containing the incident\n    \"\"\"\n    try:\n        handle.handle_v2.unstable_operations[\"get_incident\"] = True\n        with ApiClient(handle.handle_v2) as api_client:\n            api_instance = IncidentsApi(api_client)\n            incident = api_instance.get_incident(incident_id=incident_id)\n    except Exception as e:\n        raise e\n    return incident.to_dict()\n"
  },
  {
    "path": "Datadog/legos/datadog_get_metric_metadata/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog get metric metadata</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the metric metadata for a metric in Datadog.\r\n\r\n\r\n## Lego Details\r\n    datadog_get_metric_metadata(handle,\r\n                                metric_name: str) -> Dict:\r\n        metric_name: Name of the metric for which to get metadata.\r\n        handle: Object of type unSkript datadog Connector\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and metric_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_get_metric_metadata/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_get_metric_metadata/datadog_get_metric_metadata.json",
    "content": "{\n    \"action_title\": \"Datadog get metric metadata\",\n    \"action_description\": \"Get the metadata of a metric.\",\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\n    \"action_entry_function\": \"datadog_get_metric_metadata\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_supports_iteration\": true,\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_METRICS\"]\n}"
  },
  {
    "path": "Datadog/legos/datadog_get_metric_metadata/datadog_get_metric_metadata.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom datadog_api_client import ApiClient\nfrom datadog_api_client.v1.api.metrics_api import MetricsApi\n\n\nclass InputSchema(BaseModel):\n    metric_name: str = Field(\n        title='Metric name',\n        description='Name of the metric for which to get metadata.')\n\ndef datadog_get_metric_metadata_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef datadog_get_metric_metadata(handle,\n                                metric_name: str) -> Dict:\n    \"\"\"datadog_get_metric_metadata gets the metadata for a metric.\n\n        :type metric_name: str\n        :param metric_name: Name of the metric for which to get metadata.\n\n        :rtype: Dict of metadata of metric\n    \"\"\"\n    try:\n        with ApiClient(handle.handle_v2) as api_client:\n            api_instance = MetricsApi(api_client)\n            metric_metadata = api_instance.get_metric_metadata(metric_name=metric_name)\n    except Exception as e:\n        raise e\n    return metric_metadata.to_dict()\n"
  },
  {
    "path": "Datadog/legos/datadog_get_monitor/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog get monitor</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the details about a monitor.\r\n\r\n\r\n## Lego Details\r\n    datadog_get_monitor(handle,\r\n                                monitor_id: int) -> Dict:\r\n        monitor_id: The ID of the monitor.\r\n        handle: Object of type unSkript datadog Connector\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and monitor_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_get_monitor/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_get_monitor/datadog_get_monitor.json",
    "content": "{\n    \"action_title\": \"Datadog get monitor\",\n    \"action_description\": \"Get details about a monitor\",\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\n    \"action_entry_function\": \"datadog_get_monitor\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_supports_iteration\": true,\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_MONITOR\"]\n}"
  },
  {
    "path": "Datadog/legos/datadog_get_monitor/datadog_get_monitor.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom datadog_api_client import ApiClient\nfrom datadog_api_client.v1.api.monitors_api import MonitorsApi\n\n\nclass InputSchema(BaseModel):\n    monitor_id: int = Field(\n        title='Monitor ID',\n        description='ID of the monitor')\n\ndef datadog_get_monitor_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef datadog_get_monitor(handle,\n                        monitor_id: int) -> Dict:\n    \"\"\"datadog_get_monitor gets the details for a monitor.\n\n        :type monitor_id: int\n        :param monitor_id: The ID of the monitor.\n\n        :rtype: Dict of monitor details\n    \"\"\"\n    try:\n        with ApiClient(handle.handle_v2) as api_client:\n            api_instance = MonitorsApi(api_client)\n            monitor_details = api_instance.get_monitor(monitor_id=int(monitor_id))\n    except Exception as e:\n        raise e\n    return monitor_details.to_dict()\n"
  },
  {
    "path": "Datadog/legos/datadog_get_monitorid/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog get monitorID</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Get monitorID given the name.\r\n\r\n\r\n## Lego Details\r\n    datadog_get_monitorid(handle: object, name: str)\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n        name: Name of the target monitor.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_get_monitorid/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_get_monitorid/datadog_get_monitorid.json",
    "content": "{\r\n    \"action_title\": \"Datadog get monitorID given the name\",\r\n    \"action_description\": \"Get monitorID given the name\",\r\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\r\n    \"action_entry_function\": \"datadog_get_monitorid\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_INT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_MONITOR\"]\r\n}"
  },
  {
    "path": "Datadog/legos/datadog_get_monitorid/datadog_get_monitorid.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom datadog_api_client.v1.api.monitors_api import MonitorsApi\nfrom datadog_api_client import ApiClient\nfrom datadog_api_client.exceptions import NotFoundException\n\nclass InputSchema(BaseModel):\n    name: str = Field(\n        title='name',\n        description='Name of the target monitor.')\n\ndef datadog_get_monitorid_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef datadog_get_monitorid(handle, name: str) -> int:\n    \"\"\"datadog_get_monitorid gets monitor id.\n\n        :type name: str\n        :param name: Name of the target monitor.\n\n        :rtype: The monitor id.\n    \"\"\"\n    try:\n        with ApiClient(handle.handle_v2) as api_client:\n            api_instance = MonitorsApi(api_client)\n            monitors = []\n            page = 0\n            while True:\n                response = api_instance.list_monitors(page_size=30, page=page, name=name)\n                if response == []:\n                    break\n                monitors.extend(response)\n                page += 1\n    except Exception as e:\n        raise e\n    if len(monitors) == 1:\n        return int(monitors[0]['id'])\n    raise NotFoundException\n"
  },
  {
    "path": "Datadog/legos/datadog_list_active_metrics/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog list active metrics</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists all active metrics in datadog.\r\n\r\n\r\n## Lego Details\r\n    datadog_list_active_metrics(handle,\r\n                                from_time: int,\r\n                                tag_filter: str = \"\") -> Dict:\r\n        from_time: The time from which the metrics should be returned in seconds. Ex: 3600\r\n        tag_filter: Filter metrics that have been submitted with the given tags. Supports boolean and wildcard expressions.Cannot be combined with other filters.\r\n        handle: Object of type unSkript datadog Connector\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, from_time, tag_filter\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_list_active_metrics/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_list_active_metrics/datadog_list_active_metrics.json",
    "content": "{\n    \"action_title\": \"Datadog list active metrics\",\n    \"action_description\": \"Get the list of actively reporting metrics from a given time until now.\",\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\n    \"action_entry_function\": \"datadog_list_active_metrics\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_supports_iteration\": true,\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_METRICS\"]\n}"
  },
  {
    "path": "Datadog/legos/datadog_list_active_metrics/datadog_list_active_metrics.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nimport datetime\nfrom datetime import timedelta\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\nfrom datadog_api_client import ApiClient\nfrom datadog_api_client.v1.api.metrics_api import MetricsApi\n\n\nclass InputSchema(BaseModel):\n    from_time: int = Field(\n        title='From time',\n        description='The time from which the metrics should be returned in seconds. Ex: 3600')\n    tag_filter: Optional[str] = Field(\n        title='Tag Filter',\n        description=('Filter metrics that have been submitted with the given tags. Supports '\n                     'boolean and wildcard expressions.Cannot be combined with other filters.')\n                     )\n\ndef datadog_list_active_metrics_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef datadog_list_active_metrics(handle,\n                                from_time: int,\n                                tag_filter: str = \"\") -> Dict:\n    \"\"\"datadog_list_active_metrics get the list of actively reporting metrics from a \n       given time until now.\n\n         :type from_time: int\n        :param from_time: The time from which the metrics should be returned in seconds. Ex: 3600\n\n        :type tag_filter: str\n        :param tag_filter: Filter metrics that have been submitted with the given tags. 
\n        Supports boolean and wildcard expressions.Cannot be combined with other filters.\n\n        :rtype: Dict of active metrics.\n    \"\"\"\n    time_delta = datetime.datetime.utcnow() - timedelta(seconds=int(from_time))\n    from_epoch = int(time_delta.timestamp())\n    try:\n        with ApiClient(handle.handle_v2) as api_client:\n            api_instance = MetricsApi(api_client)\n            metrics = api_instance.list_active_metrics(_from=from_epoch, tag_filter=tag_filter)\n    except Exception as e:\n        raise e\n    return metrics.to_dict()\n"
  },
  {
    "path": "Datadog/legos/datadog_list_all_monitors/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog list all monitors</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists all monitors in datadog.\r\n\r\n\r\n## Lego Details\r\n    datadog_list_all_monitors(handle: object)\r\n\r\n    handle: Object of type unSkript datadog Connector\r\n\r\n## Lego Input\r\nThis Lego no inputs\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_list_all_monitors/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_list_all_monitors/datadog_list_all_monitors.json",
    "content": "{\n    \"action_title\": \"Datadog list all monitors\",\n    \"action_description\": \"List all monitors\",\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\n    \"action_entry_function\": \"datadog_list_all_monitors\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_iteration\": true,\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_MONITOR\"]\n}"
  },
  {
    "path": "Datadog/legos/datadog_list_all_monitors/datadog_list_all_monitors.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel\nfrom datadog_api_client.v1.api.monitors_api import MonitorsApi\nfrom datadog_api_client import ApiClient\n\n\nclass InputSchema(BaseModel):\n    pass\n\ndef datadog_list_all_monitors_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef datadog_list_all_monitors(handle) -> List[dict]:\n    \"\"\"datadog_get_all_monitors gets all monitors\n\n        :rtype: The list of monitors.\n    \"\"\"\n    try:\n        with ApiClient(handle.handle_v2) as api_client:\n            api_instance = MonitorsApi(api_client)\n            monitors = []\n            page = 0\n            while True:\n                response = api_instance.list_monitors(page_size=30,page=page)\n                if response == []:\n                    break\n                monitors.extend(response)\n                page += 1\n    except Exception as e:\n        raise e\n    return monitors\n"
  },
  {
    "path": "Datadog/legos/datadog_list_metrics/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog list metrics</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists metrics from the last 24 hours in Datadog.\r\n\r\n\r\n## Lego Details\r\n    datadog_list_metrics(handle: object, query: str)\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n        query: Query string to list metrics upon. Can optionally be prefixed with ``metrics:``.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and query.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_list_metrics/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_list_metrics/datadog_list_metrics.json",
    "content": "{\r\n    \"action_title\": \"Datadog list metrics\",\r\n    \"action_description\": \"Lists metrics from the last 24 hours in Datadog.\",\r\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\r\n    \"action_entry_function\": \"datadog_list_metrics\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_METRICS\"]\r\n}"
  },
  {
    "path": "Datadog/legos/datadog_list_metrics/datadog_list_metrics.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\nfrom datadog_api_client.v1.api.metrics_api import MetricsApi\nfrom datadog_api_client import ApiClient\n\nclass InputSchema(BaseModel):\n    query: Optional[str] = Field(\n        \"\",\n        title='Query',\n        description=('Query string to list metrics upon. Can optionally be prefixed with '\n                     '``metrics:``. By default all metrics are returned')\n                     )\n\ndef datadog_list_metrics_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef datadog_list_metrics(handle, query: str = \"\") -> Dict:\n    \"\"\"datadog_list_metrics lists metrics from the last 24 hours in Datadog.\n\n        :type name: str\n        :param query: Query string to list metrics upon. Can optionally be prefixed with \n        ``metrics:``.\n\n        :rtype: A Dict containing the queried metrics\n    \"\"\"\n    try:\n        with ApiClient(handle.handle_v2) as api_client:\n            api_instance = MetricsApi(api_client)\n            metrics = api_instance.list_metrics(q=query)\n    except Exception as e:\n        raise e\n    return metrics.to_dict()\n"
  },
  {
    "path": "Datadog/legos/datadog_mute_or_unmute_alerts/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog get monitorID</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Get monitorID given the name.\r\n\r\n\r\n## Lego Details\r\n    datadog_mute_or_unmute_alerts(handle: object, monitorIDs: List[int], all: bool, mute: bool,\r\n                                  scope: str)\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n        monitorIDs: List of monitor Ids to be modified. eg: [1643815305,1643815323]\r\n        all: Set this to True if mute/unmute all monitors.\r\n        mute: True to mute, False to unmute.\r\n        scope: The scope to apply the mute to. For example, if your alert is grouped by \"host\", you might mute \"host:app1\".\r\n\r\n## Lego Input\r\nThis Lego take five inputs handle, monitorIDs, all, mute and scope.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_mute_or_unmute_alerts/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_mute_or_unmute_alerts/datadog_mute_or_unmute_alerts.json",
    "content": "{\r\n    \"action_title\": \"Datadog mute/unmute monitors\",\r\n    \"action_description\": \"Mute/unmute monitors\",\r\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\r\n    \"action_entry_function\": \"datadog_mute_or_unmute_alerts\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_ALERTS\"]\r\n}\r\n    "
  },
  {
    "path": "Datadog/legos/datadog_mute_or_unmute_alerts/datadog_mute_or_unmute_alerts.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom ast import Str\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    monitorIDs: Optional[List[int]] = Field(\n        title='Monitor IDs',\n        description='List of monitor Ids to be modified. eg: [1643815305,1643815323].')\n    all: Optional[bool] = Field(\n        title=\"All monitors\",\n        description='Set this to True if mute/unmute all monitors.')\n    mute: bool = Field(\n        True,\n        title=\"Mute\",\n        description='True to mute, False to unmute.')\n    scope: Optional[str] = Field(\n        default=None,\n        title=\"Scope\",\n        description='''\n        The scope to apply the mute to. For example, if your alert is grouped by \"host\", you might mute \"host:app1\".\n        ''')\n\n\ndef datadog_mute_or_unmute_alerts_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef datadog_mute_or_unmute_alerts(handle,\n                                  monitorIDs: List[int] = None,\n                                  all: bool = False,\n                                  mute: bool = True,\n                                  scope: str = \"\") -> Str:\n    \"\"\"datadog_mute_or_unmute_alerts mutes and unmutes alerts.\n\n        :type monitorIDs: list\n        :param monitorIDs: List of monitor Ids to be modified. eg: [1643815305,1643815323].\n        \n        :type all: bool\n        :param all: Set this to True if mute/unmute all monitors.\n\n        :type mute: bool\n        :param mute: True to mute, False to unmute.\n        \n        :type scope: str    \n        :param scope: The scope to apply the mute to. 
For example,\n        if your alert is grouped by \"host\", you might mute \"host:app1\".\n\n        :rtype: String with the execution status.\n    \"\"\"\n\n    if mute:\n        if all:\n            if scope:\n                handle.Monitor.mute_all(scope=scope)\n            else:\n                handle.Monitor.mute_all()\n            return 'Successfully muted all monitors.'\n        if scope:\n            res = [handle.Monitor.mute(id=x, scope=scope) for x in monitorIDs]\n        else:\n            res = [handle.Monitor.mute(id=x) for x in monitorIDs]\n        return 'Successfully muted monitors.'\n    else:\n        if all:\n            if scope:\n                handle.Monitor.unmute_all(scope=scope)\n            else:\n                handle.Monitor.unmute_all()\n            return 'Successfully unmuted all monitors.'\n        else:\n            if scope:\n                res = [handle.Monitor.unmute(id=x,scope=scope) for x in monitorIDs]\n            else:\n                res = [handle.Monitor.unmute(id=x) for x in monitorIDs]\n        return 'Successfully umuted monitors.'\n"
  },
  {
    "path": "Datadog/legos/datadog_query_metrics/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog query metrics</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego queries timeseries points for a metric.\r\n\r\n\r\n## Lego Details\r\n    datadog_query_metrics(handle,\r\n                                from_time: int,\r\n                                to_time: int,\r\n                                query: str) -> Dict:\r\n        query: Query string. Ex: system.cpu.idle{*}\r\n        from_time: The time from which the metrics should be returned in seconds. Ex: 3600\r\n        to_time: The time until which the metrics should be returned in seconds. Ex: 3600\r\n        handle: Object of type unSkript datadog Connector\r\n\r\n## Lego Input\r\nThis Lego take 4 inputs handle, from_time, to_time, query\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_query_metrics/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_query_metrics/datadog_query_metrics.json",
    "content": "{\n    \"action_title\": \"Datadog query metrics\",\n    \"action_description\": \"Query timeseries points for a metric.\",\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\n    \"action_entry_function\": \"datadog_query_metrics\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_supports_iteration\": true,\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_METRICS\"]\n}"
  },
  {
    "path": "Datadog/legos/datadog_query_metrics/datadog_query_metrics.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom datetime import datetime, timedelta\nfrom pydantic import BaseModel, Field\nfrom datadog_api_client import ApiClient\nfrom datadog_api_client.v1.api.metrics_api import MetricsApi\n\n\nclass InputSchema(BaseModel):\n    from_time: int = Field(\n        title='From time',\n        description='Start of the queried time period in seconds. Ex: 3600')\n    to_time: int = Field(\n        title='From time',\n        description='End of the queried time period in seconds. Ex: 3600')\n    query: str = Field(\n        title='Query String',\n        description='Query string. Ex: system.cpu.idle{*}')\n\ndef datadog_query_metrics_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef datadog_query_metrics(handle,\n                        from_time: int,\n                        to_time: int,\n                        query: str) -> Dict:\n    \"\"\"datadog_query_metrics queries timeseries points for a metric.\n\n         :type from_time: int\n        :param from_time: The time from which the metrics should be returned in seconds. Ex: 3600\n\n        :type to_time: int\n        :param to_time: The time until which the metrics should be returned in seconds. Ex: 3600\n\n        :type query: str\n        :param query: Query string. Ex: system.cpu.idle{*}\n\n        :rtype: Dict of queried metric\n    \"\"\"\n    try:\n        with ApiClient(handle.handle_v2) as api_client:\n            api_instance = MetricsApi(api_client)\n            response = api_instance.query_metrics(\n                _from=int((datetime.utcnow() - timedelta(seconds=int(from_time))).timestamp()),\n                to=int((datetime.utcnow() - timedelta(seconds=int(to_time))).timestamp()),\n                query=query,\n            )\n    except Exception as e:\n        raise e\n    return response.to_dict()\n"
  },
  {
    "path": "Datadog/legos/datadog_schedule_downtime/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Schedule downtime</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego used to Schedule downtime.\r\n\r\n\r\n## Lego Details\r\n    datadog_schedule_downtime(handle: object, duration: int, scope:list, monitor_id: int,\r\n                              monitor_tags:list)\r\n\r\n        handle: Object of type unSkript datadog Connector\r\n        duration: Select a duration in minutes eg: 60.\r\n        scope: The scope(s) to which the downtime applies.\r\n        monitor_id: A single monitor to which the downtime applies. If not provided, the downtime applies to all monitors.\r\n        monitor_tags: A comma-separated list of monitor tags.\r\n\r\n## Lego Input\r\nThis Lego take five inputs handle, duration, scope, monitor_id and monitor_tags.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_schedule_downtime/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_schedule_downtime/datadog_schedule_downtime.json",
    "content": "{\r\n    \"action_title\": \"Schedule downtime\",\r\n    \"action_description\": \"Schedule downtime\",\r\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\r\n    \"action_entry_function\": \"datadog_schedule_downtime\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\"]\r\n}"
  },
  {
    "path": "Datadog/legos/datadog_schedule_downtime/datadog_schedule_downtime.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom ast import Str\nfrom typing import Optional, List\nfrom datetime import datetime as dt, timedelta\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    duration: int = Field(\n        title='Duration',\n        description='Select a duration in minutes eg: 60.'\n    )\n    scope: List[str] = Field(\n        default='',\n        title='Scope',\n        description='The scope(s) to which the downtime applies.')\n    monitor_id: Optional[int] = Field(\n        default=None,\n        title='Monitor Id',\n        description=('A single monitor to which the downtime applies. '\n                     'If not provided, the downtime applies to all monitors.')\n                     )\n    monitor_tags: Optional[List[str]] = Field(default=None,\n                                              title='Monitor Tags',\n                                              description='A comma-separated list of monitor tags')\n\n\n\n\ndef datadog_schedule_downtime_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef datadog_schedule_downtime(handle,\n                              duration: int,\n                              scope:list = None,\n                              monitor_id: int = 0,\n                              monitor_tags:list = None) -> Str:\n    \"\"\"datadog_schedule_downtime schedules a monitor downtime.\n\n        :type duration: int\n        :param duration: Select a duration in minutes eg: 60.\n        \n        :type scope: List\n        :param scope: The scope(s) to which the downtime applies.\n\n        :type monitor_id: int\n        :param monitor_id: A single monitor to which the downtime applies. 
\n        If not provided, the downtime applies to all monitors.\n        \n        :type monitor_tags: List\n        :param monitor_tags: A comma-separated list of monitor tags.\n\n        :rtype: String with the execution status.\n    \"\"\"\n    start_time = dt.now()\n    end_time = (start_time + timedelta(minutes=duration)).strftime(\"%s\")\n    try:\n        res = handle.Downtime.create(\n            scope=scope,\n            start=start_time.strftime(\"%s\"),\n            end=end_time,\n            monitor_id=monitor_id,\n            monitor_tags=monitor_tags\n            )\n    except Exception as e:\n        return f'Failed to schedule downtime, {e}'\n    return f'Successfully scheduled downtime, ID {res.get(\"id\")}, starting time {start_time.strftime(\"%H:%M:%S\")}'\n"
  },
  {
    "path": "Datadog/legos/datadog_search_monitors/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Datadog search monitors</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego searches monitors in datadog based on filters.\r\n\r\n\r\n## Lego Details\r\n    datadog_search_monitors(handle: object)\r\n\r\n    handle: Object of type unSkript datadog Connector\r\n    query: After entering a search query in your `Manage Monitor page <https://app.datadoghq.com/monitors/manage>`_ use the query parameter value in the\r\n            URL of the page as value for this parameter. Consult the dedicated `manage monitor documentation </monitors/manage/#find-the-monitors>`_\r\n            page to learn more. The query can contain any number of space-separated monitor attributes, for instance ``query=\"type:metric status:alert\"``.\r\n    name: A string to filter monitors by name.\r\n\r\n## Lego Input\r\nThis Lego takes 3 inputs. handle, query, name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Datadog/legos/datadog_search_monitors/__init__.py",
    "content": ""
  },
  {
    "path": "Datadog/legos/datadog_search_monitors/datadog_search_monitors.json",
    "content": "{\n    \"action_title\": \"Datadog search monitors\",\n    \"action_description\": \"Search monitors in datadog based on filters\",\n    \"action_type\": \"LEGO_TYPE_DATADOG\",\n    \"action_entry_function\": \"datadog_search_monitors\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_iteration\": true,\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_DATADOG\",\"CATEGORY_TYPE_DATADOG_MONITOR\"]\n}"
  },
  {
    "path": "Datadog/legos/datadog_search_monitors/datadog_search_monitors.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List, Optional\nfrom pydantic import BaseModel, Field\nfrom datadog_api_client.v1.api.monitors_api import MonitorsApi\nfrom datadog_api_client import ApiClient\n\n\nclass InputSchema(BaseModel):\n    query: Optional[str] = Field(\n        title='Query String',\n        description='''After entering a search query in your `Manage Monitor page\n        <https://app.datadoghq.com/monitors/manage>`_ use the query parameter value in the\n        URL of the page as value for this parameter. Consult the dedicated `manage monitor\n        documentation </monitors/manage/#find-the-monitors>`_ page to learn more. The query \n        can contain any number of space-separated monitor attributes, for instance \n        ``query=\"type:metric status:alert\"``.''')\n    name: Optional[str] = Field(\n        title='Name',\n        description='A string to filter monitors by name.')\n\n\ndef datadog_search_monitors_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef datadog_search_monitors(handle,\n                            query: str = \"\",\n                            name: str = \"\") -> List[dict]:\n    \"\"\"datadog_search_monitors searches monitors in datadog based on filters\n\n        :type query: str\n        :param query: After entering a search query in your `Manage Monitor page \n        <https://app.datadoghq.com/monitors/manage>`_ use the query parameter value in the\n        URL of the page as value for this parameter. Consult the dedicated `manage monitor \n        documentation </monitors/manage/#find-the-monitors>`_ page to learn more. 
The query \n        can contain any number of space-separated monitor attributes, for instance \n        ``query=\"type:metric status:alert\"``.\n\n        :type name: str\n        :param name: A string to filter monitors by name.\n\n        :rtype: List of matching monitors.\n    \"\"\"\n    with ApiClient(handle.handle_v2) as api_client:\n        api_instance = MonitorsApi(api_client)\n        monitors = []\n        page = 0\n        if query != \"\":\n            while True:\n                # The default page_size is 30\n                monitor_response = api_instance.search_monitors(page=page, query=query)\n                if page == monitor_response['metadata']['page_count']:\n                    break\n                monitors.extend(monitor_response['monitors'])\n                page += 1\n        else:\n            while True:\n                response = api_instance.list_monitors(page_size=30,\n                                                      page=page,\n                                                      name=name)\n                if not response:\n                    break\n                monitors.extend(response)\n                page += 1\n    return monitors\n"
  },
  {
    "path": "Docs/README.md",
    "content": ""
  },
  {
    "path": "ElasticSearch/.gitignore",
    "content": ".DS_Store"
  },
  {
    "path": "ElasticSearch/Elasticsearch_Rolling_Restart.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e9678f47-8963-4304-b5a4-aef1de9aaeab\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Elasticsearch Rolling Restart\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Elasticsearch Rolling Restart\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"-unSkript-Runbooks-\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"-Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Perform rolling restart for a node in an Elasticsearch cluster</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Elasticsearch-Rolling-Restart\\\"><u>Elasticsearch Rolling Restart</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1) <a href=\\\"Cluster%20Health%20Check\\\">Cluster Health Check</a><br>2)&nbsp;<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Disable shard allocation</a><br>3)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Shut down node</a><br>4)<a href=\\\"#3\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Perform changes/ maintenance</a><br>5)<a href=\\\"#4\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Start the node</a><br>6)<a href=\\\"#5\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Reenable shard allocation</a><br>7)<a href=\\\"#6\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&nbsp;</a><a href=\\\"Cluster%20Health%20Check\\\">Cluster Health Check</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2c0aacc8-6eb4-412e-8809-fb64320693e6\",\n   \"metadata\": {\n    \"name\": \"Step 1\",\n    
\"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Check-Cluster-Health&para;\\\"><a id=\\\"6\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Check Cluster Health</h3>\\n\",\n    \"<p>This action checks the status of an Elasticsearch cluster to trigger a rolling restart for the cluster. Ideally, the cluster should show <span style=\\\"color: green;\\\">Green/ None</span> in which case Step 2 will not be triggered. These are the cluster statuses that you may encounter-</p>\\n\",\n    \"<ol>\\n\",\n    \"<li>Unassigned primary shards = <span style=\\\"color: red;\\\">Red</span> Status</li>\\n\",\n    \"<li>Unassigned replica shards = <span style=\\\"color: #ffbf00;\\\">Yellow</span> Status</li>\\n\",\n    \"<li>All shards assigned = <span style=\\\"color: green;\\\">Green</span> Status which will return <span style=\\\"color: rgb(45, 194, 107);\\\">None</span></li>\\n\",\n    \"</ol>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"id\": \"ee0a70e8-74d3-43f9-9a0a-a7e3b1989565\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"85e40f4fbed1df45b80cdf78eef44ac8a77605316ee1df76820dbd7e518c629b\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Elasticsearch Check Health Status\",\n    
\"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-17T08:26:50.997Z\"\n    },\n    \"id\": 80,\n    \"index\": 80,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"title\": \"elasticsearch_check_health_status\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_ELASTICSEARCH\",\n    \"name\": \"Elasticsearch Cluster Health\",\n    \"nouns\": [],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"cluster_health\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"elasticsearch_check_health_status\"\n    ],\n    \"trusted\": true,\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import subprocess\\n\",\n    \"import pprint\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Dict, Tuple\\n\",\n    \"from subprocess import PIPE\\n\",\n    \"import json\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def elasticsearch_check_health_status_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def elasticsearch_check_health_status(handle) -> Tuple:\\n\",\n    \"    result = []\\n\",\n    \"    cluster_health ={}\\n\",\n    \"    \\\"\\\"\\\"elasticsearch_check_health_status checks the status of an Elasticsearch cluster .\\n\",\n    \"\\n\",\n    \"            :type handle: object\\n\",\n    \"            :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"            :rtype: 
Result Dict of result\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    output = handle.web_request(\\\"/_cluster/health?pretty\\\",  # Path\\n\",\n    \"                                \\\"GET\\\",                      # Method\\n\",\n    \"                                None)                       # Data\\n\",\n    \"    if output['status'] != 'green':\\n\",\n    \"        cluster_health[output['cluster_name']] = output['status'] \\n\",\n    \"        result.append(cluster_health)\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return(False, result)\\n\",\n    \"    else:\\n\",\n    \"        return(True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(outputName=\\\"cluster_health\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(elasticsearch_check_health_status, lego_printer=elasticsearch_check_health_status_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 12,\n   \"id\": \"1cb9e668-e088-44f3-9012-d7b9f6589e7f\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-17T08:28:12.189Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Get status value\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Get status value\",\n    \"trusted\": true,\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"cluster_health_status = ''\\n\",\n    \"for cluster in cluster_health:\\n\",\n    \"    if type(cluster)==list:\\n\",\n    \"        if len(cluster)!=0:\\n\",\n    \"            for x in cluster:\\n\",\n    \"                for status in x.values():\\n\",\n    \"                    cluster_health_status= 
status\\n\",\n    \"    else:\\n\",\n    \"        cluster_health_status = 'None'\\n\",\n    \"print(cluster_health_status)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"696df79e-d970-4f28-ad3e-7502b40c77a6\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Disable-Shard-Allocation\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Disable Shard Allocation<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Disable-Shard-Allocation\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Using unSkript's Elasticsearch Disable Shard Allocation action we can disable shard allocation to avoid rebalancing of missing shards while the node shutdown process is in progress. This step ensures that no new shards are assigned till the node restarts.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 12,\n   \"id\": \"e746026e-cb23-4fc7-b551-a2a70edcb81a\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"23abf6572cb81c61e965514d011c6636363d10be6ed1ac6b178127fd090ed462\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Elasticsearch Disable Shard Allocation for any 
indices\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-16T06:05:40.223Z\"\n    },\n    \"id\": 74,\n    \"index\": 74,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"title\": \"elasticsearch_disable_shard_allocation\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_ELASTICSEARCH\",\n    \"name\": \"Elasticsearch Disable Shard Allocation\",\n    \"nouns\": [],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"cluster_health_status!='None'\",\n    \"tags\": [\n     \"elasticsearch_disable_shard_allocation\"\n    ],\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import subprocess\\n\",\n    \"import pprint\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from subprocess import PIPE, run\\n\",\n    \"import json\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def elasticsearch_disable_shard_allocation_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(\\\"Shard allocations disabled for any kind shards\\\")\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def elasticsearch_disable_shard_allocation(handle) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"elasticsearch_disable_shard_allocation disallows shard allocations for any indices.\\n\",\n    \"\\n\",\n    \"            :type handle: object\\n\",\n    \"            :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"            :rtype: Result Dict of result\\n\",\n    \"    
\\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    es_dict = {\\\"transient\\\": {\\\"cluster.routing.allocation.enable\\\": \\\"none\\\"}}\\n\",\n    \"    output = handle.web_request(\\\"/_cluster/settings?pretty\\\",  # Path\\n\",\n    \"                                \\\"PUT\\\",                        # Method\\n\",\n    \"                                es_dict)                      # Data\\n\",\n    \"\\n\",\n    \"    return output\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"cluster_health_status!='None'\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(elasticsearch_disable_shard_allocation, lego_printer=elasticsearch_disable_shard_allocation_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8a0d9d6e-2aef-4f23-81a2-feb6be5c4c0a\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3\"\n   },\n   \"source\": [\n    \"<h3><a id='2'>Shut down node</a></h3>\\n\",\n    \"unSkript's SSH Execute Remote Command action can be used to shut down a single node by sshing on the IP of the node and executing the command to stop the Elasticsearch service.\\n\",\n    \"\\n\",\n    \"   If you are running Elasticsearch with systemd:\\n\",\n    \"\\n\",\n    \"    sudo systemctl stop elasticsearch.service\\n\",\n    \"\\n\",\n    \"   If you are running Elasticsearch with SysV init:\\n\",\n    \"\\n\",\n    \"    sudo -i service elasticsearch stop\\n\",\n    \"    \\n\",\n    \">This action takes the following parameters: `host_for_ssh`(takes List of 
hosts but we need only one), `command_stop_elasticsearch`, `run_with_sudo`\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 37,\n   \"id\": \"75d3d4ae-7689-4f5f-8f56-4e48cbbd192a\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5279b2046bb2eb4a691ba748086f4af9e580a849faae557694bb12a8c2b7b379\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"SSH Execute Remote Command\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-11-08T11:22:25.404Z\"\n    },\n    \"id\": 85,\n    \"index\": 85,\n    \"inputData\": [\n     {\n      \"command\": {\n       \"constant\": false,\n       \"value\": \"cmd_stop_elasticsearch\"\n      },\n      \"hosts\": {\n       \"constant\": false,\n       \"value\": \"host_for_ssh\"\n      },\n      \"sudo\": {\n       \"constant\": true,\n       \"value\": \"run_with_sudo\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"command\": {\n        \"description\": \"Command to be executed on the remote server.\",\n        \"title\": \"Command\",\n        \"type\": \"string\"\n       },\n       \"hosts\": {\n        \"description\": \"List of hosts to connect to. For eg. 
[\\\"host1\\\", \\\"host2\\\"].\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Hosts\",\n        \"type\": \"array\"\n       },\n       \"sudo\": {\n        \"default\": false,\n        \"description\": \"Run the command with sudo.\",\n        \"title\": \"Run with sudo\",\n        \"type\": \"boolean\"\n       }\n      },\n      \"required\": [\n       \"hosts\",\n       \"command\"\n      ],\n      \"title\": \"ssh_execute_remote_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"SSH Execute Remote Command\",\n    \"nouns\": [\n     \"ssh\",\n     \"command\"\n    ],\n    \"orderProperties\": [\n     \"hosts\",\n     \"command\",\n     \"sudo\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"cluster_health_status!='None' and len(host_for_ssh)!=0\",\n    \"tags\": [\n     \"ssh_execute_remote_command\"\n    ],\n    \"verbs\": [\n     \"execute\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(\\\"Elasticsearch Service successfully STOPPED\\\")\\n\",\n    \"    print(\\\"\\\\n\\\")\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command(sshClient, hosts: List[str], command: str, sudo: bool = False) -> Dict:\\n\",\n    \"\\n\",\n    \"    client = 
sshClient(hosts)\\n\",\n    \"    runCommandOutput = client.run_command(command=command, sudo=sudo)\\n\",\n    \"    client.join()\\n\",\n    \"    res = {}\\n\",\n    \"\\n\",\n    \"    for host_output in runCommandOutput:\\n\",\n    \"        hostname = host_output.host\\n\",\n    \"        output = []\\n\",\n    \"        for line in host_output.stdout:\\n\",\n    \"            output.append(line)\\n\",\n    \"\\n\",\n    \"        o = \\\"\\\\n\\\".join(output)\\n\",\n    \"        res[hostname] = o\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"command\\\": \\\"cmd_stop_elasticsearch\\\",\\n\",\n    \"    \\\"hosts\\\": \\\"host_for_ssh\\\",\\n\",\n    \"    \\\"sudo\\\": \\\"run_with_sudo\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"cluster_health_status!='None' and len(host_for_ssh)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(ssh_execute_remote_command, lego_printer=ssh_execute_remote_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2bf6f444-f0bb-4554-8771-581a3c81e372\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 4\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 4\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Perform-changes/-maintenance\\\"><a id=\\\"3\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Perform changes/ maintenance<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Perform-changes/-maintenance\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    
\"<p>In this step we can perform maintenance jobs, install updates or even modify the elasticsearch.yml. We can create a custom action (Click on <span style=\\\"background-color: rgb(230, 126, 35);\\\"><strong>Add</strong></span> button on the top) as per the requirement and add it in this step.</p>\\n\",\n    \"<p>This article explains some of the common issues incurred by Elasticsearch clusters- <strong><a href=\\\"https://www.elastic.co/guide/en/elasticsearch/reference/current/fix-common-cluster-issues.html\\\">link to blog&nbsp;</a></strong></p>\\n\",\n    \"<pre><code>Note- Please make sure that the configuration changes don't cause the failure of a node restart in the next step\\n\",\n    \"</code></pre>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"80b61c80-85bb-48e9-9501-92151a9156c8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 5\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 5\"\n   },\n   \"source\": [\n    \"<h3><a id='4'>Start the node</a></h3>\\n\",\n    \"This action starts the node after performing changes on the node.\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"   If you are running Elasticsearch with systemd:\\n\",\n    \"\\n\",\n    \"    sudo systemctl start elasticsearch.service\\n\",\n    \"\\n\",\n    \"   If you are running Elasticsearch with SysV init:\\n\",\n    \"\\n\",\n    \"    sudo -i service elasticsearch start\\n\",\n    \"\\n\",\n    \">This action takes the following parameters: `host_for_ssh`, `command_start_elasticsearch`, `run_with_sudo`\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 32,\n   \"id\": \"b4b4e51b-bd21-4ba0-b9a3-ee94c9c00161\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    
\"action_uuid\": \"5279b2046bb2eb4a691ba748086f4af9e580a849faae557694bb12a8c2b7b379\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"SSH Execute Remote Command\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-11-08T11:21:04.025Z\"\n    },\n    \"id\": 85,\n    \"index\": 85,\n    \"inputData\": [\n     {\n      \"command\": {\n       \"constant\": false,\n       \"value\": \"cmd_start_elasticsearch\"\n      },\n      \"hosts\": {\n       \"constant\": false,\n       \"value\": \"host_for_ssh\"\n      },\n      \"sudo\": {\n       \"constant\": true,\n       \"value\": \"run_with_sudo\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"command\": {\n        \"description\": \"Command to be executed on the remote server.\",\n        \"title\": \"Command\",\n        \"type\": \"string\"\n       },\n       \"hosts\": {\n        \"description\": \"List of hosts to connect to. For eg. 
[\\\"host1\\\", \\\"host2\\\"].\",\n        \"items\": {\n         \"type\": \"string\"\n        },\n        \"title\": \"Hosts\",\n        \"type\": \"array\"\n       },\n       \"sudo\": {\n        \"default\": false,\n        \"description\": \"Run the command with sudo.\",\n        \"title\": \"Run with sudo\",\n        \"type\": \"boolean\"\n       }\n      },\n      \"required\": [\n       \"hosts\",\n       \"command\"\n      ],\n      \"title\": \"ssh_execute_remote_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SSH\",\n    \"name\": \"SSH Execute Remote Command\",\n    \"nouns\": [\n     \"ssh\",\n     \"command\"\n    ],\n    \"orderProperties\": [\n     \"hosts\",\n     \"command\",\n     \"sudo\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"cluster_health_status!='None' and len(host_for_ssh)!=0\",\n    \"tags\": [\n     \"ssh_execute_remote_command\"\n    ],\n    \"verbs\": [\n     \"execute\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Optional, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(\\\"Elasticsearch Service successfully STARTED\\\")\\n\",\n    \"    print(\\\"\\\\n\\\")\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def ssh_execute_remote_command(sshClient, hosts: List[str], command: str, sudo: bool = False) -> Dict:\\n\",\n    \"\\n\",\n    \"    client = 
sshClient(hosts)\\n\",\n    \"    runCommandOutput = client.run_command(command=command, sudo=sudo)\\n\",\n    \"    client.join()\\n\",\n    \"    res = {}\\n\",\n    \"\\n\",\n    \"    for host_output in runCommandOutput:\\n\",\n    \"        hostname = host_output.host\\n\",\n    \"        output = []\\n\",\n    \"        for line in host_output.stdout:\\n\",\n    \"            output.append(line)\\n\",\n    \"\\n\",\n    \"        o = \\\"\\\\n\\\".join(output)\\n\",\n    \"        res[hostname] = o\\n\",\n    \"\\n\",\n    \"    return res\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"command\\\": \\\"cmd_start_elasticsearch\\\",\\n\",\n    \"    \\\"hosts\\\": \\\"host_for_ssh\\\",\\n\",\n    \"    \\\"sudo\\\": \\\"run_with_sudo\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"cluster_health_status!='None' and len(host_for_ssh)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(ssh_execute_remote_command, lego_printer=ssh_execute_remote_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"571a08b1-e8d6-419f-a3be-ca825c00bba7\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 6\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 6\"\n   },\n   \"source\": [\n    \"<h3><a id='5'>Elasticsearch Reenable Shard Allocation</a></h3>\\n\",\n    \"This action to enables shard allocation and makes the node ready to use.\\n\",\n    \"\\n\",\n    \">This action takes the following parameters: `elasticsearch_host`, `port`, `api_key`\"\n   
]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b44a11a1-1b8d-4e40-85f1-e9905fb21a52\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"67e3c5bfd82c5e34634734f7f09df5a795fa16bceb3552870c55673bb3148b74\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Elasticsearch Enable Shard Allocation for any shards for any indices\",\n    \"id\": 78,\n    \"index\": 78,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"title\": \"elasticsearch_enable_shard_allocation\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_ELASTICSEARCH\",\n    \"name\": \"Elasticsearch Enable Shard Allocation\",\n    \"nouns\": [],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"cluster_health_status!='None'\",\n    \"tags\": [\n     \"elasticsearch_enable_shard_allocation\"\n    ],\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import subprocess\\n\",\n    \"import pprint\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import List, Dict\\n\",\n    \"from subprocess import PIPE, run\\n\",\n    \"import 
json\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def elasticsearch_enable_shard_allocation_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(\\\"Shard allocations enabled for all kinds of shards\\\")\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def elasticsearch_enable_shard_allocation(handle) -> Dict:\\n\",\n    \"    \\\"\\\"\\\"elasticsearch_enable_shard_allocation enables shard allocations for any shards for any indices.\\n\",\n    \"\\n\",\n    \"            :type handle: object\\n\",\n    \"            :param handle: Object returned from Task Validate\\n\",\n    \"\\n\",\n    \"            :rtype: Result Dict of result\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    es_dict = {\\\"transient\\\": {\\\"cluster.routing.allocation.enable\\\": \\\"all\\\"}}\\n\",\n    \"    output = handle.web_request(\\\"/_cluster/settings?pretty\\\",  # Path\\n\",\n    \"                                \\\"PUT\\\",                        # Method\\n\",\n    \"                                es_dict)                      # Data\\n\",\n    \"\\n\",\n    \"    return output\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"cluster_health_status!='None'\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(elasticsearch_enable_shard_allocation, lego_printer=elasticsearch_enable_shard_allocation_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"25e25d2a-4841-4ac3-811e-7c75c0029245\",\n   \"metadata\": {\n    \"jupyter\": {\n   
  \"source_hidden\": false\n    },\n    \"name\": \"Step 7\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 7\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Check-Cluster-Health\\\"><a id=\\\"6\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Check Cluster Health<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Check-Cluster-Health\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This action checks the status of an Elasticsearch cluster after restart. Ideally, the cluster should show <span style=\\\"color: green;\\\">Green</span> status after a successfull restart. These are the cluster statuses that you may encounter-</p>\\n\",\n    \"<ol>\\n\",\n    \"<li>Unassigned primary shards = <span style=\\\"color: red;\\\">Red</span> Status</li>\\n\",\n    \"<li>Unassigned replica shards = <span style=\\\"color: #ffbf00;\\\">Yellow</span> Status</li>\\n\",\n    \"<li>All shards assigned = <span style=\\\"color: green;\\\">Green</span> Status</li>\\n\",\n    \"</ol>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"id\": \"3a19e8d6-8163-4504-9395-f27bb28729c5\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"action_modified\": false,\n    \"action_uuid\": \"4590490856e040f305f080b411c392a054142f152696902a4724250aaa057b02\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get 
Elasticsearch Handle\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-17T08:25:35.019Z\"\n    },\n    \"id\": 81,\n    \"index\": 81,\n    \"inputschema\": [\n     {\n      \"properties\": {},\n      \"title\": \"elasticsearch_get_handle\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_ELASTICSEARCH\",\n    \"name\": \"Get Elasticsearch Handle\",\n    \"nouns\": [],\n    \"orderProperties\": [],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"handle\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"elasticsearch_get_handle\"\n    ],\n    \"trusted\": true,\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def elasticsearch_get_handle(handle):\\n\",\n    \"    \\\"\\\"\\\"elasticsearch_get_handle returns the elasticsearch client handle.\\n\",\n    \"\\n\",\n    \"       :rtype: elasticsearch client handle.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    return handle\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = 
Task(Workflow())\\n\",\n    \"task.configure(outputName=\\\"handle\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(elasticsearch_get_handle, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 15,\n   \"id\": \"a89d952a-40ea-43d4-ad1d-9456c339fee9\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-17T08:29:19.684Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Check Cluster Health\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Check Cluster Health\",\n    \"trusted\": true,\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from unskript.legos.elasticsearch.elasticsearch_check_health_status.elasticsearch_check_health_status import elasticsearch_check_health_status\\n\",\n    \"\\n\",\n    \"output = elasticsearch_check_health_status(handle=handle)\\n\",\n    \"cluster_health_status = ''\\n\",\n    \"for cluster in output:\\n\",\n    \"    if type(cluster)==list:\\n\",\n    \"        if len(cluster)!=0:\\n\",\n    \"            for x in cluster:\\n\",\n    \"                for status in x.values():\\n\",\n    \"                    cluster_health_status= status\\n\",\n    \"    else:\\n\",\n    \"        cluster_health_status = 'green'\\n\",\n    \"print(\\\"Cluster Status: \\\",cluster_health_status)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b20dd1a3-fb5f-4bb1-b5cd-2494d752b930\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 
id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we were able to perform rolling restart on a node in an Elasticsearch cluster using unSkript's Elasticsearch and SSH legos. This runbooks can be re triggered for mutiple clusters in a sequence. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Elasticsearch Rolling restart\",\n   \"parameters\": [\n    \"cmd_start_elasticsearch\",\n    \"cmd_stop_elasticsearch\",\n    \"host_for_ssh\",\n    \"run_with_sudo\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"cmd_start_elasticsearch\": {\n     \"default\": \"sudo systemctl start elasticsearch.service\",\n     \"description\": \"Command to start Elasticsearch service\",\n     \"title\": \"cmd_start_elasticsearch\",\n     \"type\": \"string\"\n    },\n    \"cmd_stop_elasticsearch\": {\n     \"default\": \"sudo systemctl stop elasticsearch.service\",\n     \"description\": \"Command to stop the Elasticsearch service\",\n     \"title\": \"cmd_stop_elasticsearch\",\n     \"type\": \"string\"\n    },\n    \"host_for_ssh\": {\n     \"default\": \"[]\",\n     \"description\": \"Host IP of elasticsearch server to SSH in List format. 
Eg: [123.45.67.89]\",\n     \"title\": \"host_for_ssh\",\n     \"type\": \"array\"\n    },\n    \"run_with_sudo\": {\n     \"default\": false,\n     \"description\": \"Run commands to start/stop elasticsearch with sudo\",\n     \"title\": \"run_with_sudo\",\n     \"type\": \"boolean\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"host_for_ssh\": \"[]\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "ElasticSearch/Elasticsearch_Rolling_Restart.json",
    "content": "{\n  \"name\": \"Elasticsearch Rolling restart\",\n  \"description\": \"This runbook can be used to perform rolling restart on ES\",\n  \"uuid\": \"7b308783a38a72461839e7bd1d13fbb4e8559d4b291a1454be39c40a2f026ce2\",\n  \"icon\": \"CONNECTOR_TYPE_ELASTICSEARCH\",\n  \"categories\": [ \"CATEGORY_TYPE_ES\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_ELASTICSEARCH\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "ElasticSearch/README.md",
    "content": "# ElasticSearch RunBooks\n* [Elasticsearch Rolling restart](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/Elasticsearch_Rolling_Restart.ipynb): This runbook can be used to perform rolling restart on ES\n\n# ElasticSearch Actions\n* [Elasticsearch Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_check_health_status/README.md): Elasticsearch Check Health Status\n* [Get large Elasticsearch Index size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_check_large_index_size/README.md): This action checks the sizes of all indices in the Elasticsearch cluster and compares them to a given threshold.\n* [Check Elasticsearch cluster disk size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_compare_cluster_disk_size_to_threshold/README.md): This action compares the disk usage percentage of the Elasticsearch cluster to a given threshold.\n* [Elasticsearch Delete Unassigned Shards](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_delete_unassigned_shards/README.md): Elasticsearch Delete Corrupted/Lost Shards\n* [Elasticsearch Disable Shard Allocation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_disable_shard_allocation/README.md): Elasticsearch Disable Shard Allocation for any indices\n* [Elasticsearch Enable Shard Allocation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_enable_shard_allocation/README.md): Elasticsearch Enable Shard Allocation for any shards for any indices\n* [Elasticsearch Cluster Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_cluster_statistics/README.md): Elasticsearch Cluster Statistics fetches 
total index size, disk size, and memory utilization and information about the current nodes and shards that form the cluster\n* [Get Elasticsearch Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_handle/README.md): Get Elasticsearch Handle\n* [Get Elasticsearch index level health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_index_health/README.md): This action checks the health of a given Elasticsearch index or all indices if no specific index is provided.\n* [Elasticsearch List Allocations](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_list_allocations/README.md): Elasticsearch List Allocations in a Cluster\n* [Elasticsearch List Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_list_nodes/README.md): Elasticsearch List Nodes in a Cluster\n* [Elasticsearch search](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_search_query/README.md): Elasticsearch Search\n"
  },
  {
    "path": "ElasticSearch/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/__init__.py",
    "content": "#\n# unSkript (c) 2022\n#"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_check_health_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Elasticsearch Cluster Health</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego is used to check the Elasticsearch cluster health status.\r\n\r\n\r\n## Lego Details\r\n\r\n    elasticsearch_check_health_status(handle: object, unassigned_shards: int = 20)\r\n\r\n        handle: Object of type unSkript ElasticSearch Connector\r\n        unassigned_shards: Threshold number of unassigned shards. Default - 20\r\n\r\n## Lego Input \r\nThis Lego takes the handle object that is returned from `task.validate(...)` and an optional `unassigned_shards` threshold.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_check_health_status/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_check_health_status/elasticsearch_check_health_status.json",
    "content": "{\r\n    \"action_title\": \"Elasticsearch Cluster Health\",\r\n    \"action_description\": \"Elasticsearch Check Health Status\",\r\n    \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\r\n    \"action_entry_function\": \"elasticsearch_check_health_status\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_is_check\":true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ES\"],\r\n    \"action_next_hop\":[\"7b308783a38a72461839e7bd1d13fbb4e8559d4b291a1454be39c40a2f026ce2\"]\r\n}\r\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_check_health_status/elasticsearch_check_health_status.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Tuple, Optional\n\n\nclass InputSchema(BaseModel):\n    unassigned_shards: Optional[int] = Field(\n        20,\n        description='Threshold number of unassigned shards. Default - 20',\n        title='Number of unassigned shards'\n    )\n\ndef elasticsearch_check_health_status_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef elasticsearch_check_health_status(handle, unassigned_shards:int = 20) -> Tuple:\n    \"\"\"elasticsearch_check_health_status checks the status of an Elasticsearch cluster .\n\n            :type handle: object\n            :param handle: Object returned from Task Validate\n\n            :rtype: Result Tuple of result\n    \"\"\"\n    output = handle.web_request(\"/_cluster/health?pretty\", \"GET\", None)\n    \n    # Early return if cluster status is green\n    if output['status'] == 'green':\n        return (True, None)\n    \n    cluster_health = {\n        \"cluster_name\": output['cluster_name'],\n        \"status\": output['status'],\n        \"unassigned_shards\": output['unassigned_shards']\n    }\n    \n    # Check for significant health issues\n    if output['unassigned_shards'] > unassigned_shards:\n        return (False, [cluster_health])  # Return immediately if unassigned shards exceed the threshold\n\n    # Additional checks for severe conditions\n    if output['status'] == 'red' or output['delayed_unassigned_shards'] > 0 or output['initializing_shards'] > 0 or output['relocating_shards'] > 0 or output['number_of_nodes'] != output['number_of_data_nodes']:\n        additional_details = {\n            \"delayed_unassigned_shards\": output['delayed_unassigned_shards'],\n            \"initializing_shards\": output['initializing_shards'],\n            \"relocating_shards\": output['relocating_shards'],\n            \"number_of_nodes\": 
output['number_of_nodes'],\n            \"number_of_data_nodes\": output['number_of_data_nodes']\n        }\n        cluster_health.update(additional_details)\n        return (False, [cluster_health])\n    \n    # If status is yellow but no additional critical issues, consider it healthy\n    return (True, None)"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_check_large_index_size/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get large Elasticsearch Index size</h1>\n\n## Description\nThis action checks the sizes of all indices in the Elasticsearch cluster and compares them to a given threshold.\n\n## Lego Details\n\telasticsearch_check_large_index_size(handle, threshold: float = 10485760)\n\t\thandle: Object of type unSkript ELASTICSEARCH Connector.\n\t\tthreshold: The threshold for index size in KB. Default - 10485760 (10 GB).\n\n\n## Lego Input\nThis Lego takes two inputs: `handle` and `threshold`.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_check_large_index_size/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_check_large_index_size/elasticsearch_check_large_index_size.json",
    "content": "{\n  \"action_title\": \"Get large Elasticsearch Index size\",\n  \"action_description\": \"This action checks the sizes of all indices in the Elasticsearch cluster and compares them to a given threshold.\",\n  \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\n  \"action_entry_function\": \"elasticsearch_check_large_index_size\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_check_large_index_size/elasticsearch_check_large_index_size.py",
    "content": "##  \n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel,Field\n\n\nclass InputSchema(BaseModel):\n    threshold: Optional[float] = Field(\n        10485760,  # 10GB in KB\n        description='Threshold for index size in KB.',\n        title='Threshold (in KB)'\n    )\n\n\n\ndef elasticsearch_check_large_index_size_printer(result):\n    success, alerts = result\n    if success:\n        print(\"Index sizes are within the threshold.\")\n        return\n    for alert in alerts:\n        print(f\"Alert! Index size of {alert['indexSizeKB']} KB for index {alert['index']} exceeds threshold of {alert['threshold']} KB.\")\n\n\ndef elasticsearch_check_large_index_size(handle, threshold: float = 10485760) -> Tuple:\n    \"\"\"\n    elasticsearch_check_large_index_size checks the sizes of all indices in the\n    Elasticsearch cluster and compares them to a given threshold.\n\n    :type handle: object\n    :param handle: Object returned from Task Validate\n\n    :type threshold: float\n    :param threshold: The threshold for index size in KB.\n\n    :return: Status, alerts (if any index size exceeds the threshold).\n    \"\"\"\n    alerts = []\n\n    try:\n        # Request the list of all indices\n        indices_output = handle.web_request(\"/_cat/indices?h=index\", \"GET\", None)\n        indices_output = ''.join(indices_output).split('\\n')\n        indices_output = [index for index in indices_output if index and not index.startswith('.')]\n\n        for current_index in indices_output:\n            # Request the stats for the current index\n            stats_output = handle.web_request(f\"/{current_index}/_stats\", \"GET\", None)\n            index_size_bytes = stats_output['_all']['total']['store']['size_in_bytes']\n            index_size_KB = index_size_bytes / 1024\n\n            # Check if the index size exceeds the threshold\n            if index_size_KB > 
threshold:\n                alerts.append({\n                    'index': current_index,\n                    'indexSizeKB': index_size_KB,\n                    'threshold': threshold\n                })\n\n    except Exception as e:\n        raise e\n\n    if len(alerts) != 0:\n        return (False, alerts)\n    return (True, None)\n\n\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_compare_cluster_disk_size_to_threshold/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Check Elasticsearch cluster disk size</h1>\n\n## Description\nThis action compares the disk usage percentage of the Elasticsearch cluster to a given threshold.\n\n## Lego Details\n\telasticsearch_compare_cluster_disk_size_to_threshold(handle, threshold: float=80.0)\n\t\thandle: Object of type unSkript ELASTICSEARCH Connector.\n\t\tthreshold: The threshold for disk usage percentage.\n\n\n## Lego Input\nThis Lego takes two inputs: `handle` and `threshold`.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_compare_cluster_disk_size_to_threshold/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_compare_cluster_disk_size_to_threshold/elasticsearch_compare_cluster_disk_size_to_threshold.json",
    "content": "{\n  \"action_title\": \"Check Elasticsearch cluster disk size\",\n  \"action_description\": \"This action compares the disk usage percentage of the Elasticsearch cluster to a given threshold.\",\n  \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\n  \"action_entry_function\": \"elasticsearch_compare_cluster_disk_size_to_threshold\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_compare_cluster_disk_size_to_threshold/elasticsearch_compare_cluster_disk_size_to_threshold.py",
    "content": "##  \n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    threshold: Optional[float] = Field(\n        80, description='Threshold for disk usage percentage.', title='Threshold (in %)'\n    )\n\n\ndef elasticsearch_compare_cluster_disk_size_to_threshold_printer(output):\n    success, data = output\n    if success:\n        print(\"Cluster disk usage is within the threshold.\")\n    else:\n        for item in data:\n            print(f\"Alert! Cluster disk usage of {item['usage_disk_percentage']}% exceeds the threshold of {item['threshold']}%.\")\n\ndef elasticsearch_compare_cluster_disk_size_to_threshold(handle, threshold: float=80.0) -> Tuple:\n    \"\"\"\n    elasticsearch_compare_cluster_disk_size_to_threshold compares the disk usage percentage of the Elasticsearch cluster to a given threshold.\n\n    :type handle: object\n    :param handle: Object returned from Task Validate\n\n    :type threshold: float\n    :param threshold: The threshold for disk usage percentage.\n\n    :return: Status, result (if any exceeding the threshold).\n    \"\"\"\n\n    # Request the allocation stats\n    allocation_output = handle.web_request(\"/_cat/allocation?v\", \"GET\", None)\n\n    # Split the lines and skip the header\n    lines = allocation_output.splitlines()[1:]\n\n    # Calculate the max disk percentage from the lines, considering only assigned nodes\n    max_disk_percent = 0  # Initialize to 0 or an appropriately low number\n    for line in lines:\n        if \"UNASSIGNED\" not in line:\n            disk_usage = float(line.split()[5])\n            max_disk_percent = max(max_disk_percent, disk_usage)\n            if max_disk_percent > threshold:\n                result = [{\"usage_disk_percentage\": max_disk_percent, \"threshold\": threshold}]\n                return (False, result)      \n    return (True, None)\n\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_delete_unassigned_shards/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Elasticsearch Delete Unassigned Shards</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego deletes Elasticsearch unassigned (corrupted/lost) shards.\r\n\r\n\r\n## Lego Details\r\n\r\n    elasticsearch_delete_unassigned_shards(handle: object)\r\n\r\n        handle: Object of type unSkript ElasticSearch Connector\r\n        \r\n## Lego Input\r\nThis Lego takes only the handle object that is returned from `task.validate(...)`\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_delete_unassigned_shards/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_delete_unassigned_shards/elasticsearch_delete_unassigned_shards.json",
    "content": "{\r\n    \"action_title\": \"Elasticsearch Delete Unassigned Shards\",\r\n    \"action_description\": \"Elasticsearch Delete Corrupted/Lost Shards\",\r\n    \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\r\n    \"action_entry_function\": \"elasticsearch_delete_unassigned_shards\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ES\"]\r\n}\r\n    "
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_delete_unassigned_shards/elasticsearch_delete_unassigned_shards.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef elasticsearch_delete_unassigned_shards_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef elasticsearch_delete_unassigned_shards(handle) -> str:\n    \"\"\"elasticsearch_delete_unassigned_shards deletes indices that contain corrupted/lost (unassigned) shards.\n\n            :type handle: object\n            :param handle: Object returned from Task Validate\n\n            :rtype: Result String of result\n    \"\"\"\n    output = handle.web_request(\"/_cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state&pretty\",  # Path\n                            \"GET\",                      # Method\n                            None)                       # Data\n    # Collect the de-duplicated names of indices that have unassigned shards\n    list_of_shards = []\n    for line in str(output).split('\\n'):\n        if \"UNASSIGNED\" in line:\n            index_name = line.split(\" \")[0]\n            if index_name not in list_of_shards:\n                list_of_shards.append(index_name)\n\n    if len(list_of_shards) == 0:\n        return \"No Unassigned shards found\"\n\n    # A comma-separated list of index names deletes all affected indices in one request\n    handle.web_request(\"/\" + \",\".join(list_of_shards),  # Path\n                       \"DELETE\",  # Method\n                       None)      # Data\n    return \"Successfully deleted unassigned shards\"\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_disable_shard_allocation/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Elasticsearch Disable Shard Allocation</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego disables Elasticsearch shard allocation for all indices.\r\n\r\n\r\n## Lego Details\r\n\r\n    elasticsearch_disable_shard_allocation(handle: object)\r\n\r\n        handle: Object of type unSkript ElasticSearch Connector\r\n\r\n## Lego Input\r\nThis Lego takes only the handle object that is returned from `task.validate(...)`\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_disable_shard_allocation/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_disable_shard_allocation/elasticsearch_disable_shard_allocation.json",
    "content": "{\r\n    \"action_title\": \"Elasticsearch Disable Shard Allocation\",\r\n    \"action_description\": \"Elasticsearch Disable Shard Allocation for any indices\",\r\n    \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\r\n    \"action_entry_function\": \"elasticsearch_disable_shard_allocation\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ES\"]\r\n}\r\n    "
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_disable_shard_allocation/elasticsearch_disable_shard_allocation.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Dict\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef elasticsearch_disable_shard_allocation_printer(output):\n    if output is None:\n        return\n    print(\"Shard allocations disabled for all kinds of shards\")\n    print(output)\n\n\ndef elasticsearch_disable_shard_allocation(handle) -> Dict:\n    \"\"\"elasticsearch_disable_shard_allocation disables shard allocation for all indices.\n\n            :type handle: object\n            :param handle: Object returned from Task Validate\n\n            :rtype: Result Dict of result\n    \"\"\"\n\n    es_dict = {\"transient\": {\"cluster.routing.allocation.enable\": \"none\"}}\n    output = handle.web_request(\"/_cluster/settings?pretty\",  # Path\n                                \"PUT\",                        # Method\n                                es_dict)                      # Data\n\n    return output\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_enable_shard_allocation/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Elasticsearch Enable Shard Allocation</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego enables Elasticsearch shard allocation for all shards in all indices.\r\n\r\n\r\n## Lego Details\r\n\r\n    elasticsearch_enable_shard_allocation(handle: object)\r\n\r\n        handle: Object of type unSkript ElasticSearch Connector\r\n        \r\n\r\n## Lego Input\r\nThis Lego takes only the handle object that is returned from `task.validate(...)`\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_enable_shard_allocation/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_enable_shard_allocation/elasticsearch_enable_shard_allocation.json",
    "content": "{\r\n    \"action_title\": \"Elasticsearch Enable Shard Allocation\",\r\n    \"action_description\": \"Elasticsearch Enable Shard Allocation for any shards for any indices\",\r\n    \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\r\n    \"action_entry_function\": \"elasticsearch_enable_shard_allocation\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ES\"]\r\n}\r\n    "
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_enable_shard_allocation/elasticsearch_enable_shard_allocation.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport subprocess\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom typing import List, Dict\nfrom subprocess import PIPE, run\nimport json\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef elasticsearch_enable_shard_allocation_printer(output):\n    if output is None:\n        return\n    print(\"Shard allocations enabled for all kinds of shards\")\n    print(output)\n\n\ndef elasticsearch_enable_shard_allocation(handle) -> Dict:\n    \"\"\"elasticsearch_enable_shard_allocation enables shard allocations for any shards for any indices.\n\n            :type handle: object\n            :param handle: Object returned from Task Validate\n\n            :rtype: Result Dict of result\n    \"\"\"\n    es_dict = {\"transient\": {\"cluster.routing.allocation.enable\": \"all\"}}\n    output = handle.web_request(\"/_cluster/settings?pretty\",  # Path\n                                \"PUT\",                        # Method\n                                es_dict)                      # Data\n\n    return output\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_cluster_statistics/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Get Elasticsearch cluster statistics </h1>\n\n## Description\nThis Lego checks fetches fetches total index size, disk size, and memory utilization and information about the current nodes and shards that form the cluster.\n\n\n## Lego Details\n\n    elasticsearch_get_cluster_statistics(handle: object)\n\n        handle: Object of type unSkript Elasticsearch Connector\n\n\n## Lego Input\nThis Lego takes only the handle object that is returned from `task.validate(...)`\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_cluster_statistics/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_cluster_statistics/elasticsearch_get_cluster_statistics.json",
    "content": "{\n\"action_title\": \"Elasticsearch Cluster Statistics\",\n\"action_description\": \"Elasticsearch Cluster Statistics fetches total index size, disk size, and memory utilization and information about the current nodes and shards that form the cluster\",\n\"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\n\"action_entry_function\": \"elasticsearch_get_cluster_statistics\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\":[\"CATEGORY_TYPE_INFORMATION\" , \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ES\"]\n}\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_cluster_statistics/elasticsearch_get_cluster_statistics.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import List, Dict\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom datetime import datetime\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef elasticsearch_get_cluster_statistics_printer(output):\n    if output is None:\n        return\n    timestamp = datetime.fromtimestamp(output.get('timestamp')/1000)  # converting milliseconds to seconds\n    print(\"\\nCluster Name: \", output.get('cluster_name'))\n    print(\"Timestamp: \", timestamp)\n    print(\"Status: \", output.get('status'))\n\n    # Node Statistics\n    print(\"\\nNode Statistics\")\n    nodes = output.get('_nodes')\n    if nodes is not None:\n        df = pd.DataFrame.from_records([nodes])\n        print(tabulate(df, headers='keys', tablefmt='psql', showindex=False))\n    else:\n        print(\"Nodes are None\")\n\n    # Document Statistics\n    print(\"\\nDocument Statistics\")\n    df = pd.DataFrame.from_records([output.get('indices').get('docs')])\n    df.columns = [f'{i} (count)' for i in df.columns]\n    print(tabulate(df, headers='keys', tablefmt='psql', showindex=False))\n\n    # Shard Statistics\n    print(\"\\nShard Statistics\")\n    df = pd.DataFrame.from_records([output.get('indices').get('shards').get('index')])\n    df.columns = [f'{i} (shard count)' for i in df.columns]\n    print(tabulate(df, headers='keys', tablefmt='psql', showindex=False))\n\n    # Additional Metrics\n    print(\"\\nAdditional Metrics\")\n    additional_metrics = {\n        'total_index_size (MB)': output.get('total_index_size'),\n        'total_disk_size (MB)': output.get('total_disk_size'),\n        'total_memory_utilization (%)': output.get('total_memory_utilization'),\n    }\n    df = pd.DataFrame.from_records([additional_metrics])\n    print(tabulate(df, headers='keys', tablefmt='psql', showindex=False))\n\n\ndef 
elasticsearch_get_cluster_statistics(handle) -> Dict:\n    \"\"\"elasticsearch_get_cluster_statistics fetches total index size, disk size, and memory utilization \n    and information about the current nodes and shards that form the cluster\n\n            :type handle: object\n            :param handle: Object returned from Task Validate\n\n            :rtype: Result Dict of result\n    \"\"\"\n    try:\n        # Fetching cluster statistics\n        output = handle.web_request(\"/_cluster/stats?human&pretty\", \"GET\", None)\n\n        # Fetching indices statistics\n        indices_stats = handle.web_request(\"/_cat/indices?format=json\", \"GET\", None)\n\n        # Fetching nodes statistics\n        nodes_stats = handle.web_request(\"/_nodes/stats?human&pretty\", \"GET\", None)\n\n        total_index_size = 0\n        for index in indices_stats:\n            size = index['store.size']\n            if 'kb' in size:\n                total_index_size += float(size.replace('kb', '')) / 1024\n            elif 'mb' in size:\n                total_index_size += float(size.replace('mb', ''))\n            elif 'gb' in size:\n                total_index_size += float(size.replace('gb', '')) * 1024\n\n        total_disk_size = sum(float(node['fs']['total']['total_in_bytes']) for node in nodes_stats['nodes'].values())\n\n        total_disk_size /= (1024 * 1024)  # convert from bytes to MB\n\n        total_memory = sum(float(node['jvm']['mem']['heap_used_percent']) for node in nodes_stats['nodes'].values())\n\n        # Adding additional metrics to the output\n        output['total_index_size'] = total_index_size\n        output['total_disk_size'] = total_disk_size\n        output['total_memory_utilization'] = total_memory\n\n    except Exception as e:\n        raise e\n    return output\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Elasticsearch Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get Elasticsearch Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    elasticsearch_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript ElasticSearch Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_handle/elasticsearch_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Elasticsearch Handle\",\r\n    \"action_description\": \"Get Elasticsearch Handle\",\r\n    \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\r\n    \"action_entry_function\": \"elasticsearch_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ES\"]\r\n}\r\n    "
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_handle/elasticsearch_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef elasticsearch_get_handle(handle):\n    \"\"\"elasticsearch_get_handle returns the elasticsearch client handle.\n\n       :rtype: elasticsearch client handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_index_health/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get Elasticsearch index health</h1>\n\n## Description\nThis action checks the health of a given Elasticsearch index or all indices if no specific index is provided.\n\n## Lego Details\n\telasticsearch_get_index_health(handle, index_name=\"\")\n\t\thandle: Object of type unSkript ELASTICSEARCH Connector.\n\t\tindex_name: Name of the index for which the health is checked. If no index is provided, the health of all indices is checked.\n\n\n## Lego Input\nThis Lego takes inputs handle,\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_index_health/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_index_health/elasticsearch_get_index_health.json",
    "content": "{\n  \"action_title\": \"Get Elasticsearch index level health\",\n  \"action_description\": \"This action checks the health of a given Elasticsearch index or all indices if no specific index is provided.\",\n  \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\n  \"action_entry_function\": \"elasticsearch_get_index_health\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ES\"],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_get_index_health/elasticsearch_get_index_health.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Tuple, Optional\n\n\nclass InputSchema(BaseModel):\n    index_name: Optional[str] = Field(\n        '',\n        description='Name of the index for which the health is checked. If no index is provided, the health of all indices is checked.',\n        title='Index name',\n    )\n\n\ndef elasticsearch_get_index_health_printer(result):\n    success, outputs = result\n    if success or outputs is None or len(outputs) == 0:\n        print(\"No indices found with 'yellow' or 'red' health.\")\n        return\n    for output in outputs:\n        print(f\"\\nProcessing index: {output['index']}\")\n        print(\"--------------------------------------------------\")\n        print(f\"Health: {output['health']}\")\n        print(f\"Status: {output['status']}\")\n        print(f\"Documents count: {output['docs.count']}\")\n        print(f\"Documents deleted: {output['docs.deleted']}\")\n        print(f\"Store size: {output['store.size']}\")\n        print(f\"Primary shards: {output['pri']}\")\n        print(f\"Replicas: {output['rep']}\")\n        print(\"\\nKey Settings:\")\n        print(f\"  number_of_shards: {output['settings'].get('number_of_shards')}\")\n        print(f\"  number_of_replicas: {output['settings'].get('number_of_replicas')}\")\n        print(\"--------------------------------------------------\")\n\n\n\n\ndef elasticsearch_get_index_health(handle, index_name=\"\") -> Tuple:\n    \"\"\"\n    elasticsearch_get_index_health checks the health of a given Elasticsearch index or all indices if no specific index is provided.\n\n    :type handle: object\n    :param handle: Object returned from Task Validate\n\n    :type index_name: str\n    :param index_name: Name of the index for which the health is checked. 
If no index is provided, the health of all indices is checked.\n\n    :rtype: list\n    :return: A list of dictionaries where each dictionary contains stats about each index\n    \"\"\"\n    try:\n        health_url = f\"/_cat/indices/{index_name}?v&h=index,health&format=json\" if index_name else \"/_cat/indices?v&h=index,health&format=json\"\n        health_response = handle.web_request(health_url, \"GET\", None)\n        if not health_response:\n            print(f\"No indices found or error retrieving indices: {health_response.get('error', 'No response') if health_response else 'No data'}\")\n            return (True, None)\n\n        # Filter indices that are not 'green'\n        problematic_indices = [\n            {\"index\": idx['index'], \"health\": idx['health']}\n            for idx in health_response if idx['health'] != 'green'\n        ]\n\n        if not problematic_indices:\n            print(\"All indices are in good health.\")\n            return (True, None)\n\n    except Exception as e:\n        print(f\"Error processing index health: {str(e)}\")\n        return (False, [])\n\n    return (False, problematic_indices)\n\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_list_allocations/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Elasticsearch List Allocations</h1>\n\n## Description\nThis Lego lists allocations in an Elasticsearch cluster.\n\n\n## Lego Details\n\n    elasticsearch_list_allocations(handle: object)\n\n        handle: Object of type unSkript Elasticsearch Connector\n        \n\n## Lego Input\nThis Lego takes only the handle object that is returned from `task.validate(...)`\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_list_allocations/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_list_allocations/elasticsearch_list_allocations.json",
    "content": "{\r\n    \"action_title\": \"Elasticsearch List Allocations\",\r\n    \"action_description\": \"Elasticsearch List Allocations in a Cluster\",\r\n    \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\r\n    \"action_entry_function\": \"elasticsearch_list_allocations\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ES\"]\r\n}\r\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_list_allocations/elasticsearch_list_allocations.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport subprocess\nfrom subprocess import PIPE\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef elasticsearch_list_allocations_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef elasticsearch_list_allocations(handle) -> str:\n    \"\"\"elasticsearch_list_allocations lists the allocations of an Elasticsearch cluster .\n\n            :type handle: object\n            :param handle: Object returned from Task Validate\n\n            :rtype: Result String of result\n    \"\"\"\n\n    output = handle.web_request(\"/_cat/allocation?v=true&pretty\",  # Path\n                                \"GET\",                        # Method\n                                None)                         # Data\n\n    return output"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_list_nodes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Elasticsearch List Nodes</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego List Elasticsearch Nodes in a Cluster.\r\n\r\n\r\n## Lego Details\r\n\r\n    elasticsearch_list_nodes(handle: object)\r\n\r\n        handle: Object of type unSkript ElasticSearch Connector\r\n\r\n## Lego Input\r\nThis Lego takes only the handle object that is returned from `task.validate(...)`\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_list_nodes/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_list_nodes/elasticsearch_list_nodes.json",
    "content": "{\r\n    \"action_title\": \"Elasticsearch List Nodes\",\r\n    \"action_description\": \"Elasticsearch List Nodes in a Cluster\",\r\n    \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\r\n    \"action_entry_function\": \"elasticsearch_list_nodes\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ES\"]\r\n}\r\n"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_list_nodes/elasticsearch_list_nodes.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport subprocess\nfrom subprocess import PIPE\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef elasticsearch_list_nodes_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef elasticsearch_list_nodes(handle) -> str:\n    \"\"\"elasticsearch_list_nodes lists the nodes of an Elasticsearch cluster .\n\n            :type handle: object\n            :param handle: Object returned from Task Validate\n\n            :rtype: Result String of result\n        \"\"\"\n\n    output = handle.web_request(\"/_cat/nodes?v=true&pretty\",  # Path\n                                \"GET\",                        # Method\n                                None)                         # Data\n\n    return output"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_search_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Elasticsearch search</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego used to Elasticsearch Search.\r\n\r\n\r\n## Lego Details\r\n\r\n    elasticsearch_search_query(handle: object, query: str, index: str, size: int, \r\n                               sort: List, fields: List)\r\n\r\n        handle: Object of type unSkript ElasticSearch Connector\r\n        query: Query String\r\n        index: Index, Optional variable for the elasticsearch query\r\n        size: Size, Optional variable Size\r\n        sort: Sort, Optional List\r\n        fields: Fields, Optional List\r\n\r\n## Lego Input\r\nThis Lego take six inputs handle, query, index, size, sort and fields.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_search_query/__init__.py",
    "content": ""
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_search_query/elasticsearch_search_query.json",
    "content": "{\r\n    \"action_title\": \"Elasticsearch search\",\r\n    \"action_description\": \"Elasticsearch Search\",\r\n    \"action_type\": \"LEGO_TYPE_ELASTICSEARCH\",\r\n    \"action_entry_function\": \"elasticsearch_search_query\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ES\"]\r\n}\r\n    "
  },
  {
    "path": "ElasticSearch/legos/elasticsearch_search_query/elasticsearch_search_query.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport json\nfrom pydantic import BaseModel, Field\nfrom typing import List, Dict\n\n\nclass InputSchema(BaseModel):\n    query: str = Field(\n        title='Query',\n        description='Query string in compact Lucene query syntax. For eg: foo:bar'\n    )\n    index: str = Field(\n        '',\n        title='Index',\n        description='A comma-separated list of index names to search; use _all or empty string to perform the operation on all indices.'\n    )\n    size: int = Field(\n        '100',\n        title='Number of hits to return.',\n        description='The number of hits to return.'\n    )\n    sort: list = Field(\n        None,\n        title='List of fields to sort on.',\n        description='Comma separated field names. For eg. [{\"order_date\":\"desc\"}, \"order_id\"]',\n    )\n    fields: List[str] = Field(\n        None,\n        title='List of fields to return.',\n        description='Comma separated list of fields to return. For eg. 
[\"customer_name\", \"order_id\"]'\n    )\n\ndef elasticsearch_search_query_printer(output):\n        for num,doc in enumerate(output):\n            print(f'DOC ID: {doc[\"_id\"]}')\n            print(json.dumps(doc[\"_source\"]))\n    \n\ndef elasticsearch_search_query(handle, \n                               query: str, \n                               index: str = '', \n                               size: int = 100, \n                               sort: List = None,\n                               fields: List = None) -> List:\n    \"\"\"elasticsearch_search Does an elasticsearch search on the provided query.\n\n        :type handle: object\n        :param handle: Object returned from Task Validate\n\n        :type query: str\n        :param query: Query String\n\n        :type index: str\n        :param index: Index, Optional variable for the elasticsearch query\n\n        :type size: int\n        :param size: Size, Optional variable Size \n\n        :type sort: List\n        :param sort: Sort, Optional List\n\n        :type fields: List\n        :param fields: Fields, Optional List\n\n        :rtype: Result Dictionary of result\n    \"\"\"\n    # Input param validation.\n\n    result = {}\n    data = handle.search(query={\"query_string\": {\"query\": query}}, index=index, size=size, sort=sort, _source=fields)\n    print(f\"Got {data['hits']['total']['value']} Hits: \")\n    result = data['hits']['hits']\n\n    return result\n"
  },
  {
    "path": "GCP/README.md",
    "content": "\n# GCP Actions\n* [Add lifecycle policy to GCP storage bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_lifecycle_policy_to_bucket/README.md): The action adds a lifecycle policy to a Google Cloud Platform (GCP) storage bucket.\n* [GCP Add Member to IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_member_to_iam_role/README.md): Adding member to the IAM role which already available\n* [GCP Add Role to Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_role_to_service_account/README.md): Adding role and member to the service account\n* [Create GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_bucket/README.md): Create a new GCP bucket in the given location\n* [Create a GCP disk snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_disk_snapshot/README.md): Create a GCP disk snapshot.\n* [Create GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_filestore_instance/README.md): Create a new GCP Filestore Instance in the given location\n* [Create GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_gke_cluster/README.md): Create GKE Cluster\n* [GCP Create Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_service_account/README.md): GCP Create Service Account\n* [Delete GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_bucket/README.md): Delete a GCP bucket\n* [Delete GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_filestore_instance/README.md): Delete a GCP Filestore Instance in the given location\n* [Delete an Object from GCP 
Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_object_from_bucket/README.md): Delete an Object/Blob from a GCP Bucket\n* [GCP Delete Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_service_account/README.md): GCP Delete Service Account\n* [GCP Describe a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_describe_gke_cluster/README.md): GCP Describe a GKE cluster\n* [Fetch Objects from GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_fetch_objects_from_bucket/README.md): List all Objects in a GCP bucket\n* [Get GCP storage buckets without lifecycle policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_buckets_without_lifecycle_policies/README.md): The action retrieves a list of Google Cloud Platform (GCP) storage buckets that do not have any lifecycle policies applied.\n* [Get details of GCP forwarding rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_forwarding_rules_details/README.md): Get details of forwarding rules associated with a backend service.\n* [Get GCP Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_handle/README.md): Get GCP Handle\n* [Get List of GCP compute instance without label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_instances_without_label/README.md): Get List of GCP compute instance without label\n* [Get unused GCP backend services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_unused_backend_services/README.md): Get unused backend service for an application load balancer that has no instances in it's target group.\n\n* [List all GCP 
Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_buckets/README.md): List all GCP buckets\n* [Get GCP compute instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances/README.md): Get GCP compute instances\n* [Get List of GCP compute instance by label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_label/README.md): Get List of GCP compute instance by label\n* [Get list  compute instance by VPC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_vpc/README.md): Get list  compute instance by VPC\n* [GCP List GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_gke_cluster/README.md): GCP List GKE Cluster\n* [GCP List Nodes in GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_nodes_in_gke_cluster/README.md): GCP List Nodes of GKE Cluster\n* [List all Public GCP Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_public_buckets/README.md): List all publicly available GCP buckets\n* [List GCP Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_secrets/README.md): List of your GCP Secrets\n* [GCP List Service Accounts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_service_accounts/README.md): GCP List Service Accounts\n* [List all GCP VMs and if Publicly Accessible](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_vms_access/README.md): Lists all GCP buckets, and identifies those tha are public.\n* [GCP Remove Member from IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_member_from_iam_role/README.md): Remove member from the chosen IAM 
role.\n* [GCP Remove Role from Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_role_from_service_account/README.md): Remove role and member from the service account\n* [Remove role from user](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_user_role/README.md): GCP lego for removing a role from a user (default: 'viewer')\n* [GCP Resize a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_resize_gke_cluster/README.md): GCP Resize a GKE cluster by modifying nodes\n* [GCP Restart compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restart_compute_instances/README.md): GCP Restart compute instance\n* [Restore GCP disk from a snapshot ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restore_disk_from_snapshot/README.md): Restore a GCP disk from a compute instance snapshot.\n* [Save CSV to Google Sheets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_save_csv_to_google_sheets_v1/README.md): Saves your CSV (see notes) into a prepared Google Sheet.\n* [GCP Stop compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_stop_compute_instances/README.md): GCP Stop compute instance\n* [Upload an Object to GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_upload_file_to_bucket/README.md): Upload an Object/Blob in a GCP bucket\n"
  },
  {
    "path": "GCP/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_add_lifecycle_policy_to_bucket/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Add lifecycle policy to GCP storage bucket</h1>\n\n## Description\nThe action adds a lifecycle policy to a Google Cloud Platform (GCP) storage bucket.\n\n## Lego Details\n\tgcp_add_lifecycle_policy_to_bucket(handle, bucket_name:str, age:int)\n\t\thandle: Object of type unSkript GCP Connector.\n\t\tage: Age (in days) after which objects in the bucket are deleted.\n\t\tbucket_name: GCP storage bucket name.\n\n\n## Lego Input\nThis Lego takes inputs handle, age and bucket_name.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_add_lifecycle_policy_to_bucket/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_add_lifecycle_policy_to_bucket/gcp_add_lifecycle_policy_to_bucket.json",
    "content": "{\n  \"action_title\": \"Add lifecycle policy to GCP storage bucket\",\n  \"action_description\": \"The action adds a lifecycle policy to a Google Cloud Platform (GCP) storage bucket.\",\n  \"action_type\": \"LEGO_TYPE_GCP\",\n  \"action_entry_function\": \"gcp_add_lifecycle_policy_to_bucket\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" , \"CATEGORY_TYPE_GCP\", \"CATEGORY_TYPE_GCP_STORAGE\"]\n}"
  },
  {
    "path": "GCP/legos/gcp_add_lifecycle_policy_to_bucket/gcp_add_lifecycle_policy_to_bucket.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom google.cloud import storage\n\n\nclass InputSchema(BaseModel):\n    age: int = Field(\n        default=3,\n        description='Age (in days) of bucket to add to lifecycle policy.',\n        title='Age (in days)',\n    )\n    bucket_name: str = Field(\n        description='GCP storage bucket name.', \n        title='Bucket Name'\n    )\n\n\ndef gcp_add_lifecycle_policy_to_bucket_printer(output):\n    if output is None:\n        return\n    print(output)\n\ndef gcp_add_lifecycle_policy_to_bucket(handle, bucket_name:str, age:int) -> str:\n    \"\"\"gcp_add_lifecycle_policy_to_bucket Returns the string of response of adding a lifecycle policy to a storage bucket\n\n    :type handle: object\n    :param handle: Object returned from Task Validate\n\n    :type age: int\n    :param age: Age (in days) of bucket to add to lifecycle policy.\n\n    :type bucket_name: string\n    :param bucket_name: GCP storage bucket name.\n\n    :rtype: Response of adding a lifecycle policy to a storage bucket\n    \"\"\"\n    storageClient = storage.Client(credentials= handle)\n\n    bucket = storageClient.get_bucket(bucket_name)\n    try:\n        bucket.add_lifecycle_delete_rule(age=age)\n    except Exception as e:\n        raise e\n    bucket.patch()\n    return f\"Added lifecycle policy to {bucket.name} which will delete object after {age} days of creation.\"\n\n\n"
  },
  {
    "path": "GCP/legos/gcp_add_member_to_iam_role/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>GCP Add Member to IAM Role</h1>\n\n## Description\nThis Lego adds a member to an IAM role which is already available in GCP.\n\n## Lego Details\n\n    gcp_add_member_to_iam_role(handle: object, project_id: str, role: str, member_email:str, version:int)\n\n        handle: Object of type unSkript GCP Connector\n        project_id: Name of the project\n        role: Permission name to assign to the member e.g iam.serviceAccountUser\n        member_email: Member email which has GCP access e.g test@company.com\n        version: Requested Policy Version\n\n## Lego Input\nproject_id: Name of the project. eg- \"unskript-test2\"\nrole: Permission name to assign to the member e.g iam.serviceAccountUser\nmember_email: Member email which has GCP access e.g test@company.com\nversion: Requested Policy Version\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_add_member_to_iam_role/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_add_member_to_iam_role/gcp_add_member_to_iam_role.json",
    "content": "{\n    \"action_title\": \"GCP Add Member to IAM Role\",\n    \"action_description\": \"Adding member to the IAM role which already available\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_add_member_to_iam_role\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_IAM\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_add_member_to_iam_role/gcp_add_member_to_iam_role.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nimport pprint\nfrom typing import List,Any, Dict\nfrom googleapiclient import discovery\n\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"Project ID\",\n        description = \"Name of the project e.g unskript-dev\"\n    )\n    role: str = Field(\n        title = \"Role Name\",\n        description = \"Permission name assign to member e.g iam.serviceAccountUser\"\n    )\n    member_email: str = Field(\n        title = \"Member Email\",\n        description = \"Member email which has GCP access e.g test@company.com\"\n    )\n    version: int = Field(\n        title = \"Requested Policy Version\",\n        description = \"Requested Policy Version\"\n    )\n\ndef gcp_add_member_to_iam_role_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef gcp_add_member_to_iam_role(handle, project_id: str, role: str, member_email:str, version:int = 1) -> Dict:\n    \"\"\"gcp_add_member_to_iam_role Returns a Dict of policy details\n\n        :type project_id: string\n        :param project_id: Name of the project\n\n        :type role: string\n        :param role: Permission name assign to member e.g iam.serviceAccountUser\n\n        :type member_email: string\n        :param member_email: Member email which has GCP access e.g test@company.com\n\n        :type version: int\n        :param version: Requested Policy Version\n\n        :rtype: Dict of policy details\n    \"\"\"\n    service = discovery.build(\n        \"cloudresourcemanager\", \"v1\", credentials=handle)\n\n    result = {}\n    try:\n        get_policy = (\n            service.projects().getIamPolicy(\n                    resource=project_id,\n                    body={\"options\": {\"requestedPolicyVersion\": version}}).execute())\n\n        member = \"user:\" + member_email\n        if \"gserviceaccount\" in 
member_email:\n            member = \"serviceAccount:\" + member_email\n\n        binding = None\n        get_role = \"roles/\" + role\n        for b in get_policy[\"bindings\"]:\n            if b[\"role\"] == get_role:\n                binding = b\n                break\n        if binding is not None:\n            binding[\"members\"].append(member)\n        else:\n            binding = {\"role\": get_role, \"members\": [member]}\n            get_policy[\"bindings\"].append(binding)\n\n        add_member = (\n            service.projects()\n            .setIamPolicy(resource=project_id, body={\"policy\": get_policy}).execute())\n\n        result = add_member\n\n    except Exception as error:\n        raise error\n\n    return result"
  },
  {
    "path": "GCP/legos/gcp_add_role_to_service_account/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>GCP Add Role to Service Account</h1>\n\n## Description\nThis Lego adds a role and a member to a service account in GCP.\n\n## Lego Details\n\n    gcp_add_role_to_service_account(handle: object, project_id: str, role: str, member_email:str, sa_id:str)\n\n        handle: Object of type unSkript GCP Connector\n        project_id: Name of the project\n        role: Role name to assign to the member e.g iam.serviceAccountUser\n        member_email: Member email which has GCP access e.g test@company.com\n        sa_id: Service Account email\n\n## Lego Input\nproject_id: Name of the project. eg- \"unskript-test2\"\nrole: Role name to assign to the member e.g iam.serviceAccountUser\nmember_email: Member email which has GCP access e.g test@company.com\nsa_id: Service Account email\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_add_role_to_service_account/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_add_role_to_service_account/gcp_add_role_to_service_account.json",
    "content": "{\n    \"action_title\": \"GCP Add Role to Service Account\",\n    \"action_description\": \"Adding role and member to the service account\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_add_role_to_service_account\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_IAM\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_add_role_to_service_account/gcp_add_role_to_service_account.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nimport pprint\nfrom typing import Dict\nfrom googleapiclient import discovery\n\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"Project ID\",\n        description = \"Name of the project e.g unskript-dev\"\n    )\n    role: str = Field(\n        title = \"Role Name\",\n        description = \"Role name to assign to the member e.g iam.serviceAccountUser\"\n    )\n    member_email: str = Field(\n        title = \"Member Email\",\n        description = \"Member email which has GCP access e.g test@company.com\"\n    )\n    sa_id: str = Field(\n        title = \"Service Account Email\",\n        description = \"Service Account email id e.g test-user@unskript-dev.iam.gserviceaccount.com\"\n    )\n\ndef gcp_add_role_to_service_account_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef gcp_add_role_to_service_account(handle, project_id: str, role: str, member_email:str, sa_id:str) -> Dict:\n    \"\"\"gcp_add_role_to_service_account Returns a Dict of new policy details\n\n        :type project_id: string\n        :param project_id: Name of the project\n\n        :type role: string\n        :param role: Role name to assign to the member e.g iam.serviceAccountUser\n\n        :type member_email: string\n        :param member_email: Member email which has GCP access e.g test@company.com\n\n        :type sa_id: string\n        :param sa_id: Service Account email\n\n        :rtype: Dict of new policy details\n    \"\"\"\n    service = discovery.build('iam', 'v1', credentials=handle)\n    result = {}\n    try:\n        resource = f'projects/{project_id}/serviceAccounts/{sa_id}'\n        request = service.projects().serviceAccounts().getIamPolicy(resource=resource)\n        response = request.execute()\n\n        member = \"user:\" + member_email\n        
if \"gserviceaccount\" in member_email:\n            member = \"serviceAccount:\" + member_email\n        get_role = \"roles/\" + role\n        if \"bindings\" not in response:\n            add_role = {'version': 1,\n                 'bindings': [{'role': get_role,\n                 'members': [member]}]}\n            response = add_role\n        else:\n            add_role = {\n                  \"role\": get_role,\n                  \"members\": [member]}\n            response[\"bindings\"].append(add_role)\n\n        set_policy = service.projects().serviceAccounts().setIamPolicy(resource=resource, body={\"policy\": response})\n        policy_output = set_policy.execute()\n        result = policy_output\n\n    except Exception as error:\n        raise error\n\n    return result"
  },
  {
    "path": "GCP/legos/gcp_create_bucket/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Create GCP Bucket</h1>\n\n## Description\nThis Lego creates a new GCP bucket.\n\n## Lego Details\n\n    gcp_create_bucket(handle: object, bucket_name: str, location: str, project_name: str, storage_class: str)\n\n        handle: Object of type unSkript GCP Connector\n        bucket_name: String, Bucket name\n        project_name: String, GCP Project name\n        location: String, Location of GCP bucket\n        storage_class: String, Storage class to be assigned to the new bucket\n\n\n## Lego Input\nbucket_name: New bucket name. eg- \"unskript-test2\"\nproject_name: GCP Project name. eg- \"acme-dev\"\nstorage_class: Storage class to be assigned. eg- \"STANDARD\"\nlocation: GCP Location. eg- \"us\"\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_create_bucket/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_create_bucket/gcp_create_bucket.json",
    "content": "{\n    \"action_title\": \"Create GCP Bucket\",\n    \"action_description\": \"Create a new GCP bucket in the given location\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_create_bucket\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"create\"],\n    \"action_nouns\": [\"bucket\",\"gcp\"],\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_BUCKET\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_create_bucket/gcp_create_bucket.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Dict\nfrom google.cloud import storage\n\n\nclass InputSchema(BaseModel):\n    bucket_name: str = Field(\n        title = \"Bucket Name\",\n        description = \"Name of the bucket to be created\"\n    )\n    project_name: str = Field(\n        '',\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    storage_class: str = Field(\n        'STANDARD',\n        title = \"Storage Class\",\n        description = \"Storage class to be assigned to the new bucket. Eg- STANDARD, COLDLINE\"\n    )\n    location: str = Field(\n        'us',\n        title = \"Location\",\n        description = \"GCP location where bucket should be created. Eg- US\"\n    )\n\ndef gcp_create_bucket_printer(output):\n    if output is None:\n        return\n    print(f\"Created bucket {output['name']} in {output['location']} with storage class {output['storage_class']}\")\n\ndef gcp_create_bucket(handle, bucket_name: str, location: str, project_name: str, storage_class: str) -> Dict:\n    \"\"\"gcp_create_bucket Returns a Dict of details of the newly created bucket\n\n        :type bucket_name: string\n        :param bucket_name: Name of the bucket to be created\n\n        :type project_name: string\n        :param project_name: GCP Project Name\n\n        :type storage_class: string\n        :param storage_class: Storage class to be assigned to the new bucket\n\n        :type location: string\n        :param location: GCP location where bucket should be created\n\n        :rtype: Dict of Bucket Details\n    \"\"\"\n    result = {}\n    try:\n        storage_client = storage.Client(credentials=handle)\n        bucket = storage_client.bucket(bucket_name)\n        bucket.storage_class = storage_class\n        new_bucket = 
storage_client.create_bucket(bucket,location=location, project=project_name)\n        result[\"name\"]= new_bucket.name\n        result[\"location\"]= new_bucket.location\n        result[\"storage_class\"]= new_bucket.storage_class\n    except Exception as e:\n        raise e\n    return result"
  },
  {
    "path": "GCP/legos/gcp_create_disk_snapshot/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Create a GCP disk snapshot</h1>\n\n## Description\nCreate a GCP disk snapshot.\n\n## Lego Details\n\tgcp_create_disk_snapshot(handle, project: str, zone:str, disk: str, snapshot_name: str=\"\")\n\t\thandle: Object of type unSkript GCP Connector.\n\t\tproject: Google Cloud Platform Project\n\t\tzone: Zone where the disk is located.\n\t\tdisk: The name of the disk to create a snapshot of.\n\t\tsnapshot_name: The name of the snapshot to create. If not provided, a name will be automatically generated.\n\n\n## Lego Input\nThis Lego takes inputs handle, project, zone, disk, snapshot_name.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_create_disk_snapshot/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_create_disk_snapshot/gcp_create_disk_snapshot.json",
    "content": "{\n  \"action_title\": \"Create a GCP disk snapshot\",\n  \"action_description\": \"Create a GCP disk snapshot.\",\n  \"action_type\": \"LEGO_TYPE_GCP\",\n  \"action_entry_function\": \"gcp_create_disk_snapshot\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_VM\"]\n}"
  },
  {
    "path": "GCP/legos/gcp_create_disk_snapshot/gcp_create_disk_snapshot.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport uuid\nfrom pydantic import BaseModel, Field\nfrom google.cloud.compute_v1.services.disks import DisksClient\nfrom google.cloud.compute_v1.types import Snapshot\n\n\nclass InputSchema(BaseModel):\n    project: str = Field(..., description='GCP Project Name', title='GCP Project')\n    zone: str = Field(\n        ...,\n        description='GCP Zone where the disk is located',\n        title='Zone',\n    )\n    disk: str = Field(\n        ..., description='The name of the disk to create a snapshot of.', title='Disk name'\n    )\n    snapshot_name: str = Field(\n        '',\n        description='The name of the snapshot to create. If not provided, a name will be automatically generated.',\n        title='Snapshot name',\n    )\n\n\n\ndef gcp_create_disk_snapshot_printer(output):\n    if output is None:\n        return\n    print(output)\n\ndef gcp_create_disk_snapshot(handle, project: str, zone:str, disk: str, snapshot_name: str = \"\") -> str:\n    \"\"\"gcp_create_disk_snapshot Returns the confirmation of snapshot creation.\n\n    :type project: string\n    :param project: Google Cloud Platform Project\n\n    :type zone: string\n    :param zone: Zone where the disk is located.\n\n    :type disk: string\n    :param disk: The name of the disk to create a snapshot of.\n\n    :type snapshot_name: string\n    :param snapshot_name: The name of the snapshot to create. If not provided, a name will be automatically generated.\n\n    :rtype: String of snapshot creation confirmation\n    \"\"\"\n    disks_client = DisksClient(credentials=handle)\n\n    # Generate a snapshot name when none was supplied, as documented above.\n    if not snapshot_name:\n        snapshot_name = f\"{disk}-snapshot-{uuid.uuid4().hex[:8]}\"\n    snapshot = Snapshot(name=snapshot_name)\n    try:\n        disks_client.create_snapshot(\n            project=project, zone=zone, disk=disk, snapshot_resource=snapshot\n        )\n    except Exception as e:\n        raise e\n    return f\"Snapshot {snapshot_name} created.\"\n\n\n"
  },
  {
    "path": "GCP/legos/gcp_create_filestore_instance/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Create GCP Filestore Instance</h1>\n\n## Description\nThis Lego creates a new GCP Filestore Instance.\n\n## Lego Details\n\n    gcp_create_filestore_instance(handle, instance_id:str, project_name:str, location:str, network:str, tier:str, description:str, name:str, capacity:int )\n\n        handle: Object of type unSkript GCP Connector\n        instance_id: String, Name of the instance to create\n        project_name: String, GCP Project Name\n        location: String, GCP locations map to GCP zones Eg: us-west1-b\n        network: String, Name of the Google Compute Engine VPC network\n        tier: String, Service tier for instance Eg: STANDARD\n        description: String,  Description of the instance (2048 characters or less)\n        name: String, Resource name of the instance\n        capacity: Integer, File share capacity in gigabytes (GB). Eg: 1024\n\n\n## Lego Input\ninstance_id: String, Name of the instance to create Eg: test-instance\nproject_name: String, GCP Project Name Eg: unskript-project\nlocation: String, GCP locations map to GCP zones Eg: us-west1-b\nnetwork: String, Name of the Google Compute Engine VPC network Eg: default\ntier: String, Service tier for instance Eg: STANDARD\ndescription: String,  Description of the instance (2048 characters or less)\nname: String, Resource name of the instance Eg: unskript-dev\ncapacity: Integer, File share capacity in gigabytes (GB). Eg: 1024\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_create_filestore_instance/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_create_filestore_instance/gcp_create_filestore_instance.json",
    "content": "{\n    \"action_title\": \"Create GCP Filestore Instance\",\n    \"action_description\": \"Create a new GCP Filestore Instance in the given location\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_create_filestore_instance\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"create\"],\n    \"action_nouns\": [\"filestore\",\"instance\",\"gcp\"],\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_FILE_STORE\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_create_filestore_instance/gcp_create_filestore_instance.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom google.cloud import filestore_v1\nfrom google.protobuf.json_format import MessageToDict\nfrom typing import List, Dict\nimport pprint\n\nclass InputSchema(BaseModel):\n    instance_id: str = Field(\n        title = \"Instance ID\",\n        description = \"Name of the instance to create\"\n    )\n    project_name: str = Field(\n        title = \"GCP Project Name(ID)\",\n        description = \"GCP Project Name\"\n    )\n    location: str = Field(\n        title = \"Location\",\n        description = \"GCP locations map to GCP zones Eg: us-west1-b\"\n    )\n    network: str = Field(\n        'default',\n        title = \"Network\",\n        description = \"Name of the Google Compute Engine VPC network\"\n    )\n    description: str = Field(\n        '',\n        max_length= 2048,\n        title = \"Description\",\n        description = \"Description of the instance (2048 characters or less)\"\n    )\n    name: str = Field(\n        title = \"Name\",\n        description = \"Resource name of the instance\"\n    )\n    capacity: int = Field(\n        title = \"Capacity\",\n        description = \"File share capacity in gigabytes (GB). 
Eg: 1024 \"\n    )\n    tier: str = Field(\n        title = \"Tier\",\n        description = \"Service tier for instance Eg: STANDARD\"\n    )\n\ndef gcp_create_filestore_instance_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef gcp_create_filestore_instance(handle, instance_id:str, project_name:str, location:str, network:str, tier:str, description:str, name:str, capacity:int ) -> Dict:\n    \"\"\"gcp_create_filestore_instance Returns a Dict of details of the newly created Filestore Instance\n\n        :type instance_id: string\n        :param instance_id: Name of the instance to create\n\n        :type project_name: string\n        :param project_name: GCP Project Name\n\n        :type location: string\n        :param location: GCP locations map to GCP zones Eg: us-west1-b\n\n        :type network: string\n        :param network: Name of the Google Compute Engine VPC network\n\n        :type tier: string\n        :param tier: Service tier for instance Eg: STANDARD\n\n        :type description: string\n        :param description: Description of the instance (2048 characters or less)\n\n        :type name: string\n        :param name: Resource name of the instance\n\n        :type capacity: int\n        :param capacity: File share capacity in gigabytes (GB). 
Eg: 1024\n\n        :rtype: Dict of Filestore Instance Details\n    \"\"\"\n    try:\n        instance_details_dict= {\"networks\": [{\"network\": network,\"modes\": [\"MODE_IPV4\"]}],\"tier\": tier.upper(),\"description\": description,\"file_shares\": [{\"name\": name,\"capacity_gb\": capacity}]}\n        parent_path = \"projects/\"+project_name+\"/locations/\"+location\n        client = filestore_v1.CloudFilestoreManagerClient(credentials=handle)\n        request = filestore_v1.CreateInstanceRequest(parent=parent_path,instance=instance_details_dict,instance_id=instance_id)\n        operation = client.create_instance(request=request)\n        print(\"Waiting for operation to complete...\")\n        response = operation.result()\n        result_dict = MessageToDict(response._pb)\n    except Exception as e:\n        raise e\n    return result_dict"
  },
  {
    "path": "GCP/legos/gcp_create_gke_cluster/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Create GKE Cluster</h1>\r\n\r\n## Description\r\nThis Lego creates a GKE cluster in a given Project and Zone.\r\n\r\n## Lego Details\r\n\r\n    gcp_create_gke_cluster(handle: object, project_id: str, zone: str, cluster_name: str, node_count: int)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project_id: String GCP Project name\r\n        zone: Zone in which the cluster should be created.\r\n        cluster_name: Name of the GKE cluster.\r\n        node_count: Node count of GKE cluster.\r\n\r\n\r\n## Lego Input\r\nproject_id: GCP Project name eg. \"acme-dev\"\r\nzone: GCP Zone eg. \"us-west1-b\"\r\ncluster_name: cluster Name\r\nnode_count: cluster node count\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_create_gke_cluster/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_create_gke_cluster/gcp_create_gke_cluster.json",
    "content": "{\n    \"action_title\": \"Create GKE Cluster\",\n    \"action_description\": \"Create GKE Cluster\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_create_gke_cluster\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_GKE\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_create_gke_cluster/gcp_create_gke_cluster.py",
    "content": "import pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom google.cloud import container_v1\nfrom google.protobuf.json_format import MessageToDict\n\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    zone: str = Field(\n        title = \"Zone\",\n        description = \"GCP Zone in which the cluster should be created\"\n    )\n    cluster_name: str = Field(\n        title = \"Cluster Name\",\n        description = \"Name of the GKE cluster.\"\n    )\n    node_count: int = Field(\n        title = \"Initial Node Count\",\n        description = \"Node count of GKE cluster.\"\n    )\n\n\ndef gcp_create_gke_cluster_printer(output):\n    if len(output) == 0:\n        return\n    pprint.pprint(output)\n\ndef gcp_create_gke_cluster(handle, project_id: str, zone: str, cluster_name: str, node_count: int) -> Dict:\n    \"\"\"gcp_create_gke_cluster Returns the dict of cluster info\n\n        :type project_id: string\n        :param project_id: Google Cloud Platform Project\n\n        :type zone: string\n        :param zone: Zone in which the cluster should be created.\n\n        :type cluster_name: string\n        :param cluster_name: Name of the GKE cluster.\n\n        :type node_count: int\n        :param node_count: Node count of GKE cluster.\n\n        :rtype: Dict of cluster info\n    \"\"\"\n    # Create a client\n    client = container_v1.ClusterManagerClient(credentials=handle)\n    try:\n        res = client.create_cluster(project_id=project_id,\n                                     zone=zone,\n                                     cluster={'name':cluster_name,\n                                              'initial_node_count':node_count})\n        response = MessageToDict(res._pb)\n    except Exception as error:\n        raise error\n\n    return response\n"
  },
  {
    "path": "GCP/legos/gcp_create_service_account/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>GCP Create Service Account</h1>\n\n## Description\nThis Lego creates a service account in GCP.\n\n## Lego Details\n\n    gcp_create_service_account(handle: object, project_id: str, accountId: str, display_name:str)\n\n        handle: Object of type unSkript GCP Connector\n        project_id: Name of the project\n        accountId: Name for the service account\n        display_name: Display Name for the service account\n\n## Lego Input\nproject_id: Name of the project. eg- \"unskript-test2\"\naccountId: Name for the service account\ndisplay_name: Display Name for the service account\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_create_service_account/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_create_service_account/gcp_create_service_account.json",
    "content": "{\n    \"action_title\": \"GCP Create Service Account\",\n    \"action_description\": \"GCP Create Service Account\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_create_service_account\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_IAM\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_create_service_account/gcp_create_service_account.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nimport pprint\nfrom typing import List,Any, Dict\nimport googleapiclient.discovery\n\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"Project ID\",\n        description = \"Name of the project e.g unskript-dev\"\n    )\n    accountId: str = Field(\n        title = \"Account ID\",\n        description = \"Name for the service account e.g test-account\"\n    )\n    display_name: str = Field(\n        title = \"Display Name\",\n        description = \"Display Name for the service account e.g test-account\"\n    )\n\ndef gcp_create_service_account_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef gcp_create_service_account(handle, project_id: str, accountId: str, display_name:str) -> Dict:\n    \"\"\"gcp_create_service_account Returns a Dict of details of the created service account\n\n        :type project_id: string\n        :param project_id: Name of the project\n\n        :type accountId: string\n        :param accountId: Name for the service account\n\n        :type display_name: string\n        :param display_name: Display Name for the service account\n\n        :rtype: Dict of details of the created service account\n    \"\"\"\n    \"\"\"Creates a service account.\"\"\"\n    service = googleapiclient.discovery.build(\n        'iam', 'v1', credentials=handle)\n\n    result = {}\n    try:\n        response = service.projects().serviceAccounts().create(\n            name='projects/' + project_id,\n            body={\n                'accountId': accountId,\n                'serviceAccount': {\n                    'displayName': display_name\n                }}).execute()\n        result = response\n\n    except Exception as error:\n        raise error\n\n    return result"
  },
  {
    "path": "GCP/legos/gcp_delete_bucket/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Delete GCP Bucket</h1>\n\n## Description\nThis Lego deletes a GCP bucket.\n\n## Lego Details\n\n    gcp_delete_bucket(handle: object, bucket_name: str)\n\n        handle: Object of type unSkript GCP Connector\n        bucket_name: String, Bucket name\n\n## Lego Input\nbucket_name\": New bucket name. eg- \"unskript-test2\"\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_delete_bucket/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_delete_bucket/gcp_delete_bucket.json",
    "content": "{\n    \"action_title\": \"Delete GCP Bucket\",\n    \"action_description\": \"Delete a GCP bucket\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_delete_bucket\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"delete\"],\n    \"action_nouns\": [\"bucket\",\"gcp\"],\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_BUCKET\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_delete_bucket/gcp_delete_bucket.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\nimport pprint\nfrom typing import List,Any, Dict\nfrom google.cloud import storage\n\n\nclass InputSchema(BaseModel):\n    bucket_name: str = Field(\n        title = \"Bucket Name\",\n        description = \"Name of the bucket to be deleted\"\n    )\n\ndef gcp_delete_bucket_printer(output):\n    if output is None:\n        return\n    print(f\"Bucket {output['deleted_bucket']} deleted\")\n\ndef gcp_delete_bucket(handle, bucket_name: str) -> Dict:\n    \"\"\"gcp_delete_bucket Returns a Dict of details of the deleted bucket\n\n        :type bucket_name: string\n        :param bucket_name: Name of the bucket to be deleted\n\n        :rtype: Dict of Bucket Details\n    \"\"\"\n    result={}\n    try:\n        storage_client = storage.Client(credentials=handle)\n        bucket = storage_client.get_bucket(bucket_name)\n        result[\"deleted_bucket\"]= bucket.name\n        bucket.delete()\n    except Exception as e:\n        raise e\n    return result"
  },
  {
    "path": "GCP/legos/gcp_delete_filestore_instance/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Delete GCP Filestore Instance</h1>\n\n## Description\nThis Lego deleted a GCP Filestore Instance.\n\n## Lego Details\n\n    gcp_delete_filestore_instance(handle, instance_id:str, project_name:str, location:str)\n\n        handle: Object of type unSkript GCP Connector\n        instance_id: String, Name of the instance to create\n        project_name: String, GCP Project Name\n        location: String, GCP locations map to GCP zones Eg: us-west1-b\n\n## Lego Input\ninstance_id: String, Name of the instance to create Eg: test-instance\nproject_name: String, GCP Project Name Eg: unskript-project\nlocation: String, GCP locations map to GCP zones Eg: us-west1-b\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_delete_filestore_instance/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_delete_filestore_instance/gcp_delete_filestore_instance.json",
    "content": "{\n    \"action_title\": \"Delete GCP Filestore Instance\",\n    \"action_description\": \"Delete a GCP Filestore Instance in the given location\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_delete_filestore_instance\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"delete\"],\n    \"action_nouns\": [\"filestore\",\"gcp\"],\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_FILE_STORE\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_delete_filestore_instance/gcp_delete_filestore_instance.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom google.cloud import filestore_v1\n\nclass InputSchema(BaseModel):\n    project_name: str = Field(\n        title = \"GCP Project Name(ID)\",\n        description = \"GCP Project Name\"\n    )\n    location: str = Field(\n        title = \"Location\",\n        description = \"GCP locations map to GCP zones Eg: us-west1-b\"\n    )\n    instance_id: str = Field(\n        title = \"Instance ID\",\n        description = \"Name of the instance to be deleted\"\n    )\n\ndef gcp_delete_filestore_instance_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n    \n\ndef gcp_delete_filestore_instance(handle, instance_id:str, project_name:str, location:str) -> Dict:\n    \"\"\"gcp_delete_filestore_instance Returns status of details of the deleted Filestore Instance\n\n        :type instance_id: string\n        :param instance_id: Name of the instance to create\n\n        :type project_name: string\n        :param project_name: GCP Project Name\n\n        :type location: string\n        :param location: GCP locations map to GCP zones Eg: us-west1-b\n\n        :rtype: Status of Deleted Filestore Instance\n    \"\"\"\n    try:\n        client = filestore_v1.CloudFilestoreManagerClient(credentials=handle)\n        name = \"projects/\"+ project_name +\"/locations/\"+ location +\"/instances/\"+ instance_id\n        request = filestore_v1.DeleteInstanceRequest(name=name)\n        operation = client.delete_instance(request=request)\n        print(\"Waiting for operation to complete...\")\n        operation.result()\n        result_dict={\"Message\":\"Filestore Instance deleted\"}\n    except Exception as e:\n        raise e\n    return result_dict"
  },
  {
    "path": "GCP/legos/gcp_delete_object_from_bucket/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Delete an Object from a GCP Bucket</h1>\n\n## Description\nThis Lego delete an Object/Blob from a GCP Bucket.\n\n## Lego Details\n\n    gcp_delete_object_from_bucket(handle: object, blob_name: str, bucket_name: str)\n\n        handle: Object of type unSkript GCP Connector\n        blob_name: String, Blob Name to be deleted\n        bucket_name: String, Bucket name\n\n## Lego Input\nblob_name: Blob name. eg- \"test-blob\"\nbucket_name: New bucket name. eg- \"unskript-test2\"\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_delete_object_from_bucket/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_delete_object_from_bucket/gcp_delete_object_from_bucket.json",
    "content": "{\n    \"action_title\": \"Delete an Object from GCP Bucket\",\n    \"action_description\": \"Delete an Object/Blob from a GCP Bucket\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_delete_object_from_bucket\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"delete\"],\n    \"action_nouns\": [\"object\",\"bucket\",\"gcp\"],\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_BUCKET\" ]\n}"
  },
  {
    "path": "GCP/legos/gcp_delete_object_from_bucket/gcp_delete_object_from_bucket.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Dict\nfrom google.cloud import storage\n\nclass InputSchema(BaseModel):\n    blob_name: str = Field(\n        title = \"Blob Name\",\n        description = \"Name of the object/blob to be deleted\"\n    )\n    bucket_name: str = Field(\n        title = \"Bucket Name\",\n        description = \"Name of the bucket to delete object/blob from\"\n    )\n\ndef gcp_delete_object_from_bucket_printer(output):\n    if output is None:\n        return\n    print(f\"Successfully deleted {output['blob_name']}\")\n\n\ndef gcp_delete_object_from_bucket(handle,blob_name: str, bucket_name: str) -> Dict:\n    \"\"\"gcp_delete_object_from_bucket deletes an object in a GCP Bucket\n\n        :type blob_name: string\n        :param bucket_name: Name of the object/blob to be deleted\n\n        :type bucket_name: string\n        :param bucket_name:Name of the bucket to delete object/blob from\n\n        :rtype: Dict of deleted blob\n    \"\"\"\n    try:\n        result={}\n        storage_client = storage.Client(credentials=handle)\n        bucket = storage_client.get_bucket(bucket_name)\n        blob = bucket.blob(blob_name)\n        blob.delete()\n        result[\"blob_name\"]= blob_name\n    except Exception as e:\n        raise e\n    return result"
  },
  {
    "path": "GCP/legos/gcp_delete_service_account/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>GCP Delete Service Account</h1>\n\n## Description\nThis Lego delete service account for GCP.\n\n## Lego Details\n\n    gcp_delete_service_account(handle: object, sa_id: str)\n\n        handle: Object of type unSkript GCP Connector\n        sa_id: Email of the service account.\n\n## Lego Input\nsa_id: Email of the service account\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_delete_service_account/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_delete_service_account/gcp_delete_service_account.json",
    "content": "{\n    \"action_title\": \"GCP Delete Service Account\",\n    \"action_description\": \"GCP Delete Service Account\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_delete_service_account\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_IAM\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_delete_service_account/gcp_delete_service_account.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nimport pprint\nfrom typing import Dict\nfrom googleapiclient import discovery\n\n\nclass InputSchema(BaseModel):\n    sa_id: str = Field(\n        title = \"Service Account Email\",\n        description = \"Email of the service account\"\n    )\n\n\ndef gcp_delete_service_account_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef gcp_delete_service_account(handle, sa_id: str) -> Dict:\n    \"\"\"gcp_delete_service_account Returns a Dict of success detailsfor the deleted service account\n\n        :type sa_id: string\n        :param sa_id: Email of the service account\n\n        :rtype: Dict\n    \"\"\"\n    service = discovery.build(\n        'iam', 'v1', credentials=handle)\n\n    result = {}\n    try:\n        result = service.projects().serviceAccounts().delete(\n            name='projects/-/serviceAccounts/' + sa_id).execute()\n\n\n    except Exception as error:\n        raise error\n\n    return result"
  },
  {
    "path": "GCP/legos/gcp_describe_gke_cluster/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>GCP Describe a GKE cluster</h1>\r\n\r\n## Description\r\nThis Lego describe a GKE clusterfor a given Project, cluster and Zone.\r\n\r\n## Lego Details\r\n\r\n    describe_gke_cluster(handle: object, project_id: str, zone: str, cluster_name: str)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project_id: String GCP Project name\r\n        zone: Zone to which the cluster in the project should be fetched.\r\n        cluster_name: Name of the GKE cluster.\r\n\r\n\r\n## Lego Input\r\n project:  GCP Project name eg. \"acme-dev\"\r\n zone: GCP Zone eg. \"us-west1-b\"\r\n cluster_name: cluster Name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_describe_gke_cluster/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_describe_gke_cluster/gcp_describe_gke_cluster.json",
    "content": "{\n    \"action_title\": \"GCP Describe a GKE cluster\",\n    \"action_description\": \"GCP Describe a GKE cluster\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_describe_gke_cluster\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_GKE\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_describe_gke_cluster/gcp_describe_gke_cluster.py",
    "content": "import pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom google.cloud import container_v1\nfrom google.protobuf.json_format import MessageToDict\n\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    zone: str = Field(\n        title = \"Zone\",\n        description = \"GCP Zone where instance list should be gotten from\"\n    )\n    cluster_name: str = Field(\n        title = \"Cluster Name\",\n        description = \"Name of the GKE cluster.\"\n    )\n\n\ndef gcp_describe_gke_cluster_printer(output):\n    if len(output) == 0:\n        return\n    pprint.pprint(output)\n\n\ndef gcp_describe_gke_cluster(handle, project_id: str, zone: str, cluster_name: str) -> Dict:\n    \"\"\"gcp_describe_gke_cluster Returns the dict of cluster details\n\n        :type project_id: string\n        :param project_id: Google Cloud Platform Project\n\n        :type zone: string\n        :param zone: Zone to which the cluster in the project should be fetched.\n\n        :type cluster_name: string\n        :param cluster_name: Name of the GKE cluster.\n\n        :rtype: Dict of cluster details\n    \"\"\"\n    # Create a client\n    client = container_v1.ClusterManagerClient(credentials=handle)\n    name = f'projects/{project_id}/locations/{zone}/clusters/{cluster_name}'\n    try:\n        res = client.get_cluster(name=name)\n        response = {}\n        response['Name'] = cluster_name\n        response['CurrentNodeCount'] = res.current_node_count\n        response['NodePoolsCount'] = len(res.node_pools)\n        response['NodePoolDetails'] = []\n        for node_pool in res.node_pools:\n            nodePoolDetail = {}\n            nodePoolDetail['Name'] = node_pool.name\n            nodePoolDetail['NodeCount'] = node_pool.initial_node_count\n            nodePoolDetail['MachineType'] = node_pool.initial_node_count\n            
nodePoolDetail['AutoscalingEnabled'] = node_pool.autoscaling.enabled\n            nodePoolDetail['MinNodes'] = node_pool.autoscaling.min_node_count\n            nodePoolDetail['MaxNodes'] = node_pool.autoscaling.max_node_count\n            response['NodePoolDetails'].append(nodePoolDetail)\n\n    except Exception as error:\n        raise error\n\n    return response\n"
  },
  {
    "path": "GCP/legos/gcp_fetch_objects_from_bucket/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Fetch Objects from GCP Bucket</h1>\n\n## Description\nThis Lego fetches all Objects in a GCP Bucket.\n\n## Lego Details\n\n    gcp_fetch_objects_from_bucket(handle: object, bucket_name: str)\n\n        handle: Object of type unSkript GCP Connector\n        bucket_name: String, Bucket name\n\n## Lego Input\nbucket_name\": New bucket name. eg- \"unskript-test2\"\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_fetch_objects_from_bucket/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_fetch_objects_from_bucket/gcp_fetch_objects_from_bucket.json",
    "content": "{\n    \"action_title\": \"Fetch Objects from GCP Bucket\",\n    \"action_description\": \"List all Objects in a GCP bucket\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_fetch_objects_from_bucket\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"list\"],\n    \"action_nouns\": [\"objects\",\"bucket\",\"gcp\"],\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_BUCKET\" ]\n}"
  },
  {
    "path": "GCP/legos/gcp_fetch_objects_from_bucket/gcp_fetch_objects_from_bucket.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom google.cloud import storage\n\n\nclass InputSchema(BaseModel):\n    bucket_name: str = Field(\n        title = \"Bucket Name\",\n        description = \"Name of the bucket to be deleted\"\n    )\n\ndef gcp_fetch_objects_from_bucket_printer(output):\n    if len(output)==0:\n        print(\"Bucket is empty\")\n        return\n    for blob in output:\n        print(blob)\n\ndef gcp_fetch_objects_from_bucket(handle, bucket_name: str) -> List:\n    \"\"\"gcp_fetch_objects_from_bucket returns a List of objects in the Bucket\n\n        :type bucket_name: string\n        :param bucket_name: Name of the bucket to fetch objects/blobs from\n\n        :rtype: List of Bucket Objects\n    \"\"\"\n    try:\n        result =[]\n        storage_client = storage.Client(credentials=handle)\n        bucket = storage_client.get_bucket(bucket_name)\n        blobs = bucket.list_blobs()\n        for blob in blobs:\n            result.append(blob)\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "GCP/legos/gcp_get_buckets_without_lifecycle_policies/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get GCP storage buckets without lifecycle policies</h1>\n\n## Description\nThe action retrieves a list of Google Cloud Platform (GCP) storage buckets that do not have any lifecycle policies applied.\n\n## Lego Details\n\tgcp_get_buckets_without_lifecycle_policies(handle)\n\t\thandle: Object of type unSkript GCP Connector.\n\n\n## Lego Input\nThis Lego takes inputs handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_get_buckets_without_lifecycle_policies/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_get_buckets_without_lifecycle_policies/gcp_get_buckets_without_lifecycle_policies.json",
    "content": "{\n  \"action_title\": \"Get GCP storage buckets without lifecycle policies\",\n  \"action_description\": \"The action retrieves a list of Google Cloud Platform (GCP) storage buckets that do not have any lifecycle policies applied.\",\n  \"action_type\": \"LEGO_TYPE_GCP\",\n  \"action_entry_function\": \"gcp_get_buckets_without_lifecycle_policies\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" , \"CATEGORY_TYPE_GCP\", \"CATEGORY_TYPE_GCP_STORAGE\"],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "GCP/legos/gcp_get_buckets_without_lifecycle_policies/gcp_get_buckets_without_lifecycle_policies.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List, Tuple\nfrom pydantic import BaseModel, Field\nfrom google.cloud import storage\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef gcp_get_buckets_without_lifecycle_policies_printer(output):\n    if output is None:\n        return\n    print(output)\n\ndef gcp_get_buckets_without_lifecycle_policies(handle) -> Tuple:\n    \"\"\"gcp_get_buckets_without_lifecycle_policies Returns the List of GCP storage buckets without lifecycle policies\n\n    :type handle: object\n    :param handle: Object returned from Task Validate\n\n    :rtype: Tuple of storage buckets without lifecycle policies and the corresponding status.\n    \"\"\"\n    try:\n        storageClient = storage.Client(credentials=handle)\n        buckets = storageClient.list_buckets()\n        result = []\n        for bucket in buckets:\n            if not list(bucket.lifecycle_rules):\n                result.append(bucket.name)\n        if result:\n            return (False, result)\n        return (True, None)\n    except Exception as e:\n        raise e\n\n\n"
  },
  {
    "path": "GCP/legos/gcp_get_forwarding_rules_details/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get details of GCP forwarding rules</h1>\n\n## Description\nGet details of forwarding rules associated with a backend service.\n\n## Lego Details\n\tgcp_get_forwarding_rules_details(handle, project: str)\n\t\thandle: Object of type unSkript GCP Connector.\n\t\tproject GCP project ID\n\n\n## Lego Input\nThis Lego takes inputs handle, project.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_get_forwarding_rules_details/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_get_forwarding_rules_details/gcp_get_forwarding_rules_details.json",
    "content": "{\n  \"action_title\": \"Get details of GCP forwarding rules\",\n  \"action_description\": \"Get details of forwarding rules associated with a backend service.\",\n  \"action_type\": \"LEGO_TYPE_GCP\",\n  \"action_entry_function\": \"gcp_get_forwarding_rules_details\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" , \"CATEGORY_TYPE_GCP\", \"CATEGORY_TYPE_GCP_VM\"]\n}"
  },
  {
    "path": "GCP/legos/gcp_get_forwarding_rules_details/gcp_get_forwarding_rules_details.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List, Optional\nfrom pydantic import BaseModel, Field\nfrom google.cloud import compute_v1\n\nclass InputSchema(BaseModel):\n    project: str = Field(..., description='GCP project ID', title='Project ID')\n\n\ndef gcp_get_forwarding_rules_details_printer(output):\n    if output is None:\n        return\n    print(output)\n\ndef get_backend_services(project, handle):\n    client = compute_v1.BackendServicesClient(credentials=handle)\n    backend_services = client.list(project=project)\n    return {service.self_link: service.name for service in backend_services}\n\ndef get_target_proxy(forwarding_rule, project, handle):\n    target_http_proxy = []\n    if 'targetHttpProxies' in forwarding_rule.target:\n        target_proxy = compute_v1.TargetHttpProxiesClient(credentials=handle).get(\n            project=project,\n            target_http_proxy=forwarding_rule.target.split('/')[-1]\n        )\n    elif 'targetHttpsProxies' in forwarding_rule.target:\n        target_https_proxy = []\n        target_proxy = compute_v1.TargetHttpsProxiesClient(credentials=handle).get(\n            project=project, \n            target_https_proxy=forwarding_rule.target.split('/')[-1]\n        )\n    else:\n        raise Exception('Unsupported target proxy type')\n    return target_proxy\n\n\ndef gcp_get_forwarding_rules_details(handle, project: str) -> List:\n    \"\"\"gcp_get_forwarding_rules_details Returns the List of forwarding rules, its path and the associated backend service.\n\n    :type project: string\n    :param project: Google Cloud Platform Project\n\n    :rtype: List of of forwarding rules, its path and the associated backend service..\n    \"\"\"\n    client = compute_v1.GlobalForwardingRulesClient(credentials=handle)\n    backend_services = get_backend_services(project, handle)\n    result = []\n    # list all global forwarding rules\n    forwarding_rules = 
client.list(project=project)\n    for forwarding_rule in forwarding_rules:\n        target_proxy = get_target_proxy(forwarding_rule, project, handle)\n        # get the associated URL map\n        url_map = compute_v1.UrlMapsClient(credentials=handle).get(project=project, url_map=target_proxy.url_map.split('/')[-1])\n        # check if any backend service is associated with this URL map\n        for path_matcher in url_map.path_matchers:\n            for path_rule in path_matcher.path_rules:\n                if backend_services.get(path_rule.service):\n                    result.append({\"forwarding_rule_name\":forwarding_rule.name, \"backend_service\":backend_services.get(path_rule.service), \"path\":path_rule.paths})\n    return result\n\n\n"
  },
  {
    "path": "GCP/legos/gcp_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get GCP Handle</h1>\r\n\r\n## Description\r\nThis Lego returns the GCP Handle that can be used to access any Google Cloud python APIs.\r\n\r\n\r\n## Lego Details\r\n\r\n    gcp_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_get_handle/gcp_get_handle.json",
    "content": "{\n  \"action_title\": \"Get GCP Handle\",\n  \"action_description\": \"Get GCP Handle\",\n  \"action_type\": \"LEGO_TYPE_GCP\",\n  \"action_entry_function\": \"gcp_get_handle\",\n  \"action_needs_credential\": true,\n  \"action_supports_poll\": false,\n  \"action_supports_iteration\": false\n}\n"
  },
  {
    "path": "GCP/legos/gcp_get_handle/gcp_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef gcp_get_handle(handle):\n    \"\"\"gcp_get_handle returns the GCP handle.\n\n       :rtype: GCP Handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "GCP/legos/gcp_get_instances_without_label/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get List of GCP compute instance without label</h1>\r\n\r\n## Description\r\nThis Lego get list of GCP compute instance without label.\r\n\r\n## Lego Details\r\n\r\n    gcp_get_instances_without_labels(handle: object, project: str, zone:str)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project_name: String GCP Project name\r\n        zone: String, Zone in which to get the instnances list from\r\n\r\n\r\n## Lego Input\r\n project:  GCP Project name eg. \"acme-dev\"\r\n zone: GCP Zone eg. \"us-west1-b\"\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_get_instances_without_label/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_get_instances_without_label/gcp_get_instances_without_label.json",
    "content": "{\n    \"action_title\": \"Get List of GCP compute instance without label\",\n    \"action_description\": \"Get List of GCP compute instance without label\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_get_instances_without_label\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_VM\" ]\n}\n  \n"
  },
  {
    "path": "GCP/legos/gcp_get_instances_without_label/gcp_get_instances_without_label.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom google.cloud.compute_v1.services.instances import InstancesClient\n\n\nclass InputSchema(BaseModel):\n    project: str = Field(\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    zone: str = Field(\n        title = \"Zone\",\n        description = \"GCP Zone where instance list should be gotten from\"\n    )\n\n\ndef gcp_get_instances_without_label_printer(output):\n    if len(output) == 0:\n        return\n    pprint.pprint(output)\n\n\ndef gcp_get_instances_without_label(handle, project: str, zone:str) -> List:\n    \"\"\"gcp_get_instances_without_label Returns the List of compute instances\n\n        :type project: string\n        :param project: Google Cloud Platform Project\n\n        :type zone: string\n        :param zone: Zone to which the instance list in the project should be fetched.\n\n        :rtype: List of instances\n    \"\"\"\n    output = []\n    ic = InstancesClient(credentials=handle)\n    try:\n        result = ic.list(project=project, zone=zone)\n        instance_list = []\n        for instance in result:\n            instance_list.append(instance.name)\n        for instance_name in instance_list:\n            result = ic.get(project=project, zone=zone, instance=instance_name)\n            if not result.labels:\n                output.append(instance_name)\n    except Exception as error:\n        raise error\n\n    return output\n"
  },
  {
    "path": "GCP/legos/gcp_get_unused_backend_services/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get unused GCP backend services</h1>\n\n## Description\nGet unused backend service for an application load balancer that have no instances in it's target groups.\n\n\n## Lego Details\n\tgcp_get_unused_backend_services(handle, project: str)\n\t\thandle: Object of type unSkript GCP Connector.\n\t\tproject: GCP project ID.\n\n\n## Lego Input\nThis Lego takes inputs handle, project.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_get_unused_backend_services/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_get_unused_backend_services/gcp_get_unused_backend_services.json",
    "content": "{\n  \"action_title\": \"Get unused GCP backend services\",\n  \"action_description\": \"Get unused backend service for an application load balancer that has no instances in it's target group.\\n\",\n  \"action_type\": \"LEGO_TYPE_GCP\",\n  \"action_entry_function\": \"gcp_get_unused_backend_services\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" , \"CATEGORY_TYPE_GCP\", \"CATEGORY_TYPE_GCP_VM\"],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "GCP/legos/gcp_get_unused_backend_services/gcp_get_unused_backend_services.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List\nfrom google.cloud import compute_v1\n\n\n\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    project: str = Field(..., description='GCP project ID', title='Project ID')\n\n\ndef gcp_get_unused_backend_services_printer(output):\n    if output is None:\n        return\n    print(output)\n\ndef gcp_get_unused_backend_services(handle, project: str) -> List:\n    \"\"\"\n    gcp_get_unused_backend_services Returns a list of unused backend services \n    and their target groups which have 0 instances in the given project.\n\n    :type handle: object\n    :param handle: Object returned from Task Validate\n\n    :type project: string\n    :param project: Google Cloud Platform Project\n\n    :return: Status, List of unused Backend services\n    \"\"\"\n    backendClient = compute_v1.BackendServicesClient(credentials=handle)\n    instanceClient = compute_v1.InstanceGroupsClient(credentials=handle)\n\n    # List all backend services\n    backend_services = [\n        {\n            \"backend_service_name\": page.name, \n            \"backend_instance_group_name\": page.backends[0].group.split('/')[-1]\n        } \n        for page in backendClient.list(project=project)\n    ]\n\n    # Create a list for instance groups with instance size = 0\n    instance_groups = [\n        instance.name for zone, response in instanceClient.aggregated_list(project=project) \n        for instance in response.instance_groups if instance.size == 0\n    ]\n\n    # Compare the backend service instance groups to the instance groups that have instance size = 0\n    result = [\n        {\n            \"backend_service_name\": ser[\"backend_service_name\"], \n            \"instance_group_name\": ser[\"backend_instance_group_name\"]\n        }\n        for ser in backend_services if ser[\"backend_instance_group_name\"] in instance_groups\n    ]\n\n    return 
(False, result) if result else (True, None)\n\n\n\n\n\n"
  },
  {
    "path": "GCP/legos/gcp_list_buckets/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>List GCP Buckets</h1>\n\n## Description\nThis Lego lists all available GCP buckets.\n\n## Lego Details\n\n    gcp_list_buckets(handle: object)\n\n        handle: Object of type unSkript GCP Connector\n        bucket_name: String, Bucket name\n\n## Lego Input\nNone\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_list_buckets/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_list_buckets/gcp_list_buckets.json",
    "content": "{\n    \"action_title\": \"List all GCP Buckets\",\n    \"action_description\": \"List all GCP buckets\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_list_buckets\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"list\"],\n    \"action_nouns\": [\"buckets\",\"gcp\"],\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_BUCKET\" ]\n}"
  },
  {
    "path": "GCP/legos/gcp_list_buckets/gcp_list_buckets.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel\nfrom google.cloud import storage\n\nclass InputSchema(BaseModel):\n    pass\n\ndef gcp_list_buckets_printer(output):\n    if len(output)==0:\n        print(\"There are no buckets available\")\n        return\n    pprint.pprint(output)\n\n\ndef gcp_list_buckets(handle) -> List:\n    \"\"\"gcp_list_buckets lists all GCP Buckets\n\n        :rtype: List of all GCP buckets\n    \"\"\"\n    try:\n        result=[]\n        storage_client = storage.Client(credentials=handle)\n        buckets = storage_client.list_buckets()\n        for bucket in buckets:\n            result.append(bucket.name)\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get GCP Compute Instances</h1>\r\n\r\n## Description\r\nThis Lego returns the the list of compute instances for a given Project and Zone.\r\n\r\n## Lego Details\r\n\r\n    gcp_list_compute_instances(handle: object, project: string, zone: string)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project: String GCP Project name\r\n        zone: String, Zone in which to get the instnances list from\r\n\r\n\r\n## Lego Input\r\n project:  GCP Project name eg. \"acme-dev\"\r\n zone: GCP Zone eg. \"us-west1-b\"\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances/gcp_list_compute_instances.json",
    "content": "{\n    \"action_title\": \"Get GCP compute instances\",\n    \"action_description\": \"Get GCP compute instances\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_list_compute_instances\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_VM\"]\n}\n  \n"
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances/gcp_list_compute_instances.py",
    "content": "from typing import List, Optional\nfrom pydantic import BaseModel, Field\nfrom google.cloud.compute_v1.services.instances import InstancesClient\nimport re\n\nclass InputSchema(BaseModel):\n    project: str = Field(\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    zone: Optional[str] = Field(\n        title = \"Zone\",\n        description = \"GCP Zone where instance list should be gotten from\"\n    )\n\n\ndef gcp_list_compute_instances_printer(output):\n    if len(output) == 0:\n        return\n    for instance in output:\n        print(instance)\n\ndef gcp_list_compute_instances(handle, project: str, zone:str=\"\") -> List:\n    \"\"\"gcp_list_compute_instances Returns the List of compute instances\n    from given project and zone\n\n    :type project: string\n    :param project: Google Cloud Platform Project\n\n    :type zone: string\n    :param zone: Zone to which the instance list in the project should be fetched.\n\n    :rtype: List of instances\n    \"\"\"\n    output = []\n    instanceClient = InstancesClient(credentials=handle)\n    if zone:\n        instances = instanceClient.list(project=project, zone=zone)\n        for instance in instances:\n            output.append({\"instance_name\": instance.name,\"instance_zone\": zone})\n    else:\n        request = {\"project\" : project,}\n        agg_list = instanceClient.aggregated_list(request=request)\n        for instance_zone, response in agg_list:\n            if response.instances:\n                for instance in response.instances:\n                    zone_url = re.compile(r'https:\\/\\/www\\.googleapis\\.com\\/compute\\/v1\\/projects\\/unskript-dev\\/zones\\/([A-Za-z0-9]+(-[A-Za-z0-9]+)+)')\n                    instance_zone = zone_url.search(instance.zone)\n                    output.append({\"instance_name\": instance.name, \"instance_zone\": instance_zone.group(1)})\n    return output\n"
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances_by_label/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get List of GCP compute instance by label</h1>\r\n\r\n## Description\r\nThis Lego get list of GCP compute instance by label.\r\n\r\n## Lego Details\r\n\r\n    gcp_get_instances_by_labels(handle: object, project: str, zone:str, key: str, value: str)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project_name: String GCP Project name\r\n        zone: String, Zone in which to get the instnances list from\r\n        key: GCP label key assigned to instance.\r\n        value: GCP label value assigned to instance.\r\n\r\n\r\n## Lego Input\r\n project:  GCP Project name eg. \"acme-dev\"\r\n zone: GCP Zone eg. \"us-west1-b\"\r\n key: label key\r\n value: label value\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances_by_label/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances_by_label/gcp_list_compute_instances_by_label.json",
    "content": "{\n    \"action_title\": \"Get List of GCP compute instance by label\",\n    \"action_description\": \"Get List of GCP compute instance by label\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_list_compute_instances_by_label\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_VM\" ]\n}\n  \n"
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances_by_label/gcp_list_compute_instances_by_label.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom google.cloud.compute_v1.services.instances import InstancesClient\n\n\nclass InputSchema(BaseModel):\n    project: str = Field(\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    zone: str = Field(\n        title = \"Zone\",\n        description = \"GCP Zone where instance list should be gotten from\"\n    )\n    key: str = Field(\n        title = \"Label Key\",\n        description = \"GCP label key assigned to instance\"\n    )\n    value: str = Field(\n        title = \"Label Value\",\n        description = \"GCP label value assigned to instance\"\n    )\n\n\ndef gcp_list_compute_instances_by_label_printer(output):\n    if len(output) == 0:\n        return\n    pprint.pprint(output)\n\n\ndef gcp_list_compute_instances_by_label(\n        handle,\n        project: str,\n        zone:str,\n        key: str,\n        value: str\n        ) -> List:\n    \"\"\"gcp_list_compute_instances_by_label Returns the List of compute instances\n\n        :type project: string\n        :param project: Google Cloud Platform Project\n\n        :type zone: string\n        :param zone: Zone to which the instance list in the project should be fetched.\n\n        :type key: string\n        :param key: GCP label key assigned to instance.\n\n        :type value: string\n        :param value: GCP label value assigned to instance.\n\n        :rtype: List of instances\n    \"\"\"\n    output = []\n    ic = InstancesClient(credentials=handle)\n    try:\n        result = ic.list(project=project, zone=zone)\n        instance_list = []\n        for instance in result:\n            instance_list.append(instance.name)\n        for instance_name in instance_list:\n            result = ic.get(project=project, zone=zone, instance=instance_name)\n            if key in result.labels.keys():\n   
             if value in result.labels.values():\n                    output.append(result.name)\n    except Exception as error:\n        raise error\n\n    return output\n"
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances_by_vpc/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get list  compute instance by VPC</h1>\r\n\r\n## Description\r\nThis Lego returns the the list of compute instances filtered by vpc.\r\n\r\n## Lego Details\r\n\r\n    gcp_list_instances_by_vpc(handle: object, project: str, zone: str, vpc_id: str)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project: String GCP Project name\r\n        zone: String, Zone in which to get the instnances list from\r\n        vpc_id: Name of the VPC.\r\n\r\n\r\n## Lego Input\r\n project:  GCP Project name eg. \"acme-dev\"\r\n zone: GCP Zone eg. \"us-west1-b\"\r\n vpc_id: Name of the VPC.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances_by_vpc/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances_by_vpc/gcp_list_compute_instances_by_vpc.json",
    "content": "{\n    \"action_title\": \"Get list  compute instance by VPC\",\n    \"action_description\": \"Get list  compute instance by VPC\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_list_compute_instances_by_vpc\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_VM\",\"CATEGORY_TYPE_GCP_VPC\" ]\n}\n  \n"
  },
  {
    "path": "GCP/legos/gcp_list_compute_instances_by_vpc/gcp_list_compute_instances_by_vpc.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom google.cloud.compute_v1.services.instances import InstancesClient\n\nclass InputSchema(BaseModel):\n    project: str = Field(\n        title=\"GCP Project\",\n        description=\"GCP Project Name\"\n    )\n    zone: str = Field(\n        title=\"Zone\",\n        description=\"GCP Zone where instance list should be gotten from\"\n    )\n    vpc_id: str = Field(\n        title=\"VPC Name\",\n        description=\"Name of the VPC.\"\n    )\n\n\ndef gcp_list_compute_instances_by_vpc_printer(output):\n    if len(output) == 0:\n        return\n    pprint.pprint(output)\n\n\ndef gcp_list_compute_instances_by_vpc(\n        handle,\n        project: str,\n        zone: str,\n        vpc_id: str\n        ) -> List:\n    \"\"\"gcp_list_instances_by_vpc Returns the List of compute instances\n    \n        :type project: string\n        :param project: Google Cloud Platform Project\n\n        :type zone: string\n        :param zone: Zone to which the instance list in the project should be fetched.\n\n        :type vpc_id: string\n        :param vpc_id: Name of the VPC.\n\n        :rtype: List of instances\n    \"\"\"\n    result = []\n    ic = InstancesClient(credentials=handle)\n    instances = ic.list(project=project, zone=zone)\n    instance_list = []\n    for instance in instances:\n        instance_list.append(instance.name)\n\n    for instance in instance_list:\n        get_data = ic.get(project=project, zone=zone, instance=instance)\n        response = get_data.network_interfaces\n        for data in response:\n            if vpc_id in data.network:\n                result.append(instance)\n\n    return result\n"
  },
  {
    "path": "GCP/legos/gcp_list_gke_cluster/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>GCP List GKE Cluster</h1>\r\n\r\n## Description\r\nThis Lego list all GKE clusters for a given Project and Zone.\r\n\r\n## Lego Details\r\n\r\n    list_gke_cluster(handle: object, project_id: str, zone: str)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project_id: String GCP Project name\r\n        zone: Zone to which the cluster in the project should be fetched.\r\n\r\n\r\n## Lego Input\r\n project:  GCP Project name eg. \"acme-dev\"\r\n zone: GCP Zone eg. \"us-west1-b\"\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_list_gke_cluster/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_list_gke_cluster/gcp_list_gke_cluster.json",
    "content": "{\n    \"action_title\": \"GCP List GKE Cluster\",\n    \"action_description\": \"GCP List GKE Cluster\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_list_gke_cluster\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_GKE\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_list_gke_cluster/gcp_list_gke_cluster.py",
    "content": "import pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom google.cloud import container_v1\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    zone: str = Field(\n        title = \"Zone\",\n        description = \"GCP Zone where instance list should be gotten from\"\n    )\n\n\ndef gcp_list_gke_cluster_printer(output):\n    if len(output) == 0:\n        return\n    pprint.pprint(output)\n\ndef gcp_list_gke_cluster(handle, project_id: str, zone: str) -> List:\n    \"\"\"gcp_list_gke_cluster Returns the list of cluster\n\n        :type project_id: string\n        :param project_id: Google Cloud Platform Project\n\n        :type zone: string\n        :param zone: Zone to which the cluster in the project should be fetched.\n\n        :rtype: list of cluster\n    \"\"\"\n    # Create a client\n    cluster_list = []\n    client = container_v1.ClusterManagerClient(credentials=handle)\n    try:\n        parent = f'projects/{project_id}/locations/{zone}'\n        response = client.list_clusters(parent=parent)\n        for cluster in response.clusters:\n            cluster_list.append(cluster.name)\n    except Exception as error:\n        raise error\n\n    return cluster_list\n"
  },
  {
    "path": "GCP/legos/gcp_list_nodes_in_gke_cluster/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>GCP List Nodes in GKE Cluster</h1>\r\n\r\n## Description\r\nThis Lego list nodes of GKE cluster for a given Project and Zone.\r\n\r\n## Lego Details\r\n\r\n    list_nodes_of_gke_cluster(handle: object, project_id: str, zone: str, cluster_name: str)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project_id: String GCP Project name\r\n        zone: Zone to which the cluster in the project should be fetched.\r\n        cluster_name: Name of the GKE cluster.\r\n\r\n\r\n## Lego Input\r\n project:  GCP Project name eg. \"acme-dev\"\r\n zone: GCP Zone eg. \"us-west1-b\"\r\n cluster_name: cluster Name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_list_nodes_in_gke_cluster/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_list_nodes_in_gke_cluster/gcp_list_nodes_in_gke_cluster.json",
    "content": "{\n    \"action_title\": \"GCP List Nodes in GKE Cluster\",\n    \"action_description\": \"GCP List Nodes of GKE Cluster\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_list_nodes_in_gke_cluster\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_GKE\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_list_nodes_in_gke_cluster/gcp_list_nodes_in_gke_cluster.py",
    "content": "import pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom google.cloud import container_v1\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    zone: str = Field(\n        title = \"Zone\",\n        description = \"GCP Zone where instance list should be gotten from\"\n    )\n    cluster_name: str = Field(\n        title = \"Cluster Name\",\n        description = \"Name of the GKE cluster.\"\n    )\n\n\ndef gcp_list_nodes_in_gke_cluster_printer(output):\n    if len(output) == 0:\n        return\n    pprint.pprint(output)\n\ndef gcp_list_nodes_in_gke_cluster(handle, project_id: str, zone: str, cluster_name: str) -> List:\n    \"\"\"gcp_list_nodes_in_gke_cluster Returns the list of cluster nodes\n\n        :type project_id: string\n        :param project_id: Google Cloud Platform Project\n\n        :type zone: string\n        :param zone: Zone to which the cluster in the project should be fetched.\n\n        :type cluster_name: string\n        :param cluster_name: Name of the GKE cluster.\n\n        :rtype: list of cluster nodes\n    \"\"\"\n    # Create a client\n    node_list = []\n    client = container_v1.ClusterManagerClient(credentials=handle)\n    try:\n        response = client.list_node_pools(project_id=project_id, zone=zone,\n                                        cluster_id=cluster_name)\n        for nodes in response.node_pools:\n            node_list.append(nodes.name)\n    except Exception as error:\n        raise error\n\n    return node_list\n"
  },
  {
    "path": "GCP/legos/gcp_list_public_buckets/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>List Public GCP Buckets</h1>\n\n## Description\nThis Lego lists all publicly available GCP buckets.\n\n## Lego Details\n\n    gcp_list_public_buckets(handle: object)\n\n        handle: Object of type unSkript GCP Connector\n        bucket_name: String, Bucket name\n\n## Lego Input\nNone\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_list_public_buckets/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_list_public_buckets/gcp_list_public_buckets.json",
    "content": "{\n    \"action_title\": \"List all Public GCP Buckets\",\n    \"action_description\": \"List all publicly available GCP buckets\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_list_public_buckets\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\":\"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_verbs\": [\"list\"],\n    \"action_nouns\": [\"public\",\"buckets\",\"gcp\"],\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_BUCKET\" ]\n}"
  },
  {
    "path": "GCP/legos/gcp_list_public_buckets/gcp_list_public_buckets.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel\nfrom google.cloud import storage\n\nclass InputSchema(BaseModel):\n    pass\n\ndef gcp_list_public_buckets_printer(output):\n    if len(output)==0:\n        print(\"There are no publicly readable buckets available\")\n        return\n    print(output)\n\n\ndef gcp_list_public_buckets(handle) -> List:\n    \"\"\"gcp_list_public_buckets lists all public GCP Buckets\n\n        :rtype: List of all public GCP buckets\n    \"\"\"\n    try:\n        storage_client = storage.Client(credentials=handle)\n        buckets = storage_client.list_buckets()\n        result = []\n        for bucket in buckets:\n            l = str(bucket.name)\n            b = storage_client.bucket(l)\n            policy = b.get_iam_policy(requested_policy_version=3)\n            for binding in policy.bindings:\n                if binding['members']=={'allUsers'}:\n                        result.append(bucket.name)\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "GCP/legos/gcp_list_secrets/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>List GCP Secrets</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego List GCP Secrets.\r\n\r\n\r\n## Lego Details\r\n\r\n    gcp_list_secrets(handle: object, name: str)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        name: Name of the Google Cloud Project.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_list_secrets/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_list_secrets/gcp_list_secrets.json",
    "content": "{\r\n    \"action_title\": \"List GCP Secrets\",\r\n    \"action_description\": \"List of your GCP Secrets\",\r\n    \"action_type\": \"LEGO_TYPE_GCP\",\r\n    \"action_entry_function\": \"gcp_list_secrets\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\":\"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_SECRET\" ]\r\n}\r\n    "
  },
  {
    "path": "GCP/legos/gcp_list_secrets/gcp_list_secrets.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom google.cloud import secretmanager\n\nclass InputSchema(BaseModel):\n    name: str = Field(\n        title='Project Name',\n        description='Name of the Google Cloud Project.')\n\ndef gcp_list_secrets_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef gcp_list_secrets(handle, name: str) -> List:\n    \"\"\"gcp_list_secrets List all the secrets for a given project.\n\n        :type name: string\n        :param name: Name of the Google Cloud Project.\n\n        :rtype: List of the names of all the secrets.\n    \"\"\"\n    client = secretmanager.SecretManagerServiceClient(credentials=handle)\n\n    # Input param validation.\n    parent = \"projects/\" + name\n    try:\n            resp = client.list_secrets(parent=parent)\n    except Exception as e:\n        raise e\n    output = []\n    for i in resp.secrets:\n        output.append(i.name)\n    return output\n"
  },
  {
    "path": "GCP/legos/gcp_list_service_accounts/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>GCP List Service Accounts</h1>\n\n## Description\nThis Lego lists all the available service accounts from a project in GCP.\n\n## Lego Details\n\n    gcp_list_service_accounts(handle: object, project_id: str)\n\n        handle: Object of type unSkript GCP Connector\n        project_id: Name of the project\n\n## Lego Input\nproject_id: Name of the project. e.g. \"unskript-test2\"\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_list_service_accounts/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_list_service_accounts/gcp_list_service_accounts.json",
    "content": "{\n    \"action_title\": \"GCP List Service Accounts\",\n    \"action_description\": \"GCP List Service Accounts\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_list_service_accounts\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_IAM\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_list_service_accounts/gcp_list_service_accounts.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nimport googleapiclient.discovery\n\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"Project ID\",\n        description = \"Name of the project e.g unskript-dev\"\n    )\n\ndef gcp_list_service_accounts_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef gcp_list_service_accounts(handle, project_id: str) -> List:\n    \"\"\"gcp_list_service_accounts Returns a list of service accounts\n\n        :type project_id: string\n        :param project_id: Name of the project\n\n        :rtype: List of service accounts\n    \"\"\"\n    result = []\n    service = googleapiclient.discovery.build(\n        'iam', 'v1', credentials=handle)\n    try:\n        service_accounts = service.projects().serviceAccounts().list(\n            name='projects/' + project_id).execute()\n\n        for account in service_accounts[\"accounts\"]:\n            result.append(account[\"name\"])\n\n    except Exception as error:\n        raise error\n\n    return result\n"
  },
  {
    "path": "GCP/legos/gcp_list_vms_access/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>List GCP VMs</h1>\n\n## Description\nThis Lego lists all available GCP VMs and whether or not the VMs are publicly accessible.\n\n## Lego Details\n\n    gcp_list_vms_access(handle: object, project: str, zone: str)\n\n        handle: Object of type unSkript GCP Connector\n        project: Name of the Google Cloud Project\n        zone: Name of the Google Cloud Zone\n\n## Lego Input\nproject: Name of the project e.g. \"acme-dev\"\nzone: GCP Zone e.g. \"us-west1-b\"\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./3.jpg\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_list_vms_access/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_list_vms_access/gcp_list_vms_access.json",
    "content": "{\n    \"action_title\": \"List all GCP VMs and if Publicly Accessible\",\n    \"action_description\": \"Lists all GCP VMs and identifies those that are publicly accessible.\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_list_vms_access\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"list\"],\n    \"action_nouns\": [\"VMs\",\"gcp\"],\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_VMS\"]\n\n}"
  },
  {
    "path": "GCP/legos/gcp_list_vms_access/gcp_list_vms_access.py",
    "content": "##  Copyright (c) 2023 unSkript, Inc\n## Written by Doug Sillars & ChatGPT\n##  All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom google.cloud import compute\nfrom beartype import beartype\n\n\nclass InputSchema(BaseModel):\n    project: str = Field(\n        title='Project Name',\n        description='Name of the Google Cloud Project.')\n    zone: str = Field(\n        title='Zone',\n        description='Name of the Google Cloud Zone where the project is located.')\n\n\n\n@beartype\ndef gcp_list_vms_access_printer(output):\n    if len(output)==0:\n        print(\"There are no VMs available in this project and zone\")\n        return\n    print(output)\n\n\n@beartype\ndef gcp_list_vms_access(handle, project:str, zone:str) -> List:\n    \"\"\"gcp_list_vms_access Lists all VMs in the zone and whether each is publicly accessible.\n\n        :type project: string\n        :param project: Name of the Google Cloud Project.\n\n        :type zone: string\n        :param zone: Name of the Google Cloud Zone.\n\n        :rtype: List of dicts with the VM name and public accessibility.\n    \"\"\"\n    compute_client = compute.InstancesClient(credentials=handle)\n\n    vms = compute_client.list(project=project, zone=zone)\n    vm_list = []\n    for vm in vms:\n        vm_info = {}\n        vm_info['name'] = vm.name\n        # A VM is publicly accessible when any of its network interfaces\n        # has an access config, i.e. an external IP attached.\n        vm_info['publicly_accessible'] = any(\n            len(nic.access_configs) > 0 for nic in vm.network_interfaces)\n        vm_list.append(vm_info)\n\n    return vm_list\n"
  },
  {
    "path": "GCP/legos/gcp_remove_member_from_iam_role/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>GCP Remove Member from IAM Role</h1>\n\n## Description\nThis Lego removes a member from an existing IAM role in GCP.\n\n## Lego Details\n\n    gcp_remove_member_from_iam_role(handle: object, project_id: str, role: str, member_email:str, version:int)\n\n        handle: Object of type unSkript GCP Connector\n        project_id: Name of the project\n        role: Role name from which the member should be removed e.g iam.serviceAccountUser\n        member_email: Member email which has GCP access e.g test@company.com\n        version: Requested Policy Version\n\n## Lego Input\nproject_id: Name of the project. e.g. \"unskript-test2\"\nrole: Role name from which the member should be removed e.g iam.serviceAccountUser\nmember_email: Member email which has GCP access e.g test@company.com\nversion: Requested Policy Version\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_remove_member_from_iam_role/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_remove_member_from_iam_role/gcp_remove_member_from_iam_role.json",
    "content": "{\n    \"action_title\": \"GCP Remove Member from IAM Role\",\n    \"action_description\": \"Remove member from the chosen IAM role.\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_remove_member_from_iam_role\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_IAM\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_remove_member_from_iam_role/gcp_remove_member_from_iam_role.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nimport googleapiclient.discovery\n\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"Project ID\",\n        description = \"Name of the project e.g unskript-dev\"\n    )\n    role: str = Field(\n        title = \"Role Name\",\n        description = \"Role name from which the member should be removed e.g iam.serviceAccountUser\"\n    )\n    member_email: str = Field(\n        title = \"Member Email\",\n        description = \"Member email which has GCP access e.g test@company.com\"\n    )\n    version: int = Field(\n        title = \"Requested Policy Version\",\n        description = \"Requested Policy Version\"\n    )\n\ndef gcp_remove_member_from_iam_role_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef gcp_remove_member_from_iam_role(\n        handle,\n        project_id: str,\n        role: str,\n        member_email:str,\n        version:int = 1\n        ) -> Dict:\n    \"\"\"gcp_remove_member_from_iam_role Returns a Dict of new policy details\n\n        :type project_id: string\n        :param project_id: Name of the project\n\n        :type role: string\n        :param role: Role name from which the member should be removed e.g iam.serviceAccountUser\n\n        :type member_email: string\n        :param member_email: Member email which has GCP access e.g test@company.com\n\n        :type version: int\n        :param version: Requested Policy Version\n\n        :rtype: Dict of new policy details\n    \"\"\"\n    service = googleapiclient.discovery.build(\n        \"cloudresourcemanager\", \"v1\", credentials=handle)\n\n    result = {}\n    try:\n        member = \"user:\" + member_email\n        if \"gserviceaccount\" in member_email:\n            member = \"serviceAccount:\" + member_email\n        get_policy = (\n            service.projects().getIamPolicy(\n                resource=project_id,\n                body={\"options\": {\"requestedPolicyVersion\": version}}).execute())\n\n        get_role = \"roles/\" + role\n        binding = next(b for b in get_policy[\"bindings\"] if b[\"role\"] == get_role)\n        if \"members\" in binding and member in binding[\"members\"]:\n            binding[\"members\"].remove(member)\n\n        remove_member = (\n            service.projects()\n            .setIamPolicy(resource=project_id, body={\"policy\": get_policy}).execute())\n        result = remove_member\n\n    except Exception as error:\n        raise error\n\n    return result\n"
  },
  {
    "path": "GCP/legos/gcp_remove_role_from_service_account/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>GCP Remove Role from Service Account</h1>\n\n## Description\nThis Lego removes a role binding from a service account in GCP.\n\n## Lego Details\n\n    gcp_remove_role_from_service_account(handle: object, project_id: str, role: str, sa_id:str)\n\n        handle: Object of type unSkript GCP Connector\n        project_id: Name of the project\n        role: Role name which should be removed e.g iam.serviceAccountUser\n        sa_id: Service Account email\n\n## Lego Input\nproject_id: Name of the project. e.g. \"unskript-test2\"\nrole: Role name which should be removed e.g iam.serviceAccountUser\nsa_id: Service Account email\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_remove_role_from_service_account/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_remove_role_from_service_account/gcp_remove_role_from_service_account.json",
    "content": "{\n    \"action_title\": \"GCP Remove Role from Service Account\",\n    \"action_description\": \"Remove role and member from the service account\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_remove_role_from_service_account\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_SECOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_IAM\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_remove_role_from_service_account/gcp_remove_role_from_service_account.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom googleapiclient import discovery\n\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"Project ID\",\n        description = \"Name of the project e.g unskript-dev\"\n    )\n    role: str = Field(\n        title = \"Role Name\",\n        description = \"Role name which should be removed e.g iam.serviceAccountUser\"\n    )\n    sa_id: str = Field(\n        title = \"Service Account Email\",\n        description = \"Service Account email id e.g test-user@unskript-dev.iam.gserviceaccount.com\"\n    )\n\ndef gcp_remove_role_from_service_account_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef gcp_remove_role_from_service_account(\n        handle,\n        project_id: str,\n        role: str,\n        sa_id:str\n        ) -> Dict:\n    \"\"\"gcp_remove_role_from_service_account Returns a Dict of new policy details\n\n        :type project_id: string\n        :param project_id: Name of the project\n\n        :type role: string\n        :param role: Role name which should be removed e.g iam.serviceAccountUser\n\n        :type sa_id: string\n        :param sa_id: Service Account email\n\n        :rtype: Dict of new policy details\n    \"\"\"\n    service = discovery.build('iam', 'v1', credentials=handle)\n    result = {}\n    try:\n        resource = f'projects/{project_id}/serviceAccounts/{sa_id}'\n        request = service.projects().serviceAccounts().getIamPolicy(resource=resource)\n        get_policy = request.execute()\n\n        get_role = \"roles/\" + role\n        binding = next(b for b in get_policy[\"bindings\"] if b[\"role\"] == get_role)\n        get_policy[\"bindings\"].remove(binding)\n\n        set_policy = service.projects().serviceAccounts().setIamPolicy(\n            resource=resource,\n            body={\"policy\": get_policy}\n            )\n        policy_output = set_policy.execute()\n        result = policy_output\n    except Exception as error:\n        raise error\n\n    return result\n"
  },
  {
    "path": "GCP/legos/gcp_remove_user_role/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Remove role from user</h1>\n\n## Description\nGCP Lego for removing a role from a user (default: 'viewer')\n\n## Lego Details\n    gcp_remove_user_role(handle, policy, role: str, member: str, resource: str)\n        policy: IAM policy from which the member is removed\n        role: user role to be removed\n        member: user's id to be removed\n        resource: resource for which the policy applies\n\n## Lego Input\nThis Lego takes 5 inputs: handle, policy, role, member and resource.\n\n\n## Lego Output\n\nConfirmation of removal of role.\n\nTested action on sandbox.\n\n<img src=\"./1.png\">\n"
  },
  {
    "path": "GCP/legos/gcp_remove_user_role/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_remove_user_role/gcp_remove_user_role.json",
    "content": "{\n    \"action_title\": \"Remove role from user\",\n    \"action_description\": \"GCP lego for removing a role from a user (default: 'viewer')\",\n    \"action_entry_function\": \"gcp_remove_user_role\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_IAM\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_remove_user_role/gcp_remove_user_role.py",
    "content": "import pprint\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    role: str = Field(\n        title = \"Role\",\n        description = \"GCP user role to be removed\"\n    )\n    member: str = Field(\n        title = \"Member\",\n        description = \"user's id to be removed\"\n    )\n    resource: str = Field(\n        title = \"Resource\",\n        description = ('GCP Resource in the form of projects/<PROJECT_ID>'\n                       '/serviceAccounts/<SERVICE_ACCOUNT_NAME>')\n    )\ndef gcp_remove_user_role_printer(output):\n    if output is None:\n        return\n    pprint.pprint(\"User role removed successfully.\")\n    pprint.pprint(output)\n\ndef gcp_remove_user_role(handle, policy, role: str, member: str, resource: str):\n    \"\"\"Removes a member from a role binding.\n\n        :type role: string\n        :param role: user role to be removed.\n\n        :type member: string\n        :param member: user's id to be removed.\n\n        :type resource: string\n        :param resource: resource for which the policy is being requested.\n\n        :rtype: the updated policy with the member removed.\"\"\"\n\n    # Find the binding for the given role and drop the member from it.\n    binding = next(b for b in policy[\"bindings\"] if b[\"role\"] == role)\n    if \"members\" in binding and member in binding[\"members\"]:\n        binding[\"members\"].remove(member)\n    return policy\n"
  },
  {
    "path": "GCP/legos/gcp_resize_gke_cluster/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>GCP Resize a GKE cluster</h1>\r\n\r\n## Description\r\nThis Lego resizes a GKE cluster by modifying its node count for a given Project, Cluster and Zone.\r\n\r\n## Lego Details\r\n\r\n    gcp_resize_gke_cluster(handle: object, project_id: str, zone: str, cluster_name: str, node_id: str, node_count:int)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project_id: String GCP Project name\r\n        zone: Zone in which the cluster is located.\r\n        cluster_name: Name of the GKE cluster.\r\n        node_id: Name of the GKE cluster Node.\r\n        node_count: Node count of GKE cluster.\r\n\r\n\r\n## Lego Input\r\n project_id: GCP Project name eg. \"acme-dev\"\r\n zone: GCP Zone eg. \"us-west1-b\"\r\n cluster_name: Cluster name\r\n node_id: Node name\r\n node_count: Cluster node count\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_resize_gke_cluster/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_resize_gke_cluster/gcp_resize_gke_cluster.json",
    "content": "{\n    \"action_title\": \"GCP Resize a GKE cluster\",\n    \"action_description\": \"GCP Resize a GKE cluster by modifying nodes\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_resize_gke_cluster\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_GKE\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_resize_gke_cluster/gcp_resize_gke_cluster.py",
    "content": "import pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom google.cloud import container_v1\nfrom google.protobuf.json_format import MessageToDict\n\n\nclass InputSchema(BaseModel):\n    project_id: str = Field(\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    zone: str = Field(\n        title = \"Zone\",\n        description = \"GCP Zone where the cluster is located\"\n    )\n    cluster_name: str = Field(\n        title = \"Cluster Name\",\n        description = \"Name of the GKE cluster.\"\n    )\n    node_id: str = Field(\n        title = \"Node Name\",\n        description = \"Name of the GKE cluster Node.\"\n    )\n    node_count: int = Field(\n        title = \"Node Count\",\n        description = \"Node count of GKE cluster.\"\n    )\n\n\ndef gcp_resize_gke_cluster_printer(output):\n    if len(output) == 0:\n        return\n    pprint.pprint(output)\n\n\ndef gcp_resize_gke_cluster(\n        handle,\n        project_id: str,\n        zone: str,\n        cluster_name: str,\n        node_id: str,\n        node_count:int) -> Dict:\n    \"\"\"gcp_resize_gke_cluster Returns the dict of cluster details\n\n        :type project_id: string\n        :param project_id: Google Cloud Platform Project\n\n        :type zone: string\n        :param zone: Zone in which the cluster is located.\n\n        :type cluster_name: string\n        :param cluster_name: Name of the GKE cluster.\n\n        :type node_id: string\n        :param node_id: Name of the GKE cluster Node.\n\n        :type node_count: int\n        :param node_count: Node count of GKE cluster.\n\n        :rtype: Dict of cluster details\n    \"\"\"\n    # Create a client\n    client = container_v1.ClusterManagerClient(credentials=handle)\n    try:\n        request = container_v1.SetNodePoolSizeRequest(\n            project_id=project_id,\n            zone=zone,\n            cluster_id=cluster_name,\n            node_pool_id=node_id,\n            node_count=node_count,\n        )\n\n        res = client.set_node_pool_size(request=request)\n        response = MessageToDict(res._pb)\n\n    except Exception as error:\n        raise error\n\n    return response\n"
  },
  {
    "path": "GCP/legos/gcp_restart_compute_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>GCP Restart Compute Instance</h1>\r\n\r\n## Description\r\nThis Lego restarts a compute instance.\r\n\r\n## Lego Details\r\n\r\n    gcp_restart_compute_instances(handle: object, project_name: str, zone:str, instance_name: str)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project_name: String GCP Project name\r\n        zone: String, Zone in which the instance is located\r\n        instance_name: Name of the instance.\r\n\r\n\r\n## Lego Input\r\n project_name: GCP Project name eg. \"acme-dev\"\r\n zone: GCP Zone eg. \"us-west1-b\"\r\n instance_name: Instance name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_restart_compute_instances/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_restart_compute_instances/gcp_restart_compute_instances.json",
    "content": "{\n    \"action_title\": \"GCP Restart compute instance\",\n    \"action_description\": \"GCP Restart compute instance\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_restart_compute_instances\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_VM\" ]\n}\n  \n"
  },
  {
    "path": "GCP/legos/gcp_restart_compute_instances/gcp_restart_compute_instances.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom google.cloud.compute_v1.services.instances import InstancesClient\n\n\nclass InputSchema(BaseModel):\n    project_name: str = Field(\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    zone: str = Field(\n        title = \"Zone\",\n        description = \"GCP Zone where the instance is located\"\n    )\n    instance_name: str = Field(\n        title = \"Instance Name\",\n        description = \"Name of the instance.\"\n    )\n\n\ndef gcp_restart_compute_instances_printer(output):\n    if len(output) == 0:\n        return\n    pprint.pprint(output)\n\n\ndef gcp_restart_compute_instances(\n        handle,\n        project_name: str,\n        zone:str,\n        instance_name: str\n        ) -> Dict:\n    \"\"\"gcp_restart_compute_instances Returns the dict of instance data\n\n        :type project_name: string\n        :param project_name: Google Cloud Platform Project\n\n        :type zone: string\n        :param zone: Zone in which the instance is located.\n\n        :type instance_name: string\n        :param instance_name: Name of the instance.\n\n        :rtype: Dict of instance data\n    \"\"\"\n    output = {}\n    try:\n        ic = InstancesClient(credentials=handle)\n        result = ic.reset(\n            project=project_name, zone=zone, instance=instance_name)\n\n        output['id'] = result.id\n        output['name'] = result.name\n        output['status'] = result.status\n    except Exception as error:\n        raise error\n\n    return output\n"
  },
  {
    "path": "GCP/legos/gcp_restore_disk_from_snapshot/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Restore GCP disk from a snapshot</h1>\n\n## Description\nRestore a GCP disk from a compute instance snapshot.\n\n## Lego Details\n\tgcp_restore_disk_from_snapshot(handle, project: str, zone: str, disk: str, snapshot_name: str)\n\t\thandle: Object of type unSkript GCP Connector.\n\t\tproject: Google Cloud Platform Project.\n\t\tzone: GCP Zone where the disk and snapshot reside.\n\t\tdisk: The name of the disk to restore.\n\t\tsnapshot_name: The name of the snapshot to restore from.\n\n## Lego Input\nThis Lego takes inputs handle, project, zone, disk, snapshot_name.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_restore_disk_from_snapshot/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_restore_disk_from_snapshot/gcp_restore_disk_from_snapshot.json",
    "content": "{\n  \"action_title\": \"Restore GCP disk from a snapshot \",\n  \"action_description\": \"Restore a GCP disk from a compute instance snapshot.\",\n  \"action_type\": \"LEGO_TYPE_GCP\",\n  \"action_entry_function\": \"gcp_restore_disk_from_snapshot\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_VM\"]\n}"
  },
  {
    "path": "GCP/legos/gcp_restore_disk_from_snapshot/gcp_restore_disk_from_snapshot.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\nfrom google.cloud.compute_v1.services.disks import DisksClient\nfrom google.cloud.compute_v1.services.snapshots import SnapshotsClient\nfrom google.cloud.compute_v1.types import Disk, Snapshot\n\n\n\nclass InputSchema(BaseModel):\n    project: str = Field(..., description='GCP Project Name', title='GCP Project')\n    zone: str = Field(\n        ...,\n        description='GCP Zone where the disk and snapshot reside.',\n        title='Zone',\n    )\n    disk: str = Field(\n        ..., description='The name of the disk to restore.', title='Disk name'\n    )\n    snapshot_name: str = Field(\n        ...,\n        description='The name of the snapshot to restore from.',\n        title='Snapshot name',\n    )\n\n\n\ndef gcp_restore_disk_from_snapshot_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef gcp_restore_disk_from_snapshot(handle, project: str, zone: str, disk: str, snapshot_name: str) -> str:\n    \"\"\"gcp_restore_disk_from_snapshot Returns the confirmation of disk restoration.\n\n    :type handle: object\n    :param handle: Object returned from Task Validate\n\n    :type project: string\n    :param project: Google Cloud Platform Project\n\n    :type zone: string\n    :param zone: GCP Zone where the disk and snapshot reside.\n\n    :type disk: string\n    :param disk: The name of the disk to restore.\n\n    :type snapshot_name: string\n    :param snapshot_name: The name of the snapshot to restore from.\n\n    :rtype: String of disk restoration confirmation\n    \"\"\"\n    disks_client = DisksClient(credentials=handle)\n    snapshots_client = SnapshotsClient(credentials=handle)\n\n    snapshot = snapshots_client.get(project=project, snapshot=snapshot_name)\n\n    # Create a Disk object with the Snapshot as the source\n    disk_to_restore = Disk(name=disk, 
source_snapshot=snapshot.self_link)\n\n    try:\n        # Creating a disk from snapshot\n        disks_client.insert(\n            project=project, zone=zone, disk_resource=disk_to_restore\n        )\n    except Exception as e:\n        raise e\n\n    return f\"Disk {disk} restored from Snapshot {snapshot_name}.\""
  },
  {
    "path": "GCP/legos/gcp_save_csv_to_google_sheets_v1/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Save a CSV to a Google Sheet</h1>\n\n## Description\nThis Action takes a variable CSV, and saves it to a Google Sheet\n\n## Action Details\n\nIn order to run this Action, you'll need to:\n\n1. Enable the Google Sheets API in your GCP console (https://console.cloud.google.com/apis/library/browse?)\n2. Add an IAM user as an editor of the Google Sheet.\n3. Your \"CSV\" must have each line as an array.\n\n```\ndef gcp_save_csv_to_google_sheets_v1(handle, csvList: List, GoogleSheetID: str, StartingCell: str) -> Dict:\n    service = build('sheets', 'v4', credentials=handle)\n    sheet  = service.spreadsheets()\n    body={'values':csvList}\n    result = sheet.values().update(spreadsheetId=GoogleSheetID, range=StartingCell, valueInputOption='USER_ENTERED', body=body).execute()\n    return result\n```\n\n\n## Action Input\ncsvList: CSV. Each line should be an array\nGoogleSheetID: This is the ID of the Google Sheet the file will be saved into. You can get this from the URL of the sheet\nStartingCell: Cell where the paste should begin\n\n\n## Action Output\nDict returned by the Sheets API update call. You will also see the Google Sheet update.\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_save_csv_to_google_sheets_v1/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_save_csv_to_google_sheets_v1/gcp_save_csv_to_google_sheets_v1.json",
    "content": "{\n  \"action_title\": \"Save CSV to Google Sheets\",\n  \"action_description\": \"Saves your CSV (see notes) into a prepared Google Sheet.\",\n  \"action_type\": \"LEGO_TYPE_GCP\",\n  \"action_entry_function\": \"gcp_save_csv_to_google_sheets_v1\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_SHEETS\" ]\n}"
  },
  {
    "path": "GCP/legos/gcp_save_csv_to_google_sheets_v1/gcp_save_csv_to_google_sheets_v1.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\n## CSV must have each line as a list. For Example:\n##[['This file was created 02/17/2023'],\n##['Service Code', 'Quota Name', 'Quota Code', 'Quota Value', 'Quota Unit', 'Global?', 'Adjustable?'],\n##['AWSCloudMap', 'DiscoverInstances operation per account steady rate', 'L-514A639A', 1000.0, 'None', False, True], ['AWSCloudMap', 'DiscoverInstances operation per account burst rate', 'L-76CF203B', 2000.0, 'None', False, True], ['AWSCloudMap', 'Namespaces per Region', 'L-0FE3F50E', 50.0, 'None', False, True],\n\n##You must also turn on the Google Sheets API in your Google Console:\n##https://console.cloud.google.com/apis/library/browse?\n\n## Add you IAM user as an editor to the Google Sheet\n\nfrom __future__ import annotations\nimport pprint\nfrom typing import List, Dict\nfrom pydantic import BaseModel, Field\nfrom googleapiclient.discovery import build\nfrom beartype import beartype\n\n\n\n\nclass InputSchema(BaseModel):\n    GoogleSheetID: str = Field(\n        description='SheetId (from the URL) of your Google Sheet',\n        title='GoogleSheetID',\n    )\n    StartingCell: str = Field(\n        '\"A1\"',\n        description='Starting Cell for the data insertion into the sheet.',\n        title='StartingCell',\n    )\n    csvList: List = Field(\n        description='List of rows to be inserted into the Google Sheet',\n        title='csvList',\n    )\n\n@beartype\ndef gcp_save_csv_to_google_sheets_v1_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n@beartype\ndef gcp_save_csv_to_google_sheets_v1(\n    handle,\n    csvList: List,\n    GoogleSheetID: str,\n    StartingCell: str\n    ) -> Dict:\n\n    service = build('sheets', 'v4', credentials=handle)\n    sheet  = service.spreadsheets()\n    body={'values':csvList}\n    result = sheet.values().update(\n        spreadsheetId=GoogleSheetID,\n        range=StartingCell,\n        
valueInputOption='USER_ENTERED',\n        body=body\n        ).execute()\n    print(\"result\",result)\n    return result\n"
  },
  {
    "path": "GCP/legos/gcp_stop_compute_instances/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>GCP Stop Compute Instance</h1>\r\n\r\n## Description\r\nThis Lego perform the compute instance Stop process.\r\n\r\n## Lego Details\r\n\r\n    gcp_stop_instance(handle: object, project_name: str, zone:str, instance_name: str)\r\n\r\n        handle: Object of type unSkript GCP Connector\r\n        project_name: String GCP Project name\r\n        zone: String, Zone in which to get the instnances list from\r\n        instance_name: Name of the instance.\r\n\r\n\r\n## Lego Input\r\n project:  GCP Project name eg. \"acme-dev\"\r\n zone: GCP Zone eg. \"us-west1-b\"\r\n instance_name: Instance name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "GCP/legos/gcp_stop_compute_instances/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_stop_compute_instances/gcp_stop_compute_instances.json",
    "content": "{\n    \"action_title\": \"GCP Stop compute instance\",\n    \"action_description\": \"GCP Stop compute instance\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_stop_compute_instances\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_VM\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_stop_compute_instances/gcp_stop_compute_instances.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom google.cloud.compute_v1.services.instances import InstancesClient\n\n\nclass InputSchema(BaseModel):\n    project_name: str = Field(\n        title = \"GCP Project\",\n        description = \"GCP Project Name\"\n    )\n    zone: str = Field(\n        title = \"Zone\",\n        description = \"GCP Zone where instance list should be gotten from\"\n    )\n    instance_name: str = Field(\n        title = \"Instance Name\",\n        description = \"Name of the instance.\"\n    )\n\n\ndef gcp_stop_compute_instances_printer(output):\n    if len(output) == 0:\n        return\n    pprint.pprint(output)\n\n\ndef gcp_stop_compute_instances(handle, project_name: str, zone:str, instance_name: str) -> Dict:\n    \"\"\"gcp_stop_compute_instance Returns the dict of instance data\n\n        :type project: string\n        :param project: Google Cloud Platform Project\n\n        :type zone: string\n        :param zone: Zone to which the instance list in the project should be fetched.\n\n        :type instance_name: string\n        :param instance_name: Name of the instance.\n\n        :rtype: Dict of instances data\n    \"\"\"\n    output = {}\n    try:\n        ic = InstancesClient(credentials=handle)\n        result = ic.stop(\n            project=project_name, zone=zone, instance=instance_name)\n\n        output['id'] = result.id\n        output['name'] = result.name\n        output['status'] = result.status\n    except Exception as error:\n        raise error\n\n    return output\n"
  },
  {
    "path": "GCP/legos/gcp_upload_file_to_bucket/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Upload an Object/Blob to a GCP Bucket</h1>\n\n## Description\nThis Lego upload an Object/Blob to a GCP Bucket.\n\n## Lego Details\n\n    gcp_upload_file_to_bucket(handle: object, blob_name: str, bucket_name: str, data: str)\n\n        handle: Object of type unSkript GCP Connector\n        blob_name: String, Blob Name to be given\n        bucket_name: String, Bucket name\n        data: String, String of data to be uploaded to blob/object\n\n## Lego Input\nblob_name: Blob name. eg- \"test-blob\"\nbucket_name: New bucket name. eg- \"unskript-test2\"\ndata: Data to be uploaded. eg- \" dummy data for testing\"\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "GCP/legos/gcp_upload_file_to_bucket/__init__.py",
    "content": ""
  },
  {
    "path": "GCP/legos/gcp_upload_file_to_bucket/gcp_upload_file_to_bucket.json",
    "content": "{\n    \"action_title\": \"Upload an Object to GCP Bucket\",\n    \"action_description\": \"Upload an Object/Blob in a GCP bucket\",\n    \"action_type\": \"LEGO_TYPE_GCP\",\n    \"action_entry_function\": \"gcp_upload_file_to_bucket\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"upload\"],\n    \"action_nouns\": [\"file\",\"bucket\",\"gcp\"],\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GCP\",\"CATEGORY_TYPE_GCP_BUCKET\" ]\n}\n"
  },
  {
    "path": "GCP/legos/gcp_upload_file_to_bucket/gcp_upload_file_to_bucket.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom google.cloud import storage\n\n\nclass InputSchema(BaseModel):\n    blob_name: str = Field(\n        title = \"Blob Name\",\n        description = \"Name of the object/blob to be created\"\n    )\n    bucket_name: str = Field(\n        title = \"Bucket Name\",\n        description = \"Name of the bucket to create object/blob\"\n    )\n    data: str = Field(\n        title = \"Input Data\",\n        description = \"String of data to be added to the object/blob\"\n    )\n\ndef gcp_upload_file_to_bucket_printer(output):\n    if output is None:\n        return\n    print(f\"Created an object {output['blob_name']} in {output['bucket_name']} bucket\")\n\n\ndef gcp_upload_file_to_bucket(handle,blob_name: str, bucket_name: str, data: str) -> Dict:\n    \"\"\"gcp_upload_file_to_bucket returns a List of objects in the Bucket\n\n        :type blob_name: string\n        :param bucket_name: Name of the object/blob to be created\n\n        :type bucket_name: string\n        :param bucket_name:Name of the bucket to create object/blob\n\n        :type data: string\n        :param bucket_name: String of data to be added to the object/blob\n\n        :rtype: Dict of blob details\n    \"\"\"\n    try:\n        result = {}\n        storage_client = storage.Client(credentials=handle)\n        bucket = storage_client.get_bucket(bucket_name)\n        blob = bucket.blob(blob_name)\n        blob = blob.upload_from_string(data)\n        result[\"blob_name\"] = blob_name\n        result[\"bucket_name\"] = bucket_name\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/README.md",
    "content": "\n# Github Actions\n* [Github Assign Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_assign_issue/README.md): Assign a github issue to a user\n* [Github Check if Pull Request is merged](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_check_if_pull_request_is_merged/README.md): Check if a Github Pull Request is merged\n* [Github Close Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_close_pull_request/README.md): Close pull request based on pull request number\n* [Github Count Stars](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_count_stars/README.md): Get count of stars for a repository\n* [Github Create Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_issue/README.md): Create a new Github Issue for a repository\n* [Github Create Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_team/README.md): Create a new Github Team\n* [Github Delete Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_delete_branch/README.md): Delete a github branch\n* [Github Get Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_branch/README.md): Get Github branch for a user in a repository\n* [Get Github Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_handle/README.md): Get Github Handle\n* [Github Get Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_issue/README.md): Get Github Issue from a repository\n* [Github Get Open Branches](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_open_branches/README.md): Get first 100 open branches for a given user in a given 
repo.\n* [Github Get Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_pull_request/README.md): Get Github Pull Request for a user in a repository\n* [Github Get Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_team/README.md): Github Get Team\n* [Github Get User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_user/README.md): Get Github User details\n* [Github Invite User to Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_invite_user_to_org/README.md): Invite a Github User to an Organization\n* [Github Comment on an Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_issue_comment/README.md): Add a comment to the selected GitHub Issue\n* [Github List Open Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_open_issues/README.md): List open Issues in a Github Repository\n* [Github List Organization Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_org_members/README.md): List Github Organization Members\n* [Github List PR Commits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_commits/README.md): Github List all Pull Request Commits\n* [Github List Pull Request Reviewers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_reviewers/README.md): List PR reviewers for a PR\n* [Github List Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_requests/README.md): List pull requests for a user in a repository\n* [Github List Stale Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_issues/README.md): List 
Stale Github Issues that have crossed a certain age limit.\n* [Github List Stale Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_pull_requests/README.md): Check for any Pull requests over a certain age. \n* [Github List Stargazers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stargazers/README.md): List of Github users that have starred (essentially bookmarked) a repository\n* [Github List Team Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_members/README.md): List Github Team Members for a given Team\n* [Github List Team Repositories](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_repos/README.md): Github List Team Repositories\n* [Github List Teams in Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_teams_in_org/README.md): List teams in a organization in GitHub\n* [Github List Webhooks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_webhooks/README.md): List webhooks for a repository\n* [Github Merge Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_merge_pull_request/README.md): Github Merge Pull Request\n* [Github Remove Member from Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_remove_member_from_org/README.md): Remove a member from a Github Organization\n"
  },
  {
    "path": "Github/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_assign_issue/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Github Assign Issue</h2>\n\n<br>\n\n## Description\nThis Lego assigns a Github issue to a user\n\n## Lego Details\n\n    github_assign_issue(handle, owner:str, repository:str, issue_number:int, assignee:str)\n\n        handle: Object of type unSkript Github Connector\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\n        issue_number:int, Issue number. Eg: 345\n        assignee: String, Username of the assignee.\n\n## Lego Input\nThis Lego take 5 inputs handle, owner, repository, issue_number, assignee\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_assign_issue/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_assign_issue/github_assign_issue.json",
    "content": "{\n    \"action_title\": \"Github Assign Issue\",\n    \"action_description\": \"Assign a github issue to a user\",\n    \"action_type\": \"LEGO_TYPE_GITHUB\",\n    \"action_entry_function\": \"github_assign_issue\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n    \"action_supports_iteration\": true,\n    \"action_supports_poll\": true,\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_ISSUE\" ]\n\n  }"
  },
  {
    "path": "Github/legos/github_assign_issue/github_assign_issue.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\n\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom github import GithubException\n\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        ..., description='Username of the GitHub user. Eg: \"johnwick\"', title='Owner'\n    )\n    repository: str = Field(\n        ...,\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n    issue_number: int = Field(\n        ...,\n        description='Issue Number. Eg: \"367\"',\n        title='Issue Number',\n    )\n    assignee: int = Field(\n        ...,\n        description='Username of the assignee.',\n        title='Assignee Username',\n    )\n\n\ndef github_assign_issue_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef github_assign_issue(\n        handle,\n        owner:str,\n        repository:str,\n        issue_number:int,\n        assignee:str\n        ) -> str:\n    \"\"\"github_assign_issue assigns an issue to user\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :type issue_number: int\n        :param issue_number: Issue number. 
Eg: 345\n\n        :type assignee: string\n        :param assignees: Username of the assignee.\n\n        :rtype: Status of assigning an issue to a user\n    \"\"\"\n    issue_no = int(issue_number)\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login + '/' + repository\n        repo = handle.get_repo(repo_name)\n        issue = repo.get_issue(issue_no)\n        result = issue.add_to_assignees(assignee)\n    except GithubException as e:\n        if e.status == 403:\n            raise Exception(\"You need admin access\") from e\n        if e.status == 404:\n            raise Exception(\"No such repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    if result is None:\n        return f\"Issue {issue_no} assigned to {assignee}\"\n    return f\"Unable to assign Issue {issue_no} to {assignee}\"\n"
  },
  {
    "path": "Github/legos/github_close_pull_request/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Close Pull Requests</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego closes an open pull requests for a user based on PR number\r\n\r\n## Lego Details\r\n\r\n    github_close_pull_request(handle: object , owner:str, repository:str, pull_request_number)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n        pull_request_number: Int, Pull request number. Eg: 167 \r\n\r\n## Lego Input\r\nThis Lego take 4 inputs handle, owner, repository, pull_request_number\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_close_pull_request/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_close_pull_request/github_close_pull_request.json",
    "content": "{\n  \"action_title\": \"Github Close Pull Request\",\n  \"action_description\": \"Close pull request based on pull request number\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_close_pull_request\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_PR\" ]\n\n}"
  },
  {
    "path": "Github/legos/github_close_pull_request/github_close_pull_request.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom github import GithubException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n    pull_request_number: int = Field(\n        description='Pull request number. Eg: 167',\n        title='Pull Request Number'\n    )\n\n\n\ndef github_close_pull_request_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_close_pull_request(handle, owner:str, repository:str, pull_request_number: int) -> str:\n    \"\"\"github_close_pull_request returns time at which the pull request was closed\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :type pull_request_number: int\n        :param pull_request_number: Pull request number. 
Eg: 167 \n\n        :rtype: String of details of pull request closure\n    \"\"\"\n    result = []\n    pr_number = int(pull_request_number)\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login+'/'+repository\n        repo = handle.get_repo(repo_name)\n        pr = repo.get_pull(pr_number)\n        try:\n            if pr.state == \"open\":\n                result = pr.edit(state='closed')\n            else:\n                return f\"PR number {pr.number} is already closed\"\n            if result is None:\n                return f\"PR {pr.number} was closed at: {pr.closed_at} \"\n        except GithubException as e:\n            if e.status == 404:\n                raise Exception((\"You need admin access of an organization in case \"\n                                \"the repository is a part of an organization\")) from e\n            raise e.data\n    except GithubException as e:\n        if e.status == 403:\n            raise Exception(\"You need admin access\") from e\n        if e.status == 404:\n            raise Exception(\"No such pull number or repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_count_stars/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Count Stars</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the count of stars on a repository\r\n\r\n## Lego Details\r\n\r\n    github_count_stars(handle, owner:str, repository:str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, owner, repository\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_count_stars/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_count_stars/github_count_stars.json",
    "content": "{\n  \"action_title\": \"Github Count Stars\",\n  \"action_description\": \"Get count of stars for a repository\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_count_stars\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_INT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_REPO\" ]\n}"
  },
  {
    "path": "Github/legos/github_count_stars/github_count_stars.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom github import GithubException\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n\ndef github_count_stars_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_count_stars(handle, owner:str, repository:str) -> int:\n    \"\"\"github_count_stars counts number of stars on a repository\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :rtype: Count of number of stars on a repository\n    \"\"\"\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login +'/'+ repository\n        repo = handle.get_repo(repo_name)\n        stars = repo.get_stargazers()\n        result = len(list(stars))\n    except GithubException as e:\n        if e.status == 403:\n            raise Exception(\"You need admin access\") from e\n        if e.status == 404:\n            raise Exception(\"No such repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_create_issue/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Create Issue</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego creates a new Github Issue and returns its details\r\n\r\n## Lego Details\r\n\r\n    github_create_issue(handle, owner:str, repository:str, title:str, description:str, assignee: str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n        title: String, Title if the Github Issue\r\n        description: String, Description of the Github Issue\r\n        assignee: String, Username of the Assignee\r\n\r\n## Lego Input\r\nThis Lego take 6 inputs handle, owner, repository, title, description, assignee\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_create_issue/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_create_issue/github_create_issue.json",
    "content": "{\n  \"action_title\": \"Github Create Issue\",\n  \"action_description\": \"Create a new Github Issue for a repository\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_create_issue\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_ISSUE\" ]\n}"
  },
  {
    "path": "Github/legos/github_create_issue/github_create_issue.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\n\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom github import GithubException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n    title: str = Field(\n        description='Title if the Github Issue',\n        title='Title of the Issue'\n    )\n    description: str = Field(\n        description='Description of the Github Issue', \n        title='Description of the Issue'\n    )\n    assignee: str = Field(\n        description='Username of the Github User to assign this issue to ',\n        title='Username of the Assignee'\n    )\n\n\n\ndef github_create_issue_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef github_create_issue(\n        handle,\n        owner:str,\n        repository:str,\n        title:str,\n        description:str,\n        assignee: str\n        ) -> Dict:\n    \"\"\"github_create_issue returns details of newly created issue\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. 
Eg: \"Awesome-CloudOps-Automation\"\n\n        :type title: string\n        :param title: Title if the Github Issue\n\n        :type description: string\n        :param description: Description of the Github Issue\n\n        :type assignee: string\n        :param assignee: Username of the Assignee\n        \n        :rtype: Dict of newly created issue\n    \"\"\"\n    issue_details = {}\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login + '/' + repository\n        repo = handle.get_repo(repo_name)\n        res = repo.create_issue(title=title, body=description, assignee=assignee)\n        issue_details[\"title\"] = res.title\n        issue_details[\"issue_number\"] = res.number\n        issue_details[\"assignee\"] = res.assignee.login\n    except GithubException as e:\n        if e.status == 403:\n            raise Exception(\"You need admin access\") from e\n        if e.status == 404:\n            raise Exception(\"No such repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return issue_details\n"
  },
  {
    "path": "Github/legos/github_create_team/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Github Create Team</h2>\n\n<br>\n\n## Description\nThis Lego creates a new Github team and returns the details of a team.\n\n## Lego Details\n\n    github_create_team(handle: object, organization_name: str, team_name: str, repositories:list, privacy:str, description:str)\n\n        handle: Object of type unSkript Github Connector\n        organization_name: String, Organization Name\n        team_name: String, Team Name\n        description: Optional String, \n        privacy: Enum, Privacy type to be given to the team. \"secret\" - only visible to organization owners and members of  this team, \"closed\"- visible to all members of this organization. By default type \"secret\" will be considered. \n        repositories: List, List of the GitHub repositories to add to the new team. Eg: [\"repo1\",\"repo2\"]'\n\n## Lego Input\nThis Lego take 6 inputs handle, organization_name, team_name, description, privacy, repositories.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_create_team/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_create_team/github_create_team.json",
    "content": "{\n  \"action_title\": \"Github Create Team\",\n  \"action_description\": \"Create a new Github Team\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_create_team\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_TEAM\"]\n}"
  },
  {
    "path": "Github/legos/github_create_team/github_create_team.py",
    "content": "import pprint\nfrom typing import Optional, List, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.enums.github_team_privacy_enums import GithubTeamPrivacy\nfrom github import GithubException\n\nclass InputSchema(BaseModel):\n    team_name: str = Field(\n     description='Team name. Eg:\"backend\"',\n     title='Team Name'\n    )\n    description: Optional[str] = Field(\n        '', \n        description='Description of the new team.',\n        title='Description'\n    )\n    privacy: Optional[GithubTeamPrivacy] = Field(\n        description=('Privacy type to be given to the team. \"secret\" - only visible to '\n                     'organization owners and members of this team, \"closed\"- visible to '\n                     'all members of this organization. By default type \"secret\" will be '\n                     'considered. '), \n        title='Privacy'\n    )\n    organization_name: str = Field(\n       description='Github Organization Name. Eg: \"infosecorg\"',\n       title='Organization Name'\n    )\n    repositories: List = Field(\n        description='List of the GitHub repositories to add to the new team. Eg: [\"repo1\",\"repo2\"]',\n        title='repositories',\n    )\n\n\n\ndef github_create_team_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_create_team(\n        handle,\n        organization_name:str,\n        team_name:str,\n        repositories:list,\n        privacy:GithubTeamPrivacy=GithubTeamPrivacy.secret,\n        description:str=\"\"\n        ) -> Dict:\n    \"\"\"github_create_team returns details of newly created team.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type organization_name: string\n        :param organization_name: Organization name Eg: \"infosec\"\n\n        :type team_name: string\n        :param team_name: Team name. 
Eg: \"backend\"\n\n        :type description: string\n        :param description: Description of the new team.\n\n        :type repositories: string\n        :param repositories: List of the GitHub repositories to add to the new team. \n        Eg: [\"repo1\",\"repo2\"]\n\n        :type privacy: Enum\n        :param privacy: Privacy type to be given to the team. \"secret\" - only visible \n        to organization owners and members of this team, \"closed\"- visible to all members \n        of this organization. By default type \"secret\" will be considered. \n\n        :rtype: Dict of details of newly created team\n    \"\"\"\n    result = []\n    team_details = {}\n    repo_names =[]\n    list_of_repos = ''\n    privacy_settings = ''\n    if privacy is None or len(privacy)==0:\n        privacy_settings = \"secret\"\n    organization = handle.get_organization(organization_name)\n    for repo in repositories:\n        list_of_repos  = organization.get_repo(repo)\n        repo_names.append(list_of_repos)\n    try:\n        result = organization.create_team(\n            name=team_name,\n            repo_names=repo_names,\n            privacy=privacy_settings,\n            description=description\n            )\n        team_details[\"name\"]= result.name\n        team_details[\"id\"]= result.id\n    except GithubException as e:\n        if e.status == 404:\n            raise Exception(\"No such organization found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return team_details\n"
  },
  {
    "path": "Github/legos/github_delete_branch/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Delete Branch</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego deleted a given branch\r\n\r\n## Lego Details\r\n\r\n    github_delete_branch(handle: object, owner: str, repository: str, branch_name: str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\r\n        branch_name: String, Branch Name\r\n\r\n## Lego Input\r\nThis Lego take 4 inputs handle, owner, repository, branch_name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_delete_branch/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_delete_branch/github_delete_branch.json",
    "content": "{\n  \"action_title\": \"Github Delete Branch\",\n  \"action_description\": \"Delete a github branch\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_delete_branch\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_REPO\" ]\n}"
  },
  {
    "path": "Github/legos/github_delete_branch/github_delete_branch.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom github import GithubException\n\nclass InputSchema(BaseModel):\n    branch_name: str = Field(\n    description='Branch name. Eg:\"dummy-branch-name\"',\n    title='Branch Name'\n    )\n    owner: str = Field(\n    description='Username of the GitHub user. Eg: \"johnwick\"',\n    title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n\n\ndef github_delete_branch_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_delete_branch(handle, owner:str, repository: str, branch_name: str)-> str:\n    \"\"\"github_delete_branch returns details of the deleted branch.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. 
Eg: \"Awesome-CloudOps-Automation\"\n\n        :type branch_name: string\n        :param branch_name: Branch Name Eg: \"dummy-branch\"\n\n        :rtype: Deleted branch info\n    \"\"\"\n    flag_to_check_branch = 0\n    try:\n        user = handle.get_user(login=owner)\n        repo_name = user.login+\"/\"+repository\n        repo = handle.get_repo(repo_name)\n        if repo.full_name == repo_name:\n            branch = repo.get_branch(branch_name)\n            flag_to_check_branch = 0\n            if branch.name == branch_name:\n                flag_to_check_branch = 1\n                ref = repo.get_git_ref(f\"heads/{branch_name}\")\n                ref.delete()\n                return f\"{branch_name} successfully deleted\"\n        if flag_to_check_branch == 0:\n            return [f\"{branch_name} not found\"]\n    except GithubException as e:\n        if e.status == 403:\n            raise Exception(\"You need admin access\") from e\n        if e.status == 404:\n            raise Exception(\"No such username or repository\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return None\n"
  },
  {
    "path": "Github/legos/github_get_branch/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Get Branch</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Gets commit details for a branch for a given user in a given repo.\r\n\r\n## Lego Details\r\n\r\n    github_get_branch(handle: object, owner: str, repository: str, branch_name: str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n        branch_name: String, Branch Name\r\n\r\n## Lego Input\r\nThis Lego take 4 inputs handle, owner, repository, branch_name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_get_branch/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_get_branch/github_get_branch.json",
    "content": "{\n  \"action_title\": \"Github Get Branch\",\n  \"action_description\": \"Get Github branch for a user in a repository\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_get_branch\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_REPO\" ]\n}"
  },
  {
    "path": "Github/legos/github_get_branch/github_get_branch.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom github import GithubException\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    branch_name: str = Field(\n        description='Branch name. Eg:\"dummy-branch-name\"', \n        title='Branch Name'\n    )\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n\n\n\ndef github_get_branch_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_get_branch(handle, owner:str, repository: str, branch_name: str) -> Dict:\n    \"\"\"github_get_branch returns details of commits (if any) of a branche for a user.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. 
Eg: \"Awesome-CloudOps-Automation\"\n\n        :type branch_name: string\n        :param branch_name: Branch Name Eg: \"dummy-branch\"\n\n        :rtype: Dict of branch with commits for a user for a repository\n    \"\"\"\n    branch_info = {}\n    try:\n        user = handle.get_user(login=owner)\n        repo_name = user.login+\"/\"+repository\n        repo = handle.get_repo(repo_name)\n        if repo.full_name == repo_name:\n            branch = repo.get_branch(branch_name)\n            if branch.name == branch_name:\n                branch_info[\"branch\"] = branch.name\n                branch_info[\"commit\"] = branch.commit.sha\n            else:\n                return [f\"{branch_name} not found\"]\n    except GithubException as e:\n        if e.status == 403:\n            raise Exception(\"You need admin access\") from e\n        if e.status == 404:\n            raise Exception(\"No such repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return branch_info\n"
  },
  {
    "path": "Github/legos/github_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Github handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Get Github handle.\r\n\r\n## Lego Details\r\n\r\n    github_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n\r\n## Lego Input\r\nThis Lego take one inputs handle\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_get_handle/github_get_handle.json",
    "content": "{\n    \"action_title\": \"Get Github Handle\",\n    \"action_description\": \"Get Github Handle\",\n    \"action_type\": \"LEGO_TYPE_GITHUB\",\n    \"action_entry_function\": \"github_get_handle\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": false,\n    \"action_supports_iteration\": false\n}\n    "
  },
  {
    "path": "Github/legos/github_get_handle/github_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef github_get_handle(handle):\n    \"\"\"github_get_handle returns the github handle.\n\n          :rtype: Github handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Github/legos/github_get_issue/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Get Issue</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets a Github issue\r\n\r\n## Lego Details\r\n\r\n    github_get_issue(handle, owner:str, repository:str, issue_number:int)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n        issue_number:int, Issue number. Eg: 345\r\n\r\n## Lego Input\r\nThis Lego take 4 inputs handle, owner, repository, issue_number\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_get_issue/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_get_issue/github_get_issue.json",
    "content": "{\n  \"action_title\": \"Github Get Issue\",\n  \"action_description\": \"Get Github Issue from a repository\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_get_issue\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_ISSUE\" ]\n\n}"
  },
  {
    "path": "Github/legos/github_get_issue/github_get_issue.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom github import GithubException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        ..., description='Username of the GitHub user. Eg: \"johnwick\"', title='Owner'\n    )\n    repository: str = Field(\n        ...,\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n    issue_number: int = Field(\n        ...,\n        description='Issue Number. Eg: \"367\"',\n        title='Issue Number',\n    )\n\n\ndef github_get_issue_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef github_get_issue(handle, owner:str, repository:str, issue_number:int) -> Dict:\n    \"\"\"github_get_issue returns details of the issue\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :type issue_number: int\n        :param issue_number: Issue number. 
Eg: 345\n\n        :rtype: Dict of issue details\n    \"\"\"\n    issue_no = int(issue_number)\n    issue_details = {}\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login + '/' + repository\n        repo = handle.get_repo(repo_name)\n        issue = repo.get_issue(issue_no)\n        issue_details[\"title\"] = issue.title\n        issue_details[\"issue_number\"] = issue.number\n        if isinstance(issue.assignee, type(None)):\n            issue_details[\"assignee\"] = issue.assignee.login\n        else:\n            issue_details[\"assignee\"] = issue.assignee\n        issue_details[\"body\"] = issue.body\n        issue_details[\"state\"] = issue.state\n        dummy_date = issue.updated_at\n        formatted_date = dummy_date.strftime(\"%d-%m-%Y\")\n        issue_details[\"updated_at\"] = formatted_date\n    except GithubException as e:\n        if e.status == 403:\n            raise Exception(\"You need admin access\") from e\n        if e.status == 404:\n            raise Exception(\"No such repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return issue_details\n    "
  },
  {
    "path": "Github/legos/github_get_open_branches/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Get Open Branches</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Gets first 100 open branches for a given user in a given repo.\r\n\r\n## Lego Details\r\n\r\n    github_get_open_branches(handle: object, owner: str, repository: str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, owner, repository\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_get_open_branches/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_get_open_branches/github_get_open_branches.json",
    "content": "{\n  \"action_title\": \"Github Get Open Branches\",\n  \"action_description\": \"Get first 100 open branches for a given user in a given repo.\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_get_open_branches\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_REPO\" ]\n\n}"
  },
  {
    "path": "Github/legos/github_get_open_branches/github_get_open_branches.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n\n\ndef github_get_open_branches_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef github_get_open_branches(handle, owner: str, repository: str)-> List:\n    \"\"\"github_get_open_branches returns 100 open github branches for a user.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n    \n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :rtype: List of branches for a user for a repository\n    \"\"\"\n    result = []\n    try:\n        user = handle.get_user(login=owner)\n        repos = user.get_repos()\n        repo_name = owner+\"/\"+repository\n        if len(list(repos)) == 0:\n            return [f\"{owner} does not have any repositories\"]\n        for repo in repos:\n            if repo.full_name == repo_name:\n                branches = repo.get_branches()\n                result = [branch.name for branch in branches[:100]]\n    except GithubException as e:\n        if e.status == 403:\n            raise Exception(\"You need admin access\") from e\n        if e.status == 404:\n            raise Exception(\"No such pull number or repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_get_open_pull_requests/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Get Team</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets checks if a PR is merged\r\n\r\n## Lego Details\r\n\r\n    github_get_open_pull_requests(handle, repository: str, owner: str = \"\")\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\" (optional)\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, owner (optional), repository.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_get_open_pull_requests/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_get_open_pull_requests/github_get_open_pull_requests.json",
    "content": "{\n  \"action_title\": \"Github get open pull requests\",\n  \"action_description\": \"This action gets details of open pull requests\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_get_open_pull_requests\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_PR\" ],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Github/legos/github_get_open_pull_requests/github_get_open_pull_requests.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\nfrom tabulate import tabulate\n\nclass InputSchema(BaseModel):\n    owner: Optional[str] = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n\n\ndef github_get_open_pull_requests_printer(output_tuple):\n    if output_tuple is None or output_tuple[1] is None:\n        return\n    success, output = output_tuple\n    if not success:\n        headers = [\"PR Number\", \"Title\", \"Changed Files\", \"Review Comments\", \"Commits\"]\n        table = [[pr[\"pull_number\"], pr[\"pull_title\"], pr[\"pull_changed_files\"],\n                  pr[\"pull_review_comments\"], pr[\"pull_commits\"]] for pr in output]\n        print(tabulate(table, headers, tablefmt=\"grid\"))\n    else:\n        print(\"No unmerged pull requests found.\")\n\n\ndef github_get_open_pull_requests(handle, repository: str, owner: str = \"\") -> Tuple:\n    \"\"\"github_get_open_pull_requests returns status, list of open pull requests.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string (Optional)\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. 
Eg: \"Awesome-CloudOps-Automation\"\n\n        :rtype: Status, List of details of pull requests if it is not merged\n    \"\"\"\n    result = []\n    try:\n        if not owner:\n            owner = handle.get_user().login\n        repo = handle.get_repo(f\"{owner}/{repository}\")\n        prs = repo.get_pulls()\n\n        # Check if there are no open pull requests\n        if prs.get_page(0) == []:\n            print(\"No pull requests are open at the moment.\")\n            return (True, None)\n\n        for pr in prs:\n            if not pr.is_merged():\n                prs_dict = {\n                    \"pull_number\": pr.number,\n                    \"pull_title\": pr.title,\n                    \"pull_changed_files\": pr.changed_files,\n                    \"pull_review_comments\": pr.review_comments,\n                    \"pull_commits\": pr.commits\n                }\n                result.append(prs_dict)\n    except Exception as e:\n        raise e\n\n    return (False, result) if result else (True, None)"
  },
  {
    "path": "Github/legos/github_get_pull_request/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Get Pull Request</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists details of a pull requests for a user\r\n\r\n## Lego Details\r\n\r\n    github_list_pull_requests(handle: object , owner:str, repository:str, pull_request_number:int)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n        pull_request_number: Integer, Pull request number. Eg: 167\r\n\r\n## Lego Input\r\nThis Lego take 4 inputs handle, owner, repository, pull_request_number\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_get_pull_request/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_get_pull_request/github_get_pull_request.json",
    "content": "{\n  \"action_title\": \"Github Get Pull Request\",\n  \"action_description\": \"Get Github Pull Request for a user in a repository\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_get_pull_request\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_PR\" ]\n}"
  },
  {
    "path": "Github/legos/github_get_pull_request/github_get_pull_request.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n    pull_request_number: int = Field(\n        description='Pull request number. Eg: 167',\n        title='Pull Request Number'\n    )\n\n\ndef github_get_pull_request_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_get_pull_request(handle, owner:str, repository:str, pull_request_number: int) -> Dict:\n    \"\"\"github_get_pull_request returns details of pull requests for a user\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :type pull_request_number: int\n        :param pull_request_number: Pull request number. 
Eg: 167\n\n        :rtype: Dict of details of pull request for a user\n    \"\"\"\n    prs_dict = {}\n    pr_number = int(pull_request_number)\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login+'/'+repository\n        repo = handle.get_repo(repo_name)\n        pr = repo.get_pull(pr_number)\n        prs_dict[\"pull_number\"] = pr.number\n        prs_dict[\"pull_title\"] = pr.title\n        prs_dict[\"pull_changed_files\"] = pr.changed_files\n        prs_dict[\"pull_review_comments\"] = pr.review_comments\n        prs_dict[\"pull_commits\"] = pr.commits\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such pull number or repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return prs_dict\n"
  },
  {
    "path": "Github/legos/github_get_team/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Get Team</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the details of a team in a Github Organization\r\n\r\n## Lego Details\r\n\r\n    github_get_team(handle: object, organization_name: str, team_name: str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        organization_name: String, Organization Name\r\n        team_name: String, Team Name\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, organization_name, team_name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_get_team/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_get_team/github_get_team.json",
    "content": "{\n  \"action_title\": \"Github Get Team\",\n  \"action_description\": \"Github Get Team\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_get_team\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_TEAM\" ]\n}"
  },
  {
    "path": "Github/legos/github_get_team/github_get_team.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\nclass InputSchema(BaseModel):\n    organization_name: str = Field(\n        description='Github Organization Name',\n        title='Organization Name'\n    )\n    team_name: str = Field(\n        description='Team name in a GitHub Organization',\n        title='Team name'\n    )\n\n\ndef github_get_team_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_get_team(handle, organization_name:str, team_name:str) -> Dict:\n    \"\"\"github_get_team returns details of the team\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type organization_name: string\n        :param organization_name: Organization name Eg: \"infosec\"\n\n        :type team_name: string\n        :param team_name: Team name. Eg: \"backend\"\n\n        :rtype: List of a teams and its details\n    \"\"\"\n    result = []\n    try:\n        organization = handle.get_organization(organization_name)\n        team = organization.get_team_by_slug(team_name)\n        team_details = {}\n        team_details[\"team_name\"] = team.name\n        team_details[\"team_id\"] = team.id\n        team_details[\"members_count\"]= team.members_count\n        team_details[\"repos_count\"]= team.repos_count\n        team_details[\"privacy\"]= team.privacy\n        team_details[\"permission\"]= team.permission\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such organization or repository found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_get_user/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Get User</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Gets user details for the given user.\r\n\r\n## Lego Details\r\n\r\n    github_get_user(handle: object, owner: str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n\r\n## Lego Input\r\nThis Lego take 2 inputs handle, owner\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_get_user/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_get_user/github_get_user.json",
    "content": "{\n  \"action_title\": \"Github Get User\",\n  \"action_description\": \"Get Github User details\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_get_user\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_USER\" ]\n}"
  },
  {
    "path": "Github/legos/github_get_user/github_get_user.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n\n\ndef github_get_user_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_get_user(handle, owner:str) -> Dict:\n    \"\"\"github_get_user returns details of a user\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :rtype: Dict of details of a user\n    \"\"\"\n    try:\n        user_details = {}\n        user = handle.get_user(login=owner)\n        user_details[\"name\"] = user.login\n        user_details[\"company\"] = user.company\n        user_details[\"email\"] = user.email\n        user_details[\"bio\"] = user.bio\n        user_details[\"followers\"] = user.followers\n        user_details[\"following\"] = user.following\n    except GithubException as e:\n        if e.status == 404:\n            raise UnknownObjectException(\"User not found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return user_details\n"
  },
  {
    "path": "Github/legos/github_invite_user_to_org/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github invite User to an Organization</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego sends an invite to a user to an organization\r\n\r\n## Lego Details\r\n\r\n    github_invite_user_to_org(handle, organization_name:str, email:str, list_of_teams:list, role:GithubUserRole=None)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        organization_name: String, Organization Name\r\n        list_of_teams: List of teams to add the user to. Eg:[\"frontend-dev\",\"backend-dev\"]\r\n        email: String, Email address of the user to invite to the Github Organization. Eg: user@gmail.com\r\n        role: Enum, Role to assign to the new user. By default, direct_member role will be assigned. Eg:\"admin\" or \"direct_member\" or \"billing_manager\". \r\n\r\n## Lego Input\r\nThis Lego take 5 inputs handle, organization_name, list_of_teams, email, role\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_invite_user_to_org/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_invite_user_to_org/github_invite_user_to_org.json",
    "content": "{\n  \"action_title\": \"Github Invite User to Organization\",\n  \"action_description\": \"Invite a Github User to an Organization\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_invite_user_to_org\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_USER\",\"CATEGORY_TYPE_GITHUB_ORG\" ]\n}"
  },
  {
    "path": "Github/legos/github_invite_user_to_org/github_invite_user_to_org.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\nfrom unskript.enums.github_user_role_enums import GithubUserRole\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    email: str = Field(\n        description=('Email address of the user to invite to the Github Organization. '\n                     'Eg: user@gmail.com'),\n        title='Email',\n    )\n    organization_name: str = Field(\n        description='Github Organization Name',\n        title='Organization Name'\n    )\n    role: Optional[GithubUserRole] = Field(\n        '',\n        description=('Role to assign to the new user. By default, direct_member role will '\n                     'be assigned. Eg:\"admin\" or \"direct_member\" or \"billing_manager\". '),\n        title='Role',\n    )\n    list_of_teams: List = Field(\n        description='List of teams to add the user to. Eg:[\"frontend-dev\",\"backend-dev\"]',\n        title='List of Teams',\n    )\n\n\ndef github_invite_user_to_org_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef github_invite_user_to_org(\n        handle,\n        organization_name:str,\n        email:str,\n        list_of_teams:list,\n        role:GithubUserRole=GithubUserRole.direct_member\n        )-> str:\n    \"\"\"github_invite_user_to_org returns status of the invite\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type organization_name: string\n        :param organization_name: Organization name Eg: \"infosec\"\n\n        :type list_of_teams: list\n        :param list_of_teams: List of teams to add the user to. Eg:[\"frontend-dev\",\"backend-dev\"]\n\n        :type email: str\n        :param email: Email address of the user to invite to the Github Organization. 
\n        Eg: user@gmail.com\n\n        :type role: GithubUserRole (Enum)\n        :param role: Role to assign to the new user. By default, direct_member role will be \n        assigned. Eg:\"admin\" or \"direct_member\" or \"billing_manager\". \n\n        :rtype: String, Status message for a the invite\n    \"\"\"\n    result = []\n    teams_list = []\n    organization = handle.get_organization(organization_name)\n    if role is None:\n        role = \"direct_member\"\n    try:\n        teams = organization.get_teams()\n        for each_team in teams:\n            if each_team.name in list_of_teams:\n                teams_list.append(each_team)\n        result = organization.invite_user(email=email, role=role, teams=teams_list)\n        if result is None:\n            return \"Successfully sent invite\"\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such organization found\") from e\n        raise e.data\n    return None\n"
  },
  {
    "path": "Github/legos/github_issue_comment/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Add Comment to an Issue</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Action adds a comment to the desired issue\r\n\r\n## Lego Details\r\n\r\n    def github_issue_comment(handle, owner:str, repository:str, issue_number:str, issue_comment:str) -> str:\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n        issue_number: The issue that you wish to comment on.\r\n        issue_comment: The text to be added as a comment.\r\n\r\n## Lego Input\r\nThis Lego take 5 inputs handle, owner, repository, issue_number and issue_comment\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.jpg\">\r\n\r\n<img src=\"./2.jpg\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_issue_comment/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_issue_comment/github_issue_comment.json",
    "content": "{\n  \"action_title\": \"Github Comment on an Issue\",\n  \"action_description\": \"Add a comment to the selected GitHub Issue\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_issue_comment\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_ISSUE\" ]\n}"
  },
  {
    "path": "Github/legos/github_issue_comment/github_issue_comment.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\n\nimport pprint\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n    issue_comment: str = Field(\n        description='The Comment to add to the Issue',\n        title='Issue Comment'\n    )\n    issue_number: str = Field(\n        description='Github Issue where Comment is to be added.',\n        title='Issue Number'\n    )\n\n\ndef github_issue_comment_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef github_issue_comment(\n        handle,\n        owner:str,\n        repository:str,\n        issue_number:str,\n        issue_comment:str\n        ) -> str:\n    issue_number = int(issue_number)\n    owner = handle.get_user(owner)\n    repo_name = owner.login +'/'+ repository\n    repo = handle.get_repo(repo_name)\n    # Get the issue by its number\n    issue = repo.get_issue(issue_number)\n\n    # Add a comment to the issue\n    issue.create_comment(issue_comment)\n    return \"added comment\"\n"
  },
  {
    "path": "Github/legos/github_list_open_issues/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github List Open Issues</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists all open Github Issues\r\n\r\n## Lego Details\r\n\r\n    github_list_open_issues(handle, owner:str, repository:str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, owner, repository\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_open_issues/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_open_issues/github_list_open_issues.json",
    "content": "{\n  \"action_title\": \"Github List Open Issues\",\n  \"action_description\": \"List open Issues in a Github Repository\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_open_issues\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_ISSUE\" ]\n}"
  },
  {
    "path": "Github/legos/github_list_open_issues/github_list_open_issues.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\n\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        ..., description='Username of the GitHub user. Eg: \"johnwick\"', title='Owner'\n    )\n    repository: str = Field(\n        ...,\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n\n\ndef github_list_open_issues_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef github_list_open_issues(handle, owner:str, repository:str) -> List:\n    \"\"\"github_list_open_issues returns details of open issues\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. 
Eg: \"Awesome-CloudOps-Automation\"\n\n        :rtype: List of open issues\n    \"\"\"\n    result = []\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login +'/'+ repository\n        repo = handle.get_repo(repo_name)\n        issues = repo.get_issues()\n        for issue in issues:\n            if issue.state == 'open':\n                issue_details = {}\n                issue_details[\"title\"] = issue.title\n                issue_details[\"issue_number\"] = issue.number\n                if isinstance(issue.assignee, type(None)):\n                    issue_details[\"assignee\"] = issue.assignee.login\n                else:\n                    issue_details[\"assignee\"] = issue.assignee\n                result.append(issue_details)\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_list_org_members/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github List Organization Members</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists all organization members\r\n\r\n## Lego Details\r\n\r\n    github_list_org_members(handle: object ,organization_name:str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        organization_name: String, Name of Github Organization. Eg: \"unskript\"\r\n\r\n## Lego Input\r\nThis Lego take 2 inputs handle, organization_name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_org_members/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_org_members/github_list_org_members.json",
    "content": "{\n  \"action_title\": \"Github List Organization Members\",\n  \"action_description\": \"List Github Organization Members\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_org_members\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_ORG\" ]\n\n}"
  },
  {
    "path": "Github/legos/github_list_org_members/github_list_org_members.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    organization_name: str = Field(\n        description='Name of Github Organization. Eg: \"unskript\"',\n        title='Organization Name',\n    )\n\n\ndef github_list_org_members_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_list_org_members(handle, organization_name:str)-> List:\n    \"\"\"github_remove_member_from_org returns the status to remove a member\n\n        :type organization_name: string\n        :param organization_name: Name of Github Organization. Eg: \"unskript\"\n        \n        :rtype: List of return status of removing a member from Org\n    \"\"\"\n    result = []\n    try:\n        organization = handle.get_organization(organization_name)\n        members = organization.get_members()\n        result = [member.login for member in members]\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such organization or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_list_pull_request_commits/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github List Pull Request Commits</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets commit details of a  pull request\r\n\r\n## Lego Details\r\n\r\n    github_list_pull_request_commits(handle: object , owner:str, repository:str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, owner, repository\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_pull_request_commits/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_pull_request_commits/github_list_pull_request_commits.json",
    "content": "{\n  \"action_title\": \"Github List PR Commits\",\n  \"action_description\": \"Github List all Pull Request Commits\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_pull_request_commits\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_PR\" ]\n\n}"
  },
  {
    "path": "Github/legos/github_list_pull_request_commits/github_list_pull_request_commits.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n    pull_request_number: int = Field(\n        description='Pull request number. Eg: 167',\n        title='Pull Request Number'\n    )\n\n\ndef github_list_pull_request_commits_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_list_pull_request_commits(\n        handle,\n        owner:str,\n        repository:str,\n        pull_request_number: int\n        ) -> List:\n    \"\"\"github_list_pull_request_commits returns details of pull requests commits\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :type pull_request_number: int\n        :param pull_request_number: Pull request number. 
Eg: 167\n\n        :rtype: List of details of pull request commits\n    \"\"\"\n    result = []\n    pr_number = int(pull_request_number)\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login+'/'+repository\n        repo = handle.get_repo(repo_name)\n        pr = repo.get_pull(pr_number)\n        commits = pr.get_commits()\n        for commit in commits:\n            commits_dict = {}\n            commits_dict[\"sha\"] = commit.sha\n            commits_dict[\"committer\"] = commit.committer.login\n            commits_dict[\"date\"] = commit.commit.author.date\n            result.append(commits_dict)\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such pull number or repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_list_pull_request_reviewers/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Get Pull Request Reviewers</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists pull requests reviewers\r\n\r\n## Lego Details\r\n\r\n    github_list_pull_request_reviewers(handle: object , owner:str, repository:str, pull_request_number:int)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n        pull_request_number: Integer, Pull request number. Eg: 167\r\n\r\n## Lego Input\r\nThis Lego take 4 inputs handle, owner, repository, pull_request_number\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_pull_request_reviewers/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_pull_request_reviewers/github_list_pull_request_reviewers.json",
    "content": "{\n  \"action_title\": \"Github List Pull Request Reviewers\",\n  \"action_description\": \"List PR reviewers for a PR\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_pull_request_reviewers\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_PR\" ]\n}"
  },
  {
    "path": "Github/legos/github_list_pull_request_reviewers/github_list_pull_request_reviewers.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n    pull_request_number: int = Field(\n        description='Pull request number. Eg: 167',\n        title='Pull Request Number'\n    )\n\n\ndef github_get_pull_request_reviewers_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_list_pull_request_reviewers(\n        handle,\n        owner:str,\n        repository:str,\n        pull_request_number: int\n        ) -> List:\n    \"\"\"github_get_pull_request_reviewers returns reviewers of a pull request\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :type pull_request_number: int\n        :param pull_request_number: Pull request number. 
Eg: 167\n\n        :rtype: List of reviewers of a pull request\n    \"\"\"\n    result = []\n    pr_number = int(pull_request_number)\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login+'/'+repository\n        repo = handle.get_repo(repo_name)\n        pr = repo.get_pull(pr_number)\n        review_requests = pr.get_review_requests()\n        for request in review_requests:\n            for r in request:\n                result.append(r.login)\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such pull number or repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    if len(result) == 0:\n        return [f\"No reviewers added for Pull Number {pr.number}\"]\n    return result\n"
  },
  {
    "path": "Github/legos/github_list_pull_requests/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github List Pull Requests</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists all open pull requests for a user\r\n\r\n## Lego Details\r\n\r\n    github_list_pull_requests(handle: object , owner:str, repository:str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, owner, repository\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_pull_requests/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_pull_requests/github_list_pull_requests.json",
    "content": "{\n  \"action_title\": \"Github List Pull Requests\",\n  \"action_description\": \"List pull requests for a user in a repository\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_pull_requests\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_PR\" ]\n}"
  },
  {
    "path": "Github/legos/github_list_pull_requests/github_list_pull_requests.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n\n\ndef github_list_pull_requests_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_list_pull_requests(handle, owner:str, repository:str) -> List:\n    \"\"\"github_list_pull_requests returns all pull requests for a user\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :rtype: List of pull requests for a user\n    \"\"\"\n    result = []\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login+'/'+repository\n        repo = handle.get_repo(repo_name)\n        #Fetch open PRs and sort by created\n        prs = repo.get_pulls(state='open', sort='created')\n        for pr in prs:\n            prs_dict = {}\n            prs_dict[pr.number] = pr.title\n            result.append(prs_dict)\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_list_stale_issues/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github List Stale Issues</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists all stale issues in a Github Repository\r\n\r\n## Lego Details\r\n\r\n    github_list_pull_requests(handle: object , owner:str, repository:str, age_to_stale:int)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\r\n        age_to_stale: Integer, Age in days to check if the issue creation or updation dates are older and hence classify those issues as stale Eg:45'\r\n\r\n## Lego Input\r\nThis Lego take 4 inputs handle, owner, repository, age_to_stale\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_stale_issues/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_stale_issues/github_list_stale_issues.json",
    "content": "{\n  \"action_title\": \"Github List Stale Issues\",\n  \"action_description\": \"List Stale Github Issues that have crossed a certain age limit.\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_stale_issues\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_ISSUE\" ],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Github/legos/github_list_stale_issues/github_list_stale_issues.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Tuple, Optional\nimport datetime\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\nclass InputSchema(BaseModel):\n    owner: Optional[str] = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n    age_to_stale: Optional[int] = Field(\n        '14',\n        description=('Age in days to check if the issue creation or updation dates are '\n                     'older and hence classify those issues as stale Eg:45'),\n        title='Age to Stale',\n    )\n\n\ndef github_list_stale_issues_printer(output):\n    if output is None or output[1] is None:\n        return\n\n    success, res = output\n    if not success:\n        headers = [\"Title\", \"Issue Number\", \"Assignee\"]\n        table = [[issue[\"title\"], issue[\"issue_number\"], issue[\"assignee\"]] for issue in res]\n        print(tabulate(table, headers, tablefmt=\"grid\"))\n    else:\n        print(\"No stale issues found.\")\n\n\ndef github_list_stale_issues(handle, repository:str, age_to_stale:int=14) -> Tuple:\n    \"\"\"github_list_stale_issues returns details of stale issues\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :type owner: string (Optional)\n        :param owner: Username of the GitHub user. 
Eg: \"johnwick\"\n\n        :type age_to_stale: int (Optional)\n        :param age_to_stale: Age in days to check if the issue creation or updation dates \n        are older and hence classify those issues as stale Eg:45',\n\n        :rtype: List of stale issues\n    \"\"\"\n    result = []\n    try:\n        owner = handle.get_user().login  # Fetch the owner (authenticated user) login\n        repo = handle.get_repo(f\"{owner}/{repository}\")\n        issues = repo.get_issues()\n\n        # Check if there are no open issues\n        if issues.get_page(0) == []:\n            print(\"No issues are open at the moment.\")\n            return (True, None)\n\n        today = datetime.datetime.now()\n        for issue in issues:\n            creation_date = issue.created_at\n            updation_date = issue.updated_at\n            diff_in_updation = (today - updation_date).days\n            diff_in_creation = (today - creation_date).days\n            if diff_in_creation >= age_to_stale or diff_in_updation >= age_to_stale:\n                issue_details = {\n                    \"title\": issue.title,\n                    \"issue_number\": issue.number,\n                    \"assignee\": \"None\" if issue.assignee is None else issue.assignee.login\n                }\n                result.append(issue_details)\n    except Exception as e:\n        raise e\n\n    return (False, result) if result else (True, None)"
  },
  {
    "path": "Github/legos/github_list_stale_pull_requests/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github List Stale Pull Requests</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists stale pull requests\r\n\r\n## Lego Details\r\n\r\n    github_list_stale_pull_requests(handle: object , owner:str, repository:str, threshold:int)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n        threshold: Threshold number of days to find stale PR's\r\n    \r\n## Lego Input\r\nThis Lego take 4 inputs handle, owner, repository, threshold.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_stale_pull_requests/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_stale_pull_requests/github_list_stale_pull_requests.json",
    "content": "{\n  \"action_title\": \"Github List Stale Pull Requests\",\n  \"action_description\": \"Check for any Pull requests over a certain age. \",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_stale_pull_requests\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_PR\" ],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Github/legos/github_list_stale_pull_requests/github_list_stale_pull_requests.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Tuple, Optional\nimport datetime\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    owner: Optional[str] = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository'\n    )\n    threshold: Optional[int] = Field(\n        description=(\"Threshold number of days to check for a stale PR. Eg: 45 -> \"\n                     \"All PR's older than 45 days will be displayed\"),\n        title='Threshold (in Days)'\n    )\n\n\ndef github_list_stale_pull_requests_printer(output_tuple):\n    if output_tuple is None or output_tuple[1] is None:\n        return\n    success, output = output_tuple\n    if not success:\n        headers = [\"PR Number\", \"Title\"]\n        table = [[pr[\"number\"], pr[\"title\"]] for pr in output]\n        print(tabulate(table, headers, tablefmt=\"grid\"))\n    else:\n        print(\"No stale pull requests found.\")\n\n\ndef github_list_stale_pull_requests(handle, repository: str, threshold: int = 14,  owner:str = \"\") -> Tuple:\n    \"\"\"github_list_stale_pull_requests returns stale pull requests\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :type owner: string (Optional)\n        :param owner: Username of the GitHub user. 
Eg: \"johnwick\"\n\n        :type threshold: int (Optional)\n        :param threshold: Threshold number of days to find stale PR's\n\n        :rtype: Status, List of stale pull requests\n    \"\"\"\n    if handle is None or not owner or not repository or threshold is None or threshold <= 0:\n        raise ValueError(\"Invalid input parameters\")\n\n    result = []\n    try:\n        if len(owner)==0 or owner is None:\n            owner = handle.get_user().login\n        owner = handle.get_user().login\n        repo = handle.get_repo(f\"{owner}/{repository}\")\n        prs = repo.get_pulls()\n\n        # Check if there are no open pull requests\n        if prs.get_page(0) == []:\n            print(\"No pull requests are open at the moment.\")\n            return (True, None)\n\n        today = datetime.datetime.now()\n\n        for pr in repo.get_pulls():\n            print(pr)\n            creation_date = pr.created_at\n            diff = (today - creation_date).days\n            if diff >= threshold:\n                result.append({\"number\": pr.number, \"title\": pr.title})\n\n    except Exception as e:\n        raise e\n\n    return (False, result) if result else (True, None)\n"
  },
  {
    "path": "Github/legos/github_list_stargazers/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github List Stargazers</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists Github users that have starred (essentially bookmarked) a repository\r\n\r\n## Lego Details\r\n\r\n    github_list_stargazers(handle, owner:str, repository:str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, owner, repository\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_stargazers/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_stargazers/github_list_stargazers.json",
    "content": "{\n  \"action_title\": \"Github List Stargazers\",\n  \"action_description\": \"List of Github users that have starred (essentially bookmarked) a repository\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_stargazers\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_REPO\" ]\n}"
  },
  {
    "path": "Github/legos/github_list_stargazers/github_list_stargazers.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        ..., description='Username of the GitHub user. Eg: \"johnwick\"', title='Owner'\n    )\n    repository: str = Field(\n        ...,\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n\n\n\ndef github_list_stargazers_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_list_stargazers(handle, owner:str, repository:str) -> List:\n    \"\"\"github_list_stargazers returns last 100 stargazers for a Github Repository\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. 
Eg: \"Awesome-CloudOps-Automation\"\n\n        :rtype: List of last 100 stargazers for a Github Repository\n    \"\"\"\n    result = []\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login +'/'+ repository\n        repo = handle.get_repo(repo_name)\n        stars = repo.get_stargazers_with_dates()\n        for star in stars[len(list(stars))-100:]:\n            stargazer_details = {}\n            stargazer_details[\"name\"] = star.user.login\n            dummy_date = star.starred_at\n            formatted_date = dummy_date.strftime(\"%d-%m-%Y\")\n            stargazer_details[\"date\"] = formatted_date\n            result.append(stargazer_details)\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_list_team_members/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Get Team Members</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists Github Team Members for a given Team\r\n\r\n## Lego Details\r\n\r\n    github_list_team_members(handle, organization_name:str, team_name:str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        organization_name: String, Organization Name\r\n        team_name: String, Team Name\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, organization_name, team_name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_team_members/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_team_members/github_list_team_members.json",
    "content": "{\n  \"action_title\": \"Github List Team Members\",\n  \"action_description\": \"List Github Team Members for a given Team\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_team_members\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_TEAM\", \"CATEGORY_TYPE_GITHUB_USER\" ]\n}"
  },
  {
    "path": "Github/legos/github_list_team_members/github_list_team_members.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    organization_name: str = Field(\n        ..., description='Github Organization Name', title='Organization Name'\n    )\n    team_name: str = Field(\n        ..., description='Team name in a GitHub Organization', title='Team name'\n    )\n\ndef github_list_team_members_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_list_team_members(handle, organization_name:str, team_name:str) -> List:\n    \"\"\"github_list_team_members returns details of the team members for a given team\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type organization_name: string\n        :param organization_name: Organization name Eg: \"infosec\"\n\n        :type team_name: string\n        :param team_name: Team name. Eg: \"backend\"\n\n        :rtype: List of a teams members details\n    \"\"\"\n    result = []\n    try:\n        organization = handle.get_organization(organization_name)\n        team = organization.get_team_by_slug(team_name)\n        members = team.get_members()\n        for member in members:\n            member_details = {}\n            member_details[\"name\"] = member.login\n            member_details[\"id\"] = member.id\n            result.append(member_details)\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such organization or team found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_list_team_repos/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github List Team Repositories</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists Repositories in a Team\r\n\r\n## Lego Details\r\n\r\n    github_list_team_repos(handle: object, organization_name: str, team_name: str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        organization_name: String, Organization Name\r\n        team_name: String, Team Name\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, organization_name, team_name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_team_repos/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_team_repos/github_list_team_repos.json",
    "content": "{\n  \"action_title\": \"Github List Team Repositories\",\n  \"action_description\": \"Github List Team Repositories\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_team_repos\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_REPO\" ]\n}"
  },
  {
    "path": "Github/legos/github_list_team_repos/github_list_team_repos.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\nclass InputSchema(BaseModel):\n    organization_name: str = Field(\n        description='Name of the GitHub Organization. Eg: \"wheelorg\"',\n        title='Organization Name',\n    )\n    team_name: str = Field(\n        description='Team name. Eg: \"backend\"',\n        title='Team Name'\n    )\n\n\n\ndef github_list_team_repos_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_list_team_repos(handle, organization_name:str, team_name:str) -> List:\n    \"\"\"github_list_team_repos returns list of repositories in a team\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type organization_name: string\n        :param organization_name: Organization name Eg: \"infosec\"\n\n        :type team_name: string\n        :param team_name: Team name. Eg: \"backend\"\n\n        :rtype: List of repositories in a team\n    \"\"\"\n    result = []\n    try:\n        organization = handle.get_organization(organization_name)\n        team = organization.get_team_by_slug(team_name)\n        repos = team.get_repos()\n        result = [repo.full_name for repo in repos]\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such organization or repository found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_list_teams_in_org/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github List Teams in an Organization</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets list of teams in an organization\r\n\r\n## Lego Details\r\n\r\n    github_list_teams_in_org(handle: object,organization_name:str )\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        organization_name: String, Name of the GitHub Organization. Eg: \"wheelorg\"\r\n\r\n## Lego Input\r\nThis Lego take 2 inputs handle, organization_name\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_teams_in_org/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_teams_in_org/github_list_teams_in_org.json",
    "content": "{\n  \"action_title\": \"Github List Teams in Organization\",\n  \"action_description\": \"List teams in a organization in GitHub\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_list_teams_in_org\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_TEAM\",\"CATEGORY_TYPE_GITHUB_ORG\" ]\n\n}"
  },
  {
    "path": "Github/legos/github_list_teams_in_org/github_list_teams_in_org.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException\n\nclass InputSchema(BaseModel):\n    organization_name: str = Field(\n        description='Name of the GitHub Organization. Eg: \"wheelorg\"',\n        title='Organization Name',\n    )\n\n\n\ndef github_list_teams_in_org_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_list_teams_in_org(handle, organization_name:str) -> List:\n    \"\"\"github_list_teams_in_org returns 100 open github branches for a user.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type organization_name: string\n        :param organization_name: Name of the GitHub Organization. Eg: \"wheelorg\"\n\n        :rtype: List of teams in a github org\n    \"\"\"\n    result = []\n    organization = handle.get_organization(organization_name)\n    teams = organization.get_teams()\n    try:\n        [result.append(team.name) for team in teams]\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_list_webhooks/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github List Webhooks</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists all webhooks for a repository\r\n\r\n## Lego Details\r\n\r\n    github_list_webhooks(handle: object ,owner:str, repository:str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Full name of the GitHub repository. Eg: \"unskript/Awesome-CloudOps-Automation\"\r\n\r\n## Lego Input\r\nThis Lego take 3 inputs handle, owner, repository\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_list_webhooks/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_list_webhooks/github_list_webhooks.json",
    "content": "{\n    \"action_title\": \"Github List Webhooks\",\n    \"action_description\": \"List webhooks for a repository\",\n    \"action_type\": \"LEGO_TYPE_GITHUB\",\n    \"action_entry_function\": \"github_list_webhooks\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_is_check\": false,\n    \"action_supports_iteration\": true,\n    \"action_supports_poll\": true,\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\" ]\n  }"
  },
  {
    "path": "Github/legos/github_list_webhooks/github_list_webhooks.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        ..., description='Username of the GitHub user. Eg: \"johnwick\"', title='Owner'\n    )\n    repository: str = Field(\n        ...,\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n\n\ndef github_list_webhooks_printer(output):\n    if not output:\n        return\n    pprint.pprint(output)\n\ndef github_list_webhooks(handle, owner:str, repository: str) -> List:\n    \"\"\"github_list_webhooks returns details of webhooks for a repository\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. 
Eg: \"Awesome-CloudOps-Automation\"\n\n        :rtype: List of details of webhooks for a repository\n    \"\"\"\n    result = []\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login +'/'+ repository\n        repo = handle.get_repo(repo_name)\n        webhooks = repo.get_hooks()\n        for hook in webhooks:\n            hooks = {}\n            hooks['url'] = hook.url\n            hooks['id'] = hook.id\n            hooks['active'] = hook.active\n            hooks['events'] = hook.events\n            hooks['config'] = hook.config\n            result.append(hooks)\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such repository or user found\") from e\n        raise e.data\n    except Exception as e:\n        raise e\n    return result\n"
  },
  {
    "path": "Github/legos/github_merge_pull_request/README.md",
"content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Merge Pull Request</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego merges a pull request.\r\n\r\n## Lego Details\r\n\r\n    github_merge_pull_request(handle: object, owner: str, repository: str, pull_request_number: int, commit_message: str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        owner: String, Username of the GitHub user. Eg: \"johnwick\"\r\n        repository: String, Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\r\n        pull_request_number: Integer, Pull request number. Eg: 167\r\n        commit_message: String, Commit message\r\n\r\n## Lego Input\r\nThis Lego takes 5 inputs: handle, owner, repository, pull_request_number and commit_message.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_merge_pull_request/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_merge_pull_request/github_merge_pull_request.json",
    "content": "{\n  \"action_title\": \"Github Merge Pull Request\",\n  \"action_description\": \"Github Merge Pull Request\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_merge_pull_request\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_PR\" ]\n}"
  },
  {
    "path": "Github/legos/github_merge_pull_request/github_merge_pull_request.py",
    "content": "\n##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    owner: str = Field(\n        description='Username of the GitHub user. Eg: \"johnwick\"',\n        title='Owner'\n    )\n    repository: str = Field(\n        description='Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"',\n        title='Repository',\n    )\n    pull_request_number: int = Field(\n        description='Pull request number. Eg: 167',\n        title='Pull Request Number'\n    )\n    commit_message: str = Field(\n        description='Merge commit message.',\n        title='Commit Message'\n    )\n\n\ndef github_merge_pull_request_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef github_merge_pull_request(\n        handle,\n        owner:str,\n        repository:str,\n        pull_request_number: int,\n        commit_message:str\n        ) -> str:\n    \"\"\"github_merge_pull_request returns message and commit sha of successfully merged branch\n\n        Note- The base branch is considered to be \"master\"\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type owner: string\n        :param owner: Username of the GitHub user. Eg: \"johnwick\"\n\n        :type repository: string\n        :param repository: Name of the GitHub repository. Eg: \"Awesome-CloudOps-Automation\"\n\n        :type pull_request_number: int\n        :param pull_request_number: Pull request number. 
Eg: 167\n\n        :type commit_message: str\n        :param commit_message: Merge commit message.\n\n        :rtype: String with the commit SHA of the successfully merged branch\n    \"\"\"\n    pr_number = int(pull_request_number)\n    try:\n        owner = handle.get_user(owner)\n        repo_name = owner.login + '/' + repository\n        repo = handle.get_repo(repo_name)\n        p = repo.get_pull(pr_number)\n        commit = repo.merge(base=\"master\", head=p.head.sha, commit_message=commit_message)\n        return f\"Successfully merged branch with commit SHA {commit.sha}\"\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No such pull number or repository or user found\") from e\n        if e.status == 409:\n            raise Exception(\"Merge Conflict\") from e\n        raise e\n    except Exception as e:\n        raise e\n"
  },
  {
    "path": "Github/legos/github_remove_member_from_org/README.md",
"content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Github Remove Organization Member</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego removes a member from a GitHub organization.\r\n\r\n## Lego Details\r\n\r\n    github_remove_member_from_org(handle: object, organization_name: str, username: str)\r\n\r\n        handle: Object of type unSkript Github Connector\r\n        organization_name: String, Name of Github Organization. Eg: \"unskript\"\r\n        username: String, Organization member's username. Eg: \"jane-mitch-unskript\"\r\n\r\n## Lego Input\r\nThis Lego takes 3 inputs: handle, organization_name and username.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Github/legos/github_remove_member_from_org/__init__.py",
    "content": ""
  },
  {
    "path": "Github/legos/github_remove_member_from_org/github_remove_member_from_org.json",
    "content": "{\n  \"action_title\": \"Github Remove Member from Organization\",\n  \"action_description\": \"Remove a member from a Github Organization\",\n  \"action_type\": \"LEGO_TYPE_GITHUB\",\n  \"action_entry_function\": \"github_remove_member_from_org\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GITHUB\",\"CATEGORY_TYPE_GITHUB_USER\", \"CATEGORY_TYPE_GITHUB_ORG\" ]\n}"
  },
  {
    "path": "Github/legos/github_remove_member_from_org/github_remove_member_from_org.py",
"content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\n\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom github import GithubException, BadCredentialsException, UnknownObjectException\n\n\nclass InputSchema(BaseModel):\n    organization_name: str = Field(\n        description='Name of Github Organization. Eg: \"unskript\"',\n        title='Organization Name',\n    )\n    username: str = Field(\n        description='Organization member\\'s username. Eg: \"jane-mitch-unskript\"',\n        title='Member\\'s Username',\n    )\n\n\ndef github_remove_member_from_org_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef github_remove_member_from_org(handle, organization_name: str, username: str) -> str:\n    \"\"\"github_remove_member_from_org removes a member from a GitHub organization\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type organization_name: string\n        :param organization_name: Name of Github Organization. Eg: \"unskript\"\n\n        :type username: string\n        :param username: Organization member's username. Eg: \"jane-mitch-unskript\"\n\n        :rtype: String with the removal status, or None if the user is not a member\n    \"\"\"\n    result = \"\"\n    organization = handle.get_organization(organization_name)\n    try:\n        user = handle.get_user(username)\n        mem_exist = organization.has_in_members(user)\n        if mem_exist:\n            result = organization.remove_from_members(user)\n    except GithubException as e:\n        if e.status == 403:\n            raise BadCredentialsException(\"You need admin access\") from e\n        if e.status == 404:\n            raise UnknownObjectException(\"No organization or user found\") from e\n        raise e\n    except Exception as e:\n        raise e\n    if result is None:\n        return f\"Successfully removed user {username}\"\n    return None\n"
  },
  {
    "path": "Grafana/README.md",
    "content": "\n# Grafana Actions\n* [Get Grafana Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_get_handle/README.md): Get Grafana Handle\n* [Grafana List Alerts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_list_alerts/README.md): List of Grafana alerts. Specifying the dashboard ID will show alerts in that dashboard\n"
  },
  {
    "path": "Grafana/__init__.py",
    "content": ""
  },
  {
    "path": "Grafana/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Grafana/legos/grafana_get_handle/README.md",
"content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Grafana Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets a Grafana handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    grafana_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Grafana Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input, handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Grafana/legos/grafana_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Grafana/legos/grafana_get_handle/grafana_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Grafana Handle\",\r\n    \"action_description\": \"Get Grafana Handle\",\r\n    \"action_type\": \"LEGO_TYPE_GRAFANA\",\r\n    \"action_entry_function\": \"grafana_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false\r\n}\r\n    "
  },
  {
    "path": "Grafana/legos/grafana_get_handle/grafana_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\nfrom unskript.connectors.grafana import Grafana\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef grafana_get_handle(handle: Grafana) -> Grafana:\n    \"\"\"grafana_get_handle returns the grafana REST API handle.\n\n       :rtype: grafana Handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Grafana/legos/grafana_list_alerts/README.md",
"content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Grafana List Alerts</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists Grafana alerts. Specifying the dashboard ID will limit the alerts to that dashboard.\r\n\r\n\r\n## Lego Details\r\n\r\n    grafana_list_alerts(handle: object, dashboard_id: int = None,\r\n                           panel_id: int = None)\r\n\r\n        handle: Object of type unSkript Grafana Connector\r\n        dashboard_id: ID of the grafana dashboard.\r\n        panel_id: Panel ID to limit the alerts within that Panel.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, dashboard_id and panel_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Grafana/legos/grafana_list_alerts/__init__.py",
    "content": ""
  },
  {
    "path": "Grafana/legos/grafana_list_alerts/grafana_list_alerts.json",
    "content": "{\r\n    \"action_title\": \"Grafana List Alerts\",\r\n    \"action_description\": \"List of Grafana alerts. Specifying the dashboard ID will show alerts in that dashboard\",\r\n    \"action_type\": \"LEGO_TYPE_GRAFANA\",\r\n    \"action_entry_function\": \"grafana_list_alerts\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_GRAFANA\" ]\r\n}\r\n    "
  },
  {
    "path": "Grafana/legos/grafana_list_alerts/grafana_list_alerts.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport json\nimport pprint\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.grafana import Grafana\n\npp = pprint.PrettyPrinter(indent=4)\n\nclass InputSchema(BaseModel):\n    dashboard_id: Optional[int] = Field(\n        None,\n        title='Dashboard ID',\n        description='ID of the dashboard to limit the alerts within that dashboard.')\n    panel_id: Optional[int] = Field(\n        None,\n        title='Panel ID',\n        description='Panel ID to limit the alerts within that Panel.')\n\n\ndef grafana_list_alerts_printer(output):\n    if output is None:\n        return\n    print('\\n')\n    pprint.pprint(output)\n\n\ndef grafana_list_alerts(\n        handle: Grafana,\n        dashboard_id:int = None,\n        panel_id:int = None\n        ) -> List[dict]:\n    \"\"\"grafana_list_alerts lists the configured alerts in grafana.\n       You can filter alerts configured in a particular dashboard.\n\n       :type dashboard_id: int\n       :param dashboard_id: ID of the grafana dashboard.\n\n       :type panel_id: int\n       :param panel_id: Panel ID to limit the alerts within that Panel.\n\n       :rtype: List of alerts.\n    \"\"\"\n    url = handle.host + \"/api/ruler/grafana/api/v1/rules\"\n    params = None\n    if dashboard_id or panel_id:\n        param = {}\n        if dashboard_id:\n            param[\"DashboardUID\"] = dashboard_id\n        if panel_id:\n            param[\"PanelID\"] = panel_id\n        params = param\n    try:\n        response = handle.session.get(url,\n                                     params=params)\n    except Exception as e:\n        print(f'Failed to get grafana rules, {str(e)}')\n        raise e\n\n    # Grafana ruler rules api response\n    # https://editor.swagger.io/?url=https://raw.githubusercontent.com/grafana/grafana/main/pkg/services/ngalert/api/tooling/post.json\n    result = []\n    
response_json = json.loads(response.content)\n    for folder_name in list(response_json.keys()):\n        for alarm in response_json[folder_name]:\n            rules = alarm.get('rules')\n            if rules is not None:\n                for rule in rules:\n                    res = {}\n                    grafana_alert = rule.get('grafana_alert')\n                    if grafana_alert is not None:\n                        res['id'] = grafana_alert.get('id')\n                        res['name'] = alarm.get('name')\n                        result.append(res)\n\n    # Get Loki/Prometheus alerts as well.\n    # First get the datasources that have type Loki or Prometheus.\n    #\n    url = handle.host + \"/api/datasources\"\n    try:\n        response = handle.session.get(url)\n        response.raise_for_status()\n    except Exception:\n        # This could happen because in non-cloud Grafana there is no RBAC,\n        # so this API doesn't work with the viewer role.\n        print(\"Unable to get datasources\")\n        return result\n\n    try:\n        datasourcesList = json.loads(response.content)\n    except Exception as e:\n        print(f'Unable to parse datasources response, error {str(e)}')\n        return result\n\n    interestedDatasourcesList = []\n    for datasource in datasourcesList:\n        if datasource[\"type\"] == \"loki\" or datasource['type'] == \"prometheus\":\n            if datasource.get('uid') is not None:\n                interestedDatasourcesList.append(datasource.get(\"uid\"))\n\n    for interestedDatasourceUID in interestedDatasourcesList:\n        url = handle.host + \"/api/ruler/\" + interestedDatasourceUID + \"/api/v1/rules\"\n        try:\n            response = handle.session.get(url, params=params)\n            response.raise_for_status()\n        except Exception as e:\n            print(f'Skipping {interestedDatasourceUID}, error {str(e)}')\n            continue\n\n        try:\n            responseDict = 
json.loads(response.content)\n        except Exception as e:\n            print(f'Skipping uid {interestedDatasourceUID}, error {str(e)}')\n            continue\n\n        folder_names = responseDict.keys()\n        for folder_name in list(folder_names):\n            for alarm in responseDict[folder_name]:\n                rules = alarm.get('rules')\n                if rules is not None:\n                    for rule in rules:\n                        res = {}\n                        grafana_alert = rule.get('grafana_alert')\n                        if grafana_alert is not None:\n                            res['id'] = grafana_alert.get('id')\n                            res['name'] = alarm.get('name')\n                            result.append(res)\n                        # Loki alerting rules have 'alert' as the key, whereas Loki\n                        # recording rules have 'record' as the key.\n                        else:\n                            if 'alert' in rule:\n                                res['name'] = rule.get('alert')\n                            else:\n                                res['name'] = rule.get('record')\n                            result.append(res)\n\n    return result\n"
  },
  {
    "path": "Hadoop/README.md",
    "content": "\n# Hadoop Actions\n* [Get Hadoop cluster apps](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_apps/README.md): Get Hadoop cluster apps\n* [Get Hadoop cluster appstatistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_appstatistics/README.md): Get Hadoop cluster appstatistics\n* [Get Hadoop cluster metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_metrics/README.md): Get Hadoop EMR cluster metrics\n* [Get Hadoop cluster nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_nodes/README.md): Get Hadoop cluster nodes\n* [Get Hadoop handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_handle/README.md): Get Hadoop handle\n"
  },
  {
    "path": "Hadoop/__init__.py",
    "content": ""
  },
  {
    "path": "Hadoop/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_apps/README.md",
"content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Hadoop cluster apps</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets Hadoop cluster apps.\r\n\r\n\r\n## Lego Details\r\n\r\n    hadoop_get_cluster_apps(handle: object, appid: str)\r\n\r\n        handle: Object of type unSkript Hadoop Connector\r\n        appid: The application id.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and appid.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_apps/__init__.py",
    "content": ""
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_apps/hadoop_get_cluster_apps.json",
    "content": "{\r\n    \"action_title\": \"Get Hadoop cluster apps\",\r\n    \"action_description\": \"Get Hadoop cluster apps\",\r\n    \"action_type\": \"LEGO_TYPE_HADOOP\",\r\n    \"action_entry_function\": \"hadoop_get_cluster_apps\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_HADOOP\"]\r\n}\r\n    "
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_apps/hadoop_get_cluster_apps.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter()\n\nclass InputSchema(BaseModel):\n    appid: Optional[str] = Field(\n        title='Application id',\n        description='The application id'\n    )\n\n\ndef hadoop_get_cluster_apps_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef hadoop_get_cluster_apps(handle, appid: str = \"\") -> Dict:\n    \"\"\"hadoop_get_cluster_apps get cluster apps\n\n        :type appid: str\n        :param appid: The application id.\n\n        :rtype: Dict of cluster apps\n    \"\"\"\n    return handle.get_cluster_apps(appid = appid if appid else None)\n"
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_appstatistics/README.md",
"content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Hadoop cluster appstatistics</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets Hadoop cluster app statistics.\r\n\r\n\r\n## Lego Details\r\n\r\n    hadoop_get_cluster_appstatistics(handle: object, states: str, applicationTypes: str)\r\n\r\n        handle: Object of type unSkript Hadoop Connector\r\n        states: The states of the applications, specified as a comma-separated list.\r\n        applicationTypes: Types of the applications, specified as a comma-separated list.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, states and applicationTypes.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_appstatistics/__init__.py",
    "content": ""
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_appstatistics/hadoop_get_cluster_appstatistics.json",
    "content": "{\r\n    \"action_title\": \"Get Hadoop cluster appstatistics\",\r\n    \"action_description\": \"Get Hadoop cluster appstatistics\",\r\n    \"action_type\": \"LEGO_TYPE_HADOOP\",\r\n    \"action_entry_function\": \"hadoop_get_cluster_appstatistics\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_HADOOP\"]\r\n}\r\n    "
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_appstatistics/hadoop_get_cluster_appstatistics.py",
"content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter()\n\n\nclass InputSchema(BaseModel):\n    states: Optional[str] = Field(\n        title='States',\n        description=('The states of the applications, specified as a comma-separated list, '\n                     'valid values are: NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, '\n                     'FINISHED, FAILED, KILLED')\n    )\n    applicationTypes: Optional[str] = Field(\n        title='Application Types',\n        description='Types of the applications, specified as a comma-separated list.'\n    )\n\n\ndef hadoop_get_cluster_appstatistics_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef hadoop_get_cluster_appstatistics(handle, states: str = \"\", applicationTypes: str = \"\") -> Dict:\n    \"\"\"hadoop_get_cluster_appstatistics get cluster app statistics\n\n        :type states: str\n        :param states: The states of the applications, specified as a comma-separated list.\n\n        :type applicationTypes: str\n        :param applicationTypes: Types of the applications, specified as a comma-separated list.\n\n        :rtype: Dict of cluster app statistics\n    \"\"\"\n    return handle.get_cluster_appstatistics(\n        states=states if states else None,\n        applicationTypes=applicationTypes if applicationTypes else None\n        )\n"
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_metrics/README.md",
"content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Hadoop cluster metrics</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets Hadoop EMR cluster metrics.\r\n\r\n\r\n## Lego Details\r\n\r\n    hadoop_get_cluster_metrics(handle: object)\r\n\r\n        handle: Object of type unSkript Hadoop Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input, handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_metrics/__init__.py",
    "content": ""
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_metrics/hadoop_get_cluster_metrics.json",
    "content": "{\r\n    \"action_title\": \"Get Hadoop cluster metrics\",\r\n    \"action_description\": \"Get Hadoop EMR cluster metrics\",\r\n    \"action_type\": \"LEGO_TYPE_HADOOP\",\r\n    \"action_entry_function\": \"hadoop_get_cluster_metrics\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_HADOOP\"]\r\n}"
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_metrics/hadoop_get_cluster_metrics.py",
"content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel\n\npp = pprint.PrettyPrinter()\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef hadoop_get_cluster_metrics_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef hadoop_get_cluster_metrics(handle) -> Dict:\n    \"\"\"hadoop_get_cluster_metrics returns the cluster metrics.\n       :rtype: Dict of cluster metrics.\n    \"\"\"\n    return handle.get_cluster_metrics()\n"
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_nodes/README.md",
"content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Hadoop cluster nodes</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets Hadoop cluster nodes.\r\n\r\n\r\n## Lego Details\r\n\r\n    hadoop_get_cluster_nodes(handle: object, states: str)\r\n\r\n        handle: Object of type unSkript Hadoop Connector\r\n        states: The states of the node, specified as a comma-separated list.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and states.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_nodes/__init__.py",
    "content": ""
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_nodes/hadoop_get_cluster_nodes.json",
    "content": "{\r\n    \"action_title\": \"Get Hadoop cluster nodes\",\r\n    \"action_description\": \"Get Hadoop cluster nodes\",\r\n    \"action_type\": \"LEGO_TYPE_HADOOP\",\r\n    \"action_entry_function\": \"hadoop_get_cluster_nodes\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_HADOOP\"]\r\n}"
  },
  {
    "path": "Hadoop/legos/hadoop_get_cluster_nodes/hadoop_get_cluster_nodes.py",
"content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter()\n\n\nclass InputSchema(BaseModel):\n    states: Optional[str] = Field(\n        title='States',\n        description=('The states of the node, specified as a comma-separated list, '\n                     'valid values are: NEW, RUNNING, UNHEALTHY, DECOMMISSIONING, '\n                     'DECOMMISSIONED, LOST, REBOOTED, SHUTDOWN')\n    )\n\n\ndef hadoop_get_cluster_nodes_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef hadoop_get_cluster_nodes(handle, states: str = \"\") -> Dict:\n    \"\"\"hadoop_get_cluster_nodes get cluster nodes\n\n        :type states: str\n        :param states: The states of the node, specified as a comma-separated list.\n\n        :rtype: Dict of cluster nodes\n    \"\"\"\n    return handle.get_cluster_nodes(states=states if states else None)\n"
  },
  {
    "path": "Hadoop/legos/hadoop_get_handle/README.md",
"content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Hadoop handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets a Hadoop handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    hadoop_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Hadoop Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input, handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Hadoop/legos/hadoop_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Hadoop/legos/hadoop_get_handle/hadoop_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Hadoop handle\",\r\n    \"action_description\": \"Get Hadoop handle\",\r\n    \"action_type\": \"LEGO_TYPE_HADOOP\",\r\n    \"action_entry_function\": \"hadoop_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\r\n    \"action_supports_iteration\": false,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_HADOOP\"]\r\n}"
  },
  {
    "path": "Hadoop/legos/hadoop_get_handle/hadoop_get_handle.py",
"content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef hadoop_get_handle(handle):\n    \"\"\"hadoop_get_handle returns the Hadoop session handle.\n       :rtype: Hadoop handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Jenkins/Fetch_Jenkins_Build_Logs.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"6e5315be\",\n   \"metadata\": {},\n   \"source\": [\n    \"\\n\",\n    \"<img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <b> This runbook demonstrates How to Fetch Jenkins Build Logs using unSkript legos.</b>\\n\",\n    \"</div>\\n\",\n    \"\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Fetch Jenkins Build Logs</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"    1) Get logs for the particular job name.\\n\",\n    \"    2) Post the logs to Slack channel.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"26223053\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we will use unSkript Get Jenkins Logs from a job Lego. This lego takes job_name and build_number as input. 
This input is used to discover logs from Jenkins using a Job.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"dd9b8409-330a-4b3a-b7aa-606184c14b31\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"56ab4c7ca0cdab8afa03582a8b802ac37202fa0fd00ec3d64850194f79870bcb\",\n    \"collapsed\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get Jenkins Logs from a Job\",\n    \"id\": 9,\n    \"index\": 9,\n    \"inputData\": [\n     {\n      \"build_number\": {\n       \"constant\": false,\n       \"value\": \"build_number\"\n      },\n      \"job_name\": {\n       \"constant\": false,\n       \"value\": \"job_name\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"build_number\": {\n        \"default\": 0,\n        \"description\": \"Specific build number of the job. 
By default, it gets the last build logs.\",\n        \"title\": \"Build Number\",\n        \"type\": \"integer\"\n       },\n       \"job_name\": {\n        \"description\": \"Jenkins job name.\",\n        \"title\": \"Job Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"job_name\"\n      ],\n      \"title\": \"jenkins_get_logs\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_JENKINS\",\n    \"name\": \"Get Jenkins Logs from a job\",\n    \"nouns\": [\n     \"jenkins\",\n     \"logs\",\n     \"job\"\n    ],\n    \"orderProperties\": [\n     \"job_name\",\n     \"build_number\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"jenkins_get_logs\"\n    ],\n    \"verbs\": [\n     \"get\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n    \"##  All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def jenkins_get_logs(handle, job_name: str, build_number: int = 0):\\n\",\n    \"    \\\"\\\"\\\"jenkins_get_logs returns logs for the particular job name.\\n\",\n    \"        :type job_name: str\\n\",\n    \"        :param job_name: Jenkins job name.\\n\",\n    \"        :type build_number: int\\n\",\n    \"        :param build_number: Specific build number of the job. By default, it gets the last build logs.\\n\",\n    \"        :rtype: Dict with builds number and logs.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    # Input param validation.\\n\",\n    \"    job = handle.get_job(job_name)\\n\",\n    \"\\n\",\n    \"    if build_number == 0:\\n\",\n    \"        res = 
job.get_last_build()\\n\",\n    \"        return res.get_console()\\n\",\n    \"\\n\",\n    \"    res = job.get_build(build_number)\\n\",\n    \"    return res.get_console()\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"build_number\\\": \\\"build_number\\\",\\n\",\n    \"    \\\"job_name\\\": \\\"job_name\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(jenkins_get_logs, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"427aefe9\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we will use unSkript Post Slack Message Lego. This lego takes channel: str and message: str as input. 
This input is used to post the message to the slack channel.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"822f1945-cd69-4258-ac64-0e9752ee2069\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"collapsed\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"id\": 44,\n    \"index\": 44,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"channel\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"f\\\"Jenkins logs for Job:{job_name} Build:{build_number} {jenkins_output}\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of the slack channel where the message to be posted\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message to be sent\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [\n     \"slack\",\n     \"message\"\n    ],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"verbs\": [\n     \"post\"\n    
],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"def legoPrinter(func):\\n\",\n    \"    def Printer(*args, **kwargs):\\n\",\n    \"        output = func(*args, **kwargs)\\n\",\n    \"        if output:\\n\",\n    \"            channel = kwargs[\\\"channel\\\"]\\n\",\n    \"            pp.pprint(print(f\\\"Message sent to Slack channel {channel}\\\"))\\n\",\n    \"        return output\\n\",\n    \"    return Printer\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@legoPrinter\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> bool:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return True\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        return False\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return False\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    
\"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"channel\\\",\\n\",\n    \"    \\\"message\\\": \\\"f\\\\\\\\\\\"Jenkins logs for Job:{job_name} Build:{build_number} {jenkins_output}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(slack_post_message, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"d153cd3a\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Conclusion\\n\",\n    \"In this Runbook, we demonstrated the use of unSkript's Jenkins and Slack legos: the runbook fetches the logs for a given Jenkins job and posts them to a Slack channel. To view the full platform capabilities of unSkript please visit https://unskript.com\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Fetch Jenkins Build Logs\",\n   \"parameters\": [\n    \"build_number\",\n    \"channel\",\n    \"job_name\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.9.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"build_number\": {\n     \"default\": 0,\n     \"description\": \"Build Number\",\n     \"title\": \"build_number\",\n     \"type\": \"integer\"\n    },\n    \"channel\": {\n     \"default\": \"\",\n     \"description\": \"Slack channel to post to\",\n     \"title\": \"channel\",\n     \"type\": \"string\"\n    },\n    \"job_name\": {\n     \"default\": \"\",\n     \"description\": \"Name of the Jenkins Job\",\n     \"title\": 
\"job_name\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"build_number\": null,\n   \"channel\": null,\n   \"job_name\": null\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "Jenkins/Fetch_Jenkins_Build_Logs.json",
    "content": "{\n  \"name\": \"Fetch Jenkins Build Logs\",\n  \"description\": \"This runbook fetches the logs for a given Jenkins job and posts to a slack channel\", \n  \"uuid\": \"62ad92c21f215a552c4aa6011d11b35cbc5fb04727f4fa055414ad736d8c1636\", \n  \"icon\": \"CONNECTOR_TYPE_JENKINS\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_JENKINS\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "Jenkins/README.md",
    "content": "# Jenkins RunBooks\n* [Fetch Jenkins Build Logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/Fetch_Jenkins_Build_Logs.ipynb): This runbook fetches the logs for a given Jenkins job and posts to a slack channel\n\n# Jenkins Actions\n* [Get Jenkins Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_handle/README.md): Get Jenkins Handle\n* [Get Jenkins Logs from a job](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_logs/README.md): Get Jenkins Logs from a Job\n* [Get Jenkins Plugin List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_plugins/README.md): Get Jenkins Plugin List\n"
  },
  {
    "path": "Jenkins/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Jenkins/legos/jenkins_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Jenkins Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the Jenkins handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    jenkins_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Jenkins Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jenkins/legos/jenkins_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Jenkins/legos/jenkins_get_handle/jenkins_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Jenkins Handle\",\r\n    \"action_description\": \"Get Jenkins Handle\",\r\n    \"action_type\": \"LEGO_TYPE_JENKINS\",\r\n    \"action_entry_function\": \"jenkins_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false\r\n}\r\n    "
  },
  {
    "path": "Jenkins/legos/jenkins_get_handle/jenkins_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef jenkins_get_handle(handle):\n    \"\"\"jenkins_get_handle returns the jenkins server handle.\n\n          :rtype: Jenkins handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Jenkins/legos/jenkins_get_logs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Jenkins Logs</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets Jenkins logs from a job.\r\n\r\n\r\n## Lego Details\r\n\r\n    jenkins_get_logs(handle: object, job_name: str, build_number: int)\r\n\r\n        handle: Object of type unSkript Jenkins Connector\r\n        job_name: Jenkins job name.\r\n        build_number: Specific build number of the job. By default, it gets the last build logs.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, job_name and build_number.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jenkins/legos/jenkins_get_logs/__init__.py",
    "content": ""
  },
  {
    "path": "Jenkins/legos/jenkins_get_logs/jenkins_get_logs.json",
    "content": "{\r\n    \"action_title\": \"Get Jenkins Logs from a job\",\r\n    \"action_description\": \"Get Jenkins Logs from a Job\",\r\n    \"action_type\": \"LEGO_TYPE_JENKINS\",\r\n    \"action_entry_function\": \"jenkins_get_logs\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_JENKINS\"]\r\n}"
  },
  {
    "path": "Jenkins/legos/jenkins_get_logs/jenkins_get_logs.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\n\n\n\nclass InputSchema(BaseModel):\n    job_name: str = Field(\n        title='Job Name',\n        description='Jenkins job name.')\n    build_number: Optional[int] = Field(\n        0,\n        title='Build Number',\n        description='Specific build number of the job. By default, it gets the last build logs.')\n\n\ndef jenkins_get_logs_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef jenkins_get_logs(handle, job_name: str, build_number: int = 0) -> Dict:\n    \"\"\"jenkins_get_logs returns logs for the particular job name.\n\n        :type job_name: string\n        :param job_name: Jenkins job name.\n\n        :type build_number: int\n        :param build_number: Specific build number of the job. \n        By default, it gets the last build logs.\n\n        :rtype: Dict with builds number and logs.\n    \"\"\"\n\n    # Input param validation.\n    job = handle.get_job(job_name)\n\n    if build_number == 0:\n        res = job.get_last_build()\n        return res.get_console()\n\n    res = job.get_build(build_number)\n    return res.get_console()\n"
  },
  {
    "path": "Jenkins/legos/jenkins_get_plugins/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Jenkins Plugin List</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the Jenkins plugin list.\r\n\r\n\r\n## Lego Details\r\n\r\n    jenkins_get_plugins(handle: object)\r\n\r\n        handle: Object of type unSkript Jenkins Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jenkins/legos/jenkins_get_plugins/__init__.py",
    "content": ""
  },
  {
    "path": "Jenkins/legos/jenkins_get_plugins/jenkins_get_plugins.json",
    "content": "{\r\n    \"action_title\": \"Get Jenkins Plugin List\",\r\n    \"action_description\": \"Get Jenkins Plugin List\",\r\n    \"action_type\": \"LEGO_TYPE_JENKINS\",\r\n    \"action_entry_function\": \"jenkins_get_plugins\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_JENKINS\"]\r\n}\r\n  "
  },
  {
    "path": "Jenkins/legos/jenkins_get_plugins/jenkins_get_plugins.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import  List\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\ndef jenkins_get_plugins_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef jenkins_get_plugins(handle) -> List:\n    \"\"\"jenkins_get_plugins returns the jenkins plugins list.\n\n        :rtype: List with jenkins plugins.\n    \"\"\"\n    res = []\n    plugins = handle.get_plugins().keys()\n    for plugin in plugins:\n        res.append(plugin)\n    return res\n"
  },
  {
    "path": "Jira/README.md",
    "content": "# Jira RunBooks\n* [Jira Visualize Issue Time to Resolution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/jira_visualize_time_to_resolution.ipynb): Using the Panel Library - visualize the time it takes for issues to close over a specific timeframe\n\n# Jira Actions\n* [Jira Add Comment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_add_comment/README.md): Add a Jira Comment\n* [Assign Jira Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_assign_issue/README.md): Assign a Jira Issue to a user\n* [Create a Jira Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_create_issue/README.md): Create a Jira Issue\n* [Get Jira SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_handle/README.md): Get Jira SDK Handle\n* [Get Jira Issue Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_issue/README.md): Get Issue Info from Jira API: description, labels, attachments\n* [Get Jira Issue Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_issue_status/README.md): Get Issue Status from Jira API\n* [Change JIRA Issue Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_issue_change_status/README.md): Change JIRA Issue Status to given status\n* [Search for Jira issues matching JQL queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_search_issue/README.md): Use JQL to search all matching issues in Jira. Returns a List of the matching issues IDs/keys\n"
  },
  {
    "path": "Jira/__init__.py",
    "content": ""
  },
  {
    "path": "Jira/jira_visualize_time_to_resolution.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"32435b12-d7c9-4424-be43-f4d26736dd1a\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Get MTTR of Jira issues\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Get MTTR of Jira issues\"\n   },\n   \"source\": [\n    \"<p>In this RunBook, We will graph MTTR of issues in Jira.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>Since many teams track DevOps issues in Jira, this is a great way to understand how quickly issues are getting resolved, and if the MTTR is improving.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>First we will get a static pull of all the issues in Jira.&nbsp; This is fine if your data set is small, but we'll also generate the graph dynamically - so that the data pulled from Jira is never \\\"too big.\\\"</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f0a1f3b5-4494-4533-811b-5dd36e5e4d46\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Configure the JQL query\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Configure the JQL query\"\n   },\n   \"source\": [\n    \"<p>By defining the JQL query in this way, we can reuse the \\\"get issues from JIRA\\\" Action with different start and end times to pull different timeframes from Jira.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>For the static chart, we use a big start and end time, to pull all the data in.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"42d705d5-69c3-4671-9ede-9891f1584aac\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-27T21:48:04.101Z\"\n    },\n    \"name\": \"JQL Query Variable\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"JQL Query Variable\",\n    \"credentialsJson\": {}\n   },\n   
\"outputs\": [],\n   \"source\": [\n    \"start = start_date\\n\",\n    \"end = end_date\\n\",\n    \"#global jql_query\\n\",\n    \"jql_query=\\\"\\\"\\n\",\n    \"def create_query(jira_project, issue_type, new_status, start, end) -> str: \\n\",\n    \"    \\n\",\n    \"    #global jql_query\\n\",\n    \"    return f'project = {jira_project} and issueType = {issue_type} and status changed to {new_status} during (\\\"{start}\\\",\\\"{end}\\\")'\\n\",\n    \"jql_query = create_query(jira_project, issue_type, new_status, start, end)\\n\",\n    \"print(jql_query)\\n\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"55af3eea-f53f-41ba-83f5-c0f1c58a8d1f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Query JQL\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Query JQL\"\n   },\n   \"source\": [\n    \"<p>We've created the JQL query - this pre-built Action just pulls the requested data from Jira:<br><br></p>\\n\",\n    \"<pre>project = EN and issueType = Bug and status changed to Done during ('2022/01/01','2023/01/08')</pre>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>This query pulls all bugs from the EN project that were completed from 1/1/2022 - 1/8/2023.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"cf3db291-fea3-4dbe-94c4-d15a8d906cd0\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionOutputType\": null,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"action_modified\": false,\n    \"action_uuid\": \"23c4b5c86be9cfdbc7cbfce6d90ed089b7dc61d6dbc219aae3f4cc08862d3934\",\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Use JQL to search all matching issues in Jira. 
Returns a List of the matching issues IDs/keys\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-27T21:48:16.314Z\"\n    },\n    \"id\": 7,\n    \"index\": 7,\n    \"inputData\": [\n     {\n      \"jql\": {\n       \"constant\": false,\n       \"value\": \"jql_query\"\n      },\n      \"max_results\": {\n       \"constant\": false,\n       \"value\": \"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"jql\": {\n        \"description\": \"Search string to execute in JIRA. Valid JQL expression eg \\\"project = EN and status in (\\\"Selected for Development\\\") AND labels in (beta)\\\"\",\n        \"title\": \"Jira issue search using Jira Query Language (JQL)\",\n        \"type\": \"string\"\n       },\n       \"max_results\": {\n        \"default\": 5,\n        \"description\": \"Max limit on number of matching issues\",\n        \"title\": \"Limit number of matching issues\",\n        \"type\": \"integer\"\n       }\n      },\n      \"required\": [\n       \"jql\"\n      ],\n      \"title\": \"jira_search_issue\",\n      \"type\": \"object\"\n     }\n    ],\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_JIRA\",\n    \"metadata\": {\n     \"action_bash_command\": false,\n     \"action_description\": \"Use JQL to search all matching issues in Jira. 
Returns a List of the matching issues IDs/keys\",\n     \"action_entry_function\": \"jira_search_issue\",\n     \"action_needs_credential\": true,\n     \"action_nouns\": null,\n     \"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n     \"action_supports_iteration\": true,\n     \"action_supports_poll\": true,\n     \"action_title\": \"Search for Jira issues matching JQL queries\",\n     \"action_type\": \"LEGO_TYPE_JIRA\",\n     \"action_verbs\": null,\n     \"action_version\": \"1.0.0\"\n    },\n    \"name\": \"Search Jira Issues with JQL\",\n    \"orderProperties\": [\n     \"jql\",\n     \"max_results\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"issueList\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": false,\n    \"tags\": [\n     \"jira_search_issue\"\n    ],\n    \"title\": \"Search Jira Issues with JQL\",\n    \"uuid\": \"23c4b5c86be9cfdbc7cbfce6d90ed089b7dc61d6dbc219aae3f4cc08862d3934\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from jira import JIRA, Issue\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, List, Dict\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=4)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def legoPrinter(func):\\n\",\n    \"    def Printer(*args, **kwargs):\\n\",\n    \"        matching_issues = func(*args, **kwargs)\\n\",\n    \"        print('\\\\n')\\n\",\n    \"        #for issue in matching_issues:\\n\",\n    \"         #  print('ID:{}: Summary:{} Description:{}'.format(\\n\",\n    \"         #       issue.key, issue.fields.summary, issue.fields.description))\\n\",\n    \"            #print(issue)\\n\",\n    \"\\n\",\n    \"        return matching_issues\\n\",\n    \"    return Printer\\n\",\n   
 \"\\n\",\n    \"\\n\",\n    \"@legoPrinter\\n\",\n    \"@beartype\\n\",\n    \"def jira_search_issue(handle: JIRA, jql: str, max_results: int = 0) -> List:\\n\",\n    \"    \\\"\\\"\\\"jira_search_issue get Jira issues matching JQL queries.\\n\",\n    \"        :type jql: str\\n\",\n    \"        :param jql: Search string to execute in JIRA.\\n\",\n    \"\\n\",\n    \"        :type max_results: int\\n\",\n    \"        :param max_results: Max limit on number of matching issues\\n\",\n    \"\\n\",\n    \"        :rtype: Jira issues matching JQL queries\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    print(\\\"jql search lego\\\",jql)\\n\",\n    \"    matching_issues = handle.search_issues(jql, maxResults=max_results)\\n\",\n    \"\\n\",\n    \"    return matching_issues\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"jql\\\": \\\"jql_query\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"issueList\\\")\\n\",\n    \"task.configure(printOutput=False)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(jira_search_issue, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"48f39e74-8285-4252-ac8c-fce2ae30c841\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Data into a 
Dict\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Data into a Dict\"\n   },\n   \"source\": [\n    \"<p>In this Action - we convert the object from Jira into a Dict, and we add the elapsed time.&nbsp;&nbsp;</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>This is the time from the bug being opened to the status changed to closed.&nbsp; We save this as a timedelta, but also convert the timedelta into hours - adding the days (*24) and seconds (/3600) of the timedelta so that we can see how many hours the ticket was open.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>We also count the number of issues in the Dict and print that value.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"76777827-9c79-4965-96b7-871bfe9cbf0d\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-27T20:37:19.664Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"data into dict\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"data into dict\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from datetime import datetime\\n\",\n    \"def create_dict(issueList):\\n\",\n    \"\\n\",\n    \"    issue_data = {}\\n\",\n    \"    counter =0\\n\",\n    \"    for issue in issueList:\\n\",\n    \"        counter +=1\\n\",\n    \"        create_time = datetime.strptime(issue.fields.created, '%Y-%m-%dT%H:%M:%S.%f%z')\\n\",\n    \"        done_time = datetime.strptime(issue.fields.updated, '%Y-%m-%dT%H:%M:%S.%f%z')\\n\",\n    \"        elapsed_time = done_time-create_time\\n\",\n    \"        elapsed_time_hours = round(elapsed_time.days*24,0) +round(elapsed_time.seconds/3600,1)\\n\",\n    \"        #print(\\\"elapsed\\\", elapsed_time)\\n\",\n    \"        assignee = issue.fields.assignee\\n\",\n    \"        if hasattr(issue.fields.assignee,'displayName'):\\n\",\n    \"      
      assignee = issue.fields.assignee.displayName\\n\",\n    \"        else:\\n\",\n    \"            assignee = \\\"Not assigned\\\"\\n\",\n    \"        issue_data[issue.key] = {#\\\"summary\\\": issue.fields.summary, \\n\",\n    \"                                     #\\\"description\\\": issue.fields.description,\\n\",\n    \"                                     \\\"reporter\\\":issue.fields.reporter.displayName,\\n\",\n    \"                                     \\\"status\\\":issue.fields.status.name,\\n\",\n    \"                                     \\\"issueType\\\":issue.fields.issuetype.name,\\n\",\n    \"                                     \\\"project\\\":issue.fields.project.name,\\n\",\n    \"                                     \\\"create_time\\\":create_time,\\n\",\n    \"                                     \\\"done_time\\\":done_time,\\n\",\n    \"                                     \\\"elapsed_time\\\":elapsed_time,\\n\",\n    \"                                     \\\"elapsed_time_hours\\\":elapsed_time_hours,\\n\",\n    \"                                     \\\"assignee\\\":assignee\\n\",\n    \"                                    }\\n\",\n    \"    print(\\\"counter\\\", counter)\\n\",\n    \"    return issue_data\\n\",\n    \"issue_data = create_dict(issueList)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"4e25ad3f-0264-4ec4-a1cc-0d193edcc947\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Create a data frame and a graph\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create a data frame and a graph\"\n   },\n   \"source\": [\n    \"<p>This step is doing a lot.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<ol>\\n\",\n    \"<li>Pulls the Dict into a dataframe.</li>\\n\",\n    \"<li>Builds the chart with Panel.\\n\",\n    \"<ol>\\n\",\n    \"<li>Chart is built</li>\\n\",\n    \"<li>Pulls the data from weekdf - which calculates the 
start and end date - and filters the dataframe into a smaller dataframe with the elapsed time in hours&nbsp; - chunked into 4 bins.&nbsp; These are then charted.</li>\\n\",\n    \"<li>The sliderstart value might be different for your organization.&nbsp; You can hard code this here.</li>\\n\",\n    \"<li>The intervals are set in 7-day increments in the dayCount slider.</li>\\n\",\n    \"</ol>\\n\",\n    \"</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"2e1ff679-fe5c-41d1-9671-b4a0abfc98e5\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-28T00:12:08.942Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"interactive chart\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"interactive chart\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import pandas as pd; import numpy as np; import matplotlib.pyplot as plt\\n\",\n    \"import panel as pn\\n\",\n    \"import datetime as dt\\n\",\n    \"from datetime import timezone\\n\",\n    \"\\n\",\n    \"from matplotlib.backends.backend_agg import FigureCanvas\\n\",\n    \"from matplotlib.figure import Figure\\n\",\n    \"pn.extension()\\n\",\n    \"\\n\",\n    \"#create a dataframe from the Jira export\\n\",\n    \"data = pd.DataFrame.from_dict(issue_data)\\n\",\n    \"data = data.T\\n\",\n    \"#data.tail\\n\",\n    \"\\n\",\n    \"def weekdf(startDay, dayCount, dataframe):\\n\",\n    \"    startDay = datetime.combine(startDay, datetime.min.time())\\n\",\n    \"    startDay =startDay.replace(tzinfo=timezone.utc)\\n\",\n    \"    endDay =startDay+ dt.timedelta(days=dayCount)\\n\",\n    \"    #print(startDay, endDay)\\n\",\n    \"    weekdf= dataframe[(dataframe[\\\"create_time\\\"] >= startDay)&(dataframe[\\\"create_time\\\"] <= endDay) ][\\\"elapsed_time_hours\\\"].value_counts()\\n\",\n    \"    
weektitle = \\\"Defect status by creation date\\\"\\n\",\n    \"    if weekdf.empty:\\n\",\n    \"        startDay =dt.datetime(2021, 1, 1,0,0,0)\\n\",\n    \"        startDay =startDay.replace(tzinfo=timezone.utc)\\n\",\n    \"        dayCount=730\\n\",\n    \"        endDay =startDay+ dt.timedelta(days=dayCount)\\n\",\n    \"        weektitle = \\\"no data for this week.\\\"\\n\",\n    \"        weekdf= dataframe[(dataframe[\\\"create_time\\\"] >= startDay)&(dataframe[\\\"create_time\\\"] <= endDay) ][\\\"elapsed_time_hours\\\"].value_counts(bins=4, sort=False)\\n\",\n    \"    else:\\n\",\n    \"         issueCount = weekdf.sum(0)\\n\",\n    \"         numberofBins = 4\\n\",\n    \"         if issueCount < 4:\\n\",\n    \"                numberofBins = 2\\n\",\n    \"         if issueCount > 15:\\n\",\n    \"                numberofBins = 8\\n\",\n    \"         weekdf= dataframe[(dataframe[\\\"create_time\\\"] >= startDay)&(dataframe[\\\"create_time\\\"] <= endDay) ][\\\"elapsed_time_hours\\\"].value_counts(bins=numberofBins, sort=False)\\n\",\n    \"    #print(\\\"count\\\", weekdf.sum(0))\\n\",\n    \"    return weekdf\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def time_plot(startDay, dayCount, dataframe):\\n\",\n    \"    fig = Figure(figsize=(10, 6))\\n\",\n    \"    fig.subplots_adjust(bottom=0.45)\\n\",\n    \"\\n\",\n    \"    ax = fig.subplots()\\n\",\n    \"    ax.xaxis.set_tick_params(labelsize=20)\\n\",\n    \"    df1 = weekdf(startDay, dayCount, dataframe)\\n\",\n    \"    FigureCanvas(fig) \\n\",\n    \"    df1.plot.bar(x=\\\"x\\\", y=\\\"counts\\\", ax=ax, title=\\\"MTTR to closing issues\\\")\\n\",\n    \"    return fig\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"#build the chart\\n\",\n    \"\\n\",\n    \"#get all our date-time variables correctly formatted with a timezone.\\n\",\n    \"sliderstart = dt.datetime(2022, 1, 1,0,0,0)\\n\",\n    \"sliderstart =sliderstart.replace(tzinfo=timezone.utc)\\n\",\n    \"sliderend = 
dt.datetime.now()\\n\",\n    \"sliderend =sliderend.replace(tzinfo=timezone.utc)\\n\",\n    \"slidervalue = dt.datetime(2023, 1, 1,0,0,0)\\n\",\n    \"slidervalue =slidervalue.replace(tzinfo=timezone.utc)\\n\",\n    \"#print(\\\"sliderstart\\\",sliderstart)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"#CREATE SLIDERS\\n\",\n    \"startDay = pn.widgets.DateSlider(name='Date Slider', start=sliderstart, end=sliderend, value=slidervalue)\\n\",\n    \"\\n\",\n    \"dayCount = pn.widgets.IntSlider(name='number of days', value=7, start=1, end=180, step = 7)\\n\",\n    \"interactive = pn.bind(time_plot, startDay=startDay, dayCount=dayCount, dataframe = data)\\n\",\n    \"first_app = pn.Column(startDay, dayCount, interactive)\\n\",\n    \"first_app\\n\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c0dbc4ed-8af7-4749-8604-584f40749d44\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Pulling the chart data live\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Pulling the chart data live\"\n   },\n   \"source\": [\n    \"<p>The above steps are great if your dataset isn't very large.&nbsp; But what if you have thousands of issues?&nbsp; We don't want to make epic JQL queries, and then also hammer the Runbook with a huge amount of data.&nbsp;&nbsp;</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>Let's pull the data every time we change the graph.&nbsp; This is REALLY useful for time-sensitive data (imagine you need time-sensitive data, and pulling a whole day's worth of data takes forever). 
Now, pull the small subset you need in real time when you create the chart</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"5d939c27-a6ad-4f57-a5c4-b23545187ad8\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-27T22:45:21.864Z\"\n    },\n    \"name\": \"live data interactive chart\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"live data interactive chart\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import json\\n\",\n    \"#create a dataframe from the Jira export\\n\",\n    \"livedata = data\\n\",\n    \"\\n\",\n    \"#data.tail\\n\",\n    \"\\n\",\n    \"def weekdflive(startDay, dayCount, dataframe):\\n\",\n    \"    startDay = datetime.combine(startDay, datetime.min.time())\\n\",\n    \"    startDay =startDay.replace(tzinfo=timezone.utc)\\n\",\n    \"    endDay =startDay+ dt.timedelta(days=dayCount)\\n\",\n    \"    #pull data from JIRA\\n\",\n    \"    startJira = startDay.strftime(\\\"%Y/%m/%d\\\")\\n\",\n    \"    endJira = endDay.strftime(\\\"%Y/%m/%d\\\")\\n\",\n    \"    jql_query =create_query(jira_project, issue_type, new_status, startJira, endJira)\\n\",\n    \"    print(\\\"jql_query\\\",jql_query)\\n\",\n    \"    jsonInput = {\\\"jql\\\":jql_query}\\n\",\n    \"    stringifiedInput = json.dumps(jsonInput)\\n\",\n    \"    print(\\\"stringifiedInput\\\",stringifiedInput, type(stringifiedInput))\\n\",\n    \"    #inputParamsJson1 = f'''{\\\"jql\\\":{{jql_query}}}'''\\n\",\n    \"    #print(\\\"inputParamsJson1\\\", inputParamsJson1)\\n\",\n    \"   \\n\",\n    \"\\n\",\n    \"\\n\",\n    \"    task.configure(inputParamsJson='''{\\n\",\n    \"        \\\"jql\\\": \\\"jql_query\\\"\\n\",\n    \"        }''')\\n\",\n    \"    task.configure(outputName=\\\"issueList1\\\")\\n\",\n    \"    (err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"    
task.execute(jira_search_issue, lego_printer=unskript_default_printer, hdl=hdl, args=args)\\n\",\n    \"\\n\",\n    \"    print(\\\"issueList1\\\", len(issueList1))\\n\",\n    \"    live_issue_data = create_dict(issueList1)\\n\",\n    \"    livedata=\\\"\\\"\\n\",\n    \"    livedata = pd.DataFrame.from_dict(live_issue_data)\\n\",\n    \"    livedata = livedata.T\\n\",\n    \"    print(\\\"livedata\\\" ,livedata.size)\\n\",\n    \"    \\n\",\n    \"    \\n\",\n    \"    \\n\",\n    \"    weekdf= dataframe[(dataframe[\\\"create_time\\\"] >= startDay)&(dataframe[\\\"create_time\\\"] <= endDay) ][\\\"elapsed_time_hours\\\"].value_counts()\\n\",\n    \"    weektitle = \\\"Defect status by creation date\\\"\\n\",\n    \"    if weekdf.empty:\\n\",\n    \"        startDay =dt.datetime(2021, 1, 1,0,0,0)\\n\",\n    \"        startDay =startDay.replace(tzinfo=timezone.utc)\\n\",\n    \"        dayCount=730\\n\",\n    \"        endDay =startDay+ dt.timedelta(days=dayCount)\\n\",\n    \"        weektitle = \\\"no data for this week.\\\"\\n\",\n    \"        weekdf= dataframe[(dataframe[\\\"create_time\\\"] >= startDay)&(dataframe[\\\"create_time\\\"] <= endDay) ][\\\"elapsed_time_hours\\\"].value_counts(bins=4, sort=False)\\n\",\n    \"    else:\\n\",\n    \"         issueCount = weekdf.sum(0)\\n\",\n    \"         numberofBins = 4\\n\",\n    \"         if issueCount < 4:\\n\",\n    \"                numberofBins = 2\\n\",\n    \"         if issueCount > 15:\\n\",\n    \"                numberofBins = 8\\n\",\n    \"         weekdf= dataframe[(dataframe[\\\"create_time\\\"] >= startDay)&(dataframe[\\\"create_time\\\"] <= endDay) ][\\\"elapsed_time_hours\\\"].value_counts(bins=numberofBins, sort=False)\\n\",\n    \"    #print(\\\"count\\\", weekdf.sum(0))\\n\",\n    \"    return weekdf\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def time_plotlive(startDay, dayCount, dataframe):\\n\",\n    \"    fig = Figure(figsize=(10, 6))\\n\",\n    \"    fig.subplots_adjust(bottom=0.45)\\n\",\n 
   \"\\n\",\n    \"    ax = fig.subplots()\\n\",\n    \"    ax.xaxis.set_tick_params(labelsize=20)\\n\",\n    \"    df1 = weekdflive(startDay, dayCount, dataframe)\\n\",\n    \"    FigureCanvas(fig) \\n\",\n    \"    df1.plot.bar(x=\\\"x\\\", y=\\\"counts\\\", ax=ax, title=\\\"MTTR to closing issues\\\")\\n\",\n    \"    return fig\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"#build the chart\\n\",\n    \"\\n\",\n    \"#get all our date-time variables correctly formatted with a timezone.\\n\",\n    \"sliderstart = dt.datetime(2022, 1, 1,0,0,0)\\n\",\n    \"sliderstart =sliderstart.replace(tzinfo=timezone.utc)\\n\",\n    \"sliderend = dt.datetime.now()\\n\",\n    \"sliderend =sliderend.replace(tzinfo=timezone.utc)\\n\",\n    \"slidervalue = dt.datetime(2023, 1, 1,0,0,0)\\n\",\n    \"slidervalue =slidervalue.replace(tzinfo=timezone.utc)\\n\",\n    \"#print(\\\"sliderstart\\\",sliderstart)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"#CREATE SLIDERS\\n\",\n    \"startDay = pn.widgets.DateSlider(name='Date Slider', start=sliderstart, end=sliderend, value=slidervalue)\\n\",\n    \"\\n\",\n    \"dayCount = pn.widgets.IntSlider(name='number of days', value=7, start=1, end=180, step = 7)\\n\",\n    \"interactive = pn.bind(time_plotlive, startDay=startDay, dayCount=dayCount, dataframe = livedata)\\n\",\n    \"first_app = pn.Column(startDay, dayCount, interactive)\\n\",\n    \"first_app\\n\"\n   ],\n   \"output\": {}\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Jira Visualize Issue Time to Resolution\",\n   \"parameters\": [\n    \"AMI_Id\",\n    \"Region\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 904)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n 
  \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"end_date\": {\n     \"default\": \"2023/02/22\",\n     \"description\": \"End Date for search range\",\n     \"title\": \"end_date\",\n     \"type\": \"string\"\n    },\n    \"issue_type\": {\n     \"default\": \"Bug\",\n     \"description\": \"Jira issueType to query\",\n     \"title\": \"issue_type\",\n     \"type\": \"string\"\n    },\n    \"jira_project\": {\n     \"default\": \"EN\",\n     \"description\": \"Jira Project Name\",\n     \"title\": \"jira_project\",\n     \"type\": \"string\"\n    },\n    \"new_status\": {\n     \"default\": \"Done\",\n     \"description\": \"Status change to search for in Jira\",\n     \"title\": \"new_status\",\n     \"type\": \"string\"\n    },\n    \"start_date\": {\n     \"default\": \"2022/01/01\",\n     \"description\": \"Start Date for search range.\",\n     \"title\": \"start_date\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "Jira/jira_visualize_time_to_resolution.json",
    "content": "{\n  \"name\": \"Jira Visualize Issue Time to Resolution\",\n  \"description\": \"Using the Panel Library - visualize the time it takes for issues to close over a specifict timeframe\",\n  \"uuid\": \"1d6f5420dc07075e60bb98018e5447658679ab5b50d7247b4385395a0b6e2989\",\n  \"icon\": \"CONNECTOR_TYPE_JIRA\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_JIRA\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "Jira/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Jira/legos/jira_add_comment/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Jira Add Comment</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego adds a comment to a Jira Issue.\r\n\r\n\r\n## Lego Details\r\n\r\n    jira_add_comment(handle: JIRA, issue_id: str, comment: str, visibility: Dict[str, str] = None)\r\n\r\n        handle: Object of type unSkript jira Connector\r\n        issue_id: Issue ID.\r\n        comment: Comment to add in Jira Issue.\r\n        visibility: a dict containing two entries: \"type\" and \"value\".\r\n              \"type\" is 'role' (or 'group' if the Jira server has configured comment visibility for groups)\r\n              \"value\" is the name of the role (or group) to which viewing of this comment will be restricted.\r\n        is_internal: True marks the comment as 'Internal' in Jira Service Desk (Default: ``False``)\r\n\r\n## Lego Input\r\nThis Lego take 4 input handle, issue_id, comment, visibility.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jira/legos/jira_add_comment/__init__.py",
    "content": ""
  },
  {
    "path": "Jira/legos/jira_add_comment/jira_add_comment.json",
    "content": "{\r\n    \"action_title\": \"Jira Add Comment\",\r\n    \"action_description\": \"Add a Jira Comment\",\r\n    \"action_type\": \"LEGO_TYPE_JIRA\",\r\n    \"action_entry_function\": \"jira_add_comment\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_INT\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_JIRA\"]\r\n}\r\n    "
  },
  {
    "path": "Jira/legos/jira_add_comment/jira_add_comment.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\nfrom jira.client import JIRA\n\n\nclass InputSchema(BaseModel):\n    issue_id: str = Field(\n        title='JIRA Issue ID',\n        description='Issue ID. Eg EN-1234'\n    )\n    comment: str = Field(\n        title='Comment',\n        description='Comment to add in Jira Issue'\n    )\n    visibility: Optional[Dict[str, str]] = Field(\n        None,\n        title='Visibility',\n        description='''a dict containing two entries: \"type\" and \"value\".\n              \"type\" is 'role' (or 'group' if the Jira server has configured comment visibility for groups)\n              \"value\" is the name of the role (or group) to which viewing of this comment \n              will be restricted.'''\n    )\n    is_internal: Optional[bool] = Field(\n        False,\n        title='Internal',\n        description=('True marks the comment as \\'Internal\\' in Jira Service Desk '\n                     '(Default: ``False``)')\n    )\n\n\ndef jira_add_comment_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef jira_add_comment(hdl: JIRA,\n                     issue_id: str,\n                     comment: str,\n                     visibility: Dict[str, str] = None,\n                     is_internal: bool = False) -> int:\n    \"\"\"jira_get_issue Get Jira Issue Info\n\n        :type hdl: JIRA\n        :param hdl: Jira handle.\n\n        :type issue_id: str\n        :param issue_id: Issue ID.\n\n        :type comment: str\n        :param comment: Comment to add in Jira Issue.\n\n        :type visibility: Dict[str, str]\n        :param visibility: a dict containing two entries: \"type\" and \"value\".\n              \"type\" is 'role' (or 'group' if the Jira server has configured \n              comment visibility for groups)\n              \"value\" is the name of 
the role (or group) to which viewing of \n              this comment will be restricted.\n\n        :type is_internal: bool\n        :param is_internal: True marks the comment as \\'Internal\\' in Jira \n        Service Desk (Default: ``False``)\n\n        :rtype: Jira comment id (int)\n    \"\"\"\n    try:\n        issue = hdl.issue(issue_id)\n        comment = hdl.add_comment(issue, comment, visibility=visibility, is_internal=is_internal)\n    except Exception as e:\n        raise e\n    return int(comment.id)\n"
  },
  {
    "path": "Jira/legos/jira_assign_issue/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Assign Jira Issue</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Assign a Jira Issue to a user.\r\n\r\n\r\n## Lego Details\r\n\r\n    jira_assign_issue(handle: object, issue_id: str, user_id: str)\r\n\r\n        handle: Object of type unSkript jira Connector\r\n        issue_id: JIRA issue ID to assign. Eg ENG-42\r\n        user_id: User to assign the issue to. Eg user@acme.com\r\n\r\n## Lego Input\r\nThis Lego take three input handle, issue_id and user_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jira/legos/jira_assign_issue/__init__.py",
    "content": ""
  },
  {
    "path": "Jira/legos/jira_assign_issue/jira_assign_issue.json",
    "content": "{\r\n    \"action_title\": \"Assign Jira Issue\",\r\n    \"action_description\": \"Assign a Jira Issue to a user\",\r\n    \"action_type\": \"LEGO_TYPE_JIRA\",\r\n    \"action_entry_function\": \"jira_assign_issue\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_JIRA\"]\r\n}\r\n"
  },
  {
    "path": "Jira/legos/jira_assign_issue/jira_assign_issue.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom jira.client import JIRA\n\nclass InputSchema(BaseModel):\n    issue_id: str = Field(\n        title=\"Issue ID\",\n        description=\"JIRA issue ID to assign. Eg ENG-42\"\n    )\n    user_id: str = Field(\n        title=\"User ID\",\n        description=\"User to assign the issue to. Eg user@acme.com\"\n    )\n\n\ndef jira_assign_issue_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef jira_assign_issue(hdl: JIRA, issue_id: str, user_id: str) -> str:\n    \"\"\"jira_assign_issue assigns a given Jira issue to a user\n\n        :type issue_id: str\n        :param issue_id: JIRA issue ID to assign. Eg ENG-42\n\n        :type user_id: str\n        :param user_id: User to assign the issue to. Eg user@acme.com\n\n        :rtype: str\n    \"\"\"\n\n    # Input param validation.\n    issue = hdl.issue(issue_id)\n    hdl.assign_issue(issue, user_id)\n    return issue.fields.assignee\n"
  },
  {
    "path": "Jira/legos/jira_create_issue/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Create a Jira Issue</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Create a Jira Issue.\r\n\r\n\r\n## Lego Details\r\n\r\n    jira_create_issue(handle: object, project_name: str, summary: str, issue_type: IssueType, description: str, fields: dict)\r\n\r\n        handle: Object of type unSkript jira Connector\r\n        project_name: The name of the project for which the issue will be generated\r\n        summary: Summary of the issue\r\n        description: Description of the issue\r\n        issue_type: JIRA Issue Type. Possible values: Bug|Task|Story|Epic\r\n        fields: User needs to pass the fields in the format of dict(KEY=VALUE) pair\r\n\r\n## Lego Input\r\nThis Lego take six input handle, project_name, summary, issue_type, description and fields.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jira/legos/jira_create_issue/__init__.py",
    "content": ""
  },
  {
    "path": "Jira/legos/jira_create_issue/jira_create_issue.json",
    "content": "{\r\n    \"action_title\": \"Create a Jira Issue\",\r\n    \"action_description\": \"Create a Jira Issue\",\r\n    \"action_type\": \"LEGO_TYPE_JIRA\",\r\n    \"action_entry_function\": \"jira_create_issue\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_JIRA\"]\r\n}\r\n\r\n"
  },
  {
    "path": "Jira/legos/jira_create_issue/jira_create_issue.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport enum\nimport pprint\nfrom typing import Optional\nfrom pydantic import BaseModel, Field\nfrom jira import JIRA\n\npp = pprint.PrettyPrinter(indent=4)\n\nclass CustomFieldTypes(enum.Enum):\n    MULTICHECKBOXES = 'com.atlassian.jira.plugin.system.customfieldtypes:multicheckboxes'\n    LABELS = 'com.atlassian.jira.plugin.system.customfieldtypes:labels'\n    FLOAT = 'com.atlassian.jira.plugin.system.customfieldtypes:float'\n    USERPICKER = 'com.atlassian.jira.plugin.system.customfieldtypes:userpicker'\n    TEXTAREA = 'com.atlassian.jira.plugin.system.customfieldtypes:textarea'\n    CASCADINGSELECT = 'com.atlassian.jira.plugin.system.customfieldtypes:cascadingselect'\n    TEXTFIELD = 'com.atlassian.jira.plugin.system.customfieldtypes:textfield'\n    MULTISELECT = 'com.atlassian.jira.plugin.system.customfieldtypes:multiselect'\n    SELECT = 'com.atlassian.jira.plugin.system.customfieldtypes:select'\n    URL = 'com.atlassian.jira.plugin.system.customfieldtypes:url'\n    RADIOBUTTONS = 'com.atlassian.jira.plugin.system.customfieldtypes:radiobuttons'\n\n\nclass InputSchema(BaseModel):\n    project_name: str = Field(\n        title=\"Project Name\",\n        description=\"The name of the project for which the issue will be generated\"\n    )\n    summary: str = Field(\n        title=\"Summary\",\n        description=\"Summary of the issue\"\n    )\n    description: Optional[str] = Field(\n        title=\"Description\",\n        description=\"Description of the issue\"\n    )\n    issue_type: str = Field(\n        title=\"Issue Type\",\n        description=\"JIRA Issue Type.\"\n    )\n    fields: dict = Field(\n        None,\n        title='Extra fields',\n        description='''\n            User needs to pass the fields in the format of dict(KEY=VALUE) pair\n            where key is the Field Name and value is actual value\n            Value will be vary based on there field type 
like mention below\n            fields can be passed as mentioned below:\n                Quarter field can be passed as: {QuarterExample:[\"Q1\", \"Q2\"]}\n                Labels field can be passed as : {\"Labelexample\": [\"cherry-picker\"]}\n                Numbers can be provided through numbers field Eg: {\"NumberExample\": 10}\n                User picker (single user) can be passed as: {\"UserPickerExample\": \"John Smith\"}\n                Paragraphs (multi-line) field can be passed as a string like so: {\"ParagraphTest\": \"ABC ABC ABC ABC\"}\n                Select list (cascading) field can be passed as: {\"SelectListCascadeExample\": {\"parent\": \"ABC\", \"child\": \"XYZ\"}}\n                Short text field can be passed as: {\"ShortTextExample\": \"test\"}\n                Select list (multiple choice) field is passed as: {\"SelectListMultipleChoicesSample\": [\"ABC\", \"XYZ\"]}\n                Select list (single choice) field is passed as: {\"SelectListSingleTest\": [\"123\"]}\n                URL Field is passed as: {\"UrlFieldTest\": \"http://www.example.com}\n                RadioButton field is passed as: {\"RadioButtonTest\": \"Q1\"}\n            For more information about custom fields visit: https://support.atlassian.com/jira-cloud-administration/docs/custom-fields-types-in-company-managed-projects/\n            '''\n    )\n\n\ndef jira_create_issue_printer(output):\n    if output is None:\n        return\n    pp.pprint(output)\n\ndef jira_create_issue(\n        handle: JIRA,\n        project_name: str,\n        summary: str,\n        issue_type: str,\n        description: str = \"\",\n        fields: dict=None\n        ) -> str:\n    \"\"\"create_issue creates issue in jira.\n        :type project_name: str\n        :param project_name: The name of the project for which the issue will be generated\n        :type summary: str\n        :param summary: Summary of the issue\n        :type description: str\n        :param description: 
Description of the issue\n        :type issue_type: IssueType\n        :param issue_type: JIRA Issue Type.\n        :type fields: dict\n        :param fields: User needs to pass the fields in the format of dict(KEY=VALUE) pair\n        :rtype: String with issues key\n    \"\"\"\n    issue_type = issue_type if issue_type else None\n    if fields:\n        issue_fields = {\n            'project': project_name,\n            'summary': summary,\n            'description': description,\n            'issuetype': {'name': issue_type}\n        }\n\n\n        for key in list(fields.keys()):\n            found = False\n            for f in handle.fields():\n                if 'schema' not in f:\n                    continue\n                if f['name'] == key:\n                    found = True\n                    custom_field_type = f['schema'].get(\"custom\", \"\")\n                    if custom_field_type == CustomFieldTypes.MULTICHECKBOXES.value:\n                        issue_fields.update({f['id']: [{'value': i} for i in fields[key]]})\n\n                    elif custom_field_type == CustomFieldTypes.LABELS.value:\n                        issue_fields.update({f['id']: fields[key]})\n\n                    elif custom_field_type == CustomFieldTypes.FLOAT.value:\n                        issue_fields.update({f['id']: fields[key]})\n\n                    elif custom_field_type == CustomFieldTypes.USERPICKER.value:\n                        accountId = get_user_accountId(handle, fields[key])\n                        issue_fields.update({f['id']: {\"accountId\":accountId}})\n\n                    elif custom_field_type == CustomFieldTypes.TEXTAREA.value:\n                        issue_fields.update({f['id']: fields[key]})\n\n                    elif custom_field_type == CustomFieldTypes.CASCADINGSELECT.value:\n                        cascade_list = {\n                              \"value\": fields[key][\"parent\"],\n                              \"child\": {\n                
                \"value\": fields[key][\"child\"]\n                              }\n                            }\n                        issue_fields.update({f['id']: cascade_list})\n\n                    elif custom_field_type == CustomFieldTypes.TEXTFIELD.value:\n                        issue_fields.update({f['id']: fields[key]})\n\n                    elif custom_field_type == CustomFieldTypes.MULTISELECT.value:\n                        issue_fields.update({f['id']: [{'value': i} for i in fields[key]]})\n\n                    elif custom_field_type == CustomFieldTypes.SELECT.value:\n                        issue_fields.update({f['id']: {'value': fields[key][0]}})\n\n                    elif custom_field_type == CustomFieldTypes.URL.value:\n                        issue_fields.update({f['id']: fields[key]})\n\n                    elif custom_field_type == CustomFieldTypes.RADIOBUTTONS.value:\n                        issue_fields.update({f['id']: {'value': fields[key]}})\n\n                    else:\n                        if f['schema']['type'] == \"array\":\n                            #There can be 2 scenarios here.\n                            # For labels, its an array of strings.\n                            # {'id': 'labels', 'key': 'labels', 'name': 'Labels', 'custom':\n                            # False, 'orderable': True,\n                            # 'navigable': True, 'searchable': True, 'clauseNames': ['labels'],\n                            # 'schema': {'type': 'array', 'items': 'string', 'system': 'labels'}}\n                            #\n                            # For others, its an array of dictionary.\n                            # {'id': 'components', 'key': 'components', 'name': 'Components',\n                            # 'custom': False,\n                            # 'orderable': True, 'navigable': True, 'searchable': True,\n                            # 'clauseNames': ['component'],\n                            # 'schema': {'type': 
'array', 'items': 'component',\n                            # 'system': 'components'}}\n                            if f['schema']['items'] == \"string\":\n                                issue_fields.update({f['id']: fields[key]})\n                            else:\n                                issue_fields.update({f['id']: [{'name': i} for i in fields[key]]})\n                        elif f['schema']['type'] == \"user\":\n                            accountId = get_user_accountId(handle, fields[key])\n                            issue_fields.update({f['id']: {\"id\": accountId}})\n                        else:\n                            issue_fields.update({f['id']: {'name': fields[key]}})\n\n            if found is False:\n                raise Exception(f'Invalid field: {key}')\n\n\n        issue = handle.create_issue(fields=issue_fields)\n    else:\n        issue = handle.create_issue(project=project_name, summary=summary,\n                                    description=description, issuetype={'name': issue_type})\n    return issue.key\n\ndef get_user_accountId(handle: JIRA, user: str) -> str:\n    get_user = handle._get_json(f\"user/search?query=[{user}]\")\n    if len(get_user) != 0:\n        return get_user[0].get('accountId')\n    raise Exception(f'Unable to get accountId for {user}')\n"
  },
  {
    "path": "Jira/legos/jira_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Jira SDK Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the Jira SDK handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    jira_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript jira Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jira/legos/jira_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Jira/legos/jira_get_handle/jira_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Jira SDK Handle\",\r\n    \"action_description\": \"Get Jira SDK Handle\",\r\n    \"action_type\": \"LEGO_TYPE_JIRA\",\r\n    \"action_entry_function\": \"jira_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false\r\n}\r\n    "
  },
  {
    "path": "Jira/legos/jira_get_handle/jira_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef jira_get_handle(handle):\n    \"\"\"jira_get_handle returns the jira connection handle.\n\n       :rtype: Jira Handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Jira/legos/jira_get_issue/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Jira Issue Info</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets issue info from the Jira API: description, labels, attachments.\r\n\r\n\r\n## Lego Details\r\n\r\n    jira_get_issue(handle: object, issue_id: str)\r\n\r\n        handle: Object of type unSkript jira Connector\r\n        issue_id: Issue ID.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and issue_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jira/legos/jira_get_issue/__init__.py",
    "content": ""
  },
  {
    "path": "Jira/legos/jira_get_issue/jira_get_issue.json",
    "content": "{\r\n    \"action_title\": \"Get Jira Issue Info\",\r\n    \"action_description\": \"Get Issue Info from Jira API: description, labels, attachments\",\r\n    \"action_type\": \"LEGO_TYPE_JIRA\",\r\n    \"action_entry_function\": \"jira_get_issue\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_JIRA\"]\r\n}\r\n"
  },
  {
    "path": "Jira/legos/jira_get_issue/jira_get_issue.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom jira.client import JIRA\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    issue_id: str = Field(\n        title='JIRA Issue ID',\n        description='Issue ID. Eg EN-1234'\n    )\n\n\ndef jira_get_issue_printer(output):\n    if output is None:\n        return\n    pp.pprint(output)\n\ndef jira_get_issue(hdl: JIRA, issue_id: str) -> dict:\n    \"\"\"jira_get_issue Get Jira Issue Info\n\n        :type issue_id: str\n        :param issue_id: Issue ID.\n\n        :rtype: Jira Issue Info\n    \"\"\"\n    # Input param validation.\n    issue = hdl.issue(issue_id)\n    return issue.raw\n"
  },
  {
    "path": "Jira/legos/jira_get_issue_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Jira Issue Status</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the issue status from the Jira API.\r\n\r\n\r\n## Lego Details\r\n\r\n    jira_get_issue_status(handle: object, issue_id: str)\r\n\r\n        handle: Object of type unSkript jira Connector\r\n        issue_id: Issue ID.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and issue_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jira/legos/jira_get_issue_status/__init__.py",
    "content": ""
  },
  {
    "path": "Jira/legos/jira_get_issue_status/jira_get_issue_status.json",
    "content": "{\r\n    \"action_title\": \"Get Jira Issue Status\",\r\n    \"action_description\": \"Get Issue Status from Jira API\",\r\n    \"action_type\": \"LEGO_TYPE_JIRA\",\r\n    \"action_entry_function\": \"jira_get_issue_status\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_JIRA\"]\r\n}"
  },
  {
    "path": "Jira/legos/jira_get_issue_status/jira_get_issue_status.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom jira.client import JIRA\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    issue_id: str = Field(\n        title='Issue ID',\n        description='Issue ID'\n    )\n\n\ndef jira_get_issue_status_printer(output):\n    if output is None:\n        return\n    pp.pprint(output)\n\n\ndef jira_get_issue_status(hdl: JIRA, issue_id: str) -> str:\n    \"\"\"jira_get_issue_status gets the status of a Jira issue.\n        :type issue_id: str\n        :param issue_id: Issue ID.\n        :rtype: String with the issue status name\n    \"\"\"\n    # Input param validation.\n    issue = hdl.issue(issue_id)\n    return issue.fields.status.name\n"
  },
  {
    "path": "Jira/legos/jira_issue_change_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Change JIRA Issue Status</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego changes a JIRA issue's status to the given status.\r\n\r\n\r\n## Lego Details\r\n\r\n    jira_issue_change_status(handle: object, issue_id: str, status: str, transition: str)\r\n\r\n        handle: Object of type unSkript jira Connector\r\n        issue_id: ID of the issue whose status we want to change (eg ENG-14)\r\n        status: New Status for the JIRA issue\r\n        transition: Transition to use for status change for the JIRA issue\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, issue_id, status and transition.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jira/legos/jira_issue_change_status/__init__.py",
    "content": ""
  },
  {
    "path": "Jira/legos/jira_issue_change_status/jira_issue_change_status.json",
    "content": "{\r\n    \"action_title\": \"Change JIRA Issue Status\",\r\n    \"action_description\": \"Change JIRA Issue Status to given status\",\r\n    \"action_type\": \"LEGO_TYPE_JIRA\",\r\n    \"action_entry_function\": \"jira_issue_change_status\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_JIRA\"]\r\n}\r\n"
  },
  {
    "path": "Jira/legos/jira_issue_change_status/jira_issue_change_status.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional\nfrom pydantic import BaseModel, Field\nfrom jira.client import JIRA\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    issue_id: str = Field(\n        title=\"Issue ID\",\n        description=\"Issue ID\"\n    )\n    status: str = Field(\n        title=\"New Status\",\n        description=\"New Status for the JIRA issue\"\n    )\n    transition: Optional[str] = Field(\n        title=\"Transition ID\",\n        description=\"Transition to use for status change for the JIRA issue\"\n    )\n\n\ndef jira_issue_change_status_printer(output):\n    if output is None:\n        return\n    pp.pprint(output)\n\n\ndef jira_issue_change_status(hdl: JIRA, issue_id: str, status: str, transition: str = \"\"):\n    \"\"\"jira_issue_change_status changes the status of a given Jira issue.\n        :type issue_id: str\n        :param issue_id: ID of the issue whose status we want to change (eg ENG-14)\n\n        :type status: str\n        :param status: New Status for the JIRA issue\n\n        :type transition: str\n        :param transition: Transition to use for status change for the JIRA issue\n        :rtype: None\n    \"\"\"\n\n    # Input param validation.\n    issue = hdl.issue(issue_id)\n    if transition == \"\":\n        transitions = hdl.transitions(issue)\n        # Transitions look like this\n        # {'id': '11', 'name': 'Backlog', 'to': {'self': 'https://foo/status/10000',\n        # 'description': '', 'iconUrl': 'https://foo/', 'name': 'Backlog', 'id': '10000',\n        # 'statusCategory': {'self': 'https://foo/rest/api/2/statuscategory/2', 'id': 2, 'key': 'new',\n        # 'colorName': 'blue-gray', 'name': 'To Do'}}, 'hasScreen': False, 'isGlobal': True,\n        # 'isInitial': False, 'isAvailable': True, 'isConditional': False, 'isLooped': False}\n        t = [t for t in transitions if t.get('to').get('name') == status]\n        if len(t) == 0:\n            print(\"No transition found\")\n            return\n\n        if len(t) > 1:\n            print(\"Multiple transitions possible for JIRA issue. Please select transition number to use\", [\n                t.get('id') for t in transitions if t.get('to').get('name') == status])\n            return\n        else:\n            transition = t[0].get('id')\n\n    hdl.transition_issue(issue, transition)\n    return\n"
  },
  {
    "path": "Jira/legos/jira_search_issue/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Search for Jira issues matching JQL queries</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego uses JQL to search for all matching issues in Jira. It returns a list of the matching issues' IDs/keys.\r\n\r\n\r\n## Lego Details\r\n\r\n    jira_search_issue(handle: object, jql: str, max_results: int)\r\n\r\n        handle: Object of type unSkript jira Connector\r\n        jql: Search string to execute in JIRA.\r\n        max_results: Max limit on number of matching issues\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, jql and max_results.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Jira/legos/jira_search_issue/__init__.py",
    "content": ""
  },
  {
    "path": "Jira/legos/jira_search_issue/jira_search_issue.json",
    "content": "{\r\n    \"action_title\": \"Search for Jira issues matching JQL queries\",\r\n    \"action_description\": \"Use JQL to search all matching issues in Jira. Returns a List of the matching issues IDs/keys\",\r\n    \"action_type\": \"LEGO_TYPE_JIRA\",\r\n    \"action_entry_function\": \"jira_search_issue\",\r\n    \"action_needs_credential\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_example\": \"[<JIRA Issue: key='PLAY-2', id='10090'>, <JIRA Issue: key='PLAY-1', id='10025'>]\",\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_JIRA\"]\r\n}\r\n"
  },
  {
    "path": "Jira/legos/jira_search_issue/jira_search_issue.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nfrom jira import JIRA\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    jql: str = Field(\n        title=\"Jira issue search using Jira Query Language (JQL)\",\n        description=\"Search string to execute in JIRA. \"\n        \"Valid JQL expression eg \\\"project = EN and status in \"\n        \"(\\\"Selected for Development\\\") AND labels in (beta)\\\"\"\n    )\n    max_results: Optional[int] = Field(\n        default=50,\n        title=\"Limit number of matching issues\",\n        description=\"Max limit on number of matching issues\"\n    )\n\n\ndef jira_search_issue_printer(output):\n    if output is None:\n        return\n    print_data = []\n    for issue in output:\n        print_data.append([issue.get(\"ID\"), issue.get(\"Summary\")])\n    print(tabulate(print_data, headers=[\"Issue ID\", \"Summary\"], tablefmt=\"grid\"))\n\ndef jira_search_issue(handle: JIRA, jql: str, max_results: int = 50) -> List:\n    \"\"\"jira_search_issue get Jira issues matching JQL queries.\n        :type jql: str\n        :param jql: Search string to execute in JIRA.\n\n        :type max_results: int\n        :param max_results: Max limit on number of matching issues\n\n        :rtype: Jira issues matching JQL queries\n    \"\"\"\n    result = []\n    total_done = 0\n    while True:\n        matching_issues = handle.search_issues(jql, startAt=total_done, maxResults=max_results)\n        for i in matching_issues:\n            result.append({\"ID\": i.key, \"Summary\":i.fields.summary})\n        total_done += max_results\n        if total_done > matching_issues.total:\n            break\n    return result\n"
  },
  {
    "path": "Kafka/README.md",
    "content": "\n# Kafka Actions\n* [Kafka Check In-Sync Replicas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_in_sync_replicas/README.md): Checks number of actual min-isr for each topic-partition with configuration for that topic.\n* [Kafka Check Offline Partitions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_offline_partitions/README.md): Checks the number of offline partitions.\n* [Kafka Check Replicas Available](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_replicas_available/README.md): Checks if the number of replicas not available for communication is equal to zero.\n* [Kafka get cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_cluster_health/README.md): Fetches the health of the Kafka cluster including brokers, topics, and partitions.\n* [Kafka get count of committed messages](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_committed_messages_count/README.md): Fetches the count of committed messages (consumer offsets) for a specific consumer group and its topics.\n* [Get Kafka Producer Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_handle/README.md): Get Kafka Producer Handle\n* [Kafka get topic health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_topic_health/README.md): This action fetches the health and total number of messages for the specified topics.\n* [Kafka get topics with lag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_topics_with_lag/README.md): This action fetches the topics with lag in the Kafka cluster.\n* [Kafka Publish Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_publish_message/README.md): Publish Kafka Message\n* 
[Run a Kafka command using kafka CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_run_command/README.md): Run a Kafka command using kafka CLI. Eg kafka-topics.sh --list --exclude-internal\n"
  },
  {
    "path": "Kafka/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_broker_health_check/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get Kafka broker health</h1>\n\n## Description\nChecks the health of the Kafka brokers by determining if the Kafka producer can establish a connection with the bootstrap brokers of a Kafka cluster.\n\n## Lego Details\n\tkafka_broker_health_check(handle)\n\t\thandle: Object of type unSkript KAFKA Connector.\n\n## Lego Input\nThis Lego takes one input: handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_broker_health_check/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_broker_health_check/kafka_broker_health_check.json",
    "content": "{\n  \"action_title\": \"Get Kafka broker health\",\n  \"action_description\": \"Checks the health of the Kafka brokers by determining if the Kafka producer can establish a connection with the bootstrap brokers of a Kafka cluster.\",\n  \"action_type\": \"LEGO_TYPE_KAFKA\",\n  \"action_entry_function\": \"kafka_broker_health_check\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Kafka/legos/kafka_broker_health_check/kafka_broker_health_check.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom kafka import KafkaProducer\nfrom typing import Tuple\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef kafka_broker_health_check_printer(output):\n    status, issues = output\n\n    if status:\n        print(\"All brokers are connected and healthy!\")\n    else:\n        print(\"Issues detected with brokers:\\n\")\n        for issue in issues:\n            print(f\"Issue Type: {issue['issue_type']}\")\n            print(f\"Description: {issue['description']}\\n\")\n\n\ndef kafka_broker_health_check(handle) -> Tuple:\n    \"\"\"\n    Checks the health of the Kafka brokers by determining if the Kafka producer \n    can establish a connection with the bootstrap brokers of a Kafka cluster.\n\n    :type handle: KafkaProducer\n    :param handle: Handle containing the KafkaProducer instance.\n\n    :rtype: Tuple containing a status and an optional list of issues with brokers.\n    \"\"\"\n\n    issues = []\n\n    # Check the brokers\n    connected_to_brokers = handle.bootstrap_connected()\n    if not connected_to_brokers:\n        issues.append({\n            'issue_type': 'Broker',\n            'description': 'Unable to connect to bootstrap brokers.'\n        })\n\n    if len(issues) != 0:\n        return (False, issues)\n    return (True, None)\n\n"
  },
  {
    "path": "Kafka/legos/kafka_check_in_sync_replicas/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kafka Check In-Sync Replicas</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego checks the actual min-isr for each topic-partition against the configuration for that topic.\r\n\r\n\r\n## Lego Details\r\n\r\n    kafka_check_in_sync_replicas(handle: object, min_isr: int)\r\n\r\n        handle: Object of type unSkript kafka Connector\r\n        min_isr: Default min.isr value for cases without settings in Zookeeper. The default value is 3.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and min_isr.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_check_in_sync_replicas/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_check_in_sync_replicas/kafka_check_in_sync_replicas.json",
    "content": "{\r\n    \"action_title\": \"Kafka Check In-Sync Replicas\",\r\n    \"action_description\": \"Checks number of actual min-isr for each topic-partition with configuration for that topic.\",\r\n    \"action_type\": \"LEGO_TYPE_KAFKA\",\r\n    \"action_entry_function\": \"kafka_check_in_sync_replicas\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_KAFKA\" ]\r\n\r\n}\r\n    "
  },
  {
    "path": "Kafka/legos/kafka_check_in_sync_replicas/kafka_check_in_sync_replicas.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nimport argparse\nfrom pydantic import BaseModel, Field\nfrom kafka_utils.kafka_check.commands.min_isr import MinIsrCmd\nfrom kafka_utils.util.zookeeper import ZK\n\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    min_isr: int = Field(\n        3,\n        title='Minimum In-Sync Replicas',\n        description='Default min.isr value for cases without '\n        'settings in Zookeeper. The default value is 3'\n    )\n\n\ndef kafka_check_in_sync_replicas_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef kafka_check_in_sync_replicas(handle, min_isr: int) -> Dict:\n\n    \"\"\"kafka_check_in_sync_replicas checks number of actual min-isr for \n    each topic-partition with configuration for that topic.\n\n        :type min_isr: int\n        :param min_isr: Default min.isr value for cases without settings \n        in Zookeeper. The default value is 3.\n\n        :rtype: Dict\n    \"\"\"\n    try:\n        # Initialize the check\n        check_in_sync_replicas = MinIsrCmd()\n        check_in_sync_replicas.cluster_config = handle.cluster_config\n\n        # Set the arguments for running the check\n        args = argparse.Namespace()\n        args.default_min_isr = min_isr\n        args.verbose = True\n        args.head = -1\n        check_in_sync_replicas.args = args\n\n        # Initialize Zookeeper and run the check\n        with ZK(handle.cluster_config) as zk:\n            check_in_sync_replicas.zk = zk\n            check_output = check_in_sync_replicas.run_command()\n\n    except Exception as e:\n        raise e\n\n    return check_output[1]\n"
  },
  {
    "path": "Kafka/legos/kafka_check_lag_change/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Kafka check lag change</h1>\n\n## Description\nThis action checks if the lag for consumer groups is not changing for a threshold number of hours.\n\n\n## Lego Details\n\tkafka_check_lag_change(handle, group_id: str = \"\", threshold: int = 3)\n\t\thandle: Object of type unSkript KAFKA Connector.\n\t\tgroup_id: Consumer group ID.\n\t\tthreshold: The number of hours to check if the lag hasn't changed. The default value is 3.\n\n\n## Lego Input\nThis Lego takes three inputs: handle, group_id and threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_check_lag_change/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_check_lag_change/kafka_check_lag_change.json",
    "content": "{\n  \"action_title\": \"Kafka check lag change\",\n  \"action_description\": \"This action checks if the lag for consumer groups is not changing for a threshold number of hours.\\n\",\n  \"action_type\": \"LEGO_TYPE_KAFKA\",\n  \"action_entry_function\": \"kafka_check_lag_change\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Kafka/legos/kafka_check_lag_change/kafka_check_lag_change.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Tuple, Optional\nfrom concurrent.futures import ThreadPoolExecutor, as_completed\nfrom kafka import KafkaAdminClient, KafkaConsumer, TopicPartition\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nimport time\n\n\n\nclass InputSchema(BaseModel):\n    group_id: Optional[str] = Field(\n        '',\n        description='Consumer group ID to which this consumer belongs',\n        title='Consumer group ID',\n    )\n    threshold: Optional[int] = Field(\n        3,\n        description=\"The number of hours to check if the lag hasn't changed.\",\n        title='Threshold (in hours)',\n    )\n\n\ndef kafka_check_lag_change_printer(output):\n    status, issues = output\n\n    if status:\n        print(\"All consumer groups are maintaining their lags!\")\n    else:\n        print(\"Lag issues detected:\")\n        headers = ['Consumer Group', 'Topic', 'Partition', 'Description']\n        table_data = [(issue['consumer_group'], issue['topic'], issue['partition'], issue['description']) for issue in issues]\n        print(tabulate(table_data, headers=headers, tablefmt='grid'))\n\n    # This would be a global or persisted store of previous lags at the last check.\n    # Format: { \"topic-partition\": [timestamp, lag] }\nprev_lags = {}\n\ndef fetch_lag(handle, group_id, topic_partitions, current_time, threshold):\n    issues = []\n    # Utilize bootstrap_servers from handle\n    consumer = KafkaConsumer(bootstrap_servers=handle.config['bootstrap_servers'], group_id=group_id)\n    try:\n        for tp in topic_partitions:\n            end_offset = consumer.end_offsets([tp])[tp]\n            committed = consumer.committed(tp) or 0\n            lag = end_offset - committed\n\n            if lag == 0:\n                continue\n\n            key = f\"{group_id}-{tp.topic}-{tp.partition}\"\n            prev_entry = prev_lags.get(key)\n\n            if 
prev_entry:\n                prev_timestamp, prev_lag = prev_entry\n                if prev_lag != lag:\n                    prev_lags[key] = (current_time, lag)\n                elif (current_time - prev_timestamp) >= threshold * 3600:\n                    issues.append({\n                        'consumer_group': group_id,\n                        'topic': tp.topic,\n                        'partition': tp.partition,\n                        'description': f\"Lag hasn't changed for {threshold} hours. Current Lag: {lag}\"\n                    })\n            else:\n                prev_lags[key] = (current_time, lag)\n    finally:\n        consumer.close()\n\n    return issues\n\ndef kafka_check_lag_change(handle, group_id: str = \"\", threshold: int = 3) -> Tuple:\n    \"\"\"\n    kafka_check_lag_change checks if the lag for consumer groups is not changing for X hours.\n\n    :param handle: Object of type unSkript KAFKA Connector.\n\n    :param group_id: Consumer group ID.\n\n    :param threshold: The number of hours to check if the lag hasn't changed.\n\n    :return: Tuple containing a status and an optional list of issues with lag.\n    \"\"\"\n    issues = []\n    current_time = time.time()\n\n    admin_client = KafkaAdminClient(bootstrap_servers=handle.config['bootstrap_servers'])\n    consumer_groups = [group_id] if group_id else [group[0] for group in admin_client.list_consumer_groups()]\n\n    with ThreadPoolExecutor(max_workers=10) as executor:\n        futures = []\n\n        for group in consumer_groups:\n            consumer = KafkaConsumer(bootstrap_servers=handle.config['bootstrap_servers'], group_id=group)\n            topics = consumer.topics()\n            topic_partitions = [TopicPartition(topic, partition) for topic in topics for partition in consumer.partitions_for_topic(topic)]\n            consumer.close()\n\n            if topic_partitions:\n                future = executor.submit(fetch_lag, handle, group, topic_partitions, current_time, 
threshold)\n                futures.append(future)\n\n        for future in as_completed(futures):\n            issues.extend(future.result())\n\n    return (False, issues) if issues else (True, None)"
  },
  {
    "path": "Kafka/legos/kafka_check_offline_partitions/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kafka Check Offline Partitions</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego checks the number of offline partitions.\r\n\r\n\r\n## Lego Details\r\n\r\n    kafka_check_offline_partitions(handle: object)\r\n\r\n        handle: Object of type unSkript kafka Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_check_offline_partitions/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_check_offline_partitions/kafka_check_offline_partitions.json",
    "content": "{\r\n    \"action_title\": \"Kafka Check Offline Partitions\",\r\n    \"action_description\": \"Checks the number of offline partitions.\",\r\n    \"action_type\": \"LEGO_TYPE_KAFKA\",\r\n    \"action_entry_function\": \"kafka_check_offline_partitions\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_verbs\": [\"filter\"],\r\n    \"action_nouns\": [\"aws\",\"instances\",\"untagged\"],\r\n    \"action_is_check\": true,\r\n    \"action_categories\": [],\r\n    \"action_next_hop\": [],\r\n    \"action_next_hop_parameter_mapping\": {}\r\n}\r\n    "
  },
  {
    "path": "Kafka/legos/kafka_check_offline_partitions/kafka_check_offline_partitions.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nimport argparse\nfrom typing import Tuple\nfrom pydantic import BaseModel\nfrom kafka_utils.kafka_check.commands.offline import OfflineCmd\nfrom kafka_utils.util.zookeeper import ZK\n\n\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    pass\n\ndef kafka_check_offline_partitions_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef kafka_check_offline_partitions(handle) -> Tuple:\n\n    \"\"\"kafka_check_offline_partitions Checks the number of offline partitions.\n\n        :rtype: Tuple of the check\n    \"\"\"\n    try:\n        # Initialize the check\n        check_offline_partitions = OfflineCmd()\n        check_offline_partitions.cluster_config = handle.cluster_config\n\n        # Set the arguments for running the check\n        args = argparse.Namespace()\n        args.verbose = True\n        args.head = -1\n        check_offline_partitions.args = args\n\n        # Initialize zookeper and run the check\n        with ZK(handle.cluster_config) as zk:\n            check_offline_partitions.zk = zk\n            check_output = check_offline_partitions.run_command()\n\n    except Exception as e:\n        raise e\n\n    if len(check_output[1]['raw']['partitions']) != 0:\n        return (False, check_output)\n    return (True, check_output)\n    "
  },
  {
    "path": "Kafka/legos/kafka_check_replicas_available/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kafka Check Replicas Available</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Checks Checks if the number of replicas not available for communication is equal to zero.\r\n\r\n\r\n## Lego Details\r\n\r\n    kafka_check_replicas_available(handle: object)\r\n\r\n        handle: Object of type unSkript kafka Connector\r\n\r\n## Lego Input\r\nThis Lego take one input: handle\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_check_replicas_available/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_check_replicas_available/kafka_check_replicas_available.json",
    "content": "{\r\n    \"action_title\": \"Kafka Check Replicas Available\",\r\n    \"action_description\": \"Checks if the number of replicas not available for communication is equal to zero.\",\r\n    \"action_type\": \"LEGO_TYPE_KAFKA\",\r\n    \"action_entry_function\": \"kafka_check_replicas_available\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_KAFKA\" ]\r\n}\r\n    "
  },
  {
    "path": "Kafka/legos/kafka_check_replicas_available/kafka_check_replicas_available.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nimport argparse\nfrom pydantic import BaseModel\nfrom kafka_utils.kafka_check.commands.replica_unavailability import ReplicaUnavailabilityCmd\nfrom kafka_utils.util.zookeeper import ZK\n\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    pass\n\ndef kafka_check_replicas_available_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef kafka_check_replicas_available(handle) -> Dict:\n\n    \"\"\"kafka_check_replicas_available Checks if the number of replicas \n    not available for communication is equal to zero.\n\n        :rtype: Dict\n    \"\"\"\n    try:\n        # Initialize the check\n        check_replica_unavailability = ReplicaUnavailabilityCmd()\n        check_replica_unavailability.cluster_config = handle.cluster_config\n\n        # Set the arguments for running the check\n        args = argparse.Namespace()\n        args.verbose = True\n        args.head = -1\n        check_replica_unavailability.args = args\n\n        # Initialize zookeper and run the check\n        with ZK(handle.cluster_config) as zk:\n            check_replica_unavailability.zk = zk\n            check_output = check_replica_unavailability.run_command()\n\n    except Exception as e:\n        raise e\n\n    return check_output[1]\n"
  },
  {
    "path": "Kafka/legos/kafka_get_committed_messages_count/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Kafka get count of committed messages</h1>\n\n## Description\nFetches the count of committed messages (consumer offsets) for a specific consumer group and its topics.\n\n## Lego Details\n\tkafka_get_committed_messages_count(handle, group_id: str)\n\t\thandle: Object of type unSkript KAFKA Connector.\n\t\tgroup_id: Consumer group ID \n\n\n## Lego Input\nThis Lego takes inputs handle, group_id.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_get_committed_messages_count/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_get_committed_messages_count/kafka_get_committed_messages_count.json",
    "content": "{\n  \"action_title\": \"Kafka get count of committed messages\",\n  \"action_description\": \"Fetches the count of committed messages (consumer offsets) for a specific consumer group and its topics.\",\n  \"action_type\": \"LEGO_TYPE_KAFKA\",\n  \"action_entry_function\": \"kafka_get_committed_messages_count\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[\"CATEGORY_TYPE_INFORMATION\" , \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_KAFKA\"]\n}"
  },
  {
    "path": "Kafka/legos/kafka_get_committed_messages_count/kafka_get_committed_messages_count.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom kafka import KafkaConsumer, TopicPartition, KafkaAdminClient\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    group_id: Optional[str] = Field(..., description='Consumer group ID to which this consumer belongs', title='Consumer group ID')\n\n\n\ndef kafka_get_committed_messages_count_printer(output):\n    if output is None:\n        print(\"No data found to get kafka committed messages count ! \")\n        return\n\n    for group_id, topics in output.items():\n        print(f\"Group ID: {group_id}\")\n        for topic_name, partitions in topics.items():\n            print(f\"  Topic: {topic_name}\")\n            for partition, number_of_messages in partitions.items():\n                print(f\"    Partition {partition}: {number_of_messages} committed messages\")\n        print()\n\ndef kafka_get_committed_messages_count(handle, group_id: str = \"\") -> Dict:\n    \"\"\"\n    Fetches committed messages (consumer offsets) for all consumer groups and topics,\n    or for a specific group if provided.\n    \"\"\"\n    admin_client = KafkaAdminClient(bootstrap_servers=handle.config['bootstrap_servers'])\n    committed_messages_count = {}\n\n    \n    if group_id:\n        consumer_groups = [group_id]\n    else:\n        # Fetch all consumer groups\n        try:\n            consumer_groups_info = admin_client.list_consumer_groups()\n        except Exception as e:\n            print(f\"An error occured while fetching consumer groups:{e}\")\n            return {}\n        consumer_groups = [group[0] for group in consumer_groups_info]\n    \n\n        for group in consumer_groups:\n            try:\n                # Create a consumer for each group to fetch topics\n                consumer = KafkaConsumer(bootstrap_servers=handle.config['bootstrap_servers'], group_id=group)\n                topics = consumer.topics()\n    
        except Exception as e:\n                print(f\"An error occurred while fetching topics in consumer group {group} : {e}\")\n                continue\n            \n            for topic in topics:\n                try:\n                    partitions = consumer.partitions_for_topic(topic)\n                except Exception as e:\n                    print(f\"An error occurred while fetching partitions for consumer group {group} and topic {topic} : {e}\")\n                    continue\n                for partition in partitions:\n                    try:\n                        tp = TopicPartition(topic, partition)\n                    except:\n                        print(f\"An error occurred while fetching partition info for  consumer group {group} and topic {topic} : {e}\")\n                        continue\n                    # Fetch committed offset for each partition\n                    committed_offset = consumer.committed(tp)\n                    if committed_offset is not None:\n                        # If there's a committed offset, calculate the number of messages\n                        earliest_offset = consumer.beginning_offsets([tp])[tp]\n                        number_of_messages = committed_offset - earliest_offset\n                        committed_messages_count.setdefault(group, {}).setdefault(topic, {})[partition] = number_of_messages\n                    else:\n                        # If no committed offset, assume 0 messages\n                        committed_messages_count.setdefault(group, {}).setdefault(topic, {})[partition] = 0\n\n            # Close the consumer after processing to free up resources\n            consumer.close()\n    \n\n    return committed_messages_count\n"
  },
  {
    "path": "Kafka/legos/kafka_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Kafka Producer Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Get Kafka Producer Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    kafka_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript kafka Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_get_handle/kafka_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Kafka Producer Handle\",\r\n    \"action_description\": \"Get Kafka Producer Handle\",\r\n    \"action_type\": \"LEGO_TYPE_KAFKA\",\r\n    \"action_entry_function\": \"kafka_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false\r\n}\r\n    "
  },
  {
    "path": "Kafka/legos/kafka_get_handle/kafka_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\nfrom kafka import KafkaProducer\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef kafka_get_handle(handle) -> KafkaProducer:\n    \"\"\"kafka_get_handle returns the kafka producer client handle.\n\n       :rtype: kafka client handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Kafka/legos/kafka_get_topic_health/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Kafka get topic health</h1>\n\n## Description\nThis action fetches the health and total number of messages for the specified topics.\n\n## Lego Details\n\tkafka_get_topic_health(handle, group_id: str, topics: list)\n\t\thandle: Object of type unSkript KAFKA Connector.\n\t\tgroup_id: Consumer group ID \n\t\ttopics: List of topic names.\n\n\n## Lego Input\nThis Lego takes inputs handle, group_id, topics.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_get_topic_health/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_get_topic_health/kafka_get_topic_health.json",
    "content": "{\n  \"action_title\": \"Kafka get topic health\",\n  \"action_description\": \"This action fetches the health and total number of messages for the specified topics.\",\n  \"action_type\": \"LEGO_TYPE_KAFKA\",\n  \"action_entry_function\": \"kafka_get_topic_health\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[\"CATEGORY_TYPE_INFORMATION\" , \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_KAFKA\"]\n}"
  },
  {
    "path": "Kafka/legos/kafka_get_topic_health/kafka_get_topic_health.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom kafka import TopicPartition, KafkaConsumer, KafkaAdminClient\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\n\n\n\nclass InputSchema(BaseModel):\n    group_id: Optional[str] = Field(..., description='Consumer group ID to which this consumer belongs', title='Consumer group ID')\n    topics: Optional[list] = Field(..., description='List of topic names.', title='List of topics')\n\n\ndef kafka_get_topic_health_printer(output):\n    if output is None:\n        print(\"No data found for the Kafka topic health!\")\n        return\n    \n    # Iterating through each group in the output\n    for group_id, topics in output.items():\n        print(f\"Group ID: {group_id}\")\n        # Iterating through each topic in the group\n        for topic_name, partitions in topics.items():\n            print(f\"  Topic: {topic_name}\")\n            # Iterating through each partition in the topic\n            for partition, info in partitions.items():\n                # Checking if the topic exists flag is true or false to print accordingly\n                topic_exists_msg = \"Yes\" if info[\"topic_exists\"] else \"No\"\n                print(f\"    Partition {partition}: {info['number_of_messages']} messages, Topic exists: {topic_exists_msg}\")\n        print()\n\n\ndef kafka_get_topic_health(handle, group_id: str=\"\", topics: list=[]) -> Dict:\n    \"\"\"\n    kafka_get_topic_health fetches the health and total number of messages for the specified topics.\n\n    :type handle: object\n    :param handle: Handle containing the KafkaConsumer instance.\n\n    :type group_id: str\n    :param group_id: Consumer group ID \n\n    :type topics: list\n    :param topics: List of topic names.\n\n    :rtype: Dictionary containing the health status and number of messages by topic and partition\n    \"\"\"\n\n\n    admin_client = 
KafkaAdminClient(bootstrap_servers=handle.config['bootstrap_servers'])\n\n    topic_health_info = {}\n\n    try:\n        if not group_id:\n            consumer_groups_info = admin_client.list_consumer_groups()\n            consumer_groups = [group[0] for group in consumer_groups_info]\n        else:\n            consumer_groups = [group_id]\n    except Exception as e:\n        print(f\"Failed to list consumer groups: {e}\")\n        return {}\n\n    for group in consumer_groups:\n        consumer = KafkaConsumer(bootstrap_servers=handle.config['bootstrap_servers'], group_id=group)\n        group_topics = topics if topics else list(consumer.topics())\n\n        for topic in group_topics:\n            partitions = consumer.partitions_for_topic(topic)\n            if not partitions:\n                topic_health_info.setdefault(group, {})[topic] = {\"-1\": {\"number_of_messages\": 0, \"topic_exists\": False}}\n                continue\n\n            for partition in partitions:\n                try:\n                    tp = TopicPartition(topic, partition)\n                    earliest_offset = consumer.beginning_offsets([tp])[tp]\n                    latest_offset = consumer.end_offsets([tp])[tp]\n                    number_of_messages = latest_offset - earliest_offset\n                except Exception as e:\n                    print(f\"Failed to fetch offsets for partition {partition} of topic {topic} in group {group}: {e}\")\n                    continue\n\n                topic_health_info.setdefault(group, {}).setdefault(topic, {})[partition] = {\n                    \"number_of_messages\": number_of_messages,\n                    \"topic_exists\": True\n                }\n        \n        consumer.close()\n    \n    return topic_health_info"
  },
  {
    "path": "Kafka/legos/kafka_get_topics_with_lag/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Kafka get topics with lag</h1>\n\n## Description\nThis action fetches the topics with lag in the Kafka cluster.\n\n## Action Details\n\tkafka_get_topics_with_lag(handle, group_id: str, threshold: int = 2)\n\t\thandle: Object of type unSkript KAFKA Connector.\n\t\tgroup_id: Consumer group ID.\n\t\tthreshold: Lag threshold for alerting.\n\n\n## Action Input\nThis Action takes inputs handle, group_id, threshold.\n\n## Action Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_get_topics_with_lag/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_get_topics_with_lag/kafka_get_topics_with_lag.json",
    "content": "{\n  \"action_title\": \"Kafka get topics with lag\",\n  \"action_description\": \"This action fetches the topics with lag in the Kafka cluster.\",\n  \"action_type\": \"LEGO_TYPE_KAFKA\",\n  \"action_entry_function\": \"kafka_get_topics_with_lag\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_KAFKA\"]\n}"
  },
  {
    "path": "Kafka/legos/kafka_get_topics_with_lag/kafka_get_topics_with_lag.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom kafka import KafkaConsumer, TopicPartition\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom kafka.admin import KafkaAdminClient\nimport time\n\n\n\nclass InputSchema(BaseModel):\n    group_id: Optional[str] = Field(..., description='Consumer group ID', title='Consumer group ID')\n    threshold: Optional[int] = Field(\n        10, description='The threshold on the difference between 2 sample sets of lag data collected.', title='Threshold (no. of messages)'\n    )\n    sliding_window_interval: Optional[int] = Field(\n        30, description='The cadence (in seconds) at which the lag data needs to be collected', title='Sliding window interval'\n    )\n\n\n\ndef kafka_get_topics_with_lag_printer(output):\n    print(\"Topics with lag:\")\n    status, topics_with_lag = output\n    if status:\n        print(\"None of the topics are experiencing a lag\")\n    else:\n        for item in topics_with_lag:\n            print(f\"Group '{item['group']}' | Topic '{item['topic']}' | Partition {item['partition']}: {item['lag']} lag (no. 
of messages)\")\n\n\ndef kafka_get_topics_with_lag(handle, group_id: str = \"\", threshold: int = 10, sliding_window_interval = 30) -> Tuple:\n    \"\"\"\n    kafka_get_topics_with_lag fetches the topics with lag in the Kafka cluster.\n\n    :type handle: KafkaProducer\n    :param handle: Handle containing the KafkaProducer instance.\n\n    :type group_id: str\n    :param group_id: Consumer group ID.\n\n    :type threshold: int, optional\n    :param threshold: Lag threshold for alerting.\n\n    :rtype: Status and a List of objects with topics with lag information.\n    \"\"\"\n\n    result = []\n\n    admin_client = KafkaAdminClient(bootstrap_servers=handle.config['bootstrap_servers'])\n\n    if group_id:\n        consumer_groups = [group_id]\n    else:\n        consumer_groups = [group[0] for group in admin_client.list_consumer_groups()]\n\n    # cached_kafka_info stores the kafka info like groups, topics, partitions.\n    # Only end_offsets and committed needs to be fetched to get the latest value.\n    # Its organized as groups->topics->partitions\n    cached_kafka_info = {}\n    # sample_data captures the snapshots for lag data. 
It stores for each iteration.\n    # The value stored is group,topic,partition as the key and lag as the value\n    sample_data = []\n    sample_data_dict = {}\n    for group in consumer_groups:\n        consumer = KafkaConsumer(bootstrap_servers=handle.config['bootstrap_servers'], group_id=group)\n        if consumer is None:\n            continue\n        cached_kafka_info[group] = {'consumer': consumer}\n        \n        try:\n            topic = None \n            partition = None \n            for topic in consumer.topics():\n                partitions = consumer.partitions_for_topic(topic)\n                cached_kafka_info[group].update({'topics': {topic:partitions}})\n                for partition in partitions:\n                    tp = TopicPartition(topic, partition)\n                    end_offset = consumer.end_offsets([tp])[tp]\n                    committed = consumer.committed(tp)\n                    # Handle the case where committed is None\n                    lag = end_offset - (committed if committed is not None else 0)\n                    key = f'{group}:{topic}:{partition}'\n                    sample_data_dict[key] = lag\n        except Exception as e:\n            print(f'First Iteration: An error occurred:{e}, group {group}, topic {topic}, partition {partition}') \n\n    sample_data.append(sample_data_dict)\n    # Second iteration\n    time.sleep(sliding_window_interval)\n\n    for group, value in cached_kafka_info.items():\n        consumer = value.get('consumer')\n        if consumer is None:\n            continue\n        topics = value.get('topics')\n        try:\n            topic = None \n            partition = None \n            for topic, partitions in topics.items():\n                for partition in partitions:\n                    tp = TopicPartition(topic, partition)\n                    end_offset = consumer.end_offsets([tp])[tp]\n                    committed = consumer.committed(tp)\n                    # Handle the case 
where committed is None\n                    lag = end_offset - (committed if committed is not None else 0)\n                    key = f'{group}:{topic}:{partition}'\n                    sample_data_dict[key] = lag\n        except Exception as e:\n            print(f'Second Iteration: An error occurred:{e}, group {group}, topic {topic}, partition {partition}')        \n\n        consumer.close()\n\n    sample_data.append(sample_data_dict)\n\n    for key, value in sample_data[0].items():\n        # Get the value from the second sample, if present\n        new_value = sample_data[1].get(key)\n        if new_value is None:\n            continue\n        if new_value - value > threshold:\n            key_split = key.split(\":\")\n            result.append({\"group\": key_split[0], \"topic\": key_split[1], \"partition\": key_split[2], \"incremental\": new_value - value})\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n\n"
  },
  {
    "path": "Kafka/legos/kafka_publish_message/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kafka Publish Message</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Publish Kafka Message.\r\n\r\n\r\n## Lego Details\r\n\r\n    kafka_publish_message(handle: object, topic: str, message: str)\r\n\r\n        handle: Object of type unSkript kafka Connector\r\n        topic: Name of the Kafka topic, the message need to be sent on.\r\n        message: Message to be sent.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, topic and message.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_publish_message/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_publish_message/kafka_publish_message.json",
    "content": "{\r\n    \"action_title\": \"Kafka Publish Message\",\r\n    \"action_description\": \"Publish Kafka Message\",\r\n    \"action_type\": \"LEGO_TYPE_KAFKA\",\r\n    \"action_entry_function\": \"kafka_publish_message\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_KAFKA\"]\r\n}\r\n    "
  },
  {
    "path": "Kafka/legos/kafka_publish_message/kafka_publish_message.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\n\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    topic: str = Field(\n        title='Topic Name',\n        description='Name of the Kafka topic, the message need to be sent on.'\n    )\n    message: str = Field(\n        title='Message',\n        description='Message to be sent.'\n    )\n\n\ndef kafka_publish_message_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef kafka_publish_message(handle, topic: str, message: str) -> str:\n\n    \"\"\"kafka_publish_message publish messages\n\n        :type topic: str\n        :param topic: Name of the Kafka topic, the message need to be sent on.\n\n        :type message: str\n        :param message: Message to be sent.\n\n        :rtype: string\n    \"\"\"\n    try:\n        res = handle.send(topic, message.encode())\n\n    except Exception as e:\n        print(f'Publish message failed, err: {e}')\n        raise e\n\n    return res\n"
  },
  {
    "path": "Kafka/legos/kafka_run_command/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Run a Kafka command using kafka CLI</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Run a Kafka command using kafka CLI. Eg kafka-topics.sh --list --exclude-internal.\r\n\r\n\r\n## Lego Details\r\n\r\n    kafka_run_command(handle: object, kafka_command: str)\r\n\r\n        handle: Object of type unSkript kafka Connector\r\n        kafka_command: Kafka command.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and kafka_command.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_run_command/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_run_command/kafka_run_command.json",
    "content": "{\r\n    \"action_title\": \"Run a Kafka command using kafka CLI\",\r\n    \"action_description\": \"Run a Kafka command using kafka CLI. Eg kafka-topics.sh --list --exclude-internal\",\r\n    \"action_type\": \"LEGO_TYPE_KAFKA\",\r\n    \"action_entry_function\": \"kafka_run_command\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_KAFKA\"]\r\n}\r\n    "
  },
  {
    "path": "Kafka/legos/kafka_run_command/kafka_run_command.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    kafka_command: str = Field(\n        title='Kafka Command',\n        description='Kafka command. '\n                    'Eg. kafka-topics.sh --list --exclude-internal'\n    )\n\n\ndef kafka_run_command_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef kafka_run_command(handle, kafka_command: str) -> str:\n    \"\"\"kafka_run_command run command\n\n        :type kafka_command: str\n        :param kafka_command: Kafka command.\n\n        :rtype: string\n    \"\"\"\n\n    assert(kafka_command.startswith(\"kafka\") or kafka_command.startswith(\"./kafka\"))\n\n    result = handle.run_native_cmd(kafka_command)\n    return result.stdout\n"
  },
  {
    "path": "Kafka/legos/kafka_topic_partition_health_check/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get Kafka topic partition health</h1>\n\n## Description\nChecks the health of the Kafka topics and their partitions.This check checks if the topics have any partitions at all.\n\n## Lego Details\n\tkafka_topic_partition_health_check(handle)\n\t\thandle: Object of type unSkript KAFKA Connector.\n\n\n## Lego Input\nThis Lego takes inputs handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kafka/legos/kafka_topic_partition_health_check/__init__.py",
    "content": ""
  },
  {
    "path": "Kafka/legos/kafka_topic_partition_health_check/kafka_topic_partition_health_check.json",
    "content": "{\n  \"action_title\": \"Get Kafka topic partition health\",\n  \"action_description\": \"Checks the health of the Kafka topics and their partitions.This check checks if the topics have any partitions at all.\",\n  \"action_type\": \"LEGO_TYPE_KAFKA\",\n  \"action_entry_function\": \"kafka_topic_partition_health_check\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Kafka/legos/kafka_topic_partition_health_check/kafka_topic_partition_health_check.py",
    "content": "from __future__ import annotations\n\n##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Tuple\n\nfrom kafka import KafkaConsumer\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef kafka_topic_partition_health_check_printer(output):\n    status, issues = output\n\n    if status:\n        print(\"All topics and partitions are healthy!\")\n    else:\n        print(\"Issues detected with topics or partitions:\\n\")\n        for issue in issues:\n            print(f\"Issue Type: {issue['issue_type']}\")\n            print(f\"Description: {issue['description']}\\n\")\n\n\ndef kafka_topic_partition_health_check(handle) -> Tuple:\n    \"\"\"\n    Checks the health of the Kafka topics and their partitions. This check verifies that each topic has at least one partition.\n\n    :type handle: KafkaProducer\n    :param handle: Handle containing the KafkaProducer instance.\n\n    :rtype: Tuple containing a status and an optional list of issues with topics and their partitions.\n    \"\"\"\n\n    issues = []\n\n    # Using KafkaConsumer to get topic details\n    consumer = KafkaConsumer(bootstrap_servers=handle.config['bootstrap_servers'])\n    for topic in consumer.topics():\n        partitions = consumer.partitions_for_topic(topic)\n        if not partitions:\n            issues.append({\n                'issue_type': f'Topic: {topic}',\n                'description': 'No partitions available.'\n            })\n\n    consumer.close()\n\n    if issues:\n        return (False, issues)\n    return (True, None)\n"
  },
  {
    "path": "Keycloak/__init__.py",
    "content": ""
  },
  {
    "path": "Keycloak/legos/keycloak_get_audit_report/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get Keycloak audit report</h1>\n\n## Description\nFetches the audit events from Keycloak.\n\n## Lego Details\n\tkeycloak_get_audit_report(handle)\n\t\thandle: Handle object containing the KeycloakAdmin instance.\n\n\n## Lego Input\nThis Lego takes one input: handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Keycloak/legos/keycloak_get_audit_report/__init__.py",
    "content": ""
  },
  {
    "path": "Keycloak/legos/keycloak_get_audit_report/keycloak_get_audit_report.json",
    "content": "{\n    \"action_title\": \"Get Keycloak audit report\",\n    \"action_description\": \"Fetches the audit events from Keycloak.\",\n    \"action_type\": \"LEGO_TYPE_KEYCLOAK\",\n    \"action_entry_function\": \"keycloak_get_audit_report\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_is_check\": false,\n    \"action_supports_iteration\": true,\n    \"action_supports_poll\": true,\n    \"action_categories\":[\"CATEGORY_TYPE_INFORMATION\",\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_KEYCLOAK\"]\n  }\n"
  },
  {
    "path": "Keycloak/legos/keycloak_get_audit_report/keycloak_get_audit_report.py",
    "content": "#\n# Copyright (c) 2024 unSkript.com\n# All rights reserved.\n#\nimport requests\nimport os\n\nfrom tabulate import tabulate\nfrom pydantic import BaseModel\nfrom datetime import datetime\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef keycloak_get_audit_report_printer(output):\n    if not output:\n        print(\"No audit events found.\")\n        return\n\n    # Extract relevant event data for tabulation. Admin events keep these\n    # fields under 'authDetails', login events keep them at the top level.\n    table_data = [[\"Time\", \"Type\", \"User ID\", \"Client ID\", \"IP Address\", \"Error\"]]\n    for event in output:\n        time = event.get('time', 0)\n        auth_details = event.get('authDetails') or {}\n        _type = event.get('type') or event.get('operationType') or ''\n        user_id = event.get('userId') or auth_details.get('userId', '')\n        client_id = event.get('clientId') or auth_details.get('clientId', '')\n        ip_addr = event.get('ipAddress') or auth_details.get('ipAddress', '')\n        error = event.get('error', '')\n\n        table_data.append([datetime.fromtimestamp(time/1000).strftime('%Y-%m-%d %H:%M:%S') if time else '',\n                           _type,\n                           user_id,\n                           client_id,\n                           ip_addr,\n                           error])\n\n    print(tabulate(table_data, headers='firstrow', tablefmt=\"grid\"))\n\n\ndef keycloak_get_audit_report(handle):\n    \"\"\"\n    keycloak_get_audit_report fetches the audit events from Keycloak.\n\n    :type handle: KeycloakAdmin\n    :param handle: Handle containing the KeycloakAdmin instance.\n\n    :rtype: List of dictionaries representing the audit events.\n    \"\"\"\n    try:\n        # Exception could occur if keycloak package was not found\n        # in such case try if we can import UnskriptKeycloakWrapper\n        from unskript.connectors.keycloak import UnskriptKeycloakWrapper\n        from unskript.legos.utils import get_keycloak_token\n\n        if not isinstance(handle, UnskriptKeycloakWrapper):\n            raise ValueError(\"Unable to Find Keycloak Package!\")\n        access_token = get_keycloak_token(handle)\n        events_url = os.path.join(handle.server_url, f\"admin/realms/{handle.realm_name}/events\")\n        headers = {\"Authorization\": f\"Bearer {access_token}\"}\n\n        response = requests.get(events_url, headers=headers)\n        response.raise_for_status()\n\n        events = response.json()\n\n        return events if events else []\n    except Exception as e:\n        print(f\"ERROR: Unable to connect to keycloak server {str(e)}\")\n        return []\n"
  },
  {
    "path": "Keycloak/legos/keycloak_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Get Keycloak Handle</h2>\n\n<br>\n\n## Description\nThis Lego gets the Keycloak handle.\n\n\n## Lego Details\n\n    keycloak_get_handle(handle: object)\n\n        handle: Object of type unSkript Keycloak Connector\n\n## Lego Input\nThis Lego takes one input: handle.\n\n## Lego Output\nHere is a sample output.\n\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Keycloak/legos/keycloak_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Keycloak/legos/keycloak_get_handle/keycloak_get_handle.json",
    "content": "{\n    \"action_title\": \"Keycloak Get Handle\",\n    \"action_description\": \"Get Keycloak Handle\",\n    \"action_type\": \"LEGO_TYPE_KEYCLOAK\",\n    \"action_entry_function\": \"keycloak_get_handle\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": false,\n    \"action_supports_iteration\": false,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\"\n}\n    "
  },
  {
    "path": "Keycloak/legos/keycloak_get_handle/keycloak_get_handle.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef keycloak_get_handle(handle):\n    \"\"\"keycloak_get_handle returns the Keycloak handle.\n\n          :type handle: object\n          :param handle: Object of the unSkript Keycloak Connector.\n\n          :rtype: Keycloak handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Keycloak/legos/keycloak_get_service_health/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get Keycloak service health</h1>\n\n## Description\nFetches the health of the Keycloak service by trying to list available realms.\n\n## Lego Details\n\tkeycloak_get_service_health(handle)\n\t\thandle: Handle containing the KeycloakAdmin instance.\n\n\n## Lego Input\nThis Lego takes one input: handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Keycloak/legos/keycloak_get_service_health/__init__.py",
    "content": ""
  },
  {
    "path": "Keycloak/legos/keycloak_get_service_health/keycloak_get_service_health.json",
    "content": "{\n    \"action_title\": \"Get Keycloak service health\",\n    \"action_description\": \"Fetches the health of the Keycloak service by trying to list available realms.\",\n    \"action_type\": \"LEGO_TYPE_KEYCLOAK\",\n    \"action_entry_function\": \"keycloak_get_service_health\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_is_check\": true,\n    \"action_next_hop\": [\n      \"\"\n    ],\n    \"action_next_hop_parameter_mapping\": {},\n    \"action_supports_iteration\": true,\n    \"action_supports_poll\": true,\n    \"action_categories\":[\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_KEYCLOAK\"]\n  }"
  },
  {
    "path": "Keycloak/legos/keycloak_get_service_health/keycloak_get_service_health.py",
    "content": "#\n# Copyright (c) 2024 unSkript.com\n# All rights reserved.\n#\n\nimport requests\nimport os\n\nfrom typing import Tuple\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\ndef keycloak_get_service_health(handle) -> Tuple:\n    \"\"\"\n    keycloak_get_service_health fetches the health of the Keycloak service by trying to list available realms.\n\n    :type handle: object\n    :param handle: Handle containing the KeycloakAdmin instance.\n\n    :rtype: Tuple indicating if the service is healthy and a list of available realms (or None if healthy).\n    \"\"\"\n    try:\n        from unskript.connectors.keycloak import UnskriptKeycloakWrapper\n        from unskript.legos.utils import get_keycloak_token\n\n        if not isinstance(handle, UnskriptKeycloakWrapper):\n            raise ValueError(\"Unable to Find Keycloak Package!\")\n\n        access_token = get_keycloak_token(handle)\n        realms_url = os.path.join(handle.server_url, \"admin/realms\")\n        headers = {\"Authorization\": f\"Bearer {access_token}\"}\n        response = requests.get(realms_url, headers=headers)\n        response.raise_for_status()\n\n        available_realms = response.json()\n        result = False\n        if handle.realm_name and available_realms:\n            result = any(realm.get(\"realm\") == handle.realm_name for realm in available_realms)\n\n        if not result:\n            return (False, available_realms)\n\n        return (True, None)\n    except Exception as e:\n        print(f\"ERROR: Unable to connect to keycloak server {str(e)}\")\n        return (False, None)\n\n\ndef keycloak_get_service_health_printer(output):\n    if output is None:\n        return\n    is_healthy, realms = output\n\n    if is_healthy:\n        print(\"Keycloak Service is Healthy.\")\n    else:\n        print(\"Keycloak Service is Unhealthy.\")\n        if realms:\n            print(\"\\nConfigured realm not found. Available realms:\")\n            for realm in realms:\n                print(f\"  - {realm.get('realm')}\")\n"
  },
  {
    "path": "Kubernetes/Delete_Evicted_Pods_From_Namespaces.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ed972c43-e797-4fe7-8e90-386d7af0b950\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"/> \\n\",\n    \"<h1> unSkript Runbooks </h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"    <h3> Objective</h3> <br>\\n\",\n    \"    <b style = \\\"color:#000000\\\"><i>To delete evicted pods from namespaces using unSkript actions</i></b>\\n\",\n    \"</div>\\n\",\n    \"</center>\\n\",\n    \"<br>\\n\",\n    \"\\n\",\n    \"<center><h2>Delete Evicted Pods From Namespaces</h2></center>\\n\",\n    \"\\n\",\n    \"# Steps Overview\\n\",\n    \"1)[ Show All Evicted Pods From Namespace.](#1)<br>\\n\",\n    \"2)[ Delete Evicted Pods From Namespace.](#2)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a7da9ae0-4a59-4ff7-913c-9832eebecbc4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Gather Information\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Gather Information\"\n   },\n   \"source\": [\n    \"### Command Selection\\n\",\n    \"In this action, we select the command based on the user input parameter. 
If the user provides the namespace input, then it only collects pods for the given namespace; otherwise, it will select all pods from all the namespaces.\\n\",\n    \"\\n\",\n    \">Output variable: `command`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 37,\n   \"id\": \"3f65101e-440d-4503-a51f-c372b7efe683\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-23T13:05:15.203Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Command Selection\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Command Selection\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"command = \\\"kubectl get pods --all-namespaces -o json | grep Evicted\\\"\\n\",\n    \"if namespace:\\n\",\n    \"    command = \\\"kubectl get pods -n \\\" + namespace + \\\" -o json | grep Evicted\\\"\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a1c24634-7d34-4524-b76b-35209d458c62\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"### <a id=\\\"1\\\">Show All Evicted Pods From All Namespaces</a>\\n\",\n    \"In this action, we will show the evicted pods from all namespaces by using command \\\"kubectl get pods --all-namespaces | grep Evicted\\\"\\n\",\n    \">Input parameters: `kubectl_command`\\n\",\n    \"\\n\",\n    \">Output variable: `Pods_Details`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 44,\n   \"id\": \"a3ed8798-a7e8-407a-9d7a-e25e971302e8\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    
\"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-23T13:08:59.665Z\"\n    },\n    \"id\": 42,\n    \"index\": 42,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"command\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Show All Evicted Pods From Namespace\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"pods_details\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Show All Evicted Pods From Namespace\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def 
k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The actual kubectl command, like kubectl get ns, etc.\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    # Treat a missing result or any stderr output as an error\\n\",\n    \"    if result is None or getattr(result, \\\"stderr\\\", None):\\n\",\n    \"        err = result.stderr if result is not None else \\\"no result\\\"\\n\",\n    \"        print(f\\\"Error while executing command ({kubectl_command}): {err}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"command\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"pods_details\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"4e8e74fd-026f-4ea6-b5b4-cc1b18f84e64\",\n   \"metadata\": {\n    \"jupyter\": {\n   
  \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"### Command Selection\\n\",\n    \"In this action, we select the command based on the user input parameter. If the user provides the namespace input, then it only collects pods for the given namespace; otherwise, it will select all pods from all the namespaces.\\n\",\n    \"\\n\",\n    \">Output variable: `delete_command`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 43,\n   \"id\": \"f045cda4-ddb7-4353-8448-c1d8344c2d49\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-23T13:07:43.863Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Command Selection\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Command Selection\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import json\\n\",\n    \"\\n\",\n    \"pods_data = []\\n\",\n    \"if pods_details:\\n\",\n    \"    pod_details = json.loads(pods_details)\\n\",\n    \"    for k, v in pod_details.items():\\n\",\n    \"        if \\\"items\\\" in k:\\n\",\n    \"            for i in v:\\n\",\n    \"                pod_dict = {}\\n\",\n    \"                pod_dict[\\\"pod_name\\\"] = i[\\\"metadata\\\"][\\\"name\\\"]\\n\",\n    \"                pod_dict[\\\"namespace\\\"] = i[\\\"metadata\\\"][\\\"namespace\\\"]\\n\",\n    \"                pods_data.append(pod_dict)\\n\",\n    \"\\n\",\n    \"delete_command = []\\n\",\n    \"for pod in pods_data:\\n\",\n    \"    command = \\\"kubectl delete pod \\\" + pod[\\\"pod_name\\\"] + \\\" -n \\\" + pod[\\\"namespace\\\"]\\n\",\n    \"    delete_command.append(command)\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": 
\"8310152a-97fd-4920-afce-f70dbdf28991\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"### <a id=\\\"2\\\">Delete Evicted Pods From All Namespaces</a>\\n\",\n    \"In this action, we will delete the evicted pods from all namespaces by using command \\\"kubectl get pods --all-namespaces | grep Evicted | awk '{print $2 \\\" --namespace=\\\" $1}' | xargs kubectl delete pod\\\"\\n\",\n    \">Input parameters: `kubectl_command`\\n\",\n    \"\\n\",\n    \">Output variable: `Delete_Status`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 32,\n   \"id\": \"4db54c54-5e81-47f3-a857-50e083f9cb00\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-12-23T12:53:13.393Z\"\n    },\n    \"id\": 42,\n    \"index\": 42,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": 
\"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"kubectl_command\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"delete_command\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Delete Evicted Pods From Namespace\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"delete_status\",\n     \"output_name_enabled\": true\n    },\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Delete Evicted Pods From Namespace\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"  
  if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    # Treat a missing result or any stderr output as an error\\n\",\n    \"    if result is None or getattr(result, \\\"stderr\\\", None):\\n\",\n    \"        err = result.stderr if result is not None else \\\"no result\\\"\\n\",\n    \"        print(f\\\"Error while executing command ({kubectl_command}): {err}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"delete_command\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"kubectl_command\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"delete_status\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"3b09724f-c9f8-4399-a3d1-aaf8d4866911\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"In this Runbook, we demonstrated the use of unSkript's Kubernetes actions to list all the evicted pods from all namespaces and delete them. 
To view the full platform capabilities of unSkript please visit https://us.app.unskript.io\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"k8s: Delete Evicted Pods From All Namespaces\",\n   \"parameters\": [\n    \"namespace\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"namespace\": {\n     \"description\": \"Name of the namespace to delete evicted pods from. If not provided, the runbook executes for all namespaces.\",\n     \"title\": \"namespace\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null,\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "Kubernetes/Delete_Evicted_Pods_From_Namespaces.json",
    "content": "{\r\n  \"name\": \"k8s: Delete Evicted Pods From All Namespaces\",\r\n  \"description\": \"This runbook shows and deletes the evicted pods for a given namespace. If the user provides the namespace input, then it only collects pods for the given namespace; otherwise, it will select all pods from all the namespaces.\", \r\n  \"uuid\": \"a9b8a0c8ecdb5ef76f01e81689319f16095d6136620a4c7f78d57e81ba9a3ba0\", \r\n  \"icon\": \"CONNECTOR_TYPE_K8S\",\r\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\r\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\r\n  \"version\": \"1.0.0\"\r\n}\r\n\r\n"
  },
  {
    "path": "Kubernetes/Get_Kube_System_Config_Map.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e233b43f-cf30-432e-9b66-45a5f906de74\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Fetch the Kubernetes system ConfigMap</em></strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Get-Kube-System-Config-Map\\\"><u>Get Kube System Config Map</u><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-Kube-System-Config-Map\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<p>1)&nbsp;<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Get system config map</a><br>2)&nbsp;<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Post slack message</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"6cc5313b-913a-4f14-89e2-dccc51129889\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1A\"\n   },\n   \"source\": [\n    \"<h3 
id=\\\"Get-List-of-Pods-in-ImagePullBackOff-State&para;\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Convert namespace to String if empty<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-List-of-Pods-in-CrashLoopBackOff-State\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-List-of-Pods-in-ImagePullBackOff-State&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This custom action changes the type of namespace and config_map_name from None to String if no namespace is given</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"id\": \"a10ef57b-c2d3-4b48-b7e5-de180785881a\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-12T07:17:30.635Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Convert namespace to String if empty\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Convert namespace to String if empty\",\n    \"trusted\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if namespace == None:\\n\",\n    \"    namespace = ''\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"521a1162-8c06-45eb-b938-32c7cc52be38\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-IAM-Users\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get system config map</h3>\\n\",\n    \"<p>This action gets the ConfigMap object for a given namespace or config map name. 
If neither is specified, namespace is considered to be \\\"all\\\".</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Action takes the following parameters (Optional) : <code>namespace, config_map_name</code><br>Action gives the following output (Optional) : <code>config_map_details</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"f1140b24-7721-4808-a8da-12e59bd34b27\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"0c59f81ba7694bc31e1a0e856340ce9545d4d4a3562d2c61659500950751b16a\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get k8s kube system config map\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-12T07:18:20.599Z\"\n    },\n    \"id\": 56,\n    \"index\": 56,\n    \"inputData\": [\n     {\n      \"config_map_name\": {\n       \"constant\": false,\n       \"value\": \"\"\n      },\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \" namespace\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"config_map_name\": {\n        \"description\": \"Kubernetes Config Map Name\",\n        \"title\": \"Config Map\",\n        \"type\": \"string\"\n       },\n       \"namespace\": {\n        \"description\": \"Kubernetes namespace\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       }\n      },\n      
\"title\": \"k8s_get_config_map_kube_system\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get k8s kube system config map\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"namespace\",\n     \"config_map_name\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"config_map_details\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"probeEnabled\": false,\n    \"tags\": [\n     \"k8s_get_config_map_kube_system\"\n    ],\n    \"trusted\": true,\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from typing import Optional, List, Tuple\\n\",\n    \"from kubernetes import client\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"from unskript.legos.kubernetes.k8s_kubectl_command.k8s_kubectl_command import k8s_kubectl_command\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_get_config_map_kube_system_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    for x in output:\\n\",\n    \"        for k,v in x.items():\\n\",\n    \"            if k=='details':\\n\",\n    \"                for config in v:\\n\",\n    \"                    data_set_1 = []\\n\",\n    \"                    data_set_1.append(\\\"Name:\\\")\\n\",\n    \"                    data_set_1.append(config.metadata.name)\\n\",\n    \"\\n\",\n    \"                    data_set_2 = []\\n\",\n    \"                    data_set_2.append(\\\"Namespace:\\\")\\n\",\n    \"                    data_set_2.append(config.metadata.namespace)\\n\",\n    \"\\n\",\n    \"                    
data_set_3 = []\\n\",\n    \"                    data_set_3.append(\\\"Labels:\\\")\\n\",\n    \"                    data_set_3.append(config.metadata.labels)\\n\",\n    \"\\n\",\n    \"                    data_set_4 = []\\n\",\n    \"                    data_set_4.append(\\\"Annotations:\\\")\\n\",\n    \"                    data_set_4.append(config.metadata.annotations)\\n\",\n    \"\\n\",\n    \"                    data_set_5 = []\\n\",\n    \"                    data_set_5.append(\\\"Data:\\\")\\n\",\n    \"                    data_set_5.append(config.data)\\n\",\n    \"\\n\",\n    \"                    tabular_config_map = []\\n\",\n    \"                    tabular_config_map.append(data_set_1)\\n\",\n    \"                    tabular_config_map.append(data_set_2)\\n\",\n    \"                    tabular_config_map.append(data_set_3)\\n\",\n    \"                    tabular_config_map.append(data_set_4)\\n\",\n    \"                    tabular_config_map.append(data_set_5)\\n\",\n    \"\\n\",\n    \"                    print(tabulate(tabular_config_map, tablefmt=\\\"github\\\"))\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_get_config_map_kube_system(handle, config_map_name: str = '', namespace: Optional[str] = None) -> List:\\n\",\n    \"    \\\"\\\"\\\"k8s_get_config_map_kube_system get kube system config map\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type config_map_name: str\\n\",\n    \"        :param config_map_name: Kubernetes Config Map Name.\\n\",\n    \"\\n\",\n    \"        :type namespace: str\\n\",\n    \"        :param namespace: Kubernetes namespace.\\n\",\n    \"\\n\",\n    \"        :rtype: List of system kube config maps for a given namespace\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    all_namespaces = [namespace]\\n\",\n    \"    cmd = f\\\"kubectl get ns  --no-headers -o 
custom-columns=':metadata.name'\\\"\\n\",\n    \"    if namespace is None or len(namespace)==0:\\n\",\n    \"        kubernetes_namespaces = k8s_kubectl_command(handle=handle,kubectl_command=cmd )\\n\",\n    \"        replaced_str = kubernetes_namespaces.replace(\\\"\\\\n\\\",\\\" \\\")\\n\",\n    \"        stripped_str = replaced_str.strip()\\n\",\n    \"        all_namespaces = stripped_str.split(\\\" \\\")\\n\",\n    \"    result = []\\n\",\n    \"    coreApiClient = client.CoreV1Api(api_client=handle)\\n\",\n    \"    for n in all_namespaces:\\n\",\n    \"        config_map_dict = {}\\n\",\n    \"        res = coreApiClient.list_namespaced_config_map(\\n\",\n    \"            namespace=n, pretty=True)\\n\",\n    \"        if len(res.items) > 0:\\n\",\n    \"            if config_map_name:\\n\",\n    \"                config_maps = list(\\n\",\n    \"                    filter(lambda x: (x.metadata.name == config_map_name), res.items))\\n\",\n    \"            else:\\n\",\n    \"                config_maps = res.items\\n\",\n    \"            config_map_dict[\\\"namespace\\\"] = n\\n\",\n    \"            config_map_dict[\\\"details\\\"] = config_maps\\n\",\n    \"            result.append(config_map_dict)\\n\",\n    \"    return result\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"namespace\\\": \\\" namespace\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"config_map_details\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_get_config_map_kube_system, lego_printer=k8s_get_config_map_kube_system_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0a404d88-fa37-4716-86cd-fa6d82c298d2\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    
\"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-IAM-Users\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Post a slack message</h3>\\n\",\n    \"<p>This action posts a slack message of the config map retrieved in Step 1. This action will only run if the channel_name is specified in the parameters.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Action takes the following parameters (Optional) : <code>channel_name, message</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"d616410e-1051-463c-a1ae-4b7d1162e823\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"id\": 78,\n    \"index\": 78,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"channel_name\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"f\\\"Config map for namespace:{namespace}: {config_map_details}\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of slack channel.\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message for slack channel.\",\n        \"title\": \"Message\",\n        \"type\": 
\"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"if len(channel_name)!=0\",\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message_printer(output):\\n\",\n    \"    if output is not None:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"    else:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> str:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return f\\\"Successfuly Sent Message on Channel: #{channel}\\\"\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack 
channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        if e.response['error'] == 'channel_not_found':\\n\",\n    \"            raise Exception('Channel Not Found')\\n\",\n    \"        elif e.response['error'] == 'duplicate_channel_not_found':\\n\",\n    \"            raise Exception('Channel associated with the message_id not valid')\\n\",\n    \"        elif e.response['error'] == 'not_in_channel':\\n\",\n    \"            raise Exception('Cannot post message to channel user is not in')\\n\",\n    \"        elif e.response['error'] == 'is_archived':\\n\",\n    \"            raise Exception('Channel has been archived')\\n\",\n    \"        elif e.response['error'] == 'msg_too_long':\\n\",\n    \"            raise Exception('Message text is too long')\\n\",\n    \"        elif e.response['error'] == 'no_text':\\n\",\n    \"            raise Exception('Message text was not provided')\\n\",\n    \"        elif e.response['error'] == 'restricted_action':\\n\",\n    \"            raise Exception('Workspace preference prevents user from posting')\\n\",\n    \"        elif e.response['error'] == 'restricted_action_read_only_channel':\\n\",\n    \"            raise Exception('Cannot Post message, read-only channel')\\n\",\n    \"        elif e.response['error'] == 'team_access_not_granted':\\n\",\n    \"            raise Exception('The token used is not granted access to the workspace')\\n\",\n    \"        elif e.response['error'] == 'not_authed':\\n\",\n    \"            raise Exception('No Authentication token provided')\\n\",\n    \"        elif e.response['error'] == 'invalid_auth':\\n\",\n    \"            raise Exception('Some aspect of Authentication cannot be validated. 
Request denied')\\n\",\n    \"        elif e.response['error'] == 'access_denied':\\n\",\n    \"            raise Exception('Access to a resource specified in the request denied')\\n\",\n    \"        elif e.response['error'] == 'account_inactive':\\n\",\n    \"            raise Exception('Authentication token is for a deleted user')\\n\",\n    \"        elif e.response['error'] == 'token_revoked':\\n\",\n    \"            raise Exception('Authentication token for a deleted user has been revoked')\\n\",\n    \"        elif e.response['error'] == 'no_permission':\\n\",\n    \"            raise Exception('The workspace token used does not have necessary permission to send message')\\n\",\n    \"        elif e.response['error'] == 'ratelimited':\\n\",\n    \"            raise Exception('The request has been ratelimited. Retry sending message later')\\n\",\n    \"        elif e.response['error'] == 'service_unavailable':\\n\",\n    \"            raise Exception('The service is temporarily unavailable')\\n\",\n    \"        elif e.response['error'] == 'fatal_error':\\n\",\n    \"            raise Exception('The server encountered a catastrophic error while sending message')\\n\",\n    \"        elif e.response['error'] == 'internal_error':\\n\",\n    \"            raise Exception('The server could not complete operation, likely due to a transient issue')\\n\",\n    \"        elif e.response['error'] == 'request_timeout':\\n\",\n    \"            raise Exception('Sending message error via POST: either message was missing or truncated')\\n\",\n    \"        else:\\n\",\n    \"            raise Exception(f'Failed Sending Message to slack channel {channel} Error: {e.response[\\\"error\\\"]}')\\n\",\n    \"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return f\\\"Unable to send 
message on {channel}\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"channel_name\\\",\\n\",\n    \"    \\\"message\\\": \\\"f\\\\\\\\\\\"Config map for namespace:{namespace}: {config_map_details}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"if len(channel_name)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(slack_post_message, lego_printer=slack_post_message_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b92d99e6-2735-4d4c-b20e-358ee36e6243\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to get the Kube ConfigMap and post a Slack message with the ConfigMap details. 
To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"parameters\": [\n    \"channel_name\",\n    \"namespace\"\n   ],\n   \"runbook_name\": \"k8s: Get kube system config map\"\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.9.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"channel_name\": {\n     \"description\": \"Slack channel to post the details to. Eg: \\\"general\\\"\",\n     \"title\": \"channel_name\",\n     \"type\": \"string\"\n    },\n    \"namespace\": {\n     \"description\": \"Name of the namespace to fetch system config map. If left empty, it will fetch for all. Eg: \\\"logging\\\"\",\n     \"title\": \"namespace\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null,\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "Kubernetes/Get_Kube_System_Config_Map.json",
    "content": "{\n  \"name\": \"k8s: Get kube system config map\",\n  \"description\": \"This runbook fetches the kube system config map for a k8s cluster and publishes the information on a Slack channel.\", \n  \"uuid\": \"3fd89891a2b968e4422632e121c72ece82ef51b09822df7fcf734e9a14ed9e5c\", \n  \"icon\": \"CONNECTOR_TYPE_K8S\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "Kubernetes/K8S_Delete_Pods_From_Failing_Jobs.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ed972c43-e797-4fe7-8e90-386d7af0b950\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"-unSkript-Runbooks-\\\">unSkript Runbooks&nbsp;</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"-Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>To identify and delete failing Kubernetes pods from jobs to mitigate IP exhaustion issues in the cluster.</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Delete-Evicted-Pods-From-Namespaces\\\">IP Exhaustion Mitigation: Failing K8s Pod Deletion from Jobs</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Get failing pods from all jobs.</a><br>2)<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"> Delete the pod&nbsp;</a></p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"477a0f30-b116-4170-8219-0de2637e539d\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-07-04T16:28:49.945Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Input Verification\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Input Verification\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if namespace is None:\\n\",\n    \"    namespace = 
''\\n\",\n    \"if pod_names and not namespace:\\n\",\n    \"    raise SystemExit(\\\"Provide a namespace for the Kubernetes pods!\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a1c24634-7d34-4524-b76b-35209d458c62\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Show-All-Evicted-Pods-From-All-Namespaces\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get failing Pods From all jobs</h3>\\n\",\n    \"<p>If a job doesn&rsquo;t exit cleanly (whether it finished successfully or not) the pod is left in a terminated or errored state. After some rounds of runs, these extra pods can quickly exhaust iptables&rsquo; available IP addresses in the cluster. This action fetches all the pods that are not in the running state from a scheduled job.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>namespace (Optional)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output variable: <code>unhealthy_pods</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"4450bc10-5a2b-4985-8d33-92d19c2f1acf\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_CLOUDOPS\",\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_TROUBLESHOOTING\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_K8S\",\n     \"CATEGORY_TYPE_K8S_POD\"\n    ],\n    \"actionDescription\": \"Get Kubernetes Error PODs from All Jobs\",\n    \"actionEntryFunction\": \"k8s_get_error_pods_from_all_jobs\",\n    \"actionIsCheck\": true,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionNouns\": null,\n    
\"actionOutputType\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Get Kubernetes Error PODs from All Jobs\",\n    \"actionType\": \"LEGO_TYPE_K8S\",\n    \"actionVerbs\": null,\n    \"actionVersion\": \"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"d7a1da167d056a912739fce8c4571c6863050f52d6e19495971277057e709857\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"credentialsJson\": {},\n    \"description\": \"Get Kubernetes Error PODs from All Jobs\",\n    \"id\": 2,\n    \"index\": 2,\n    \"inputData\": [\n     {\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \"namespace\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"namespace\": {\n        \"default\": \"\",\n        \"description\": \"k8s Namespace\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [],\n      \"title\": \"k8s_get_error_pods_from_all_jobs\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get Kubernetes Error PODs from All Jobs\",\n    \"orderProperties\": [\n     \"namespace\"\n    ],\n    \"outputParams\": {\n     \"output_name\": \"unhealthy_pods\",\n     \"output_name_enabled\": true,\n     \"output_runbook_enabled\": false,\n     \"output_runbook_name\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"not pod_names\",\n    \"tags\": [\n     \"k8s_get_error_pods_from_all_jobs\"\n    ],\n    \"title\": \"Get Kubernetes Error PODs from All Jobs\",\n    \"uuid\": \"d7a1da167d056a912739fce8c4571c6863050f52d6e19495971277057e709857\",\n    \"version\": \"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    
\"# Copyright (c) 2023 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"from typing import Tuple, Optional\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from kubernetes.client.rest import ApiException\\n\",\n    \"from kubernetes import client, watch\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_get_error_pods_from_all_jobs_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_get_error_pods_from_all_jobs(handle, namespace:str=\\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"k8s_get_error_pods_from_all_jobs This check function uses the handle's native command\\n\",\n    \"       method to execute a pre-defined kubectl command and returns the output of list of error pods\\n\",\n    \"       from all jobs.\\n\",\n    \"\\n\",\n    \"       :type handle: Object\\n\",\n    \"       :param handle: Object returned from the task.validate(...) 
function\\n\",\n    \"\\n\",\n    \"       :rtype: Tuple Result in tuple format.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    coreApiClient = client.CoreV1Api(api_client=handle)\\n\",\n    \"    BatchApiClient = client.BatchV1Api(api_client=handle)\\n\",\n    \"    # If namespace is provided, get jobs from the specified namespace\\n\",\n    \"    if namespace:\\n\",\n    \"        jobs = BatchApiClient.list_namespaced_job(namespace,watch=False, limit=200).items\\n\",\n    \"    # If namespace is not provided, get jobs from all namespaces\\n\",\n    \"    else:\\n\",\n    \"        jobs = BatchApiClient.list_job_for_all_namespaces(watch=False, limit=200).items\\n\",\n    \"\\n\",\n    \"    for job in jobs:\\n\",\n    \"        # Fetching all the pods associated with the current job\\n\",\n    \"        pods = coreApiClient.list_namespaced_pod(job.metadata.namespace, label_selector=f\\\"job-name={job.metadata.name}\\\",watch=False, limit=200).items\\n\",\n    \"\\n\",\n    \"        # Checking the status of each pod\\n\",\n    \"        for pod in pods:\\n\",\n    \"            # If the pod status is 'Failed', print its namespace and name\\n\",\n    \"            if pod.status.phase != \\\"Succeeded\\\":\\n\",\n    \"                result.append({\\\"namespace\\\":pod.metadata.namespace,\\\"pod_name\\\":pod.metadata.name})\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"namespace\\\": \\\"namespace\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not pod_names\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    
\"task.configure(outputName=\\\"unhealthy_pods\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_get_error_pods_from_all_jobs, lego_printer=k8s_get_error_pods_from_all_jobs_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9dbff6fe-9b36-4749-8cc5-000b70b7e87d\",\n   \"metadata\": {\n    \"name\": \"Step 1A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-list-of-errored-pods\\\">Create list of errored pods<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Create-list-of-errored-pods\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This action gets a list of all&nbsp; objects from the output of Step 1</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>all_uhealthy_pods</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"af12bab5-e503-4da9-a74d-dbb88c5f8298\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Create list of errored pods\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create list of errored pods\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_unhealthy_pods = []\\n\",\n    \"try:\\n\",\n    \"    if unhealthy_pods[0] == False:\\n\",\n    \"            if len(unhealthy_pods[1])!=0:\\n\",\n    \"                all_unhealthy_pods=unhealthy_pods[1]\\n\",\n    \"except Exception:\\n\",\n    \"    for po in pod_names:\\n\",\n    \"        data_dict 
= {}\\n\",\n    \"        data_dict[\\\"namespace\\\"] = namespace\\n\",\n    \"        data_dict[\\\"pod_name\\\"] = po\\n\",\n    \"        all_unhealthy_pods.append(data_dict)\\n\",\n    \"print(all_unhealthy_pods)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8310152a-97fd-4920-afce-f70dbdf28991\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Delete-Evicted-Pods-From-All-Namespaces\\\"><a id=\\\"2\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Delete the Pod</h3>\\n\",\n    \"<p>This action deletes the pods found in Step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Input parameters: <code>pod_name, namespace</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>Output paramerters:<span style=\\\"font-family: monospace;\\\"> None</span></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"d0170b6b-6d69-4dd8-9c8f-9128e252659f\",\n   \"metadata\": {\n    \"actionBashCommand\": false,\n    \"actionCategories\": [\n     \"CATEGORY_TYPE_DEVOPS\",\n     \"CATEGORY_TYPE_SRE\",\n     \"CATEGORY_TYPE_K8S\",\n     \"CATEGORY_TYPE_K8S_POD\"\n    ],\n    \"actionDescription\": \"Delete a Kubernetes POD in a given Namespace\",\n    \"actionEntryFunction\": \"k8s_delete_pod\",\n    \"actionIsCheck\": false,\n    \"actionIsRemediation\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": null,\n    \"actionNextHopParameterMapping\": null,\n    \"actionNouns\": null,\n    \"actionOutputType\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"actionTitle\": \"Delete a Kubernetes POD in a given Namespace\",\n    \"actionType\": \"LEGO_TYPE_K8S\",\n    \"actionVerbs\": null,\n    \"actionVersion\": 
\"1.0.0\",\n    \"action_modified\": false,\n    \"action_uuid\": \"9e1cc8076571d227dc6d1955fda400e9e29e2306b070d007b72692cfa2281407\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"credentialsJson\": {},\n    \"description\": \"Delete a Kubernetes POD in a given Namespace\",\n    \"id\": 2,\n    \"index\": 2,\n    \"inputData\": [\n     {\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"namespace\\\\\\\\\\\")\\\"\"\n      },\n      \"podname\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"pod_name\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"namespace\": {\n        \"description\": \"Kubernetes namespace\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       },\n       \"podname\": {\n        \"description\": \"K8S Pod Name\",\n        \"title\": \"Podname\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"namespace\",\n       \"podname\"\n      ],\n      \"title\": \"k8s_delete_pod\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"namespace\": \"namespace\",\n       \"podname\": \"pod_name\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"all_unhealthy_pods\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"language\": \"python\",\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Delete a Kubernetes POD in a given Namespace\",\n    \"orderProperties\": [\n     \"namespace\",\n     \"podname\"\n    ],\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_unhealthy_pods)!=0\",\n    \"tags\": [\n     \"k8s_delete_pod\"\n    ],\n    \"uuid\": \"9e1cc8076571d227dc6d1955fda400e9e29e2306b070d007b72692cfa2281407\",\n    \"version\": 
\"1.0.0\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"import pprint\\n\",\n    \"from typing import Dict\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from kubernetes import client\\n\",\n    \"from kubernetes.client.rest import ApiException\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_delete_pod_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_delete_pod(handle, namespace: str, podname: str):\\n\",\n    \"    \\\"\\\"\\\"k8s_delete_pod delete a Kubernetes POD in a given Namespace\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type namespace: str\\n\",\n    \"        :param namespace: Kubernetes namespace\\n\",\n    \"\\n\",\n    \"        :type podname: str\\n\",\n    \"        :param podname: K8S Pod Name\\n\",\n    \"\\n\",\n    \"        :rtype: Dict of POD info\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    coreApiClient = client.CoreV1Api(api_client=handle)\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        resp = coreApiClient.delete_namespaced_pod(\\n\",\n    \"            name=podname, namespace=namespace)\\n\",\n    \"    except ApiException as e:\\n\",\n    \"        resp = 'An Exception occurred while executing the command ' + e.reason\\n\",\n    \"        raise e\\n\",\n    \"    return resp\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"podname\\\": \\\"iter.get(\\\\\\\\\\\"pod_name\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"namespace\\\": 
\\\"iter.get(\\\\\\\\\\\"namespace\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_unhealthy_pods\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"podname\\\",\\\"namespace\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_unhealthy_pods)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_delete_pod, lego_printer=k8s_delete_pod_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"3b09724f-c9f8-4399-a3d1-aaf8d4866911\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<p>This runbook addressed the issue of failing Kubernetes pods in jobs that were leading to IP exhaustion. By following the steps outlined in this runbook, the failing pods were identified and deleted, preventing further IP exhaustion. Regular monitoring and proactive deletion of failing pods from jobs are crucial to maintaining the stability and availability of the Kubernetes cluster. Implementing this runbook as part of the operational processes will help ensure efficient resource utilization and minimize disruptions caused by IP exhaustion. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"IP Exhaustion Mitigation: Failing K8s Pod Deletion from Jobs\",\n   \"parameters\": [\n    \"namespace\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"namespace\": {\n     \"description\": \"Name of the K8s namespace. Default: all namespaces\",\n     \"title\": \"namespace\",\n     \"type\": \"string\"\n    },\n    \"pod_names\": {\n     \"description\": \"Pod names from a particular namespace to delete for failing jobs.\",\n     \"title\": \"pod_names\",\n     \"type\": \"array\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "Kubernetes/K8S_Delete_Pods_From_Failing_Jobs.json",
    "content": "{\n    \"name\": \"IP Exhaustion Mitigation: Failing K8s Pod Deletion from Jobs\",\n    \"description\": \"Preventing IP exhaustion is critical in Kubernetes environments, and a key strategy is deleting failing pods from jobs. Failing pods can consume valuable IP resources, leading to scarcity and inefficiency. By proactively identifying and removing malfunctioning pods, administrators can promptly free up IP addresses, optimizing resource utilization. This approach ensures that IP allocation remains efficient, enabling the cluster to accommodate new pods without experiencing IP exhaustion. This runbook helps us to identify failing pods within jobs thereby maximizing IP availability for other pods and services.\",  \n    \"uuid\": \"88e97c46ad944d2f0541cd1f87e3ec5b8a4619f6093e89b55cec53b2a47e45aa\",\n    \"icon\": \"CONNECTOR_TYPE_K8S\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n    \"version\": \"1.0.0\"\n  }  "
  },
  {
    "path": "Kubernetes/K8S_Deployment_with_multiple_restarts.ipynb",
    "content": "{\n    \"nbformat\": 4,\n    \"nbformat_minor\": 5,\n    \"metadata\": {\n        \"kernelspec\": {\n            \"name\": \"python_kubernetes\",\n            \"display_name\": \"unSkript (Build: 1267)\"\n        },\n        \"language_info\": {\n            \"name\": \"python\",\n            \"file_extension\": \".py\",\n            \"mimetype\": \"text/x-python\",\n            \"pygments_lexer\": \"ipython3\"\n        },\n        \"execution_data\": {\n            \"runbook_name\": \"k8s: Deployment with multiple restarts\",\n            \"parameters\": [\n                \"namespace\",\n                \"pod\",\n                \"app_label\",\n                \"container\",\n                \"deployment\"\n            ]\n        },\n        \"parameterSchema\": {\n            \"definitions\": null,\n            \"properties\": {\n                \"app_label\": {\n                    \"description\": \"k8s App Label\",\n                    \"title\": \"app_label\",\n                    \"type\": \"string\"\n                },\n                \"container\": {\n                    \"description\": \"k8s container\",\n                    \"title\": \"container\",\n                    \"type\": \"string\"\n                },\n                \"deployment\": {\n                    \"description\": \"Name of deployment with the restart issue\",\n                    \"title\": \"deployment\",\n                    \"type\": \"string\"\n                },\n                \"namespace\": {\n                    \"description\": \"k8s namespace with problematic deployment\",\n                    \"title\": \"namespace\",\n                    \"type\": \"string\"\n                },\n                \"pod\": {\n                    \"description\": \"k8s pod name\",\n                    \"title\": \"pod\",\n                    \"type\": \"string\"\n                }\n            },\n            \"required\": [\n                \"namespace\",\n                
\"app_label\",\n                \"pod\",\n                \"deployment\"\n            ],\n            \"title\": \"Schema\",\n            \"type\": \"object\"\n        },\n        \"outputParameterSchema\": {\n            \"definitions\": null,\n            \"properties\": {},\n            \"required\": [],\n            \"title\": \"Schema\",\n            \"type\": \"object\"\n        },\n        \"parameterValues\": {}\n    },\n    \"cells\": [\n        {\n            \"id\": \"cef235be-afe2-45d3-b2a5-291cbb45698a\",\n            \"cell_type\": \"markdown\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Debug Steps\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Debug Steps\"\n            },\n            \"source\": \"<ul>\\n<li>Look for pod level information</li>\\n<li>Look for deployment</li>\\n<li>Look for app label</li>\\n</ul>\",\n            \"execution_count\": null,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"0641bd51-bd57-4a33-8238-1b95094147c1\",\n            \"cell_type\": \"code\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\"\n                ],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n           
     \"action_modified\": false,\n                \"action_uuid\": \"8cd969c4db1d03d54d258e2c119e90aa914888abb4d5376b775ade8233bf3ae7\",\n                \"checkEnabled\": false,\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Execute kubectl command.\",\n                \"execution_data\": {},\n                \"id\": 76,\n                \"index\": 76,\n                \"inputData\": [\n                    {\n                        \"kubectl_command\": {\n                            \"constant\": false,\n                            \"value\": \"\\\"kubectl get deployment \\\" + deployment + \\\" -n \\\" + namespace + \\\" -o json\\\"\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"kubectl_command\": {\n                                \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"kubectl_command\"\n                        ],\n                        \"title\": \"k8s_kubectl_command\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl command -> kubectl get deployment\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"kubectl_command\"\n          
      ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": true,\n                \"tags\": [\n                    \"k8s_kubectl_command\"\n                ],\n                \"title\": \"Kubectl command -> kubectl get deployment\",\n                \"trusted\": true,\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2022 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type kubectl_command: str\\n\",\n                \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or Empty String\\n\",\n                \"        in case of Error.\\n\",\n                \"    
\\\"\\\"\\\"\\n\",\n                \"    if handle.client_side_validation is not True:\\n\",\n                \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n                \"        return str()\\n\",\n                \"\\n\",\n                \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n                \"\\n\",\n                \"    if result is None:\\n\",\n                \"        print(\\n\",\n                \"            f\\\"Error while executing command ({kubectl_command}) (empty response)\\\")\\n\",\n                \"        return \\\"\\\"\\n\",\n                \"\\n\",\n                \"    if result.stderr:\\n\",\n                \"        raise ApiException(f\\\"Error occurred while executing command {kubectl_command} {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"kubectl_command\\\": \\\"\\\\\\\\\\\"kubectl get deployment \\\\\\\\\\\" + deployment + \\\\\\\\\\\" -n \\\\\\\\\\\" + namespace + \\\\\\\\\\\" -o json\\\\\\\\\\\"\\\"\\n\",\n                \"    }''')\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 8,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"4a2a28b5-7f7c-4d1a-9fc0-9f5df8cb5a1b\",\n            \"cell_type\": \"code\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    
\"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\"\n                ],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": false,\n                \"action_uuid\": \"7a54aaf7808d98bce5132bc5b5224a084d63ca31921dc362f5b91fbc581cd0da\",\n                \"checkEnabled\": false,\n                \"collapsed\": true,\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Kubectl get logs for a given pod\",\n                \"execution_data\": {},\n                \"id\": 52,\n                \"index\": 52,\n                \"inputData\": [\n                    {\n                        \"k8s_cli_string\": {\n                            \"constant\": false,\n                            \"value\": \"\\\"kubectl logs {pod_name} -n {namespace}\\\"\"\n                        },\n                        \"namespace\": {\n                            \"constant\": false,\n                            \"value\": \"namespace\"\n                        },\n                        \"pod_name\": {\n                            \"constant\": false,\n                            \"value\": \"pod\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"k8s_cli_string\": {\n              
                  \"default\": \"\\\"kubectl logs {pod_name} -n {namespace}\\\"\",\n                                \"description\": \"kubectl get logs for a given pod\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            },\n                            \"namespace\": {\n                                \"description\": \"Namespace\",\n                                \"title\": \"Namespace\",\n                                \"type\": \"string\"\n                            },\n                            \"pod_name\": {\n                                \"description\": \"Pod Name\",\n                                \"title\": \"Pod Name\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"namespace\",\n                            \"pod_name\"\n                        ],\n                        \"title\": \"k8s_kubectl_get_logs\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"outputs_hidden\": true,\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl get logs\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"k8s_cli_string\",\n                    \"pod_name\",\n                    \"namespace\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": true,\n                \"tags\": [\n                    \"k8s_kubectl_get_logs\"\n                ],\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n           
     \"from pprint import pprint\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_get_logs_printer(data: str):\\n\",\n                \"    if data is None:\\n\",\n                \"        return\\n\",\n                \"\\n\",\n                \"    print(\\\"Logs:\\\")\\n\",\n                \"\\n\",\n                \"    pprint(data)\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_get_logs(handle, k8s_cli_string: str, pod_name: str, namespace:str) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_get_logs executes the given kubectl command\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type k8s_cli_string: str\\n\",\n                \"        :param k8s_cli_string: kubectl logs {pod_name} -n {namespace}.\\n\",\n                \"\\n\",\n                \"        :type pod_name: str\\n\",\n                \"        :param pod_name: Pod Name.\\n\",\n                \"\\n\",\n                \"        :type namespace: str\\n\",\n                \"        :param namespace: Namespace.\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or\\n\",\n                \"        Empty String in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    k8s_cli_string = k8s_cli_string.format(pod_name=pod_name, namespace=namespace)\\n\",\n                \"    result = handle.run_native_cmd(k8s_cli_string)\\n\",\n                \"    if result is None:\\n\",\n                \"        
print(\\n\",\n                \"            f\\\"Error while executing command ({k8s_cli_string}) (empty response)\\\")\\n\",\n                \"        return \\\"\\\"\\n\",\n                \"\\n\",\n                \"    if result.stderr:\\n\",\n                \"        raise ApiException(f\\\"Error occurred while executing command {k8s_cli_string} {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    data = result.stdout\\n\",\n                \"    return data\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"k8s_cli_string\\\": \\\"\\\\\\\\\\\"kubectl logs {pod_name} -n {namespace}\\\\\\\\\\\"\\\",\\n\",\n                \"    \\\"namespace\\\": \\\"namespace\\\",\\n\",\n                \"    \\\"pod_name\\\": \\\"pod\\\"\\n\",\n                \"    }''')\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_get_logs, lego_printer=k8s_kubectl_get_logs_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 35,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"fc768526-a848-4609-a173-7184706429e6\",\n            \"cell_type\": \"code\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\",\n                    \"CATEGORY_TYPE_K8S_POD\"\n                ],\n                \"actionIsCheck\": false,\n                
\"actionNeedsCredential\": true,\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": false,\n                \"action_uuid\": \"a303833b7287340b76e41b564a1427c4cd2131035d819c67c56cd1d6aff087c1\",\n                \"checkEnabled\": false,\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Kubectl describe a pod\",\n                \"execution_data\": {},\n                \"id\": 81,\n                \"index\": 81,\n                \"inputData\": [\n                    {\n                        \"k8s_cli_string\": {\n                            \"constant\": false,\n                            \"value\": \"\\\"kubectl describe pod {pod_name} -n {namespace}\\\"\"\n                        },\n                        \"namespace\": {\n                            \"constant\": false,\n                            \"value\": \"namespace\"\n                        },\n                        \"pod_name\": {\n                            \"constant\": false,\n                            \"value\": \"pod\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"k8s_cli_string\": {\n                                \"default\": \"kubectl describe pod {pod_name} -n {namespace}\",\n                                \"description\": \"kubectl describe a pod\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            },\n                     
       \"namespace\": {\n                                \"description\": \"Namespace\",\n                                \"title\": \"Namespace\",\n                                \"type\": \"string\"\n                            },\n                            \"pod_name\": {\n                                \"description\": \"Pod Name\",\n                                \"title\": \"Pod Name\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"namespace\",\n                            \"pod_name\"\n                        ],\n                        \"title\": \"k8s_kubectl_describe_pod\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl describe a pod\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"pod_name\",\n                    \"k8s_cli_string\",\n                    \"namespace\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": true,\n                \"tags\": [\n                    \"k8s_kubectl_describe_pod\"\n                ],\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n                \"from pprint import pprint\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def 
k8s_kubectl_describe_pod_printer(data: str):\\n\",\n                \"    if data is None:\\n\",\n                \"        return\\n\",\n                \"    print(\\\"Pod Details:\\\")\\n\",\n                \"    pprint(data)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_describe_pod(handle, pod_name: str, k8s_cli_string: str, namespace: str) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_describe_pod executes the given kubectl command\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type k8s_cli_string: str\\n\",\n                \"        :param k8s_cli_string: kubectl describe pod {pod_name} -n {namespace}.\\n\",\n                \"\\n\",\n                \"        :type pod_name: str\\n\",\n                \"        :param pod_name: Pod Name.\\n\",\n                \"\\n\",\n                \"        :type namespace: str\\n\",\n                \"        :param namespace: Namespace\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or\\n\",\n                \"        Empty String in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    k8s_cli_string = k8s_cli_string.format(\\n\",\n                \"        pod_name=pod_name, namespace=namespace)\\n\",\n                \"    result = handle.run_native_cmd(k8s_cli_string)\\n\",\n                \"    if result.stderr:\\n\",\n                \"        raise ApiException(\\n\",\n                \"            f\\\"Error occurred while executing command {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    data = result.stdout\\n\",\n                \"    return data\\n\",\n                \"\\n\",\n               
 \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"k8s_cli_string\\\": \\\"\\\\\\\\\\\"kubectl describe pod {pod_name} -n {namespace}\\\\\\\\\\\"\\\",\\n\",\n                \"    \\\"namespace\\\": \\\"namespace\\\",\\n\",\n                \"    \\\"pod_name\\\": \\\"pod\\\"\\n\",\n                \"    }''')\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_describe_pod, lego_printer=k8s_kubectl_describe_pod_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 36,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"3c37fa8b-4f6f-4a51-84ae-0829056c1d4a\",\n            \"cell_type\": \"code\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\"\n                ],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": false,\n                \"action_uuid\": \"8cd969c4db1d03d54d258e2c119e90aa914888abb4d5376b775ade8233bf3ae7\",\n                \"checkEnabled\": false,\n                \"continueOnError\": false,\n                \"createTime\": 
\"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Execute kubectl command.\",\n                \"execution_data\": {},\n                \"id\": 76,\n                \"index\": 76,\n                \"inputData\": [\n                    {\n                        \"kubectl_command\": {\n                            \"constant\": false,\n                            \"value\": \"\\\"kubectl get pods {} -n {}\\\".format(pod, namespace)\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"kubectl_command\": {\n                                \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"kubectl_command\"\n                        ],\n                        \"title\": \"k8s_kubectl_command\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl command ->  kubectl get pod {pod} -n {namespace}\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"kubectl_command\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": true,\n                \"tags\": [\n                    \"k8s_kubectl_command\"\n                ],\n                \"title\": \"Kubectl command 
->  kubectl get pod {pod} -n {namespace}\",\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2022 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type kubectl_command: str\\n\",\n                \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or Empty String\\n\",\n                \"        in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    if handle.client_side_validation is not True:\\n\",\n                \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n                \"        return str()\\n\",\n                \"\\n\",\n                \"    k8s_cli_string = 
kubectl_command.format()\\n\",\n                \"    result = handle.run_native_cmd(k8s_cli_string)\\n\",\n                \"\\n\",\n                \"    if result is None:\\n\",\n                \"        print(\\n\",\n                \"            f\\\"Error while executing command ({k8s_cli_string}) (empty response)\\\")\\n\",\n                \"        return \\\"\\\"\\n\",\n                \"\\n\",\n                \"    if result.stderr:\\n\",\n                \"        raise ApiException(f\\\"Error occurred while executing command {k8s_cli_string} {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"kubectl_command\\\": \\\"\\\\\\\\\\\"kubectl get pods {} -n {}\\\\\\\\\\\".format(pod, namespace)\\\"\\n\",\n                \"    }''')\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 37,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"a9291bbd-7dca-44f1-a8e8-1133d24ed8be\",\n            \"cell_type\": \"code\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\",\n                    \"CATEGORY_TYPE_K8S_POD\"\n            
    ],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": false,\n                \"action_uuid\": \"dbc4cc0c949d182204e79d2208aaa1df5a7928b387741d435b4a1605206309c7\",\n                \"checkEnabled\": false,\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Kubectl show metrics for a given pod\",\n                \"execution_data\": {},\n                \"id\": 44,\n                \"index\": 44,\n                \"inputData\": [\n                    {\n                        \"k8s_cli_string\": {\n                            \"constant\": false,\n                            \"value\": \"\\\"kubectl top pod {pod_name} -n {namespace}\\\"\"\n                        },\n                        \"namespace\": {\n                            \"constant\": false,\n                            \"value\": \"namespace\"\n                        },\n                        \"pod_name\": {\n                            \"constant\": false,\n                            \"value\": \"pod\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"k8s_cli_string\": {\n                                \"default\": \"kubectl top pod {pod_name} -n {namespace}\",\n                                \"description\": \"kubectl show metrics for a given pod\",\n                                \"title\": \"Kubectl Command\",\n                      
          \"type\": \"string\"\n                            },\n                            \"namespace\": {\n                                \"description\": \"Namespace\",\n                                \"title\": \"Namespace\",\n                                \"type\": \"string\"\n                            },\n                            \"pod_name\": {\n                                \"description\": \"Pod Name\",\n                                \"title\": \"Pod Name\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"namespace\",\n                            \"pod_name\"\n                        ],\n                        \"title\": \"k8s_kubectl_show_metrics_pod\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl show metrics -> kubectl top pods\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"k8s_cli_string\",\n                    \"pod_name\",\n                    \"namespace\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": true,\n                \"tags\": [\n                    \"k8s_kubectl_show_metrics_pod\"\n                ],\n                \"title\": \"Kubectl show metrics -> kubectl top pods\",\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"\\n\",\n               
 \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_show_metrics_pod_printer(data: str):\\n\",\n                \"    if data is None:\\n\",\n                \"        print(\\\"Error while executing command\\\")\\n\",\n                \"        return\\n\",\n                \"\\n\",\n                \"    print (data)\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_show_metrics_pod(\\n\",\n                \"        handle,\\n\",\n                \"        k8s_cli_string: str,\\n\",\n                \"        pod_name:str,\\n\",\n                \"        namespace:str\\n\",\n                \"        ) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_show_metrics_node executes the given kubectl command\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type k8s_cli_string: str\\n\",\n                \"        :param k8s_cli_string: kubectl top pod {pod_name} -n {namespace}.\\n\",\n                \"\\n\",\n                \"        :type pod_name: str\\n\",\n                \"        :param pod_name: Pod Name.\\n\",\n                \"\\n\",\n                \"        :type namespace: str\\n\",\n                \"        :param namespace: Namespace.\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or\\n\",\n                \"        Empty String in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    k8s_cli_string = k8s_cli_string.format(pod_name=pod_name, namespace=namespace)\\n\",\n                \"    result = handle.run_native_cmd(k8s_cli_string)\\n\",\n                \"    if result is None:\\n\",\n                \"        print(\\n\",\n                
\"            f\\\"Error while executing command ({k8s_cli_string}) (empty response)\\\")\\n\",\n                \"        return \\\"\\\"\\n\",\n                \"\\n\",\n                \"    if result.stderr:\\n\",\n                \"        raise ApiException(\\n\",\n                \"            f\\\"Error occurred while executing command {k8s_cli_string} {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"k8s_cli_string\\\": \\\"\\\\\\\\\\\"kubectl top pod {pod_name} -n {namespace}\\\\\\\\\\\"\\\",\\n\",\n                \"    \\\"namespace\\\": \\\"namespace\\\",\\n\",\n                \"    \\\"pod_name\\\": \\\"pod\\\"\\n\",\n                \"    }''')\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_show_metrics_pod, lego_printer=k8s_kubectl_show_metrics_pod_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 38,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"1f8e49cf-9b9e-4b05-a70a-b3e208638ff4\",\n            \"cell_type\": \"code\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\"\n                ],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                
\"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": false,\n                \"action_uuid\": \"8cd969c4db1d03d54d258e2c119e90aa914888abb4d5376b775ade8233bf3ae7\",\n                \"checkEnabled\": false,\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Execute kubectl command.\",\n                \"execution_data\": {},\n                \"id\": 76,\n                \"index\": 76,\n                \"inputData\": [\n                    {\n                        \"kubectl_command\": {\n                            \"constant\": false,\n                            \"value\": \"f\\\"kubectl get pods -l app={app_label} -n {namespace}\\\"\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"kubectl_command\": {\n                                \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"kubectl_command\"\n                        ],\n                        \"title\": \"k8s_kubectl_command\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": 
\"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl command -> kubectl get pods -l app={app_label} -n {namespace}\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"kubectl_command\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": true,\n                \"tags\": [\n                    \"k8s_kubectl_command\"\n                ],\n                \"title\": \"Kubectl command -> kubectl get pods -l app={app_label} -n {namespace}\",\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2022 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type kubectl_command: str\\n\",\n                \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, 
etc..\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or Empty String\\n\",\n                \"        in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    if handle.client_side_validation is not True:\\n\",\n                \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n                \"        return str()\\n\",\n                \"\\n\",\n                \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n                \"\\n\",\n                \"    if result is None:\\n\",\n                \"        print(\\n\",\n                \"            f\\\"Error while executing command ({kubectl_command}) (empty response)\\\")\\n\",\n                \"        return \\\"\\\"\\n\",\n                \"\\n\",\n                \"    if result.stderr:\\n\",\n                \"        raise ApiException(f\\\"Error occurred while executing command {kubectl_command} {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl get pods -l app={app_label} -n {namespace}\\\\\\\\\\\"\\\"\\n\",\n                \"    }''')\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 39,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"1276aa13-132a-43f9-ade3-0932803fd703\",\n            \"cell_type\": \"code\",\n            \"metadata\": 
{\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\"\n                ],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": false,\n                \"action_uuid\": \"8cd969c4db1d03d54d258e2c119e90aa914888abb4d5376b775ade8233bf3ae7\",\n                \"checkEnabled\": false,\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Execute kubectl command.\",\n                \"execution_data\": {},\n                \"id\": 76,\n                \"index\": 76,\n                \"inputData\": [\n                    {\n                        \"kubectl_command\": {\n                            \"constant\": false,\n                            \"value\": \"f\\\"kubectl top pods -l app={app_label} -n {namespace}\\\"\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"kubectl_command\": {\n                                \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n                                \"title\": \"Kubectl Command\",\n                                
\"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"kubectl_command\"\n                        ],\n                        \"title\": \"k8s_kubectl_command\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl command -> kubectl top pods -l app={app_label} -n {namespace}\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"kubectl_command\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": true,\n                \"tags\": [\n                    \"k8s_kubectl_command\"\n                ],\n                \"title\": \"Kubectl command -> kubectl top pods -l app={app_label} -n {namespace}\",\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2022 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def 
k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type kubectl_command: str\\n\",\n                \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or Empty String\\n\",\n                \"        in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    if handle.client_side_validation is not True:\\n\",\n                \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n                \"        return str()\\n\",\n                \"\\n\",\n                \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n                \"\\n\",\n                \"    if result is None:\\n\",\n                \"        print(\\n\",\n                \"            f\\\"Error while executing command ({kubectl_command}) (empty response)\\\")\\n\",\n                \"        return \\\"\\\"\\n\",\n                \"\\n\",\n                \"    if result.stderr:\\n\",\n                \"        raise ApiException(f\\\"Error occurred while executing command {kubectl_command} {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl top pods -l app={app_label} -n {namespace}\\\\\\\\\\\"\\\"\\n\",\n                \"    }''')\\n\",\n                
\"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 40,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"0f214340-528a-4b8c-8409-58788587a9b6\",\n            \"cell_type\": \"code\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\"\n                ],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": false,\n                \"action_uuid\": \"8cd969c4db1d03d54d258e2c119e90aa914888abb4d5376b775ade8233bf3ae7\",\n                \"checkEnabled\": false,\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Execute kubectl command.\",\n                \"execution_data\": {},\n                \"id\": 76,\n                \"index\": 76,\n                \"inputData\": [\n                    {\n                        \"kubectl_command\": {\n                            \"constant\": false,\n    
                        \"value\": \"f\\\"kubectl logs -l app={app_label} -c {container} -n {namespace}\\\"\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"kubectl_command\": {\n                                \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"kubectl_command\"\n                        ],\n                        \"title\": \"k8s_kubectl_command\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl command -> kubectl logs -l app={app_label} -c {container} -n {namespace}\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"kubectl_command\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": true,\n                \"tags\": [\n                    \"k8s_kubectl_command\"\n                ],\n                \"title\": \"Kubectl command -> kubectl logs -l app={app_label} -c {container} -n {namespace}\",\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2022 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic 
import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type kubectl_command: str\\n\",\n                \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or Empty String\\n\",\n                \"        in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    if handle.client_side_validation is not True:\\n\",\n                \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n                \"        return str()\\n\",\n                \"\\n\",\n                \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n                \"\\n\",\n                \"    if result is None:\\n\",\n                \"        print(\\n\",\n                \"            f\\\"Error while executing command ({kubectl_command}) (empty response)\\\")\\n\",\n                \"        return \\\"\\\"\\n\",\n                \"\\n\",\n                \"    if result.stderr:\\n\",\n                
\"        raise ApiException(f\\\"Error occurred while executing command {kubectl_command} {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl logs -l app={app_label} -c {container} -n {namespace}\\\\\\\\\\\"\\\"\\n\",\n                \"    }''')\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 41,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"62ca2564-e1f0-4d91-8e87-ba941a354efa\",\n            \"cell_type\": \"code\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\"\n                ],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": false,\n                \"action_uuid\": \"8cd969c4db1d03d54d258e2c119e90aa914888abb4d5376b775ade8233bf3ae7\",\n    
            \"checkEnabled\": false,\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Execute kubectl command.\",\n                \"execution_data\": {},\n                \"id\": 76,\n                \"index\": 76,\n                \"inputData\": [\n                    {\n                        \"kubectl_command\": {\n                            \"constant\": false,\n                            \"value\": \"f\\\"kubectl describe pods -l app={app_label} -n {namespace}\\\"\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"kubectl_command\": {\n                                \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"kubectl_command\"\n                        ],\n                        \"title\": \"k8s_kubectl_command\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl command -> kubectl describe pods -l app={app_label} -n {namespace}\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"kubectl_command\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": 
true,\n                \"tags\": [\n                    \"k8s_kubectl_command\"\n                ],\n                \"title\": \"Kubectl command -> kubectl describe pods -l app={app_label} -n {namespace}\",\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2022 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type kubectl_command: str\\n\",\n                \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or Empty String\\n\",\n                \"        in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    if handle.client_side_validation is not True:\\n\",\n                \"        print(f\\\"K8S Connector is 
invalid: {handle}\\\")\\n\",\n                \"        return str()\\n\",\n                \"\\n\",\n                \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n                \"\\n\",\n                \"    if result is None:\\n\",\n                \"        print(\\n\",\n                \"            f\\\"Error while executing command ({kubectl_command}) (empty response)\\\")\\n\",\n                \"        return \\\"\\\"\\n\",\n                \"\\n\",\n                \"    if result.stderr:\\n\",\n                \"        raise ApiException(f\\\"Error occurred while executing command {kubectl_command} {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl describe pods -l app={app_label} -n {namespace}\\\\\\\\\\\"\\\"\\n\",\n                \"    }''')\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 42,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"13a530cf-ecf3-4d71-b516-4f78b67a8986\",\n            \"cell_type\": \"markdown\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Fix\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Fix\"\n            },\n            \"source\": \"<p>Increase resource allocation:</p>\\n<p>If the root cause is identified 
as insufficient resources, increase the allocation of CPU, memory, or other resources to the Kubernetes deployment.</p>\\n<p>&nbsp;</p>\",\n            \"execution_count\": null,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"7c6f855a-ac70-41d0-bf70-2936a3e00ce3\",\n            \"cell_type\": \"code\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    \"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\"\n                ],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": true,\n                \"action_uuid\": \"8cd969c4db1d03d54d258e2c119e90aa914888abb4d5376b775ade8233bf3ae7\",\n                \"checkEnabled\": false,\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Execute kubectl command.\",\n                \"execution_data\": {},\n                \"id\": 76,\n                \"index\": 76,\n                \"inputData\": [\n                    {\n                        \"deployment\": {\n                            \"constant\": false,\n                            \"value\": \"deployment\"\n                        },\n                        \"kubectl_command\": {\n                            \"constant\": false,\n       
                     \"value\": \"\\\"kubectl set resources deployment {deployment} -n {namespace} --limits=cpu={new_cpu_limit},memory={new_memory_limit}\\\"\"\n                        },\n                        \"namespace\": {\n                            \"constant\": false,\n                            \"value\": \"namespace\"\n                        },\n                        \"new_cpu_limit\": {\n                            \"constant\": false,\n                            \"value\": \"12\"\n                        },\n                        \"new_memory_limit\": {\n                            \"constant\": false,\n                            \"value\": \"12\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"deployment\": {\n                                \"default\": \"\",\n                                \"description\": \"Deployment Name\",\n                                \"title\": \"deployment\",\n                                \"type\": \"string\"\n                            },\n                            \"kubectl_command\": {\n                                \"default\": \"\",\n                                \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            },\n                            \"namespace\": {\n                                \"default\": \"\",\n                                \"description\": \"Namespace\",\n                                \"title\": \"namespace\",\n                                \"type\": \"string\"\n                            },\n                            \"new_cpu_limit\": {\n                                \"default\": \"\",\n                                \"description\": 
\"New CPU Limit\",\n                                \"title\": \"new_cpu_limit\",\n                                \"type\": \"string\"\n                            },\n                            \"new_memory_limit\": {\n                                \"default\": \"\",\n                                \"description\": \"New Memory Limit\",\n                                \"title\": \"new_memory_limit\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"kubectl_command\",\n                            \"new_cpu_limit\",\n                            \"new_memory_limit\",\n                            \"deployment\",\n                            \"namespace\"\n                        ],\n                        \"title\": \"k8s_kubectl_command\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl command -> Increase CPU/Memory Limits\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"kubectl_command\",\n                    \"new_cpu_limit\",\n                    \"new_memory_limit\",\n                    \"deployment\",\n                    \"namespace\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": true,\n                \"tags\": [\n                    \"k8s_kubectl_command\"\n                ],\n                \"title\": \"Kubectl command -> Increase CPU/Memory Limits\",\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n                \"#\\n\",\n                \"# 
Copyright (c) 2022 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command(handle, new_cpu_limit, new_memory_limit, deployment, namespace, kubectl_command: str) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type kubectl_command: str\\n\",\n                \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or Empty String\\n\",\n                \"        in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    if handle.client_side_validation is not True:\\n\",\n                \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n                \"        return str()\\n\",\n                \"\\n\",\n                \"    result = handle.run_native_cmd(kubectl_command.format(deployment=deployment, namespace=namespace, new_cpu_limit=new_cpu_limit, new_memory_limit=new_memory_limit))\\n\",\n                
\"\\n\",\n                \"    if result is None:\\n\",\n                \"        print(\\n\",\n                \"            f\\\"Error while executing command ({kubectl_command}) (empty response)\\\")\\n\",\n                \"        return \\\"\\\"\\n\",\n                \"\\n\",\n                \"    if result.stderr:\\n\",\n                \"        raise ApiException(f\\\"Error occurred while executing command {kubectl_command} {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"kubectl_command\\\": \\\"\\\\\\\\\\\"kubectl set resources deployment {deployment} -n {namespace} --limits=cpu={new_cpu_limit},memory={new_memory_limit}\\\\\\\\\\\"\\\",\\n\",\n                \"    \\\"new_cpu_limit\\\": \\\"12\\\",\\n\",\n                \"    \\\"new_memory_limit\\\": \\\"12\\\",\\n\",\n                \"    \\\"deployment\\\": \\\"deployment\\\",\\n\",\n                \"    \\\"namespace\\\": \\\"namespace\\\"\\n\",\n                \"    }''')\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 21,\n            \"outputs\": []\n        },\n        {\n            \"id\": \"2c2aa264-5a2b-4714-adfd-79bdc4ae89d1\",\n            \"cell_type\": \"code\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [\n                    \"CATEGORY_TYPE_CLOUDOPS\",\n                    
\"CATEGORY_TYPE_DEVOPS\",\n                    \"CATEGORY_TYPE_SRE\",\n                    \"CATEGORY_TYPE_K8S\",\n                    \"CATEGORY_TYPE_K8S_KUBECTL\"\n                ],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": true,\n                \"action_uuid\": \"8cd969c4db1d03d54d258e2c119e90aa914888abb4d5376b775ade8233bf3ae7\",\n                \"checkEnabled\": false,\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Execute kubectl command.\",\n                \"execution_data\": {},\n                \"id\": 76,\n                \"index\": 76,\n                \"inputData\": [\n                    {\n                        \"deployment\": {\n                            \"constant\": false,\n                            \"value\": \"deployment\"\n                        },\n                        \"kubectl_command\": {\n                            \"constant\": false,\n                            \"value\": \"\\\"kubectl scale deployment -n {namespace} {deployment} --replicas={replicas}\\\"\"\n                        },\n                        \"namespace\": {\n                            \"constant\": false,\n                            \"value\": \"namespace\"\n                        },\n                        \"replicas\": {\n                            \"constant\": false,\n                            \"value\": \"2\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                
    {\n                        \"properties\": {\n                            \"deployment\": {\n                                \"default\": \"\",\n                                \"description\": \"Deployment\",\n                                \"title\": \"deployment\",\n                                \"type\": \"string\"\n                            },\n                            \"kubectl_command\": {\n                                \"default\": \"\",\n                                \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            },\n                            \"namespace\": {\n                                \"default\": \"\",\n                                \"description\": \"Namespace\",\n                                \"title\": \"namespace\",\n                                \"type\": \"string\"\n                            },\n                            \"replicas\": {\n                                \"default\": \"\",\n                                \"description\": \"Replica Count\",\n                                \"title\": \"replicas\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"deployment\",\n                            \"kubectl_command\",\n                            \"namespace\",\n                            \"replicas\"\n                        ],\n                        \"title\": \"k8s_kubectl_command\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"isUnskript\": false,\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl 
command -> Scale Deployment\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"kubectl_command\",\n                    \"replicas\",\n                    \"namespace\",\n                    \"deployment\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"printOutput\": true,\n                \"tags\": [\n                    \"k8s_kubectl_command\"\n                ],\n                \"title\": \"Kubectl command -> Scale Deployment\",\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2022 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from kubernetes.client.rest import ApiException\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command(handle, kubectl_command: str, namespace, deployment, replicas) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type kubectl_command: str\\n\",\n                \"        :param kubectl_command: The Actual kubectl command, like 
kubectl get ns, etc..\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or Empty String\\n\",\n                \"        in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    if handle.client_side_validation is not True:\\n\",\n                \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n                \"        return str()\\n\",\n                \"\\n\",\n                \"    result = handle.run_native_cmd(kubectl_command.format(namespace=namespace, deployment=deployment, replicas=replicas))\\n\",\n                \"\\n\",\n                \"    if result is None:\\n\",\n                \"        print(\\n\",\n                \"            f\\\"Error while executing command ({kubectl_command}) (empty response)\\\")\\n\",\n                \"        return \\\"\\\"\\n\",\n                \"\\n\",\n                \"    if result.stderr:\\n\",\n                \"        raise ApiException(f\\\"Error occurred while executing command {kubectl_command} {result.stderr}\\\")\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"deployment\\\": \\\"deployment\\\",\\n\",\n                \"    \\\"kubectl_command\\\": \\\"\\\\\\\\\\\"kubectl scale deployment -n {namespace} {deployment} --replicas={replicas}\\\\\\\\\\\"\\\",\\n\",\n                \"    \\\"namespace\\\": \\\"namespace\\\",\\n\",\n                \"    \\\"replicas\\\": \\\"2\\\"\\n\",\n                \"    }''')\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    
task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\\n\"\n            ],\n            \"execution_count\": 46,\n            \"outputs\": []\n        }\n    ]\n}"
  },
  {
    "path": "Kubernetes/K8S_Deployment_with_multiple_restarts.json",
    "content": "{\n  \"name\": \"k8s: Deployment with multiple restarts\",\n  \"description\": \"Kubernetes deployment has experienced multiple restarts within a certain timeframe, which is usually indicative of a problem. When a deployment experiences multiple restarts, it can impact the availability and performance of the application, and can be a sign of underlying issues that need to be addressed.\",\n  \"uuid\": \"b138b716b87b7707424b3558b3b007a17d310d73c2fe9308f8702859e8c6a3a7\",\n  \"icon\": \"CONNECTOR_TYPE_K8S\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "Kubernetes/K8S_Get_Candidate_Nodes_Given_Config.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5f2fac7e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong></h3>\\n\",\n    \"<strong>To get candidate k8s nodes for a given configuration using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Get-Candidate-k8s-Nodes-For-Given-Configuration\\\">Get Candidate k8s Nodes For Given Configuration</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1. Get the matching nodes for a given configuration<code>\\n\",\n    \"</code></p>\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"d84b6f44\",\n   \"metadata\": {\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-candidate-k8s-nodes-for-the-given-configuration\\\">Get candidate k8s nodes for the given configuration</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Get candidate k8s nodes for the given configuration</strong> action. 
This action is used to find out matching nodes for a given configuration.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>cpu_limit, memory_limit, pod_limit</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>candidate_nodes</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 43,\n   \"id\": \"faff16f3-a562-4d4e-804c-c509efee3cec\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"5326cf5d52f4d62391e32a4290dcca4ac6f023218b01aefcc5be2765391e7ea2\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get candidate k8s nodes for given configuration\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-13T10:59:51.802Z\"\n    },\n    \"id\": 34,\n    \"index\": 34,\n    \"inputData\": [\n     {\n      \"cpu_limit\": {\n       \"constant\": false,\n       \"value\": \"int(cpu_limit)\"\n      },\n      \"memory_limit\": {\n       \"constant\": false,\n       \"value\": \"memory_limit\"\n      },\n      \"pod_limit\": {\n       \"constant\": false,\n       \"value\": \"int(pod_limit)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"cpu_limit\": {\n        \"default\": 0,\n        \"description\": \"CPU Limit. Eg 2\",\n        \"title\": \"CPU Limit\",\n        \"type\": \"integer\"\n       },\n       \"memory_limit\": {\n        \"default\": \"\",\n        \"description\": \"Limits and requests for memory are measured in bytes. Accept the store in Mi. 
Eg 123Mi\",\n        \"title\": \"Memory Limit (Mi)\",\n        \"type\": \"string\"\n       },\n       \"pod_limit\": {\n        \"default\": 0,\n        \"description\": \"Pod Limit. Eg 2\",\n        \"title\": \"Number of Pods to attach\",\n        \"type\": \"integer\"\n       }\n      },\n      \"title\": \"k8s_get_candidate_nodes_for_pods\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get candidate k8s nodes for given configuration\",\n    \"nouns\": [\n     \"candidate\",\n     \"nodes\",\n     \"configuration\"\n    ],\n    \"orderProperties\": [\n     \"cpu_limit\",\n     \"memory_limit\",\n     \"pod_limit\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"candidate_nodes\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_get_candidate_nodes_for_pods\"\n    ],\n    \"title\": \"Get candidate k8s nodes for given configuration\",\n    \"trusted\": true,\n    \"verbs\": [\n     \"get\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"from typing import Optional\\n\",\n    \"\\n\",\n    \"from kubernetes import client\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_get_candidate_nodes_for_pods(handle, cpu_limit: int = 0, memory_limit: str = \\\"\\\", pod_limit: int = 0):\\n\",\n    \"\\n\",\n    \"    coreApiClient = client.CoreV1Api(api_client=handle)\\n\",\n    \"\\n\",\n    \"    nodes = coreApiClient.list_node()\\n\",\n  
  \"    match_nodes = [node for node in nodes.items if\\n\",\n    \"                   (cpu_limit < int(node.status.capacity.get(\\\"cpu\\\", 0))) and\\n\",\n    \"                   (pod_limit < int(node.status.capacity.get(\\\"pods\\\", 0))) and\\n\",\n    \"                   int(memory_limit.split(\\\"Mi\\\")[0]) < (int(node.status.capacity.get(\\\"memory\\\").split(\\\"Ki\\\")[0]) / 1024)]\\n\",\n    \"\\n\",\n    \"    if len(match_nodes) > 0:\\n\",\n    \"        data = []\\n\",\n    \"\\n\",\n    \"        for node in match_nodes:\\n\",\n    \"            node_capacity = []\\n\",\n    \"            node_capacity.append(node.metadata.name)\\n\",\n    \"            for capacity in node.status.capacity.values():\\n\",\n    \"                node_capacity.append(capacity)\\n\",\n    \"            data.append(node_capacity)\\n\",\n    \"\\n\",\n    \"        print(\\\"\\\\n\\\")\\n\",\n    \"        print(tabulate(data, tablefmt=\\\"grid\\\", headers=[\\\"Name\\\", \\\"cpu\\\", \\\"ephemeral-storage\\\",\\n\",\n    \"                                                       \\\"hugepages-1Gi\\\", \\\"hugepages-2Mi\\\", \\\"memory\\\", \\\"pods\\\"]))\\n\",\n    \"        return match_nodes\\n\",\n    \"\\n\",\n    \"    pp.pprint(\\\"No Matching Nodes Found for this spec\\\")\\n\",\n    \"    return None\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(outputName=\\\"candidate_nodes\\\")\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"cpu_limit\\\": \\\"int(cpu_limit)\\\",\\n\",\n    \"    \\\"memory_limit\\\": \\\"memory_limit\\\",\\n\",\n    \"    \\\"pod_limit\\\": \\\"int(pod_limit)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(k8s_get_candidate_nodes_for_pods, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != 
None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2a154136\",\n   \"metadata\": {\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's k8s legos to run k8s configuration and get the matching nodes for a given configuration (storage, CPU, memory, pod_limit). To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"k8s: Get candidate nodes for given configuration\",\n   \"parameters\": [\n    \"cpu_limit\",\n    \"ebs_limit\",\n    \"memory_limit\",\n    \"pod_limit\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 839)\",\n   \"name\": \"python_kubernetes\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"cpu_limit\": {\n     \"default\": 1,\n     \"description\": \"CPU Limit. 
Eg 2\",\n     \"title\": \"cpu_limit\",\n     \"type\": \"number\"\n    },\n    \"ebs_limit\": {\n     \"default\": 1,\n     \"description\": \"EBS Volume Limit in Gb. Eg 25\",\n     \"title\": \"ebs_limit\",\n     \"type\": \"number\"\n    },\n    \"memory_limit\": {\n     \"default\": \"65Mi\",\n     \"description\": \"Memory limits and requests are measured in bytes. Eg 64Mi\",\n     \"title\": \"memory_limit\",\n     \"type\": \"string\"\n    },\n    \"pod_limit\": {\n     \"default\": 1,\n     \"description\": \"Limit on pods\",\n     \"title\": \"pod_limit\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"cpu_limit\": 1,\n   \"ebs_limit\": 1,\n   \"memory_limit\": \"65Mi\",\n   \"pod_limit\": 1\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "Kubernetes/K8S_Get_Candidate_Nodes_Given_Config.json",
    "content": "{\n  \"name\": \"k8s: Get candidate nodes for given configuration\",\n  \"description\": \"This runbook get the matching nodes for a given configuration (storage, cpu, memory, pod_limit) from a k8s cluster\",\n  \"uuid\": \"d85523e7d07d1413b8dde69caa4cd444057220b7a43c08ea0432b14cfdd01d36\", \n  \"icon\": \"CONNECTOR_TYPE_K8S\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "Kubernetes/K8S_Log_Healthcheck.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8c2def3e-168a-408c-b85d-49048cdd54cd\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"K8s Log healthcheck\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"K8s Log healthcheck\"\n   },\n   \"source\": [\n    \"<h1>Kubernetes Healthcheck Runbook</h1>\\n\",\n    \"<p>This runbook grabs all of your K8s pods, reads the logs from them, and then output any WARNING logs from the last hour.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<ul>\\n\",\n    \"<li>Step 1: Get all of the pods</li>\\n\",\n    \"<li>Step 2: get all of the lopgs for each pod</li>\\n\",\n    \"<li>Step 3 parse the logs for warnings in the last hour</li>\\n\",\n    \"<li>Step 4: if there are warnings - send a Slack alert.</li>\\n\",\n    \"</ul>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9ff97ba4-b03b-4537-a840-f8e878048d9e\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1: get the pod names\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1: get the pod names\"\n   },\n   \"source\": [\n    \"<p>The input required is the namespace - from the input parameters.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>This will then query the namespace and return a list of pods in the Output variable 'podList.'</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 24,\n   \"id\": \"0cc3b3cf-638c-4b01-ae49-27cb6e30c79e\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"9e74360f92185496ce46b5110f5551edb1907d29ceed02dbb7b6a1a0b16e7e27\",\n    \"continueOnError\": false,\n    
\"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Kubectl list pods in given namespace\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-01-13T20:02:23.900Z\"\n    },\n    \"id\": 47,\n    \"index\": 47,\n    \"inputData\": [\n     {\n      \"k8s_cli_string\": {\n       \"constant\": false,\n       \"value\": \"\\\"kubectl get pods -n {namespace}\\\"\"\n      },\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \"namespace\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"k8s_cli_string\": {\n        \"default\": \"\\\"kubectl get pods -n {namespace}\\\"\",\n        \"description\": \"kubectl List pods in given namespace\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       },\n       \"namespace\": {\n        \"description\": \"Namespace\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"namespace\"\n      ],\n      \"title\": \"k8s_kubectl_list_pods\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Kubectl list pods\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"k8s_cli_string\",\n     \"namespace\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"podList\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_kubectl_list_pods\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from pydantic import BaseModel, Field\\n\",\n    \"import pandas as pd\\n\",\n    \"import io\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_list_pods_printer(data: list):\\n\",\n    \"    if data is 
None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    print(\\\"POD List:\\\")\\n\",\n    \"\\n\",\n    \"    for pod in data:\\n\",\n    \"        print(f\\\"\\\\t {pod}\\\")\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_list_pods(handle, k8s_cli_string: str, namespace: str) -> list:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_list_pods executes the given kubectl command\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type k8s_cli_string: str\\n\",\n    \"        :param k8s_cli_string: kubectl get pods -n {namespace}.\\n\",\n    \"\\n\",\n    \"        :type namespace: str\\n\",\n    \"        :param namespace: Namespace.\\n\",\n    \"\\n\",\n    \"        :rtype:\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    k8s_cli_string = k8s_cli_string.format(namespace=namespace)\\n\",\n    \"    result = handle.run_native_cmd(k8s_cli_string)\\n\",\n    \"    df = pd.read_fwf(io.StringIO(result.stdout))\\n\",\n    \"    all_pods = []\\n\",\n    \"    for index, row in df.iterrows():\\n\",\n    \"        all_pods.append(row['NAME'])\\n\",\n    \"    return all_pods\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"k8s_cli_string\\\": \\\"\\\\\\\\\\\"kubectl get pods -n {namespace}\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"namespace\\\": \\\"namespace\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"podList\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_list_pods, lego_printer=k8s_kubectl_list_pods_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"d626b3d5-16fd-4878-a937-3e880a1442be\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": 
false\n    },\n    \"name\": \"Step 2: get all of the logs\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2: get all of the logs\"\n   },\n   \"source\": [\n    \"<p>Step 2 takes the list of pod 'pod\\\"list' from Step one, and the namespace input parameter, and obtains the logs for all of the Pods.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>We use the Iterator to iterate through the list.&nbsp; This can take a while if you have a lot of pods.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>The output is saved in a Dict called `allTheLogs'</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 25,\n   \"id\": \"5404a1ee-efd1-4bf6-91a8-e7d240e6ae43\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"7a54aaf7808d98bce5132bc5b5224a084d63ca31921dc362f5b91fbc581cd0da\",\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Kubectl get logs for a given pod\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-01-13T20:08:25.384Z\"\n    },\n    \"id\": 35,\n    \"index\": 35,\n    \"inputData\": [\n     {\n      \"k8s_cli_string\": {\n       \"constant\": false,\n       \"value\": \"\\\"kubectl logs {pod_name} -n {namespace}\\\"\"\n      },\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \"namespace\"\n      },\n      \"pod_name\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"k8s_cli_string\": {\n        \"default\": \"\\\"kubectl logs {pod_name} -n {namespace}\\\"\",\n        
\"description\": \"kubectl get logs for a given pod\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       },\n       \"namespace\": {\n        \"description\": \"Namespace\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       },\n       \"pod_name\": {\n        \"description\": \"Pod Name\",\n        \"title\": \"Pod Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"pod_name\",\n       \"namespace\"\n      ],\n      \"title\": \"k8s_kubectl_get_logs\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"pod_name\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"podList\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Kubectl get logs\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"k8s_cli_string\",\n     \"pod_name\",\n     \"namespace\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"allTheLogs\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_kubectl_get_logs\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from pprint import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_get_logs_printer(data: str):\\n\",\n    \"    if data is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    print(\\\"Logs:\\\")\\n\",\n    \"\\n\",\n    \"    pprint (data)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_get_logs(handle, k8s_cli_string: str, pod_name: str, namespace:str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_get_logs executes the given kubectl 
command\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type k8s_cli_string: str\\n\",\n    \"        :param k8s_cli_string: kubectl logs {pod_name} -n {namespace}.\\n\",\n    \"\\n\",\n    \"        :type pod_name: str\\n\",\n    \"        :param pod_name: Pod Name.\\n\",\n    \"\\n\",\n    \"        :type namespace: str\\n\",\n    \"        :param namespace: Namespace.\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    k8s_cli_string = k8s_cli_string.format(pod_name=pod_name, namespace=namespace)\\n\",\n    \"    result = handle.run_native_cmd(k8s_cli_string)\\n\",\n    \"    data = result.stdout\\n\",\n    \"    return data\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=False)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"k8s_cli_string\\\": \\\"\\\\\\\\\\\"kubectl logs {pod_name} -n {namespace}\\\\\\\\\\\"\\\",\\n\",\n    \"    \\\"namespace\\\": \\\"namespace\\\",\\n\",\n    \"    \\\"pod_name\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"podList\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"pod_name\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"allTheLogs\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_get_logs, lego_printer=k8s_kubectl_get_logs_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"7d75d8a6-49e8-479a-a250-827685c7c376\",\n 
  \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 3: parse the logs\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3: parse the logs\"\n   },\n   \"source\": [\n    \"<p>'allTheLogs' is a pretty big file.&nbsp; Loop through each log file, and extract any WARNING messages.&nbsp;&nbsp;<br><br><br></p>\\n\",\n    \"<p>We use the input parameter hoursToExamine to filter for logs back that many hours.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 26,\n   \"id\": \"62686394-a57f-47ab-9b1d-1022869f25c1\",\n   \"metadata\": {\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-01-13T20:16:14.980Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"parse dict of logs\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"parse dict of logs\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import re\\n\",\n    \"from datetime import datetime, timedelta\\n\",\n    \"\\n\",\n    \"#get all warnings\\n\",\n    \"#only report warnings fournd in the x hours\\n\",\n    \"timeDiff = datetime.now()- timedelta(hours=hoursToExamine)\\n\",\n    \"#if there are warnings that are ok to supress, add them to this list\\n\",\n    \"stringsToIgnore = [\\\"arerqewreqwr\\\" ]\\n\",\n    \"#this will hold all the warnings\\n\",\n    \"warning_text_all = {}\\n\",\n    \"\\n\",\n    \"#Specific issues we can deal with\\n\",\n    \"primaryShardIsNotActive = False\\n\",\n    \"\\n\",\n    \"#we've collected a bunch of logs, lets loop through them for Warnings\\n\",\n    \"for instance in allTheLogs:\\n\",\n    \"    #print(instance)\\n\",\n    \"    log = allTheLogs[instance]\\n\",\n    \"    #find the position of all instances of '[WARN' in the logs\\n\",\n    \"    warning_start = [m.start() for m in 
re.finditer(re.escape('[WARN'), log)]\\n\",\n    \"    \\n\",\n    \"    for i in warning_start:\\n\",\n    \"        warningtime = log[i-24:i-5]\\n\",\n    \"        issue = log[i:i+400]\\n\",\n    \"        warningtimeDT = datetime.strptime(warningtime, '%Y-%m-%dT%H:%M:%S')\\n\",\n    \"        if warningtimeDT > timeDiff:\\n\",\n    \"            #substring match, so a fragment of the warning text in stringsToIgnore is enough to suppress it\\n\",\n    \"            if not any(s in issue for s in stringsToIgnore):\\n\",\n    \"                warning_text_all[instance] = {warningtime: issue}\\n\",\n    \"                #test for specific issues\\n\",\n    \"                if issue.find(\\\"primary shard is not active Timeout\\\")>0:\\n\",\n    \"                    primaryShardIsNotActive = True\\n\",\n    \"                \\n\",\n    \"print(warning_text_all, len(warning_text_all))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"af26fd0a-7621-4016-8a0d-8a0492ce1b17\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Alerts!\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Alerts!\"\n   },\n   \"source\": [\n    \"<p>Only send a Slack message if there is a problem.&nbsp;&nbsp;</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>To facilitate this, we use the Start Condition</p>\\n\",\n    \"<pre><code>len(warning_text_all) &gt;0</code></pre>\\n\",\n    \"<p>If there are warnings, a Slack message is sent. 
If there are no warnings, there is no message.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 27,\n   \"id\": \"ca14605f-1ca3-438b-951c-a3f680bcdb86\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-01-13T20:09:57.724Z\"\n    },\n    \"id\": 78,\n    \"index\": 78,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"\\\"unskript-healthcheck\\\"\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"warning_text_all\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of slack channel.\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message for slack channel.\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    
},\n    \"printOutput\": true,\n    \"startcondition\": \"len(warning_text_all) >0\",\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message_printer(output):\\n\",\n    \"    if output is not None:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"    else:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> str:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return f\\\"Successfuly Sent Message on Channel: #{channel}\\\"\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        if e.response['error'] == 'channel_not_found':\\n\",\n    \"            raise Exception('Channel Not Found')\\n\",\n    \"        elif e.response['error'] == 'duplicate_channel_not_found':\\n\",\n    \"            raise Exception('Channel associated with the message_id not valid')\\n\",\n    \"        elif e.response['error'] == 'not_in_channel':\\n\",\n    \"            raise 
Exception('Cannot post message to channel user is not in')\\n\",\n    \"        elif e.response['error'] == 'is_archived':\\n\",\n    \"            raise Exception('Channel has been archived')\\n\",\n    \"        elif e.response['error'] == 'msg_too_long':\\n\",\n    \"            raise Exception('Message text is too long')\\n\",\n    \"        elif e.response['error'] == 'no_text':\\n\",\n    \"            raise Exception('Message text was not provided')\\n\",\n    \"        elif e.response['error'] == 'restricted_action':\\n\",\n    \"            raise Exception('Workspace preference prevents user from posting')\\n\",\n    \"        elif e.response['error'] == 'restricted_action_read_only_channel':\\n\",\n    \"            raise Exception('Cannot Post message, read-only channel')\\n\",\n    \"        elif e.response['error'] == 'team_access_not_granted':\\n\",\n    \"            raise Exception('The token used is not granted access to the workspace')\\n\",\n    \"        elif e.response['error'] == 'not_authed':\\n\",\n    \"            raise Exception('No Authentication token provided')\\n\",\n    \"        elif e.response['error'] == 'invalid_auth':\\n\",\n    \"            raise Exception('Some aspect of Authentication cannot be validated. 
Request denied')\\n\",\n    \"        elif e.response['error'] == 'access_denied':\\n\",\n    \"            raise Exception('Access to a resource specified in the request denied')\\n\",\n    \"        elif e.response['error'] == 'account_inactive':\\n\",\n    \"            raise Exception('Authentication token is for a deleted user')\\n\",\n    \"        elif e.response['error'] == 'token_revoked':\\n\",\n    \"            raise Exception('Authentication token for a deleted user has been revoked')\\n\",\n    \"        elif e.response['error'] == 'no_permission':\\n\",\n    \"            raise Exception('The workspace token used does not have the necessary permission to send message')\\n\",\n    \"        elif e.response['error'] == 'ratelimited':\\n\",\n    \"            raise Exception('The request has been ratelimited. Retry sending message later')\\n\",\n    \"        elif e.response['error'] == 'service_unavailable':\\n\",\n    \"            raise Exception('The service is temporarily unavailable')\\n\",\n    \"        elif e.response['error'] == 'fatal_error':\\n\",\n    \"            raise Exception('The server encountered a catastrophic error while sending message')\\n\",\n    \"        elif e.response['error'] == 'internal_error':\\n\",\n    \"            raise Exception('The server could not complete the operation, likely due to a transient issue')\\n\",\n    \"        elif e.response['error'] == 'request_timeout':\\n\",\n    \"            raise Exception('Sending message error via POST: either message was missing or truncated')\\n\",\n    \"        else:\\n\",\n    \"            raise Exception(f'Failed Sending Message to slack channel {channel} Error: {e.response[\\\"error\\\"]}')\\n\",\n    \"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return f\\\"Unable to send 
message on {channel}\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \" \\n\",\n    \"    \\\"message\\\": \\\"warning_text_all\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(warning_text_all) >0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(slack_post_message, lego_printer=slack_post_message_printer, hdl=hdl, args=args)\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Kubernetes Log Healthcheck\",\n   \"parameters\": [\n    \"hoursToExamine\",\n    \"namespace\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 813)\",\n   \"name\": \"python_kubernetes\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"hoursToExamine\": {\n     \"default\": 1,\n     \"description\": \"Hours to look back in the logs for WARNING messages.  If you set hours =1, this runbook should be run hourly.  If you choose 24 hours, then run it daily.\",\n     \"title\": \"hoursToExamine\",\n     \"type\": \"number\"\n    },\n    \"namespace\": {\n     \"default\": \"logging\",\n     \"description\": \"The namespace for your K8s instances\",\n     \"title\": \"namespace\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {\n   \"hoursToExamine\": \"\\\"float(1)\\\"\",\n   \"namespace\": \"logging\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "Kubernetes/K8S_Log_Healthcheck.json",
    "content": "{\n  \"name\": \"Kubernetes Log Healthcheck\",\n  \"description\": \"This RunBook checks the logs of every pod in a namespace for warning messages.\",\n  \"uuid\": \"ee1aa2cb2a0854604bcc516389cf542af17c8de07e5da70524286a112c4eef6f\",\n  \"icon\": \"CONNECTOR_TYPE_K8S\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\", \"CATEGORY_TYPE_TROUBLESHOOTING\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "Kubernetes/K8S_Pod_Stuck_In_CrashLoopBack_State.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2a1bc075-e2c8-466a-9aa6-07e84c21c162\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<hr><center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Fix K8s Pod in CrashLoopBack State</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Terminate-EC2-Instances-Without-Valid-Lifetime-Tag\\\"><u>K8S Pod in CrashLoopBack State</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview</h1>\\n\",\n    \"<p>1)&nbsp;<a href=\\\"#1\\\">Get list of pods in CrashLoopBackOff State</a><br>2)&nbsp;<a href=\\\"#2\\\">Gather information of the pod</a><br>3)&nbsp;<a href=\\\"#2\\\">Collect pod exit code</a></p>\\n\",\n    \"<p>A <code>CrashLoopBackOff</code> error occurs when a pod startup fails repeatedly in Kubernetes.</p>\\n\",\n    \"<pre><code>When running. 
a kubectl get pods command, you would see something like this\n",\n    \"\\n\",\n    \"NAME                     READY     STATUS             RESTARTS   AGE\\n\",\n    \"nginx-7ef9efa7cd-qasd2   0/1       CrashLoopBackOff   2          1m\\n\",\n    \"\\n\",\n    \"Or\\n\",\n    \"\\n\",\n    \"NAME                     READY     STATUS                  RESTARTS   AGE\\n\",\n    \"pod1-7ef9efa7cd-qasd2    0/2       Init:CrashLoopBackOff   2          1m\\n\",\n    \"</code></pre>\\n\",\n    \"<hr>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9176ae13-c5a8-42dc-b5c6-c6a7a91b56fd\",\n   \"metadata\": {\n    \"name\": \"Step 1A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Convert-namespace-to-String-if-empty\\\">Convert namespace to String if empty<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Convert-namespace-to-String-if-empty\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This custom action converts <code>namespace</code> from None to an empty String when no namespace is given.</p>\"\n   ]\n  },\n  {\n   
\"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"16a31ef5-a834-4878-afa5-79f64dfa0c3d\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Convert namespace to String if empty\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Convert namespace to String if empty\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if namespace==None:\\n\",\n    \"    namespace=''\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"60e73ca7-e3a8-42d3-a3bb-87ad2baa1f91\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-List-of-Pods-in-CrashLoopBackOff-State\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get List of Pods in CrashLoopBackOff State<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-List-of-Pods-in-CrashLoopBackOff-State\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This action fetches a list of the pods in CrashLoopBack State. 
This action will consider <code>namespace</code> as&nbsp;<strong> all&nbsp;</strong>if no namespace is given.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters (Optional):&nbsp;<code>namespace</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>crashloopbackoff_pods</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"060496ab-6cef-4a23-8a93-194cb8774ea3\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"d8047bf803242cfbfd1a19e28d64ae8d95168f8edb753ae4e1e7a7af1ffccf07\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get all K8s pods in CrashLoopBackOff State\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-10T12:54:10.973Z\"\n    },\n    \"id\": 26,\n    \"index\": 26,\n    \"inputData\": [\n     {\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \"str(namespace)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"namespace\": {\n        \"default\": \"\",\n        \"description\": \"k8s Namespace\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       }\n      },\n      \"title\": \"k8s_get_pods_in_crashloopbackoff_state\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n    
 \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get all K8s Pods in CrashLoopBackOff State\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"namespace\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"crashloopbackoff_pods\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"probeEnabled\": false,\n    \"tags\": [\n     \"k8s_get_pods_in_crashloopbackoff_state\"\n    ],\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"from unskript.legos.utils import CheckOutput, CheckOutputStatus\\n\",\n    \"from collections import defaultdict\\n\",\n    \"import json\\n\",\n    \"import pprint\\n\",\n    \"import re\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_get_pods_in_crashloopbackoff_state_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    if isinstance(output, CheckOutput):\\n\",\n    \"        print(output.json())\\n\",\n    \"    else:\\n\",\n    \"        pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_get_pods_in_crashloopbackoff_state(handle, namespace: str=None) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"k8s_get_pods_in_crashloopbackoff_state executes the given kubectl command to find pods in CrashLoopBackOff State\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type namespace: Optional[str]\\n\",\n    \"        :param namespace: Namespace to get the pods from. 
Eg:\\\"logging\\\", if not given all namespaces are considered\\n\",\n    \"\\n\",\n    \"        :rtype: Status, List of pods in CrashLoopBackOff State\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"    kubectl_command =\\\"kubectl get pods --all-namespaces | grep CrashLoopBackOff | tr -s ' ' | cut -d ' ' -f 1,2\\\"\\n\",\n    \"    if namespace:\\n\",\n    \"        kubectl_command = \\\"kubectl get pods -n \\\" + namespace + \\\" | grep CrashLoopBackOff | cut -d' ' -f 1 | tr -d ' '\\\"\\n\",\n    \"    response = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if response is None or hasattr(response, \\\"stderr\\\") is False or response.stderr is None:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {response.stderr}\\\")\\n\",\n    \"        return str()\\n\",\n    \"    temp = response.stdout\\n\",\n    \"    result = []\\n\",\n    \"    res = []\\n\",\n    \"    unhealthy_pods =[]\\n\",\n    \"    unhealthy_pods_tuple = ()\\n\",\n    \"    if not namespace:\\n\",\n    \"        all_namespaces = re.findall(r\\\"(\\\\S+).*\\\",temp)\\n\",\n    \"        all_unhealthy_pods = re.findall(r\\\"\\\\S+\\\\s+(.*)\\\",temp)\\n\",\n    \"        unhealthy_pods = [(i, j) for i, j in zip(all_namespaces, all_unhealthy_pods)]\\n\",\n    \"        res = defaultdict(list)\\n\",\n    \"        for key, val in unhealthy_pods:\\n\",\n    \"            res[key].append(val)\\n\",\n    \"    elif namespace:\\n\",\n    \"        all_pods = []\\n\",\n    \"        all_unhealthy_pods =[]\\n\",\n    \"        all_pods = re.findall(r\\\"(\\\\S+).*\\\",temp)\\n\",\n    \"        for p in all_pods:\\n\",\n    \"                unhealthy_pods_tuple = (namespace,p)\\n\",\n    \"                unhealthy_pods.append(unhealthy_pods_tuple)\\n\",\n    \"        res = 
defaultdict(list)\\n\",\n    \"        for key, val in unhealthy_pods:\\n\",\n    \"            res[key].append(val)\\n\",\n    \"    if len(res)!=0:\\n\",\n    \"        result.append(dict(res))\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"namespace\\\": \\\"namespace\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"crashloopbackoff_pods\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_get_pods_in_crashloopbackoff_state, lego_printer=k8s_get_pods_in_crashloopbackoff_state_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ba58fe60-9922-4c86-b0d6-d76d4db71249\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Examine-the-Events\\\">Create List of commands to get Events<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Examine the output from Step 1\\ud83d\\udc46,&nbsp; and create a list of commands for each pod in a namespace that is found to be in the CrashLoopBackOff State</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following ouput:&nbsp;<code>all_unhealthy_pods</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"042b8352-5769-403c-9c22-432fa48de97d\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": 
\"2023-02-09T11:18:22.306Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of commands to get Events\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of commands to get Events\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_unhealthy_pods = []\\n\",\n    \"for each_pod_dict in crashloopbackoff_pods:\\n\",\n    \"    if type(each_pod_dict)==list:\\n\",\n    \"        for pod in each_pod_dict:\\n\",\n    \"            for k,v in pod.items():\\n\",\n    \"                if len(v)!=0:\\n\",\n    \"                    nspace = k\\n\",\n    \"                    u_pod = ' '.join([str(each_pod) for each_pod in v])\\n\",\n    \"                    cmd = \\\"kubectl describe pod \\\"+u_pod+\\\" -n \\\"+nspace+\\\" | grep -A 10 Events\\\"\\n\",\n    \"                    all_unhealthy_pods.append(cmd)\\n\",\n    \"print(all_unhealthy_pods)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"528330e1-c862-42bc-9056-05608a78d437\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-AWS-Regions\\\">Gather information of the pods</h3>\\n\",\n    \"<p>This action describes events for a list of unhealthy pods obtained in Step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters (Optional):&nbsp;<code>namespace</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following ouput: <code>describe_output</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"5d45773d-cf52-4dcb-8a35-01219781cf8f\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    
\"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"ae0b25757f0c6c0ca4b3aaf6feea636e3f193dc354f74823a7becd7d675becdc\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Kubectl command in python syntax.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-09T11:19:18.986Z\"\n    },\n    \"id\": 21,\n    \"index\": 21,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"kubectl_command\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"all_unhealthy_pods\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Gather Information of the pod\",\n    \"nouns\": [\n     \"command\"\n    ],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"describe_output\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(all_unhealthy_pods)!=0\",\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Gather Information of the pod\",\n    \"verbs\": [\n     
\"execute\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or hasattr(result, \\\"stderr\\\") is False or result.stderr is None:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return None\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_unhealthy_pods\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"kubectl_command\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_unhealthy_pods)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"describe_output\\\")\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, hdl=hdl, args=args)\\n\",\n    \"\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, 
tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(task.output)\\n\",\n    \"    w.tasks[task.name]= task.output\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"26886eb4-ca1f-40f0-a2da-c34af115ae69\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Examine-the-Events\\\">Convert to String<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>From the output from Step 2\\ud83d\\udc46,&nbsp; we convert the dict output to a string format.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following ouput: <code>all_describe_info</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 26,\n   \"id\": \"50d94b8f-7c44-413e-b653-72c59ab1ee15\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T12:26:44.491Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Convert to String \",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Convert to String \",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import json\\n\",\n    \"\\n\",\n    \"all_describe_info = json.dumps(describe_output)\\n\",\n    \"print(all_describe_info)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ba918b53-4a49-494d-956f-073849b6cd9e\",\n   \"metadata\": {\n    \"jupyter\": {\n     
\"source_hidden\": false\n    },\n    \"name\": \"Step 2B\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2B\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Examine-the-Events\\\">Examine the Events<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Examine the output from Step 2A\\ud83d\\udc46,&nbsp; and make a note of any containers that have a <code>Back-off restarting failed container</code> in the description.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 27,\n   \"id\": \"6a08134c-e35f-48da-a687-cb1b3bb4a91a\",\n   \"metadata\": {\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T12:27:04.007Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Examine the Events\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Examine the Events\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import re\\n\",\n    \"\\n\",\n    \"\\\"\\\"\\\"\\n\",\n    \"This Custom Action searches Known errors in the describeOutput variable.\\n\",\n    \"This lego \\n\",\n    \"\\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def check_msg(msg):\\n\",\n    \"    return re.search(msg, all_describe_info)\\n\",\n    \"\\n\",\n    \"if ('describeOutput' not in globals()):\\n\",\n    \"    pass\\n\",\n    \"else:\\n\",\n    \"    print(\\\"Processing Events...\\\")\\n\",\n    \"    result = check_msg(\\\"Back-off restarting failed container\\\")\\n\",\n    \"    if result is not None:\\n\",\n    \"        print(\\\"Confirming the POD(s) is in Back-Off restarting state\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e43ad9f0-5d64-4a0f-9543-7214fac6e359\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 3A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": 
\"Step 3A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Examine-the-Events\\\">Create List of commands to get Exit Code<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>From the output from Step 1\\ud83d\\udc46create a list of commands for each pod in a namespace to get the exit code for each pod to examine the reason of failure.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following ouput: <code>all_pods_exit_code</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"id\": \"e19a6db0-d941-4e62-8a3b-05105389ebfe\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-09T11:20:07.998Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of commands to get Exit Code\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of commands to get Exit Code\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"all_pods_exit_code = []\\n\",\n    \"for x in crashloopbackoff_pods:\\n\",\n    \"    if type(x[1])==list:\\n\",\n    \"        if len(x[1])!=0:\\n\",\n    \"            for pod in x[1]:\\n\",\n    \"                for k,v in pod.items():\\n\",\n    \"                    nspace = k\\n\",\n    \"                    u_pod = ' '.join([str(each_pod) for each_pod in v])\\n\",\n    \"                    cmd = \\\"kubectl describe pod \\\"+u_pod+\\\" -n \\\"+nspace+\\\" | grep \\\\\\\\\\\"+\\\"Exit Code\\\"+\\\"\\\\\\\\\\\"+\\\" | cut -d':' -f 2 | tr -d ' '\\\"\\n\",\n    \"                    all_pods_exit_code.append(cmd)\\n\",\n    \"print(all_pods_exit_code)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e9ef9f2a-2cd8-4bb9-9efc-746e2ec958d2\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    
},\n    \"name\": \"Step 3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Collect-pod-exit-code\\\">Collect pod exit code<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Collect-pod-exit-code\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Examine the output from Step 1\\ud83d\\udc46, and look for the Exit Code.</p>\\n\",\n    \"<blockquote>This action captures the following ouput: exit_code</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 31,\n   \"id\": \"e8db2cae-8894-47a0-8b88-d2275314acd7\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T12:34:23.155Z\"\n    },\n    \"id\": 51,\n    \"index\": 51,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      
],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"kubectl_command\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"all_pods_exit_code\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Collect pod exit code\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"exit_code\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"probeEnabled\": false,\n    \"startcondition\": \"len(all_pods_exit_code)!=0\",\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Collect pod exit code\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        
:rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or not hasattr(result, \\\"stderr\\\") or result.stderr:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {getattr(result, 'stderr', None)}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"all_pods_exit_code\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"kubectl_command\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(all_pods_exit_code)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"exit_code\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"21e2e967-6514-4b87-b43b-b1f0e95b4ac2\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 
3B\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3B\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Examine-the-Events\\\">Create List Exit Codes<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>From the output from Step 3\\ud83d\\udc46create a list of exit codes&nbsp; to ananlyze in Step 3C.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following ouput: <code>all_exit_code_info</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 45,\n   \"id\": \"5351b111-f025-4952-a3dc-917047966aab\",\n   \"metadata\": {\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T12:50:44.137Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Create List of Exit Codes\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Create List of Exit Codes\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import json\\n\",\n    \"all_exit_code_info = []\\n\",\n    \"for k,v in exit_code.items():\\n\",\n    \"    all_exit_code_info.append(v)\\n\",\n    \"print(all_exit_code_info)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"86de5ae7-00f3-424a-9740-c02cd0cab643\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 3C\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3C\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Examine-the-Events\\\">Examine Exit Codes<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Using the exit_codes list from Step 3B\\ud83d\\udc46examine each code.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 52,\n   \"id\": \"6c8adc48-7c21-40cc-8dbc-77a9d46843fc\",\n   
\"metadata\": {\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T12:54:26.923Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Examine Exit Code\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Examine Exit Code\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from IPython.display import Markdown as md\\n\",\n    \"\\n\",\n    \"# if repoLocation is not None:\\n\",\n    \"#     display(md(f\\\"**Please verify {repoLocation} is accessible from the K8S POD**\\\"))\\n\",\n    \"\\n\",\n    \"if 'all_exit_code_info' not in globals():\\n\",\n    \"    pass\\n\",\n    \"else:\\n\",\n    \"    for ec in all_exit_code_info:\\n\",\n    \"        if ec is None or len(ec)==0:\\n\",\n    \"            exitCode = 323400\\n\",\n    \"        if ec is not None or len(ec)!=0:\\n\",\n    \"            exitCode = int(ec)\\n\",\n    \"        if exitCode == 0:\\n\",\n    \"            display(md(\\\"Exit code 0 implies that the specified container command completed\\\"))\\n\",\n    \"            display(md(\\\"Successfully, but too often for Kubernetes to accept as working.\\\"))\\n\",\n    \"            display(md(\\\"\\\"))\\n\",\n    \"            display(md(\\\"Did you fail to specify a command in the POD Spec, and the container ran\\\"))\\n\",\n    \"            display(md(\\\"a default shell command that failed? If so, you will need to fix the command\\\"))\\n\",\n    \"        elif exitCode == 1:\\n\",\n    \"            display(md(\\\"The container failed to run its command successfully, and returned\\\"))\\n\",\n    \"            display(md(\\\"an exit code 1. 
This is an application failure within the process\\\"))\\n\",\n    \"            display(md(\\\"that was started, but returned a failing exit code some time later.\\\"))\\n\",\n    \"            display(md(\\\"\\\"))\\n\",\n    \"            display(md(\\\"If this is happening with all pods running on your cluster, then\\\"))\\n\",\n    \"            display(md(\\\"there may be a problem with your nodes. Check that the nodes are OK on your cluster\\\"))\\n\",\n    \"            display(md(\\\"with the kubectl get nodes -o wide command\\\"))\\n\",\n    \"        elif exitCode == 2:\\n\",\n    \"            display(md(\\\"An exit code of 2 indicates either that the application chose to return\\\"))\\n\",\n    \"            display(md(\\\"that error code, or there was a misuse of a shell builtin. Check your\\\"))\\n\",\n    \"            display(md(\\\"pod's command specification to ensure that the command is correct.\\\"))\\n\",\n    \"            display(md(\\\"If you think it is correct, try running the image locally with a shell\\\"))\\n\",\n    \"            display(md(\\\"and run the command directly.\\\"))\\n\",\n    \"        elif exitCode == 128:\\n\",\n    \"            display(md(\\\"An exit code of 128 indicates that the container could not run. Check this\\\"))\\n\",\n    \"            display(md(\\\"with the kubectl describe pod command, and see if the LastState Reason is\\\"))\\n\",\n    \"            display(md(\\\"ContainerCannotRun.\\\"))\\n\",\n    \"        elif exitCode == 137:\\n\",\n    \"            display(md(\\\"This indicates that the container was killed with Signal 9.\\\"))\\n\",\n    \"            display(md(\\\"This can be due to one of these reasons:\\\"))\\n\",\n    \"            display(md(\\\"    1. The container ran out of memory\\\"))\\n\",\n    \"            display(md(\\\"    2. The OOMKiller killed the container\\\"))\\n\",\n    \"            display(md(\\\"    3. The liveness probe failed. 
Check liveness and readiness probes\\\"))\\n\",\n    \"        else:\\n\",\n    \"            display(md(\\\"Some common application problems to consider are:\\\"))\\n\",\n    \"            display(md(\\\"    1. Privileged access to a function: check the allowPrivilegeEscalation setting\\\"))\\n\",\n    \"            display(md(\\\"    2. SELinux or AppArmor controls may be preventing your application from running\\\"))\\n\",\n    \"        \\n\",\n    \"\\n\",\n    \"    display(md(\\\">You can use the kubectl get pods command to verify after you fix the issue\\\"))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e25b3628-8ff0-401e-b909-e4955e45f397\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we were able to identify pods stuck in the CrashLoopBackOff State and examine the possible events that caused their failure using unSkript's K8s actions. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"k8s: Pod Stuck in CrashLoopBackoff State\",\n   \"parameters\": [\n    \"namespace\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 839)\",\n   \"name\": \"python_kubernetes\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"namespace\": {\n     \"description\": \"K8S Namespace\",\n     \"title\": \"namespace\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "Kubernetes/K8S_Pod_Stuck_In_CrashLoopBack_State.json",
    "content": "{\n  \"name\": \"k8s: Pod Stuck in CrashLoopBackoff State\",\n  \"description\": \"This runbook checks whether any Pod(s) in a given k8s namespace are in the CrashLoopBackOff state. If it finds any, it tries to determine why the Pod(s) are in that state.\",\n  \"uuid\": \"1d3a64b3c396be6d27b260606aa5570f61e79f3b7adcda457e026da657edc079\",\n  \"icon\": \"CONNECTOR_TYPE_K8S\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State.ipynb",
    "content": "{\n    \"cells\": [\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": 2,\n            \"id\": \"56630bd7-a4d2-492d-bb06-5a3027a321f1\",\n            \"metadata\": {\n                \"credentialsJson\": {},\n                \"customAction\": true,\n                \"execution_data\": {\n                    \"last_date_success_run_cell\": \"2023-06-07T16:32:28.600Z\"\n                },\n                \"name\": \"Click \\\"Run Action\\\" For a video tutorial -->\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Click \\\"Run Action\\\" For a video tutorial -->\",\n                \"trusted\": true\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"%%html\\n\",\n                \"<iframe width=\\\"560\\\" height=\\\"315\\\" src=\\\"https://www.youtube.com/embed/-871n89aTLk\\\" title=\\\"YouTube video player\\\" frameborder=\\\"0\\\" allow=\\\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\\\" allowfullscreen></iframe>\\n\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"f518e5b7-08a7-425c-9d86-cfc629d5b355\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Steps Overview\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Steps Overview\"\n            },\n            \"source\": [\n                \"<hr><hr><center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n                \"<h1 id=\\\"unSkript-Runbooks&para;&para;\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a 
class=\\\"jp-InternalAnchorLink\\\" href=\\\"../../../../../../files/4499bd79-f6da-4721-976e-f56c19bf2b93/current/%23unSkript-Runbooks%C2%B6?_xsrf=2%7Cb12423e3%7C3ddd93d43897fff029854a704ea6ecfd%7C1686077820#unSkript-Runbooks%C2%B6\\\" target=\\\"_self\\\" rel=\\\"noopener\\\" data-commandlinker-command=\\\"rendermime:handle-local-link\\\" data-commandlinker-args=\\\"{&quot;path&quot;:&quot;4499bd79-f6da-4721-976e-f56c19bf2b93/current/#unSkript-Runbooks&para;&quot;,&quot;id&quot;:&quot;#unSkript-Runbooks%C2%B6&quot;}\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks&para;&para;\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n                \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n                \"<h3 id=\\\"Objective&para;&para;\\\">Objective<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"../../../../../../files/4499bd79-f6da-4721-976e-f56c19bf2b93/current/%23Objective%C2%B6?_xsrf=2%7Cb12423e3%7C3ddd93d43897fff029854a704ea6ecfd%7C1686077820#Objective%C2%B6\\\" target=\\\"_self\\\" rel=\\\"noopener\\\" data-commandlinker-command=\\\"rendermime:handle-local-link\\\" data-commandlinker-args=\\\"{&quot;path&quot;:&quot;4499bd79-f6da-4721-976e-f56c19bf2b93/current/#Objective&para;&quot;,&quot;id&quot;:&quot;#Objective%C2%B6&quot;}\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective&para;&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n                \"<br><strong style=\\\"color: #000000;\\\"><em>Fix K8s Pod in ImagePullBackOff State</em></strong></div>\\n\",\n                \"</center>\\n\",\n                \"<p>&nbsp;</p>\\n\",\n                \"<center>\\n\",\n                \"<h2 id=\\\"K8S-Pod-in-ImagePullBackOff-State&para;&para;\\\"><u>K8S Pod in ImagePullBackOff State</u><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#K8S-Pod-in-CrashLoopBack-State\\\" 
target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"../../../../../../files/4499bd79-f6da-4721-976e-f56c19bf2b93/current/%23K8S-Pod-in-ImagePullBackOff-State%C2%B6?_xsrf=2%7Cb12423e3%7C3ddd93d43897fff029854a704ea6ecfd%7C1686077820#K8S-Pod-in-ImagePullBackOff-State%C2%B6\\\" target=\\\"_self\\\" rel=\\\"noopener\\\" data-commandlinker-command=\\\"rendermime:handle-local-link\\\" data-commandlinker-args=\\\"{&quot;path&quot;:&quot;4499bd79-f6da-4721-976e-f56c19bf2b93/current/#K8S-Pod-in-ImagePullBackOff-State&para;&quot;,&quot;id&quot;:&quot;#K8S-Pod-in-ImagePullBackOff-State%C2%B6&quot;}\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#K8S-Pod-in-ImagePullBackOff-State&para;&para;\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n                \"</center>\\n\",\n                \"<h1 id=\\\"Steps-Overview&para;&para;\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"../../../../../../files/4499bd79-f6da-4721-976e-f56c19bf2b93/current/%23Steps-Overview%C2%B6?_xsrf=2%7Cb12423e3%7C3ddd93d43897fff029854a704ea6ecfd%7C1686077820#Steps-Overview%C2%B6\\\" target=\\\"_self\\\" rel=\\\"noopener\\\" data-commandlinker-command=\\\"rendermime:handle-local-link\\\" data-commandlinker-args=\\\"{&quot;path&quot;:&quot;4499bd79-f6da-4721-976e-f56c19bf2b93/current/#Steps-Overview&para;&quot;,&quot;id&quot;:&quot;#Steps-Overview%C2%B6&quot;}\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview&para;&para;\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n                \"<p>1)&nbsp;<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Get list of pods in ImagePullBackOff State</a><br>2)&nbsp;<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Extract Events of the pods</a><br>3)&nbsp;<a href=\\\"#3\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Check 
registry accessibility</a></p>\\n\",\n                \"<p>An <code>ImagePullBackOff</code> error occurs when a Pod startup fails to pull the specified image. The reasons could be a non-existent repository or permission issues when accessing the repository. This runbook helps to walk through the steps involved in debugging such a Pod.</p>\\n\",\n                \"<p>&nbsp;</p>\\n\",\n                \"<p>We'll then create the steps required to resolve the issue - learning how to use unSkript at the same time.</p>\\n\",\n                \"<p>If you haven't already - click \\\"Run Action\\\" above to see a YouTube video that will begin walking you through the process.</p>\\n\",\n                \"<hr>\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"bf250d0f-f958-47bc-907e-3721c3720288\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Step 1A\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Step 1A\"\n            },\n            \"source\": [\n                \"<h3 id=\\\"Get-List-of-Pods-in-ImagePullBackOff-State&para;\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Convert namespace to String if empty<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-List-of-Pods-in-CrashLoopBackOff-State\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-List-of-Pods-in-ImagePullBackOff-State&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n                \"<p>This custom action changes the type of namespace from None to String only if no namespace is given.</p>\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": 3,\n            \"id\": \"a49bd0a5-1b34-4beb-940d-9f28239837e0\",\n            \"metadata\": {\n                
\"credentialsJson\": {},\n                \"customAction\": true,\n                \"execution_data\": {\n                    \"last_date_success_run_cell\": \"2023-06-07T16:32:36.279Z\"\n                },\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"name\": \"Convert namespace to String if empty\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Convert namespace to String if empty\",\n                \"trusted\": true\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"if namespace is None:\\n\",\n                \"    namespace=''\\n\",\n                \" \"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"6cdb0116-152b-493c-8eb9-71237b691806\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Step 1\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Step 1\"\n            },\n            \"source\": [\n                \"<h3 id=\\\"Get-List-of-Pods-in-CrashLoopBackOff-State\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get List of Pods in ImagePullBackOff State<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-List-of-Pods-in-CrashLoopBackOff-State\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n                \"<p>This action fetches a list of the pods in ImagePullBackOff State. 
This action will consider <code>namespace</code> as&nbsp;<strong>all</strong>&nbsp;if no namespace is given.</p>\\n\",\n                \"<blockquote>\\n\",\n                \"<p>This action takes the following parameters (Optional):&nbsp;<code>namespace</code></p>\\n\",\n                \"</blockquote>\\n\",\n                \"<blockquote>\\n\",\n                \"<p>This action captures the following output: <code>imagepullbackoff_pods</code></p>\\n\",\n                \"</blockquote>\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": 4,\n            \"id\": \"fbfd4282-2516-4506-b617-c6816736dbea\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionCategories\": [],\n                \"actionIsCheck\": false,\n                \"actionNeedsCredential\": true,\n                \"actionNextHop\": [],\n                \"actionNextHopParameterMapping\": {},\n                \"actionOutputType\": \"\",\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": false,\n                \"action_uuid\": \"683b7f1a1482a5bed32698689e2b47e13dcdb5e00d719316cc46ada5ead26758\",\n                \"continueOnError\": false,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Get all K8s pods in ImagePullBackOff State\",\n                \"execution_data\": {},\n                \"id\": 45,\n                \"index\": 45,\n                \"inputData\": [\n                    {\n                        \"namespace\": {\n                            \"constant\": false,\n                            \"value\": \"namespace\"\n                        }\n      
              }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"namespace\": {\n                                \"default\": \"\",\n                                \"description\": \"k8s Namespace\",\n                                \"title\": \"Namespace\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"title\": \"k8s_get_pods_in_imagepullbackoff_state\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Get all K8s Pods in ImagePullBackOff State\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"namespace\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"outputParams\": {\n                    \"output_name\": \"imagepullbackoff_pods\",\n                    \"output_name_enabled\": true\n                },\n                \"printOutput\": true,\n                \"probeEnabled\": false,\n                \"tags\": [\n                    \"k8s_get_pods_in_imagepullbackoff_state\"\n                ],\n                \"title\": \"Get all K8s Pods in ImagePullBackOff State\",\n                \"trusted\": true,\n                \"verbs\": [],\n                \"execution_count\": {}\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2022 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"from typing import 
Optional, Tuple\\n\",\n                \"from unskript.legos.utils import CheckOutput, CheckOutputStatus\\n\",\n                \"from collections import defaultdict\\n\",\n                \"import json\\n\",\n                \"import pprint\\n\",\n                \"import re\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_get_pods_in_imagepullbackoff_state_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    if isinstance(output, CheckOutput):\\n\",\n                \"        print(output.json())\\n\",\n                \"    else:\\n\",\n                \"        pprint.pprint(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_get_pods_in_imagepullbackoff_state(handle, namespace: str=None) -> Tuple:\\n\",\n                \"    \\\"\\\"\\\"k8s_get_list_of_pods_with_imagepullbackoff_state executes the given kubectl command to find pods in ImagePullBackOff State\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type namespace: Optional[str]\\n\",\n                \"        :param namespace: Namespace to get the pods from. 
Eg:\\\"logging\\\", if not given all namespaces are considered\\n\",\n                \"\\n\",\n                \"        :rtype: Status, List of pods in ImagePullBackOff State\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    if handle.client_side_validation != True:\\n\",\n                \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n                \"        return str()\\n\",\n                \"    kubectl_command =\\\"kubectl get pods --all-namespaces | grep ImagePullBackOff | tr -s ' ' | cut -d ' ' -f 1,2\\\"\\n\",\n                \"    if namespace:\\n\",\n                \"        kubectl_command = \\\"kubectl get pods -n \\\" + namespace + \\\" | grep ImagePullBackOff | cut -d' ' -f 1 | tr -d ' '\\\"\\n\",\n                \"    response = handle.run_native_cmd(kubectl_command)\\n\",\n                \"    if response is None or response.stderr:\\n\",\n                \"        print(\\n\",\n                \"            f\\\"Error while executing command ({kubectl_command}): {response.stderr if response else 'no response'}\\\")\\n\",\n                \"        return str()\\n\",\n                \"    temp = response.stdout\\n\",\n                \"    result = []\\n\",\n                \"    res = []\\n\",\n                \"    unhealthy_pods =[]\\n\",\n                \"    unhealthy_pods_tuple = ()\\n\",\n                \"    if not namespace:\\n\",\n                \"        all_namespaces = re.findall(r\\\"(\\\\S+).*\\\",temp)\\n\",\n                \"        all_unhealthy_pods = re.findall(r\\\"\\\\S+\\\\s+(.*)\\\",temp)\\n\",\n                \"        unhealthy_pods = [(i, j) for i, j in zip(all_namespaces, all_unhealthy_pods)]\\n\",\n                \"        res = defaultdict(list)\\n\",\n                \"        for key, val in unhealthy_pods:\\n\",\n                \"            res[key].append(val)\\n\",\n                \"    elif namespace:\\n\",\n                \"       
 all_pods = []\\n\",\n                \"        all_unhealthy_pods =[]\\n\",\n                \"        all_pods = re.findall(r\\\"(\\\\S+).*\\\",temp)\\n\",\n                \"        for p in all_pods:\\n\",\n                \"                unhealthy_pods_tuple = (namespace,p)\\n\",\n                \"                unhealthy_pods.append(unhealthy_pods_tuple)\\n\",\n                \"        res = defaultdict(list)\\n\",\n                \"        for key, val in unhealthy_pods:\\n\",\n                \"            res[key].append(val)\\n\",\n                \"    if len(res)!=0:\\n\",\n                \"        result.append(dict(res))\\n\",\n                \"    if len(result) != 0:\\n\",\n                \"        return (False, result)\\n\",\n                \"    else:\\n\",\n                \"        return (True, None)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"namespace\\\": \\\"namespace\\\"\\n\",\n                \"    }''')\\n\",\n                \"\\n\",\n                \"task.configure(outputName=\\\"imagepullbackoff_pods\\\")\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_get_pods_in_imagepullbackoff_state, lego_printer=k8s_get_pods_in_imagepullbackoff_state_printer, hdl=hdl, args=args)\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": 6,\n            \"id\": \"b273811b-9921-4786-9808-230187591944\",\n            \"metadata\": {\n                \"credentialsJson\": {},\n                \"customAction\": true,\n                \"execution_data\": {\n                    \"last_date_success_run_cell\": \"2023-06-07T16:34:59.845Z\"\n              
  },\n                \"name\": \"Video 2: Click Run Action -->\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Video 2: Click Run Action -->\",\n                \"trusted\": true\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"%%html\\n\",\n                \"<iframe width=\\\"560\\\" height=\\\"315\\\" src=\\\"https://www.youtube.com/embed/aSsYlIGQhO8\\\" title=\\\"YouTube video player\\\" frameborder=\\\"0\\\" allow=\\\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\\\" allowfullscreen></iframe>\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"7b195002-2041-48dc-a7de-3ca871925e58\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Step 1A\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Step 1A\"\n            },\n            \"source\": [\n                \"<h3 id=\\\"Create-List-of-commands-to-get-Events&para;\\\">Create List of commands to get Events<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Create-List-of-commands-to-get-Events&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n                \"<p>Examine the output from Step 1\\ud83d\\udc46, and create a list of commands for each pod in a namespace that is found to be in the ImagePullBackOff State.</p>\\n\",\n                \"<blockquote>\\n\",\n                \"<p>This action captures the following output:&nbsp;<code>all_unhealthy_pods</code></p>\\n\",\n                \"</blockquote>\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": 7,\n            
\"id\": \"52ca8812-faef-4953-a4e6-94ba17bb5c17\",\n            \"metadata\": {\n                \"collapsed\": true,\n                \"credentialsJson\": {},\n                \"customAction\": true,\n                \"execution_data\": {\n                    \"last_date_success_run_cell\": \"2023-06-07T16:35:03.045Z\"\n                },\n                \"jupyter\": {\n                    \"outputs_hidden\": true,\n                    \"source_hidden\": true\n                },\n                \"name\": \"Create List of commands to get Events\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Create List of commands to get Events\",\n                \"trusted\": true\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"all_unhealthy_pods = []\\n\",\n                \"for each_pod_dict in imagepullbackoff_pods:\\n\",\n                \"    if type(each_pod_dict)==list:\\n\",\n                \"        for pod in each_pod_dict:\\n\",\n                \"            for k,v in pod.items():\\n\",\n                \"                if len(v)!=0:\\n\",\n                \"                    nspace = k\\n\",\n                \"                    u_pod = ' '.join([str(each_pod) for each_pod in v])\\n\",\n                \"                    cmd = \\\"kubectl describe pod \\\"+u_pod+\\\" -n \\\"+nspace+\\\" | grep -A 10 Events\\\"\\n\",\n                \"                    all_unhealthy_pods.append(cmd)\\n\",\n                \"print(all_unhealthy_pods)\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"380b03f3-b09c-4836-8d50-15ee8021d0e4\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Step 2\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Step 2\"\n       
     },\n            \"source\": [\n                \"<h3 id=\\\"Gather-information-of-the-pods\\\">Extract Events of the pods<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Gather-information-of-the-pods\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n                \"<p>This action describes events for a list of unhealthy pods obtained in Step 1.</p>\\n\",\n                \"<blockquote>\\n\",\n                \"<p>This action captures the following output: <code>describe_output</code></p>\\n\",\n                \"</blockquote>\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": 6,\n            \"id\": \"cae3c677-fe96-4d0e-9d64-1b11abd00883\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionNeedsCredential\": true,\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_modified\": false,\n                \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n                \"condition_enabled\": true,\n                \"continueOnError\": true,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"credentialsJson\": {},\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Execute the given Kubectl command.\",\n                \"execution_data\": {},\n                \"id\": 51,\n                \"index\": 51,\n                \"inputData\": [\n                    {\n                        \"kubectl_command\": {\n                            \"constant\": false,\n                            \"value\": \"iter_item\"\n                        }\n                    }\n                ],\n                \"inputschema\": [\n                    {\n                        \"properties\": 
{\n                            \"kubectl_command\": {\n                                \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"kubectl_command\"\n                        ],\n                        \"title\": \"k8s_kubectl_command\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"iterData\": [\n                    {\n                        \"iter_enabled\": true,\n                        \"iter_item\": \"kubectl_command\",\n                        \"iter_list\": {\n                            \"constant\": false,\n                            \"objectItems\": false,\n                            \"value\": \"all_unhealthy_pods\"\n                        }\n                    }\n                ],\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Extract Events for the Pods\",\n                \"nouns\": [],\n                \"orderProperties\": [\n                    \"kubectl_command\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"outputParams\": {\n                    \"output_name\": \"describe_output\",\n                    \"output_name_enabled\": true\n                },\n                \"printOutput\": true,\n                \"startcondition\": \"len(all_unhealthy_pods)!=0\",\n                \"tags\": [\n                    \"k8s_kubectl_command\"\n                ],\n                \"title\": \"Extract Events for the Pods\",\n                \"verbs\": [],\n                \"execution_count\": {}\n        
    },\n            \"outputs\": [],\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2022 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n                \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n                \"\\n\",\n                \"        :type handle: object\\n\",\n                \"        :param handle: Object returned from the Task validate method\\n\",\n                \"\\n\",\n                \"        :type kubectl_command: str\\n\",\n                \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n                \"\\n\",\n                \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    if handle.client_side_validation != True:\\n\",\n                \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n                \"        return str()\\n\",\n                \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n                \"    if result is None or hasattr(result, \\\"stderr\\\") is False or result.stderr:\\n\",\n                \"        print(\\n\",\n                \"            f\\\"Error while executing command ({kubectl_command}): 
{result.stderr}\\\")\\n\",\n                \"        return str()\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"task.configure(continueOnError=True)\\n\",\n                \"task.configure(inputParamsJson='''{\\n\",\n                \"    \\\"kubectl_command\\\": \\\"iter_item\\\"\\n\",\n                \"    }''')\\n\",\n                \"task.configure(iterJson='''{\\n\",\n                \"    \\\"iter_enabled\\\": true,\\n\",\n                \"    \\\"iter_list_is_const\\\": false,\\n\",\n                \"    \\\"iter_list\\\": \\\"all_unhealthy_pods\\\",\\n\",\n                \"    \\\"iter_parameter\\\": \\\"kubectl_command\\\"\\n\",\n                \"    }''')\\n\",\n                \"task.configure(conditionsJson='''{\\n\",\n                \"    \\\"condition_enabled\\\": true,\\n\",\n                \"    \\\"condition_cfg\\\": \\\"len(all_unhealthy_pods)!=0\\\",\\n\",\n                \"    \\\"condition_result\\\": true\\n\",\n                \"    }''')\\n\",\n                \"task.configure(outputName=\\\"describe_output\\\")\\n\",\n                \"\\n\",\n                \"task.configure(printOutput=True)\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"67287ce3-806d-458b-9fe5-ed0e6b146252\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n                },\n                \"name\": \"Step 2B\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Step 2B\"\n            },\n            
\"source\": [\n                \"<h3 id=\\\"Examine-the-Events&para;\\\">Examine the Events<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n                \"<p>This Custom Action searches for known errors.&nbsp;The well-known errors are listed in the error_msgs variable. If&nbsp;a new error message is found, it can be added to the list.</p>\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": 8,\n            \"id\": \"6df7c408-377b-4ea8-a33c-ff3c5329fbaa\",\n            \"metadata\": {\n                \"credentialsJson\": {},\n                \"execution_data\": {\n                    \"last_date_success_run_cell\": \"2023-05-25T16:26:31.562Z\"\n                },\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"name\": \"Examine Events\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Examine Events\"\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"import re\\n\",\n                \"\\n\",\n                \"\\\"\\\"\\\"\\n\",\n                \"This Custom Action searches for known errors in the podEvents variable.\\n\",\n                \"The well-known errors are listed in the error_msgs variable. 
If\\n\",\n                \"a new error message is found, you can add it to this\\n\",\n                \"list and the runbook will catch that error on the next run.\\n\",\n                \"\\\"\\\"\\\"\\n\",\n                \"\\n\",\n                \"def check_msg(msg, err):\\n\",\n                \"    return re.search(err, msg)\\n\",\n                \"\\n\",\n                \"error_msgs = [\\\"repository (.*) does not exist or no pull access\\\",\\n\",\n                \"              \\\"manifest for (.*) not found\\\",\\n\",\n                \"              \\\"pull access denied, repository does not exist or may require authorization\\\",\\n\",\n                \"              \\\"Back-off pulling image (.*)\\\"]\\n\",\n                \"cause_found = False\\n\",\n                \"result = ''\\n\",\n                \"for key, msg in describe_output.items():\\n\",\n                \"    for err in error_msgs:\\n\",\n                \"        result = check_msg(msg, err)\\n\",\n                \"        if result is not None:\\n\",\n                \"            print(\\\"PROBABLE CAUSE: \\\", f\\\"{result.string}\\\")\\n\",\n                \"            cause_found = True\\n\",\n                \"\\n\",\n                \"repoLocation = ''\\n\",\n                \"if cause_found is False:\\n\",\n                \"    print(\\\"ERROR MESSAGE : \\\\n\\\", describe_output)\\n\",\n                \"else:\\n\",\n                \"    try:\\n\",\n                \"        repoLocation = result.groups()[0]\\n\",\n                \"    except:\\n\",\n                \"        pass\\n\",\n                \"    else:\\n\",\n                \"        print(\\\"Image Repo Location : \\\", repoLocation)\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"83081ee6-55c1-4f82-923b-ed6c4e054d35\",\n            \"metadata\": {\n                \"jupyter\": {\n                    \"source_hidden\": false\n 
               },\n                \"name\": \"Step 3\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Step 3\"\n            },\n            \"source\": [\n                \"<h3 id=\\\"Create-List-of-commands-to-get-Exit-Code&para;\\\">Check Registry Accessibility<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Create-List-of-commands-to-get-Exit-Code&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n                \"<p>From the output from Step 2B\\ud83d\\udc46check if the repoLocation is accessible.</p>\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": 9,\n            \"id\": \"d3fbe0a1-6669-490f-8ffc-3e4e11a32156\",\n            \"metadata\": {\n                \"credentialsJson\": {},\n                \"execution_data\": {\n                    \"last_date_success_run_cell\": \"2023-05-25T16:26:41.642Z\"\n                },\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"name\": \"Check Registry Accessibility\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Check Registry Accessibility\"\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"from IPython.display import Markdown as md\\n\",\n                \"\\n\",\n                \"if repoLocation is not None:\\n\",\n                \"    display(md(f\\\"**Please verify Repo {repoLocation} is accessible from the K8S POD**\\\"))\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": 9,\n            \"id\": \"e68888fe-d002-49af-9196-cebc01594dbc\",\n            \"metadata\": {\n                \"credentialsJson\": {},\n                \"customAction\": 
true,\n                \"execution_data\": {\n                    \"last_date_success_run_cell\": \"2023-06-07T16:39:45.975Z\"\n                },\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"name\": \"Video 3: Click Run Action -->\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Video 3: Click Run Action -->\",\n                \"trusted\": true\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"%%html\\n\",\n                \"<iframe width=\\\"560\\\" height=\\\"315\\\" src=\\\"https://www.youtube.com/embed/qXS3ILkti0s\\\" title=\\\"YouTube video player\\\" frameborder=\\\"0\\\" allow=\\\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\\\" allowfullscreen></iframe>\"\n            ]\n        },\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"094afbe4-6ea9-4c02-883b-55c4754422c8\",\n            \"metadata\": {\n                \"name\": \"Here is the Code for Step 3a, 3b, 3c\",\n                \"orderProperties\": [],\n                \"tags\": [],\n                \"title\": \"Here is the Code for Step 3a, 3b, 3c\"\n            },\n            \"source\": [\n                \"<p>Step 3a: Add this to an Action (add -&gt; Action)</p>\\n\",\n                \"<p><strong id=\\\"docs-internal-guid-a21dbc63-7fff-a731-b8e0-45e1efa43d7f\\\">patchCommand= \\\"kubectl patch pod image-pullback -n \\\" + namespace + ' -p \\\\'{\\\"spec\\\":{\\\"containers\\\":[{\\\"name\\\":\\\"image-pullback-container\\\", \\\"image\\\":\\\"debian\\\"}]}}\\\\''</strong></p>\\n\",\n                \"<p>&nbsp;</p>\\n\",\n                \"<p>Step 3b: Search actions on the Right menu for \\\"Kubectl Command.\\\" Drag this action in, add your K8s credentials.</p>\\n\",\n                \"<p>&nbsp;</p>\\n\",\n                \"<p>Add this to the Kubectl 
Command</p>\\n\",\n                \"<p><strong>patchCommand</strong></p>\\n\",\n                \"<p>&nbsp;</p>\\n\",\n                \"<p>Step 3c:&nbsp;</p>\\n\",\n                \"<p>Drag in a second \\\"Kubectl Command\\\" action, add your K8s credentials.</p>\\n\",\n                \"<p>Add this to the Kubectl Command:</p>\\n\",\n                \"<p><strong id=\\\"docs-internal-guid-8bbd08ae-7fff-d143-ec83-5fc85433d193\\\">f'kubectl get pods -n {namespace}'</strong></p>\\n\",\n                \"<p>&nbsp;</p>\"\n            ]\n        }\n    ],\n    \"metadata\": {\n        \"execution_data\": {\n            \"runbook_name\": \"k8s: Pod Stuck in ImagePullBackOff State\",\n            \"parameters\": [\n                \"namespace\"\n            ]\n        },\n        \"kernelspec\": {\n            \"display_name\": \"unSkript (Build: 1172)\",\n            \"name\": \"python3\"\n        },\n        \"language_info\": {\n            \"file_extension\": \".py\",\n            \"mimetype\": \"text/x-python\",\n            \"name\": \"python\",\n            \"pygments_lexer\": \"ipython3\"\n        },\n        \"outputParameterSchema\": null,\n        \"parameterSchema\": {\n            \"definitions\": null,\n            \"properties\": {\n                \"namespace\": {\n                    \"description\": \"K8S Namespace\",\n                    \"title\": \"namespace\",\n                    \"type\": \"string\"\n                }\n            },\n            \"required\": [\n                \"namespace\"\n            ],\n            \"title\": \"Schema\",\n            \"type\": \"object\"\n        },\n        \"parameterValues\": {}\n    },\n    \"nbformat\": 4,\n    \"nbformat_minor\": 5\n}"
  },
  {
    "path": "Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State.json",
    "content": "{\n  \"name\": \"k8s: Pod Stuck in ImagePullBackOff State\",\n  \"description\": \"This runbook checks whether any Pod(s) are in the ImagePullBackOff state in a given k8s namespace. If it finds any, it tries to determine why the Pod(s) are in that state.\",  \n  \"uuid\": \"a53b5860500e142aa387ce55d5e85f139596c521dfb5c920cc2bc47c38fc0b11\",\n  \"icon\": \"CONNECTOR_TYPE_K8S\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n  \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State_with_genai.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f518e5b7-08a7-425c-9d86-cfc629d5b355\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<hr><hr><center>Objective:&nbsp;<strong><em>Fix K8s Pod in ImagePullBackOff State</em></strong></center>\\n\",\n    \"<p>1)&nbsp;<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Get list of pods in ImagePullBackOff State</a><br>2)&nbsp;<a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Extract Events of the pods</a><br>3)&nbsp;<a href=\\\"#3\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Check registry accessibility</a></p>\\n\",\n    \"<p>An <code>ImagePullBackOff</code> error occurs when a Pod fails to pull the specified image at startup. The reasons could be a non-existent repository or a lack of permission to access the repository. 
This runbook walks through the steps involved in debugging such a Pod.&nbsp;We'll then create the steps required to resolve the issue - learning how to use unSkript at the same time.</p>\\n\",\n    \"<hr>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"6cdb0116-152b-493c-8eb9-71237b691806\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-List-of-Pods-in-ImagePullBackOff-State&para;\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get List of Pods in ImagePullBackOff State<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-List-of-Pods-in-CrashLoopBackOff-State\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-List-of-Pods-in-ImagePullBackOff-State&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This action fetches a list of the pods in ImagePullBackOff State. This action will consider <code>namespace</code> as&nbsp;<strong> all&nbsp;</strong>if no namespace is given.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>namespace</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>imagepullbackoff_pods</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"1de4e931-bb27-47d0-bb67-3ff9696d41e2\",\n   \"metadata\": {\n    \"actionIsGenAI\": true,\n    \"show_tool_tip_gen_ai_chat\": \"openChat\",\n    \"tool_tip_gen_ai_chat_first_message\":\"write a function to get list of pods in ImagePullBackOff State with namespace as a required parameter. Use container status to evaluate this condition.  
It should only return the pod name.\",\n    \"customAction\": true,\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": []\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"b273811b-9921-4786-9808-230187591944\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-08-01T23:17:30.711Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Video 2: Click Run Action -->\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Video 2: Click Run Action -->\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"%%html\\n\",\n    \"<iframe width=\\\"560\\\" height=\\\"315\\\" src=\\\"https://www.youtube.com/embed/aSsYlIGQhO8\\\" title=\\\"YouTube video player\\\" frameborder=\\\"0\\\" allow=\\\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\\\" allowfullscreen></iframe>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"7b195002-2041-48dc-a7de-3ca871925e58\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-commands-to-get-Events&para;\\\">Create List of commands to get Events<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Create-List-of-commands-to-get-Events&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Examine the output from Step 1\\ud83d\\udc46,&nbsp; and create a list of commands for each pod in a namespace that is found to be in the ImagePullBackOff State</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following 
output:&nbsp;<code>imagepullbackoff_pods</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"380b03f3-b09c-4836-8d50-15ee8021d0e4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Gather-information-of-the-pods\\\">Extract Events of the pods<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Gather-information-of-the-pods\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This action describes events for a list of unhealthy pods obtained in Step 1.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>describe_output</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"cae3c677-fe96-4d0e-9d64-1b11abd00883\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-08-01T23:45:19.929Z\"\n    },\n    \"id\": 51,\n    \"index\": 51,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl 
command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"kubectl_command\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"[ f\\\"kubectl describe pod {x} -n {namespace} | grep -A 10 Events\\\" for x in imagepullbackoff_pods ]\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Extract Events for the Pods\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"describe_output\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"service_id_enabled\": false,\n    \"startcondition\": \"len(imagepullbackoff_pods)!=0\",\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Extract Events for the Pods\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: 
object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or hasattr(result, \\\"stderr\\\") is False or result.stderr is not None:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"[ f\\\\\\\\\\\"kubectl describe pod {x} -n {namespace} | grep -A 10 Events\\\\\\\\\\\" for x in imagepullbackoff_pods ]\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"kubectl_command\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(imagepullbackoff_pods)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"describe_output\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, 
args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"67287ce3-806d-458b-9fe5-ed0e6b146252\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2B\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2B\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Examine-the-Events&para;\\\">Examine the Events<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This Custom Action searches for known errors.&nbsp;The well-known errors are listed in the error_msgs variable. If&nbsp;a new error message is found, it can be added to the list.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"6df7c408-377b-4ea8-a33c-ff3c5329fbaa\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-08-01T23:45:28.944Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Examine Events\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Examine Events\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import re\\n\",\n    \"\\n\",\n    \"\\\"\\\"\\\"\\n\",\n    \"This Custom Action searches for known errors in the podEvents variable.\\n\",\n    \"The well-known errors are listed in the error_msgs variable. 
If\\n\",\n    \"a new error message is found, you can add it to this\\n\",\n    \"list and the runbook will catch that error on the next run.\\n\",\n    \"\\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"def check_msg(msg, err):\\n\",\n    \"    return re.search(err, msg)\\n\",\n    \"\\n\",\n    \"error_msgs = [\\\"repository (.*) does not exist or no pull access\\\",\\n\",\n    \"              \\\"manifest for (.*) not found\\\",\\n\",\n    \"              \\\"pull access denied, repository does not exist or may require authorization\\\",\\n\",\n    \"              \\\"Back-off pulling image (.*)\\\"]\\n\",\n    \"cause_found = False\\n\",\n    \"result = ''\\n\",\n    \"for key, msg in describe_output.items():\\n\",\n    \"    for err in error_msgs:\\n\",\n    \"        result = check_msg(msg, err)\\n\",\n    \"        if result is not None:\\n\",\n    \"            print(\\\"PROBABLE CAUSE: \\\", f\\\"{result.string}\\\")\\n\",\n    \"            cause_found = True\\n\",\n    \"\\n\",\n    \"repoLocation = ''\\n\",\n    \"if cause_found is False:\\n\",\n    \"    print(\\\"ERROR MESSAGE : \\\\n\\\", describe_output)\\n\",\n    \"else:\\n\",\n    \"    try:\\n\",\n    \"        repoLocation = result.groups()[0]\\n\",\n    \"    except:\\n\",\n    \"        pass\\n\",\n    \"    else:\\n\",\n    \"        print(\\\"Image Repo Location : \\\", repoLocation)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"83081ee6-55c1-4f82-923b-ed6c4e054d35\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Create-List-of-commands-to-get-Exit-Code&para;\\\">Check Registry Accessibility<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Examine-the-Events\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">&para;</a><a class=\\\"jp-InternalAnchorLink\\\" 
href=\\\"#Create-List-of-commands-to-get-Exit-Code&para;\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>From the output from Step 2B\\ud83d\\udc46check if the repoLocation is accessible.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 18,\n   \"id\": \"d3fbe0a1-6669-490f-8ffc-3e4e11a32156\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-08-01T23:40:25.833Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Check Registry Accessibility\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Check Registry Accessibility\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from IPython.display import Markdown as md\\n\",\n    \"\\n\",\n    \"if repoLocation is not None:\\n\",\n    \"    display(md(f\\\"**Please verify Repo {repoLocation} is accessible from the K8S POD**\\\"))\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 19,\n   \"id\": \"e68888fe-d002-49af-9196-cebc01594dbc\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-08-01T23:40:29.611Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Video 3: Click Run Action -->\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Video 3: Click Run Action -->\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"%%html\\n\",\n    \"<iframe width=\\\"560\\\" height=\\\"315\\\" src=\\\"https://www.youtube.com/embed/qXS3ILkti0s\\\" title=\\\"YouTube video player\\\" frameborder=\\\"0\\\" allow=\\\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\\\" allowfullscreen></iframe>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"094afbe4-6ea9-4c02-883b-55c4754422c8\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": 
false\n    },\n    \"name\": \"Here is the Code for Step 3a, 3b, 3c\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Here is the Code for Step 3a, 3b, 3c\"\n   },\n   \"source\": [\n    \"<p>Step 3a: Add this to an Action (add -&gt; Action)</p>\\n\",\n    \"<p><strong id=\\\"docs-internal-guid-a21dbc63-7fff-a731-b8e0-45e1efa43d7f\\\">patchCommand= \\\"kubectl patch pod image-pullback -n \\\" + namespace + ' -p \\\\'{\\\"spec\\\":{\\\"containers\\\":[{\\\"name\\\":\\\"image-pullback-container\\\", \\\"image\\\":\\\"debian\\\"}]}}\\\\''</strong></p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>Step 3b: Search actions on the Right menu for \\\"Kubectl Command.\\\" Drag this action in, add your K8s credentials.</p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>Add this to the Kubectl Command:</p>\\n\",\n    \"<p><strong>patchCommand</strong></p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<p>Step 3c:&nbsp;</p>\\n\",\n    \"<p>Drag in a second \\\"Kubectl Command\\\" action, add your K8s credentials.</p>\\n\",\n    \"<p>Add this to the Kubectl Command:</p>\\n\",\n    \"<p><strong id=\\\"docs-internal-guid-8bbd08ae-7fff-d143-ec83-5fc85433d193\\\">f'kubectl get pods -n {namespace}'</strong></p>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"k8s: Pod Stuck in ImagePullBackOff State using genAI\",\n   \"parameters\": [\n    \"environment\",\n    \"namespace\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1248)\",\n   \"name\": \"python_kubernetes\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\"\n  },\n  \"outputParameterSchema\": null,\n  \"parameterSchema\": {\n   \"definitions\": null,\n   \"properties\": {\n    \"environment\": {\n     \"default\": \"\",\n     \"description\": \"Name of the environment, associated with the credential\",\n     \"title\": 
\"environment\",\n     \"type\": \"string\"\n    },\n    \"namespace\": {\n     \"default\": \"0bb055c9-1d76-4570-a173-54eefecc7e42\",\n     \"description\": \"K8S Namespace\",\n     \"title\": \"namespace\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [\n    \"namespace\",\n    \"environment\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State_with_genai.json",
    "content": "{\n  \"name\": \"k8s: Pod Stuck in ImagePullBackOff State using genAI\",\n  \"description\": \"This runbook checks whether any Pod(s) are in the ImagePullBackOff state in a given k8s namespace, using genAI. If it finds any, it tries to determine why the Pod(s) are in that state.\",  \n  \"uuid\": \"4ece5a97491d3df93e6a2ec483d1bc554ee484a6b5bc8d91f03775d961a5400b\",\n  \"icon\": \"CONNECTOR_TYPE_K8S\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n  \"version\": \"1.0.0\"\n}\n"
  },
  {
    "path": "Kubernetes/K8S_Pod_Stuck_In_Terminating_State.ipynb",
"content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"642c0464-7f6e-484f-ba43-bcd8d030f6f5\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Overview\"\n   },\n   \"source\": [\n    \"<hr><center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&para;\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective&para;\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Fix pods stuck in Terminating state</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Pods-Stuck-in-Terminating-State&para;\\\"><u>Pods Stuck in Terminating State</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;\\\">Steps Overview</h1>\\n\",\n    \"<p><a href=\\\"#1\\\">1) Get pods stuck in Terminating State</a><br><a href=\\\"#2\\\">2) Check for finalizers</a><br><a href=\\\"#1\\\">3) Get Node Information</a><br><a href=\\\"#1\\\">4) Force-delete the pod</a><br><a href=\\\"#1\\\">5) Check Resolution</a><br><a href=\\\"#2\\\" target=\\\"_self\\\" 
rel=\\\"noopener\\\">6) Further steps</a></p>\\n\",\n    \"<p>A Pod has been deleted but remains in&nbsp;<code>Terminating</code> status.</p>\\n\",\n    \"<p>This can happen for one of the following reasons:</p>\\n\",\n    \"<pre><code>1. The Pod has a finalizer associated with it that is not completing\\n\",\n    \"2. The Pod is not responding to termination signals\\n\",\n    \"</code></pre>\\n\",\n    \"<p>The output of <code>kubectl get pods [PODNAME] -n [NAMESPACE]</code> will show something like this:</p>\\n\",\n    \"<pre><code>NAME                     READY     STATUS             RESTARTS   AGE\\n\",\n    \"nginx-7ef9efa7cd-qasd2   1/1       Terminating        0          1h\\n\",\n    \"</code></pre>\\n\",\n    \"<hr>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ba64ce4b-0f6a-4faa-b501-461543954f39\",\n   \"metadata\": {\n    \"name\": \"Step 1A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Convert-namespace-to-String-if-empty&para;&para;\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Convert namespace to String if empty</h3>\\n\",\n    \"<p>This custom action converts the namespace from None to an empty String if no namespace is given.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"74e67045-d7e7-4116-8714-19d880552650\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Convert namespace to String if empty\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Convert namespace to String if empty\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if namespace is None:\\n\",\n    \"    namespace=''\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": 
\"fc9ce20e-22ae-49e8-b439-c189a902b2a4\",\n   \"metadata\": {\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-List-of-Pods-in-Terminating-State\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get List of Pods in Terminating State</h3>\\n\",\n    \"<p>This action fetches a list of the pods in Terminating State. This action will consider <code>namespace</code> as&nbsp;<strong> all&nbsp;</strong>if no namespace is given.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters (Optional):&nbsp;<code>namespace</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>terminatingPods</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b2809d57-03b2-41a3-9b57-f544e4ac32fa\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl get pods -n {namespace} | grep Terminating | cut -d' ' -f 1\\\"\"\n      }\n     }\n    ],\n    
\"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get pods stuck in Terminating State\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"terminatingPods\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Get pods stuck in Terminating State\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get 
ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or not hasattr(result, \\\"stderr\\\") or result.stderr:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl get pods -n {namespace} | grep Terminating | cut -d' ' -f 1\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(outputName=\\\"terminatingPods\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"bb2f595c-18ab-4415-baba-5f7cac36d936\",\n   \"metadata\": {\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-List-of-Pods-in-Terminating-State\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Check for finalizers</h3>\\n\",\n    \"<p>This action checks whether the pod has any finalizers. 
If it does, their failure to complete may be the root cause.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>namespace, terminatingPods</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>finalizerOutput</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"3ae96e00-4f46-461a-afd9-2db939414f0a\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl get pod -n {namespace} {terminatingPods.strip()} -o yaml | grep -A 1 finalizers\\\" \"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    
\"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Check for Finalizer\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"finalizerOutput\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"terminatingPods is not ''\",\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Check for Finalizer\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = 
handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or not hasattr(result, \\\"stderr\\\") or result.stderr:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl get pod -n {namespace} {terminatingPods.strip()} -o yaml | grep -A 1 finalizers\\\\\\\\\\\" \\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"terminatingPods is not ''\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"finalizerOutput\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b3a1c250-a232-4bdc-8e9b-25e462871800\",\n   \"metadata\": {\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-List-of-Pods-in-Terminating-State\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Remove finalizer if present</h3>\\n\",\n    \"<p>This action takes input from Step 2 and removes the finalizers if present.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>namespace, terminatingPods</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>removeFinalizerOutput</code></p>\\n\",\n 
   \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"9cea9d29-8443-4fd9-a6f2-90501fdb652c\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl patch pod {terminatingPods.strip()}\\\" + \\\" -p '{\\\\\\\\\\\"metadata\\\\\\\\\\\":{\\\\\\\\\\\"finalizers\\\\\\\\\\\":null}}'\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Remove finalizer if present\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    
\"outputParams\": {\n     \"output_name\": \"removeFinalizerOutput\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"finalizerOutput is not ''\",\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Remove finalizer if present\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or not hasattr(result, \\\"stderr\\\") or result.stderr:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result}\\\")\\n\",\n    \"        
return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl patch pod {terminatingPods.strip()}\\\\\\\\\\\" + \\\\\\\\\\\" -p '{\\\\\\\\\\\\\\\\\\\\\\\\\\\"metadata\\\\\\\\\\\\\\\\\\\\\\\\\\\":{\\\\\\\\\\\\\\\\\\\\\\\\\\\"finalizers\\\\\\\\\\\\\\\\\\\\\\\\\\\":null}}'\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"finalizerOutput is not ''\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"removeFinalizerOutput\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a813187c-9310-41fc-a4f8-e2d61781baea\",\n   \"metadata\": {\n    \"name\": \"Step 3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-List-of-Pods-in-Terminating-State\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Get node information</h3>\\n\",\n    \"<p>This action gets the node information so that its status can be checked (Step 3A).</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>namespace, terminatingPods</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>nodeName</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e70ff97f-2147-4026-bd92-8fb1851b6ce6\",\n   \"metadata\": 
{\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl get pods {terminatingPods.strip()} -n {namespace} -o yaml | grep nodeName | tr -d \\\\\\\\\\\" \\\\\\\\\\\" | cut -d':' -f 2\\\" \"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get Node Information\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"nodeName\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Get Node 
Information\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or not hasattr(result, \\\"stderr\\\") or result.stderr:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl get pods {terminatingPods.strip()} -n 
{namespace} -o yaml | grep nodeName | tr -d \\\\\\\\\\\\\\\\\\\\\\\\\\\" \\\\\\\\\\\\\\\\\\\\\\\\\\\" | cut -d':' -f 2\\\\\\\" \\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"nodeName\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"fd26240c-3c7f-42a0-b021-94cfdab6bd6d\",\n   \"metadata\": {\n    \"name\": \"Step 3A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-List-of-Pods-in-Terminating-State\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Check status of Node</h3>\\n\",\n    \"<p>This action gets the status of the node. It is possible that the node your pod(s) is/are running on has failed in some way. 
If all pods on a specific node are in a&nbsp;<code>Terminating</code>&nbsp;state, then this may be the issue.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>namespace</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>nodeStatus</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"86992954-67e8-423a-a85e-6cfdbf933b99\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl get nodes {nodeName.strip()} | grep \\\\\\\\\\\" Ready\\\\\\\\\\\" | cut -d' ' -f 1 | tr -d ' '\\\" \"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": 
\"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Check Node Status\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"nodeStatus\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"nodeName is not ''\",\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Check Node Status\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return 
str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or not hasattr(result, \\\"stderr\\\") or result.stderr:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl get nodes {nodeName.strip()} | grep \\\\\\\\\\\\\\\\\\\\\\\\\\\" Ready\\\\\\\\\\\\\\\\\\\\\\\\\\\" | cut -d' ' -f 1 | tr -d ' '\\\\\\\\\\\" \\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"nodeName is not ''\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"nodeStatus\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b9e8faf9-9ea6-4170-b7ae-76c74bfb012c\",\n   \"metadata\": {\n    \"name\": \"Step 4\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 4\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-List-of-Pods-in-Terminating-State\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Force-Delete the Pod</h3>\\n\",\n    \"<p>This action force-deletes a pod. The pod may not be terminating due to a process that is not responding to a signal. The exact reason will be context-specific and application dependent. 
Common causes include:</p>\\n\",\n    \"<ul>\\n\",\n    \"<li>\\n\",\n    \"<p>A tight loop in userspace code that does not allow for interrupt signals</p>\\n\",\n    \"</li>\\n\",\n    \"<li>\\n\",\n    \"<p>A maintenance process (e.g. garbage collection) on the application runtime</p>\\n\",\n    \"</li>\\n\",\n    \"</ul>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>namespace, terminatingPods</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"2b0381ff-38a0-4d0e-baaf-f53328af5c15\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl delete pod {terminatingPods.strip()} -n {namespace} --now\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl 
Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Force-delete the Pod\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"terminatingPods is not ''\",\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Force-delete the Pod\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: 
{handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or hasattr(result, \\\"stderr\\\") is False or result.stderr is None:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl delete pod {terminatingPods.strip()} -n {namespace} --now\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"terminatingPods is not ''\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e83f0bff-934f-4980-83b7-56ca3a4e62c3\",\n   \"metadata\": {\n    \"name\": \"Step 5\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 5\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-List-of-Pods-in-Terminating-State\\\"><a id=\\\"1\\\" target=\\\"_self\\\" rel=\\\"nofollow\\\"></a>Check status of Node</h3>\\n\",\n    \"<p>This action runs get pods command and if the specific pod no longer shows up when running&nbsp;<code>kubectl get pods, </code>then the issue has been <span style=\\\"color: rgb(45, 194, 107);\\\">resolved</span>.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:&nbsp;<code>namespace</code></p>\\n\",\n    
\"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following ouput: <code>checkResolution</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<p>&nbsp;</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"febcfe22-e455-4a69-951b-0084a36c5cf9\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl get pods -n {namespace}\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Check Resolution\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    
},\n    \"outputParams\": {\n     \"output_name\": \"checkResolution\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Check Resolution\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or hasattr(result, \\\"stderr\\\") is False or result.stderr is None:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return 
result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl get pods -n {namespace}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"checkResolution\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"2a340553-4bd7-42ef-ba15-2e3125b471f9\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Further Steps\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Further Steps\"\n   },\n   \"source\": [\n    \"***\\n\",\n    \"If the POD still stuck in `Terminating` state then you can consider.\\n\",\n    \"\\n\",\n    \"    1. Restarting kubelet\\n\",\n    \"        \\n\",\n    \"        If you can SSH to the node and restart the kublet process. You may need\\n\",\n    \"        administrator priveleges to do so. Before you do that, you may also want\\n\",\n    \"        to check the kubelet logs for any issues.\\n\",\n    \"        \\n\",\n    \"    2. Check Whether finalizer's work needs to get done before termination\\n\",\n    \"    \\n\",\n    \"        This will vary depending on what the finalizer is doing. Please refer to \\n\",\n    \"        [Finalizers](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#finalizers). 
Common cases of finalizers not completing relate to\\n\",\n    \"        Volumes.\\n\",\n    \"        \\n\",\n    \"***\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f07e9fa6-01da-45f3-b195-97e9f89c9465\",\n   \"metadata\": {\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Conclusion\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>In this Runbook, we identified pods stuck in the Terminating state, removed the finalizer (if present), and attempted a force deletion of the pod, using unSkript's K8s actions. To view the full platform capabilities of unSkript, please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"k8s: Pod Stuck in Terminating State\",\n   \"parameters\": [\n    \"namespace\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.9.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"namespace\": {\n     \"description\": \"K8S Namespace\",\n     \"title\": \"namespace\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [\n    \"namespace\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "Kubernetes/K8S_Pod_Stuck_In_Terminating_State.json",
    "content": "{\n  \"name\": \"k8s: Pod Stuck in Terminating State\",\n  \"description\": \"This runbook checks whether any Pods are in Terminating state in a given k8s namespace. If it finds any, it tries to recover them by resetting the finalizers of the pod.\",  \n  \"uuid\": \"7108717393788c2d76687490938faffe5e6e2a46f05405f180e089a166761173\",\n  \"icon\": \"CONNECTOR_TYPE_K8S\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n  \"version\": \"1.0.0\"\n}"
  },
  {
    "path": "Kubernetes/README.md",
    "content": "# Kubernetes RunBooks\n* [k8s: Delete Evicted Pods From All Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Delete_Evicted_Pods_From_Namespaces.ipynb): This runbook shows and deletes the evicted pods for a given namespace. If the user provides the namespace input, then it only collects pods for the given namespace; otherwise, it will select all pods from all the namespaces.\n* [k8s: Get kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Get_Kube_System_Config_Map.ipynb): This runbook fetches the kube system config map for a k8s cluster and publishes the information on a Slack channel.\n* [IP Exhaustion Mitigation: Failing K8s Pod Deletion from Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Delete_Pods_From_Failing_Jobs.ipynb): Preventing IP exhaustion is critical in Kubernetes environments, and a key strategy is deleting failing pods from jobs. Failing pods can consume valuable IP resources, leading to scarcity and inefficiency. By proactively identifying and removing malfunctioning pods, administrators can promptly free up IP addresses, optimizing resource utilization. This approach ensures that IP allocation remains efficient, enabling the cluster to accommodate new pods without experiencing IP exhaustion. 
This runbook helps identify failing pods within jobs, thereby maximizing IP availability for other pods and services.\n* [k8s: Get candidate nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Get_Candidate_Nodes_Given_Config.ipynb): This runbook gets the matching nodes for a given configuration (storage, cpu, memory, pod_limit) from a k8s cluster.\n* [Kubernetes Log Healthcheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Log_Healthcheck.ipynb): This RunBook checks the logs of every pod in a namespace for warning messages.\n* [k8s: Pod Stuck in CrashLoopBackoff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_CrashLoopBack_State.ipynb): This runbook checks if any Pod(s) are in CrashLoopBackoff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* [k8s: Pod Stuck in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State.ipynb): This runbook checks if any Pod(s) are in ImagePullBackOff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* [k8s: Pod Stuck in ImagePullBackOff State using genAI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State_with_genai.ipynb): This runbook checks if any Pod(s) are in ImagePullBackOff state in a given k8s namespace, using genAI. If it finds any, it tries to find out why the Pod(s) are in that state.\n* [k8s: Pod Stuck in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_Terminating_State.ipynb): This runbook checks whether any Pods are in Terminating state in a given k8s namespace. 
If it finds any, it tries to recover them by resetting the finalizers of the pod.\n* [k8s: Resize List of PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_List_of_PVCs.ipynb): This runbook resizes a list of Kubernetes PVCs.\n* [k8s: Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_PVC.ipynb): This runbook resizes a Kubernetes PVC.\n* [Rollback Kubernetes Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Rollback_k8s_Deployment_and_Update_Jira.ipynb): This runbook can be used to roll back a Kubernetes Deployment.\n\n# Kubernetes Actions\n* [Add Node in a Kubernetes Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_add_node_to_cluster/README.md): Add Node in a Kubernetes Cluster\n* [Change size of Kubernetes PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_change_pvc_size/README.md): Change size of Kubernetes PVC\n* [Check K8s services endpoint health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_service_status/README.md): This action checks the health status of the provided Kubernetes services.\n* [Check K8s worker CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_worker_cpu_utilization/README.md): Retrieves the CPU utilization for all worker nodes in the cluster and compares it to a given threshold.\n* [Delete a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_delete_pod/README.md): Delete a Kubernetes POD in a given Namespace\n* [Describe Kubernetes Node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_node/README.md): Describe a Kubernetes Node\n* [Describe a Kubernetes POD in a given 
Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_pod/README.md): Describe a Kubernetes POD in a given Namespace\n* [Execute a command on a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pod/README.md): Execute a command on a Kubernetes POD in a given Namespace\n* [Kubernetes Execute a command on a POD in a given namespace and filter](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pods_and_filter/README.md): Execute a command on a Kubernetes POD in a given namespace and filter output\n* [Execute local script on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_execute_local_script_on_a_pod/README.md): Execute local script on a pod in a namespace\n* [Gather Data for POD Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/README.md): Gather Data for POD Troubleshoot\n* [Gather Data for K8S Service Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_service_troubleshoot/README.md): Gather Data for K8S Service Troubleshoot\n* [Get All Evicted PODS From Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/README.md): This action gets all evicted PODS from a given namespace. 
If no namespace is given, it will get all the pods from all namespaces.\n* [Get All Kubernetes PODS with state in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_pods/README.md): Get All Kubernetes PODS with state in a given Namespace\n* [Get K8s pods status and resource utilization info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_resources_utilization_info/README.md): This action gets the pod status and resource utilization of various Kubernetes resources like jobs, services, persistent volumes.\n* [Get candidate k8s nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_candidate_nodes_for_pods/README.md): Get candidate k8s nodes for given configuration\n* [Get K8S Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_cluster_health/README.md): Get K8S Cluster Health\n* [Get k8s kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_config_map_kube_system/README.md): Get k8s kube system config map\n* [Get Kubernetes Deployment For a Pod in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment/README.md): Get Kubernetes Deployment for a POD in a Namespace\n* [Get Deployment Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment_status/README.md): This action searches for failed deployment status and returns a list.\n* [Get Kubernetes Error PODs from All Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_error_pods_from_all_jobs/README.md): Get Kubernetes Error PODs from All Jobs\n* [Get expiring K8s 
certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_expiring_certificates/README.md): Get the expiring certificates for a K8s cluster.\n* [Get Kubernetes Failed Deployments](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_failed_deployments/README.md): Get Kubernetes Failed Deployments\n* [Get frequently restarting K8s pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_frequently_restarting_pods/README.md): Get Kubernetes pods from all namespaces that are restarting too often.\n* [Get Kubernetes Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_handle/README.md): Get Kubernetes Handle\n* [Get All Kubernetes Healthy PODS in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_healthy_pods/README.md): Get All Kubernetes Healthy PODS in a given Namespace\n* [Get memory utilization for K8s services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_memory_utilization_of_services/README.md): This action executes the given kubectl commands to find the memory utilization of the specified services in a particular namespace and compares it with a given threshold.\n* [Get K8s node status and CPU utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_node_status_and_resource_utilization/README.md): This action gathers Kubernetes node status and resource utilization information.\n* [Get Kubernetes Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes/README.md): Get Kubernetes Nodes\n* [Get K8s nodes disk and memory pressure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_pressure/README.md): This action fetches the memory and 
disk pressure status of each node in the cluster\n* [Get Kubernetes Nodes that have insufficient resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_with_insufficient_resources/README.md): Get Kubernetes Nodes that have insufficient resources\n* [Get K8s offline nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_offline_nodes/README.md): This action checks if any node in the Kubernetes cluster is offline.\n* [Get K8S OOMKilled Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_oomkilled_pods/README.md): Get K8S Pods which are OOMKilled from the container last states.\n* [Get K8s get pending pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pending_pods/README.md): This action checks if any pod in the Kubernetes cluster is in 'Pending' status.\n* [Get Kubernetes POD Configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_config/README.md): Get Kubernetes POD Configuration\n* [Get Kubernetes Logs for a given POD in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs/README.md): Get Kubernetes Logs for a given POD in a Namespace\n* [Get Kubernetes Logs for a list of PODs & Filter in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs_and_filter/README.md): Get Kubernetes Logs for a list of PODs and Filter in a Namespace\n* [Get Kubernetes Status for a POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_status/README.md): Get Kubernetes Status for a POD in a given Namespace\n* [Get pods attached to Kubernetes 
PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_attached_to_pvc/README.md): Get pods attached to Kubernetes PVC\n* [Get all K8s Pods in CrashLoopBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/README.md): Get all K8s pods in CrashLoopBackOff State\n* [Get all K8s Pods in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/README.md): Get all K8s pods in ImagePullBackOff State\n* [Get Kubernetes PODs in not Running State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_not_running_state/README.md): Get Kubernetes PODs in not Running State\n* [Get all K8s Pods in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_terminating_state/README.md): Get all K8s pods in Terminating State\n* [Get Kubernetes PODS with high restart](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_with_high_restart/README.md): Get Kubernetes PODS with high restart\n* [Get K8S Service with no associated endpoints](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_service_with_no_associated_endpoints/README.md): Get K8S Service with no associated endpoints\n* [Get Kubernetes Services for a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_services/README.md): Get Kubernetes Services for a given Namespace\n* [Get Kubernetes Unbound PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_unbound_pvcs/README.md): Get Kubernetes Unbound PVCs\n* [Kubectl 
command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_command/README.md): Execute kubectl command.\n* [Kubectl set context entry in kubeconfig](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_set_context/README.md): Kubectl set context entry in kubeconfig\n* [Kubectl display merged kubeconfig settings](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_view/README.md): Kubectl display merged kubeconfig settings\n* [Kubectl delete a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_delete_pod/README.md): Kubectl delete a pod\n* [Kubectl describe a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_node/README.md): Kubectl describe a node\n* [Kubectl describe a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_pod/README.md): Kubectl describe a pod\n* [Kubectl drain a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_drain_node/README.md): Kubectl drain a node\n* [Execute command on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_exec_command/README.md): Execute command on a pod\n* [Kubectl get api resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_api_resources/README.md): Kubectl get api resources\n* [Kubectl get logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_logs/README.md): Kubectl get logs for a given pod\n* [Kubectl get services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_service_namespace/README.md): Kubectl get services in a given namespace\n* 
[Kubectl list pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_list_pods/README.md): Kubectl list pods in given namespace\n* [Kubectl update field](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_patch_pod/README.md): Kubectl update field of a resource using strategic merge patch\n* [Kubectl rollout deployment history](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_rollout_deployment/README.md): Kubectl rollout deployment history\n* [Kubectl scale deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_scale_deployment/README.md): Kubectl scale a given deployment\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_node/README.md): Kubectl show metrics for a given node\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_pod/README.md): Kubectl show metrics for a given pod\n* [List matching name pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_all_matching_pods/README.md): List all pods matching a particular name string. The matching string can be a regular expression too\n* [List pvcs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_pvcs/README.md): List pvcs by namespace. 
By default, it will list all pvcs in all namespaces.\n* [Remove POD from Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_remove_pod_from_deployment/README.md): Remove POD from Deployment\n* [Update Commands in a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_update_command_in_pod_spec/README.md): Update Commands in a Kubernetes POD in a given Namespace\n"
  },
  {
    "path": "Kubernetes/Resize_List_of_PVCs.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"16182e50-b995-4f61-a140-30c3f4902837\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<hr><center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks&para;\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective&para;\\\">Objective</h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Resize List of PVC</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Resize-PVC&para;\\\"><u>Resize List of PVC</u></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview&para;\\\">Steps Overview</h1>\\n\",\n    \"<p><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">1)&nbsp;</a><a href=\\\"#2\\\">List PVCs</a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"><br>2)&nbsp;</a><a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Call Resize PVC Runbook</a></p>\\n\",\n    \"<hr>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"16f384c9-d833-4613-a946-732e6b31f727\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-storage-class-for-PVC\\\">List PVC</h3>\\n\",\n    \"<p>This action fetches a list of PVC</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters:<code> Namespace(optional)</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action 
captures the following output: <code>pvcsList</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"c6d4498e-8d97-4790-87ff-090a7846ccd6\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"0c96676c124796bc48e751c641ea0ccc722e7d29f1ffe665fe756a7106d756c5\",\n    \"checkEnabled\": false,\n    \"collapsed\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"List pvcs by namespace. By default, it will list all pvcs in all namespaces.\",\n    \"id\": 48,\n    \"index\": 48,\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"namespace\": {\n        \"default\": \"\",\n        \"description\": \"Kubernetes namespace\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       }\n      },\n      \"title\": \"k8s_list_pvcs\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"List pvcs\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"namespace\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"pvcsList\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_list_pvcs\"\n    ],\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 
2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"import pprint\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, List, Tuple\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_list_pvcs_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_list_pvcs(handle, namespace: str = '') -> List:\\n\",\n    \"    \\\"\\\"\\\"k8s_list_pvcs list pvcs\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type namespace: str\\n\",\n    \"        :param namespace: Kubernetes namespace.\\n\",\n    \"\\n\",\n    \"        :rtype: List\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if namespace == '':\\n\",\n    \"        kubectl_command = 'kubectl get pvc -A --output=jsonpath=\\\\'{range .items[*]}{@.metadata.namespace}{\\\",\\\"}{@.metadata.name}{\\\"\\\\\\\\n\\\"}{end}\\\\''\\n\",\n    \"    else:\\n\",\n    \"        kubectl_command = 'kubectl get pvc -n ' + namespace + ' --output=jsonpath=\\\\'{range .items[*]}{@.metadata.namespace}{\\\",\\\"}{@.metadata.name}{\\\"\\\\\\\\n\\\"}{end}\\\\''\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or hasattr(result, \\\"stderr\\\") is False or result.stderr is None:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return []\\n\",\n    \"    names_list = [y for y in (x.strip() for x in result.stdout.splitlines()) if y]\\n\",\n    \"    output = []\\n\",\n    \"    for i in names_list:\\n\",\n    \"        ns, name = i.split(\\\",\\\")\\n\",\n    \"        output.append({\\\"Namespace\\\": ns, 
\\\"Name\\\":name})\\n\",\n    \"    return output\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(outputName=\\\"pvcsList\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_list_pvcs, lego_printer=k8s_list_pvcs_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e344354c-6a1b-4622-b83a-3f8cefb5791d\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1A\"\n   },\n   \"source\": [\n    \"<p>Convert <code>Value</code> to float</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 20,\n   \"id\": \"b7868ed9-03d6-4b39-a840-0db2bba2b7a7\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Convert Value to float\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Convert Value to float\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"Value = float(Value)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5d115afd-91ea-46f4-a1a6-c3bf6bad3ac1\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-storage-class-details\\\">Call Resize PVC runbook</h3>\\n\",\n    \"<p>This custom action makes an API call to the resize PVC runbook with the list of PVCs obtained from Step 1.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 21,\n   \"id\": 
\"46616499-6e96-462c-b9fc-b16b2538d6b2\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"name\": \"Call Resize PVC runbook\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Call Resize PVC runbook\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from unskript.connectors.infra import InfraConnector\\n\",\n    \"from typing import Optional\\n\",\n    \"import requests\\n\",\n    \"from polling2 import poll_decorator\\n\",\n    \"import html_to_json\\n\",\n    \"import uuid \\n\",\n    \"\\n\",\n    \"class Schema(BaseModel):\\n\",\n    \"    Namespace: Optional[str] = Field(\\n\",\n    \"        None, description='Namespace of the PVC', title='Namespace'\\n\",\n    \"    )\\n\",\n    \"    PVCName: Optional[str] = Field(None, description='Name of the PVC', title='PVCName')\\n\",\n    \"    ResizeOption: Optional[str] = Field(\\n\",\n    \"        'Add',\\n\",\n    \"        description='Option to resize the volume. 2 options supported:             1. Add - Use this option to resize by an amount.             2. 
Multiple - Use this option if you want to resize by a multiple of the current volume size.',\\n\",\n    \"        title='ResizeOption',\\n\",\n    \"    )\\n\",\n    \"    RestartPodsAfterResize: Optional[bool] = Field(\\n\",\n    \"        False,\\n\",\n    \"        description='Restart the pods after PVC resize',\\n\",\n    \"        title='RestartPodsAfterResize',\\n\",\n    \"    )\\n\",\n    \"    Channel: Optional[str] = Field(\\n\",\n    \"        None,\\n\",\n    \"        description='Slack Channel name where notification will be sent.',\\n\",\n    \"        title='SlackChannelName',\\n\",\n    \"    )\\n\",\n    \"    Value: Optional[float] = Field(\\n\",\n    \"        None,\\n\",\n    \"        description='Based on the resize option chosen, specify the value. For eg, if you chose Add option, this             value will be a value in Gb (like 100). If you chose the Multiple option, this value will be a multiplying factor             to the current volume size. For eg, to double, specify value as 2.',\\n\",\n    \"        title='Value',\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"@poll_decorator(step=10, timeout=60, check_success=lambda x: x is True)\\n\",\n    \"def checkExecutionStatus(handle, tenantID, executionID) -> bool:\\n\",\n    \"    print(f'Checking execution status')\\n\",\n    \"    url = f'{env[\\\"TENANT_URL\\\"]}/executions/{executionID}'\\n\",\n    \"    try:\\n\",\n    \"        resp = handle.request('GET', url, params={'tenant_id': tenantID, \\\"summary\\\": True})\\n\",\n    \"        resp.raise_for_status()\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(f'Get execution {executionID} failed, {e}')\\n\",\n    \"        return False\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        result = resp.json()\\n\",\n    \"    except Exception:\\n\",\n    \"        result = html_to_json.convert(resp.content)\\n\",\n    \"    if result['execution']['executionStatus'] == \\\"EXECUTION_STATUS_SUCCEEDED\\\" or 
result['execution']['executionStatus'] == \\\"EXECUTION_STATUS_FAILED\\\":\\n\",\n    \"        return True\\n\",\n    \"    else:\\n\",\n    \"        return False\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def call_pvc_resize_runbook(handle: InfraConnector, Namespace: str, PVCName: str, ResizeOption: str, RestartPodsAfterResize:bool, Value: float, Channel: str = None):\\n\",\n    \"    workflowIDToBeCalled = RunbookID\\n\",\n    \"    apiToken = APIToken\\n\",\n    \"    tenantID = env['TENANT_ID']\\n\",\n    \"    environmentID = env['ENVIRONMENT_ID']\\n\",\n    \"    userID = \\\"Bot-user\\\"\\n\",\n    \"\\n\",\n    \"    params = Schema()\\n\",\n    \"    params.Namespace = Namespace\\n\",\n    \"    params.PVCName = PVCName\\n\",\n    \"    params.Value = Value\\n\",\n    \"    params.ResizeOption = ResizeOption\\n\",\n    \"    params.Channel = Channel\\n\",\n    \"    payload = {\\n\",\n    \"        \\\"req_hdr\\\": {\\n\",\n    \"            \\\"tid\\\": str(uuid.uuid4())\\n\",\n    \"        },\\n\",\n    \"        \\\"tenant_id\\\": tenantID,\\n\",\n    \"        \\\"environment_id\\\": environmentID,\\n\",\n    \"        \\\"user_id\\\": userID,\\n\",\n    \"        \\\"params\\\": params.json()\\n\",\n    \"    }\\n\",\n    \"    handle = requests.Session()\\n\",\n    \"    authHeader = f'unskript-sha {apiToken}'\\n\",\n    \"    handle.headers.update({'Authorization': authHeader})\\n\",\n    \"    url = f'{env[\\\"TENANT_URL\\\"]}/workflows/{workflowIDToBeCalled}/run'\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        resp = handle.request('POST', url, json=payload)\\n\",\n    \"        resp.raise_for_status()\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(f'Workflow run failed, {e}')\\n\",\n    \"        raise e\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        result = resp.json()\\n\",\n    \"    except Exception:\\n\",\n    \"        result = 
html_to_json.convert(resp.content)\\n\",\n    \"\\n\",\n    \"    executionID = result['executionId']\\n\",\n    \"    print(f'ExecutionID {executionID}')\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        checkExecutionStatus(handle, tenantID, executionID)\\n\",\n    \"    except Exception as e:\\n\",\n    \"        handle.close()\\n\",\n    \"        print(f'Check execution status for {executionID} failed, {e}')\\n\",\n    \"        raise e\\n\",\n    \"\\n\",\n    \"    handle.close()\\n\",\n    \"    return\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"Namespace\\\": {\\n\",\n    \"        \\\"constant\\\": false,\\n\",\n    \"        \\\"value\\\": \\\"iter.get(\\\\\\\\\\\\\\\"Namespace\\\\\\\\\\\\\\\")\\\"\\n\",\n    \"    },\\n\",\n    \"    \\\"PVCName\\\": {\\n\",\n    \"        \\\"constant\\\": false,\\n\",\n    \"        \\\"value\\\": \\\"iter.get(\\\\\\\\\\\\\\\"Name\\\\\\\\\\\\\\\")\\\"\\n\",\n    \"    },\\n\",\n    \"    \\\"ResizeOption\\\": {\\n\",\n    \"        \\\"constant\\\": false,\\n\",\n    \"        \\\"value\\\": \\\"ResizeOption\\\"\\n\",\n    \"    },\\n\",\n    \"    \\\"RestartPodsAfterResize\\\": {\\n\",\n    \"        \\\"constant\\\": true,\\n\",\n    \"        \\\"value\\\": false\\n\",\n    \"    },\\n\",\n    \"    \\\"Channel\\\": {\\n\",\n    \"        \\\"constant\\\": false,\\n\",\n    \"        \\\"value\\\": \\\"Channel\\\"\\n\",\n    \"    },\\n\",\n    \"    \\\"Value\\\": {\\n\",\n    \"        \\\"constant\\\": false,\\n\",\n    \"        \\\"value\\\": \\\"Value\\\"\\n\",\n    \"    }\\n\",\n    \"}''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"pvcsList\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\n\",\n    \"        \\\"Namespace\\\",\\n\",\n    \"        \\\"PVCName\\\"\\n\",\n    \"    ]\\n\",\n 
   \"}''')\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars(), infra=True)\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(call_pvc_resize_runbook, hdl, args)\\n\",\n    \"if hasattr(task, 'output'):\\n\",\n    \"    if isinstance(task.output, (list, tuple)):\\n\",\n    \"        for item in task.output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(task.output, dict):\\n\",\n    \"        for item in task.output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(task.output)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"fafb82e0-b73e-487b-8cb7-a987b59b5902\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to successfully resize a list of PVCs using unSkript's K8s actions and making an API call to the resize PVC runbook. 
To view the full platform capabilities of unSkript please visit&nbsp;<a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"k8s: Resize List of PVCs\",\n   \"parameters\": [\n    \"ResizeOption\",\n    \"RunbookID\",\n    \"Value\",\n    \"APIToken\",\n    \"Channel\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"APIToken\": {\n     \"description\": \"APIToken to talk to unskript apis\",\n     \"title\": \"APIToken\",\n     \"type\": \"string\"\n    },\n    \"Channel\": {\n     \"description\": \"Slack Channel name where notification will be sent\",\n     \"title\": \"Channel\",\n     \"type\": \"string\"\n    },\n    \"ResizeOption\": {\n     \"default\": \"Add\",\n     \"description\": \"Option to resize the volume. 2 options supported:             1. Add - Use this option to resize by an amount.             2. Multiple - Use this option if you want to resize by a multiple of the current volume size.\",\n     \"title\": \"ResizeOption\",\n     \"type\": \"string\"\n    },\n    \"RunbookID\": {\n     \"default\": \"b8385df9545bdb5695af879d7d089571fed148d996cf4b7e9e7848502e2cc029\",\n     \"description\": \"UUID of the PVC Resize runbook\",\n     \"title\": \"RunbookID\",\n     \"type\": \"string\"\n    },\n    \"Value\": {\n     \"description\": \"Based on the resize option chosen, specify the value. For eg, if you chose Add option, this             value will be a value in Gb (like 100). If you chose the Multiple option, this value will be a multiplying factor             to the current volume size. 
For eg, to double, specify value as 2.\",\n     \"title\": \"Value\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [\n    \"APIToken\",\n    \"Value\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null,\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "Kubernetes/Resize_List_of_PVCs.json",
    "content": "{\n    \"name\": \"k8s: Resize List of PVCs\",\n    \"description\": \"This runbook resizes a list of Kubernetes PVCs.\",  \n    \"uuid\": \"40df55f0b809c1f77b7c5c5c106fc534f58b7eb93ac92993723e9798631e7359\",\n    \"icon\": \"CONNECTOR_TYPE_K8S\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "Kubernetes/Resize_PVC.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9ac20689-a687-4867-a035-676d8b5c46bf\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Steps Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Steps Overview\"\n   },\n   \"source\": [\n    \"<hr><center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\">Objective<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<br><strong style=\\\"color: #000000;\\\"><em>Resize PVC volume&nbsp;</em></strong></div>\\n\",\n    \"</center>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<center>\\n\",\n    \"<h2 id=\\\"Resize-PVC\\\"><u>Resize PVC</u><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Resize-PVC\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<p>1)&nbsp;<a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Get Storage class of PVC</a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"><br>2)&nbsp;</a><a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Get Storage Class details</a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"><br></a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">3) </a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Change size of PVC</a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"><br></a><a 
href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">4)&nbsp;</a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Restart the pod</a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"><br></a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">5)&nbsp;</a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Execute a command on a Kubernetes POD</a><a href=\\\"#1\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"><br></a><a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">6)&nbsp;</a><a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\">Post Slack message</a><a href=\\\"#2\\\" target=\\\"_self\\\" rel=\\\"noopener\\\"><br></a></p>\\n\",\n    \"<p>&nbsp;</p>\\n\",\n    \"<hr>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b06a24a5-a36f-40b4-8a5d-59392e1fcb8b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-AWS-Regions\\\">Get storage class for PVC</h3>\\n\",\n    \"<p>This action fetches the storage class for the PVC to execute Step 2\\ud83d\\udc47</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>PVCName, Namespace</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>storageClass</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"3b6aa5ee-f63b-4018-9467-068572ddef93\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    
\"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl get pvc {PVCName} -n {Namespace} --output=jsonpath={{.spec.storageClassName}}\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get Storage Class for the PVC\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"storageClass\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Get Storage Class for the PVC\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    
\"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or hasattr(result, \\\"stderr\\\") is False or result.stderr is None:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl get pvc {PVCName} -n {Namespace} --output=jsonpath={{.spec.storageClassName}}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"storageClass\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, 
lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"67898de3-7c41-4e9c-abb2-2c8086b19ad9\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-AWS-Regions\\\">Get storage class details</h3>\\n\",\n    \"<p>This action fetches the storage class details for the PVC</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>storageClass</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>allowVolumeExpansion</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"6e01bd2b-8a37-4579-bb64-273e921b3712\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl get sc {storageClass} --output=jsonpath={{.allowVolumeExpansion}}\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      
\"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get Storage class details\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"allowVolumeExpansion\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Get Storage class details\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        
:rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or hasattr(result, \\\"stderr\\\") is False or result.stderr is None:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl get sc {storageClass} --output=jsonpath={{.allowVolumeExpansion}}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"allowVolumeExpansion\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"184efdc8-9c65-4e52-a49b-c9528dff6f94\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 2A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 2A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-AWS-Regions\\\">Check if storage class has allowVolumeExpansion enabled</h3>\\n\",\n    \"<p>This action verifies that allowVolumeExpansion is enabled for the storage class for the PVC. 
It asserts if it is not enabled.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>None</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"17593357-01c4-4219-be83-06bafebbb0e6\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-07-06T07:54:16.872Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Check if storage class has allowVolumeExpansion enabled\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"if allowVolumeExpansion == \\\"\\\" or allowVolumeExpansion is False:\\n\",\n    \"    print(f'allowVolumeExpansion disabled for storage class {storageClass}, exiting')\\n\",\n    \"    assert False, f'allowVolumeExpansion disabled for storage class {storageClass}'\\n\",\n    \"else:\\n\",\n    \"    print(f'allowVolumeExpansion enabled for storage class {storageClass}')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a15d0df5-5123-48a8-9785-ba47c786961c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Change-size-of-Kuberneted-PVC\\\">Change size of PVC<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Change-size-of-Kuberneted-PVC\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This action increases the PVC volume by the provided Value, depending upon the ResizeOption chosen.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following 
parameters: <code>namespace, PVCName, value</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"d68afc44-f6b0-4e23-8a36-26f2f48bc04b\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"c82954c96e797711613cd6b0ef8c6ab45a6af26f191115df128396bb056450d2\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Change size of Kubernetes PVC\",\n    \"id\": 32,\n    \"index\": 32,\n    \"inputData\": [\n     {\n      \"name\": {\n       \"constant\": false,\n       \"value\": \"PVCName\"\n      },\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \"Namespace\"\n      },\n      \"resize_option\": {\n       \"constant\": true,\n       \"value\": \"Add\"\n      },\n      \"resize_value\": {\n       \"constant\": false,\n       \"value\": \"Value\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"definitions\": {\n       \"SizingOption\": {\n        \"description\": \"An enumeration.\",\n        \"enum\": [\n         \"Add\",\n         \"Multiple\"\n        ],\n        \"title\": \"SizingOption\",\n        \"type\": \"string\"\n       }\n      },\n      \"properties\": {\n       \"name\": {\n        \"description\": \"Name of the PVC.\",\n        \"title\": \"PVC Name\",\n        \"type\": 
\"string\"\n       },\n       \"namespace\": {\n        \"description\": \"Namespace of the PVC.\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       },\n       \"resize_option\": {\n        \"allOf\": [\n         {\n          \"$ref\": \"#/definitions/SizingOption\"\n         }\n        ],\n        \"default\": \"\\\"Add\\\"\",\n        \"description\": \"\\n            Option to resize the volume. 2 options supported:\\n            1. Add - Use this option to resize by an amount.\\n            2. Multiple - Use this option if you want to resize by a multiple of the current volume size.\\n        \",\n        \"title\": \"Resize option\"\n       },\n       \"resize_value\": {\n        \"description\": \"\\n            Based on the resize option chosen, specify the value. For eg, if you chose Add option, this\\n            value will be a value in Gi (like 100). If you chose Multiple option, this value will be a multiplying factor\\n            to the current volume size. 
So, if you want to double, you specify 2 here.\\n        \",\n        \"title\": \"Value\",\n        \"type\": \"number\"\n       }\n      },\n      \"required\": [\n       \"namespace\",\n       \"name\",\n       \"resize_value\"\n      ],\n      \"title\": \"k8s_change_pvc_size\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Change size of Kubernetes PVC\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"namespace\",\n     \"name\",\n     \"resize_option\",\n     \"resize_value\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_change_pvc_size\"\n    ],\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional\\n\",\n    \"from unskript.enums.aws_k8s_enums import SizingOption\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_change_pvc_size_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_change_pvc_size(handle, namespace: str, name: str, resize_option: SizingOption, resize_value: float) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_change_pvc_size change pvc size\\n\",\n    \"\\n\",\n    \"        :type name: str\\n\",\n    \"        :param name: Name of the PVC.\\n\",\n    \"\\n\",\n    \"        :type resize_option: SizingOption\\n\",\n    \"        :param resize_option: Option to resize the volume.\\n\",\n    \"\\n\",\n    \"        :type resize_value: float\\n\",\n  
  \"        :param resize_value: Based on the resize option chosen, specify the value.\\n\",\n    \"\\n\",\n    \"        :type namespace: str\\n\",\n    \"        :param namespace: Namespace of the PVC.\\n\",\n    \"\\n\",\n    \"        :rtype: string\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    # Get the current size.\\n\",\n    \"    kubectl_command = f'kubectl get pvc {name} -n {namespace}  -o jsonpath={{.status.capacity.storage}}'\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result.stderr:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return str(f\\\"Error Changing PVC Size {kubectl_command}: {result.stderr}\\\")\\n\",\n    \"\\n\",\n    \"    currentSize = result.stdout\\n\",\n    \"    currentSizeInt = int(currentSize.rstrip(\\\"Gi\\\"))\\n\",\n    \"    if resize_option == SizingOption.Add:\\n\",\n    \"        newSizeInt = currentSizeInt + resize_value\\n\",\n    \"    else:\\n\",\n    \"        newSizeInt = currentSizeInt * resize_value\\n\",\n    \"    newSize = str(newSizeInt) + \\\"Gi\\\"\\n\",\n    \"    print(f'Current size {currentSize}, new Size {newSize}')\\n\",\n    \"    kubectl_command = f'kubectl patch pvc {name} -n {namespace} -p \\\\'{{\\\"spec\\\":{{\\\"resources\\\":{{\\\"requests\\\": {{\\\"storage\\\": \\\"{newSize}\\\"}}}}}}}}\\\\''\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result.stderr:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return str(f\\\"Error Changing PVC Size {kubectl_command}: {result.stderr}\\\")\\n\",\n    \"    print(f'PVC {name} size changed to {newSize} successfully')\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n   
 \"    \\\"name\\\": \\\"PVCName\\\",\\n\",\n    \"    \\\"namespace\\\": \\\"Namespace\\\",\\n\",\n    \"    \\\"resize_option\\\": \\\"SizingOption.Add\\\",\\n\",\n    \"    \\\"resize_value\\\": \\\"float(Value)\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_change_pvc_size, lego_printer=k8s_change_pvc_size_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e9ad51be-bee4-4b23-800d-ae5d17af5455\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 4A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 4A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-pods-attahced-to-Kuberneted-PVC\\\">Get pods attahced to PVC<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Get-pods-attahced-to-Kuberneted-PVC\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>This action gets the pods attached to a Kuberneted PVC</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>namespace, PVCName</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following ouput: <code>podName</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"7782fc86-9231-4e8e-bdc7-cf133b7b8967\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": 
\"eedf20eddc44193edbda5e7df1810485ae415f496aebb77edbd995f7901602ee\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Get pods attached to Kubernetes PVC\",\n    \"id\": 62,\n    \"index\": 62,\n    \"inputData\": [\n     {\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \"Namespace\"\n      },\n      \"pvc\": {\n       \"constant\": false,\n       \"value\": \"PVCName\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"namespace\": {\n        \"description\": \"Namespace of the PVC.\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       },\n       \"pvc\": {\n        \"description\": \"Name of the PVC.\",\n        \"title\": \"PVC Name\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"namespace\",\n       \"pvc\"\n      ],\n      \"title\": \"k8s_get_pods_attached_to_pvc\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get pods attached to Kubernetes PVC\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"namespace\",\n     \"pvc\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"podName\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_get_pods_attached_to_pvc\"\n    ],\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def 
k8s_get_pods_attached_to_pvc_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_get_pods_attached_to_pvc(handle, namespace: str, pvc: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_get_pods_attached_to_pvc get pods attached to pvc\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type namespace: str\\n\",\n    \"        :param namespace: Namespace of the PVC.\\n\",\n    \"\\n\",\n    \"        :type pvc: str\\n\",\n    \"        :param pvc: Name of the PVC.\\n\",\n    \"\\n\",\n    \"        :rtype: string\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    kubectl_command = f\\\"kubectl describe pvc {pvc} -n {namespace} | awk \\\\'/Used By/ {{print $3}}\\\\'\\\"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result.stderr:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return str()\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"namespace\\\": \\\"Namespace\\\",\\n\",\n    \"    \\\"pvc\\\": \\\"PVCName\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"podName\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_get_pods_attached_to_pvc, lego_printer=k8s_get_pods_attached_to_pvc_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c1171c52-a5dc-4a71-bc70-6203b0a194c3\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": 
false\n    },\n    \"name\": \"Step 4B\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 4B\"\n   },\n   \"source\": [\n    \"<p>This action simply extracts the pod name attached to a Kubernetes PVC.</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"e066a114-5445-45b0-a64d-e1c32c5bd37b\",\n   \"metadata\": {\n    \"actionNeedsCredential\": false,\n    \"actionSupportsIteration\": false,\n    \"actionSupportsPoll\": false,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-07-06T08:20:59.242Z\"\n    },\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Podname\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"podName = podName.strip()\\n\",\n    \"print(f'Pod {podName} attached to PVC {PVCName}')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e8f65779-7310-41d6-bfd8-8d59f8fdcba6\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 4\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 4\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-AWS-Regions\\\">Restart the pod</h3>\\n\",\n    \"<p>This action restarts the pod. 
If RestartPodsAfterResize is enabled, it restarts the pods attached to the PVC.</p>\\n\",\n    \"<p>NOTE: This is not required if the Kubernetes cluster has the <span style=\\\"color: rgb(53, 152, 219);\\\">ExpandInUsePersistentVolumes</span> feature gate enabled.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>namespace, podName</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"66c3f5ee-f13e-4a62-bf5e-87b1eea0e262\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl delete pod {podName} -n {Namespace}\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       
\"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Restart the pod\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"RestartPodsAfterResize==True\",\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Restart the pod\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = 
handle.run_native_cmd(kubectl_command)\\n\",\n    \"    # Treat a missing result or any stderr content as a failure.\\n\",\n    \"    if result is None or (hasattr(result, \\\"stderr\\\") and result.stderr):\\n\",\n    \"        stderr = result.stderr if result is not None else \\\"no result\\\"\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {stderr}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl delete pod {podName} -n {Namespace}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"RestartPodsAfterResize==True\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"64daaeb5-b33a-43d5-8052-2a103cd0cd04\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 5\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 5\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-AWS-Regions\\\">Execute a command on a Kubernetes POD</h3>\\n\",\n    \"<p>This action verifies the resize by running 'df -kh' on the pod attached to the PVC.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>namespace, podName</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following output: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": 
\"79c4d6b1-aa30-4865-b3da-1ee6b0a65105\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"8383fbb16190afe9c1936fcceab4f438e45e24f288491416037be1ed07e50c57\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute a command on a Kubernetes POD in a given Namespace\",\n    \"id\": 43,\n    \"index\": 43,\n    \"inputData\": [\n     {\n      \"command\": {\n       \"constant\": false,\n       \"value\": \"[\\\"df\\\", \\\"-kh\\\"]\"\n      },\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \"Namespace\"\n      },\n      \"podname\": {\n       \"constant\": false,\n       \"value\": \"podName\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"command\": {\n        \"description\": \"Commands to execute on the Pod. 
Eg \\\"df -k\\\"\",\n        \"title\": \"Command\",\n        \"type\": \"string\"\n       },\n       \"namespace\": {\n        \"description\": \"Kubernetes namespace.\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       },\n       \"podname\": {\n        \"description\": \"Kubernetes Pod Name\",\n        \"title\": \"Pod\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"namespace\",\n       \"podname\",\n       \"command\"\n      ],\n      \"title\": \"k8s_exec_command_on_pod\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Execute a command on a Kubernetes POD in a given Namespace\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"namespace\",\n     \"podname\",\n     \"command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_exec_command_on_pod\"\n    ],\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2021 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from posixpath import split\\n\",\n    \"from typing import List\\n\",\n    \"import pprint\\n\",\n    \"from kubernetes import client\\n\",\n    \"from kubernetes.stream import stream\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_exec_command_on_pod_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_exec_command_on_pod(handle, namespace: str, podname: str, command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_exec_command_on_pod executes the given 
command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type namespace: str\\n\",\n    \"        :param namespace: Kubernetes namespace.\\n\",\n    \"\\n\",\n    \"        :type podname: str\\n\",\n    \"        :param podname: Kubernetes Pod Name.\\n\",\n    \"\\n\",\n    \"        :type command: str\\n\",\n    \"        :param command: Commands to execute on the Pod.\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    coreApiClient = client.CoreV1Api(api_client=handle)\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        resp = stream(coreApiClient.connect_get_namespaced_pod_exec,\\n\",\n    \"                      podname,\\n\",\n    \"                      namespace,\\n\",\n    \"                      command=command.split(),\\n\",\n    \"                      stderr=True,\\n\",\n    \"                      stdin=True,\\n\",\n    \"                      stdout=True,\\n\",\n    \"                      tty=False\\n\",\n    \"                      )\\n\",\n    \"    except Exception as e:\\n\",\n    \"        resp = f'An exception occurred while executing the command: {e}'\\n\",\n    \"\\n\",\n    \"    return resp\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"command\\\": \\\"[\\\\\\\\\\\"df\\\\\\\\\\\", \\\\\\\\\\\"-kh\\\\\\\\\\\"]\\\",\\n\",\n    \"    \\\"namespace\\\": \\\"Namespace\\\",\\n\",\n    \"    \\\"podname\\\": \\\"podName\\\"\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_exec_command_on_pod, lego_printer=k8s_exec_command_on_pod_printer, 
hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"d5990ec2-daff-4289-acd3-e2bafc92c46a\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 5A\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 5A\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-AWS-Regions\\\">Run kubectl commands on local k8s cluster</h3>\\n\",\n    \"<p>This action further verifies the resize by running commands on the local k8s cluster and gets the new size.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>namespace, PVCName</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action captures the following ouput: <code>newSize</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"1d79ae1f-ae84-4416-a19a-8bcd5c33d2e0\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"checkEnabled\": false,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"f\\\"kubectl get pvc {PVCName} -n {Namespace}  -o jsonpath={{.status.capacity.storage}}\\\"\"\n      }\n     }\n    ],\n    
\"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"isUnskript\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Run kubectl commands on local k8s cluster\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"newSize\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Run kubectl commands on local k8s cluster\",\n    \"verbs\": [],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl 
get ns, etc.\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    # Treat a missing result or any stderr content as a failure.\\n\",\n    \"    if result is None or (hasattr(result, \\\"stderr\\\") and result.stderr):\\n\",\n    \"        stderr = result.stderr if result is not None else \\\"no result\\\"\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {stderr}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"f\\\\\\\\\\\"kubectl get pvc {PVCName} -n {Namespace}  -o jsonpath={{.status.capacity.storage}}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"newSize\\\")\\n\",\n    \"\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"67ed3763-1fc2-4fd4-9cdd-b140821e4fe0\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step 6\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step 6\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"List-all-AWS-Regions\\\">Post Slack Message</h3>\\n\",\n    \"<p>This action posts a Slack message notifying the new size of the PVC.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p>This action takes the following parameters: <code>Channel</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    
\"<p>This action captures the following ouput: <code>None</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"fee9946d-d975-42c7-8651-ff7b55250fb9\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2022-07-07T04:05:14.429Z\"\n    },\n    \"id\": 46,\n    \"index\": 46,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"Channel\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"f\\\"PVC {PVCName} successfully resized to {newSize}\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of the slack channel where the message to be posted\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message to be sent\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [\n     \"slack\",\n     \"message\"\n    ],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": 
{\n     \"type\": \"\"\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"if len(Channel)!=0\",\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"verbs\": [\n     \"post\"\n    ],\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message_printer(data):\\n\",\n    \"    if data != None:\\n\",\n    \"        pprint.pprint(data)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> str:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return f\\\"Successfuly Sent Message on Channel: #{channel}\\\"\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        if e.response['error'] == 'channel_not_found':\\n\",\n    \"            raise Exception('Channel Not Found')\\n\",\n    \"        elif e.response['error'] == 'duplicate_channel_not_found':\\n\",\n    \"            raise Exception('Channel associated with the message_id not valid')\\n\",\n    \"        elif e.response['error'] == 'not_in_channel':\\n\",\n    \"            raise Exception('Cannot post message to channel user is not in')\\n\",\n    \"       
 elif e.reponse['error'] == 'is_archived':\\n\",\n    \"            raise Exception('Channel has been archived')\\n\",\n    \"        elif e.response['error'] == 'msg_too_long':\\n\",\n    \"            raise Exception('Message text is too long')\\n\",\n    \"        elif e.response['error'] == 'no_text':\\n\",\n    \"            raise Exception('Message text was not provided')\\n\",\n    \"        elif e.response['error'] == 'restricted_action':\\n\",\n    \"            raise Exception('Workspace preference prevents user from posting')\\n\",\n    \"        elif e.response['error'] == 'restricted_action_read_only_channel':\\n\",\n    \"            raise Exception('Cannot Post message, read-only channel')\\n\",\n    \"        elif e.respones['error'] == 'team_access_not_granted':\\n\",\n    \"            raise Exception('The token used is not granted access to the workspace')\\n\",\n    \"        elif e.response['error'] == 'not_authed':\\n\",\n    \"            raise Exception('No Authtnecition token provided')\\n\",\n    \"        elif e.response['error'] == 'invalid_auth':\\n\",\n    \"            raise Exception('Some aspect of Authentication cannot be validated. 
Request denied')\\n\",\n    \"        elif e.response['error'] == 'access_denied':\\n\",\n    \"            raise Exception('Access to a resource specified in the request denied')\\n\",\n    \"        elif e.response['error'] == 'account_inactive':\\n\",\n    \"            raise Exception('Authentication token is for a deleted user')\\n\",\n    \"        elif e.response['error'] == 'token_revoked':\\n\",\n    \"            raise Exception('Authentication token for a deleted user has been revoked')\\n\",\n    \"        elif e.response['error'] == 'no_permission':\\n\",\n    \"            raise Exception('The workspace toekn used does not have necessary permission to send message')\\n\",\n    \"        elif e.response['error'] == 'ratelimited':\\n\",\n    \"            raise Exception('The request has been ratelimited. Retry sending message later')\\n\",\n    \"        elif e.response['error'] == 'service_unavailable':\\n\",\n    \"            raise Exception('The service is temporarily unavailable')\\n\",\n    \"        elif e.response['error'] == 'fatal_error':\\n\",\n    \"            raise Exception('The server encountered catostrophic error while sending message')\\n\",\n    \"        elif e.response['error'] == 'internal_error':\\n\",\n    \"            raise Exception('The server could not complete operation, likely due to transietn issue')\\n\",\n    \"        elif e.response['error'] == 'request_timeout':\\n\",\n    \"            raise Exception('Sending message error via POST: either message was missing or truncated')\\n\",\n    \"        else:\\n\",\n    \"            raise Exception(f'Failed Sending Message to slack channel {channel} Error: {e.response[\\\"error\\\"]}')\\n\",\n    \"\\n\",\n    \"        return f\\\"Unable to send message on {channel}\\\"\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, 
Error: {e.__str__()}\\\")\\n\",\n    \"        return f\\\"Unable to send message on {channel}\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"Channel\\\",\\n\",\n    \"    \\\"message\\\": \\\"f\\\\\\\\\\\"PVC {PVCName} successfully resized to {newSize}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"if len(Channel)!=0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(slack_post_message, lego_printer=slack_post_message_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"3f624369-89fd-4354-b13e-3de80d4465d4\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion&para;&para;\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we were able to successfully resize a PVC using unSkript's K8s actions. 
To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"k8s: Resize PVC\",\n   \"parameters\": [\n    \"RestartPodsAfterResize\",\n    \"Value\",\n    \"Channel\",\n    \"Namespace\",\n    \"PVCName\",\n    \"ResizeOption\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.10.6 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"Channel\": {\n     \"description\": \"Slack channel\",\n     \"title\": \"Channel\",\n     \"type\": \"string\"\n    },\n    \"Namespace\": {\n     \"description\": \"Namespace of the PVC\",\n     \"title\": \"Namespace\",\n     \"type\": \"string\"\n    },\n    \"PVCName\": {\n     \"description\": \"Name of the PVC\",\n     \"title\": \"PVCName\",\n     \"type\": \"string\"\n    },\n    \"ResizeOption\": {\n     \"default\": \"Add\",\n     \"description\": \"Option to resize the volume. 2 options supported:             1. Add - Use this option to resize by an amount.             2. Multiple - Use this option if you want to resize by a multiple of the current volume size.\",\n     \"enum\": [\n      \"Add\"\n     ],\n     \"enumNames\": [\n      \"Add\"\n     ],\n     \"title\": \"ResizeOption\",\n     \"type\": \"string\"\n    },\n    \"RestartPodsAfterResize\": {\n     \"default\": false,\n     \"description\": \"Restart the pods after PVC resize\",\n     \"title\": \"RestartPodsAfterResize\",\n     \"type\": \"boolean\"\n    },\n    \"Value\": {\n     \"description\": \"Based on the resize option chosen, specify the value (float). 
For example, if you chose the Add option, this value will be a value in GB (like 100.0). If you chose Multiple, this value will be a multiplying factor applied to the current volume size. For example, to double the size, specify the value as 2.0\",\n     \"title\": \"Value\",\n     \"type\": \"number\"\n    }\n   },\n   \"required\": [\n    \"Namespace\",\n    \"PVCName\",\n    \"Value\"\n   ],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": {},\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "Kubernetes/Resize_PVC.json",
    "content": "{\n    \"name\": \"k8s: Resize PVC\",\n    \"description\": \"This runbook resizes a Kubernetes PVC.\",  \n    \"uuid\": \"b8385df9545bdb5695af879d7d089571fed148d996cf4b7e9e7848502e2cc029\",\n    \"icon\": \"CONNECTOR_TYPE_K8S\",\n    \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n    \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n    \"version\": \"1.0.0\"\n  }"
  },
  {
    "path": "Kubernetes/Rollback_k8s_Deployment_and_Update_Jira.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b176b2f8-b2a7-48e5-a573-1d2058900ba1\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center>\\n\",\n    \"<h1 id=\\\"\\\"><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\"></h1>\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks</h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong></h3>\\n\",\n    \"<strong>Rollback k8s Deployment and Update Jira using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Rollback-k8s-Deployment-and-Update-Jira\\\">Rollback k8s Deployment and Update Jira</h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\"><sub>Steps Overview</sub></h1>\\n\",\n    \"<ol>\\n\",\n    \"<li>&nbsp;Get the Deployment Rollout status</li>\\n\",\n    \"<li>&nbsp;Kubectl rollout deployment</li>\\n\",\n    \"<li>&nbsp;Change JIRA Issue Status</li>\\n\",\n    \"</ol>\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"5dc2c32b-7fed-492b-ba3b-a883c677fa4f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Gathering Information\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Gathering Information\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-All-The-Namespaces\\\">Get All The Namespaces</h3>\\n\",\n    \"<p>In this action, we collect all the namespaces available in the cluster as a list. 
This action only executes if the namespace parameter is not given.</p>\\n\",\n    \"<ul>\\n\",\n    \"<li><strong>Input parameters:</strong>&nbsp; <code>kubectl_command</code></li>\\n\",\n    \"<li><strong>Output variable:</strong>&nbsp; <code>namespace_list</code></li>\\n\",\n    \"</ul>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"id\": \"dbce5781-d0a4-4c63-abf4-b0fccdf250c8\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"4d3b4c6153e14622f42b332b7c7b8f7043577971f64edc5be6b5f8b40d5b89d1\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Execute the given Kubectl command.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-13T11:43:10.502Z\"\n    },\n    \"id\": 55,\n    \"index\": 55,\n    \"inputData\": [\n     {\n      \"kubectl_command\": {\n       \"constant\": false,\n       \"value\": \"\\\"kubectl get namespaces -o json\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"kubectl_command\": {\n        \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"kubectl_command\"\n      ],\n      \"title\": \"k8s_kubectl_command\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n   
 },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get All The Namespaces\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"kubectl_command\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"namespace_list\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"probeEnabled\": false,\n    \"startcondition\": \"not namespace\",\n    \"tags\": [\n     \"k8s_kubectl_command\"\n    ],\n    \"title\": \"Get All The Namespaces\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    print(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n    \"    \\\"\\\"\\\"k8s_kubectl_command executes the given kubectl command on the pod\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type kubectl_command: str\\n\",\n    \"        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\\n\",\n    \"\\n\",\n    \"        :rtype: String, Output of the command in python string format or Empty String in case of Error.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n    \"    if result is None or hasattr(result, 
\\\"stderr\\\") is False or result.stderr is None:\\n\",\n    \"        print(\\n\",\n    \"            f\\\"Error while executing command ({kubectl_command}): {result.stderr}\\\")\\n\",\n    \"        return str()\\n\",\n    \"\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"kubectl_command\\\": \\\"\\\\\\\\\\\"kubectl get namespaces -o json\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"not namespace\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"namespace_list\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"7281ea9e-7b6f-4ba2-a095-8a3d03c6783f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Modify Information\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Information\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Output\\\">Modify Output</h3>\\n\",\n    \"<p>In this action, we modify the output which collects from the Gathering Information cell and returns a list of all the namespaces or given namespaces.</p>\\n\",\n    \"<ul>\\n\",\n    \"<li><strong>Output variable:</strong>&nbsp; <code>namespace_data</code></li>\\n\",\n    \"</ul>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"1e52faf1-6694-43f8-a1c7-e0ec5cf6bf07\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n  
   \"last_date_success_run_cell\": \"2023-02-13T11:43:27.943Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Output\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import json\\n\",\n    \"\\n\",\n    \"namespace_data = []\\n\",\n    \"try:\\n\",\n    \"    if namespace_list:\\n\",\n    \"        namespace_details = json.loads(namespace_list)\\n\",\n    \"        for i in namespace_details[\\\"items\\\"]:\\n\",\n    \"            namespace_data.append(i[\\\"metadata\\\"][\\\"name\\\"])\\n\",\n    \"except Exception as e:\\n\",\n    \"    namespace_data.append(namespace)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"6e64cc52-3618-4731-b0a5-1605d1249216\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Get-Deployment-Rollout-Status\\\">Get Deployment Rollout Status</h3>\\n\",\n    \"<p>Here we will use the unSkript&nbsp;<strong>Get Deployment Rollout Status</strong> action. 
This action identifies the deployment status for the namespace and returns a list of dictionaries containing the deployments that failed.</p>\\n\",\n    \"<ul>\\n\",\n    \"<li><strong>Input parameters:</strong>&nbsp; <code>namespace, deployment</code></li>\\n\",\n    \"<li><strong>Output variable:</strong>&nbsp; <code>deployment_data</code></li>\\n\",\n    \"</ul>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"id\": \"7ed8cf54-eed4-4eb0-8542-b42a13acb787\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"24e6afe0152d5464b36bc6a2741a157e01efb2eb7c6e1a571c5312468a63cdd3\",\n    \"collapsed\": true,\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"This action searches for failed deployment rollout statuses and returns a list.\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-13T12:15:09.061Z\"\n    },\n    \"id\": 53,\n    \"index\": 53,\n    \"inputData\": [\n     {\n      \"deployment\": {\n       \"constant\": false,\n       \"value\": \"\"\n      },\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \"iter_item\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"deployment\": {\n        \"default\": \"\",\n        \"description\": \"k8s Deployment\",\n        \"title\": \"Deployment\",\n        \"type\": \"string\"\n       },\n       \"namespace\": {\n   
     \"default\": \"\",\n        \"description\": \"k8s Namespace\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       }\n      },\n      \"title\": \"k8s_get_deployment_rollout_status\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": \"namespace\",\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": false,\n       \"value\": \"namespace_data\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Get Deployment Rollout Status\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"namespace\",\n     \"deployment\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"deployment_data\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"probeEnabled\": false,\n    \"startcondition\": \"len(namespace_data)>0\",\n    \"tags\": [\n     \"k8s_get_deployment_rollout_status\"\n    ],\n    \"title\": \"Get Deployment Rollout Status\",\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#\\n\",\n    \"# Copyright (c) 2022 unSkript.com\\n\",\n    \"# All rights reserved.\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional, Tuple\\n\",\n    \"import pprint\\n\",\n    \"import json\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_get_deployment_rollout_status_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_get_deployment_rollout_status(handle, deployment: str = \\\"\\\", namespace: str = \\\"\\\") -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"k8s_get_deployment_rollout_status 
executes the command and returns the list of failed deployments\\n\",\n    \"\\n\",\n    \"        :type handle: object\\n\",\n    \"        :param handle: Object returned from the Task validate method\\n\",\n    \"\\n\",\n    \"        :type deployment: str\\n\",\n    \"        :param deployment: Deployment Name.\\n\",\n    \"\\n\",\n    \"        :type namespace: str\\n\",\n    \"        :param namespace: Kubernetes Namespace.\\n\",\n    \"\\n\",\n    \"        :rtype: Tuple with status result and list of failed deployments.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    result = []\\n\",\n    \"    if handle.client_side_validation != True:\\n\",\n    \"        print(f\\\"K8S Connector is invalid: {handle}\\\")\\n\",\n    \"        return (False, [])\\n\",\n    \"\\n\",\n    \"    status_details = \\\"\\\"\\n\",\n    \"    if namespace and deployment:\\n\",\n    \"        name_cmd = \\\"kubectl get deployment \\\" + deployment + \\\" -n \\\" + namespace + \\\" -o json\\\"\\n\",\n    \"        exec_cmd = handle.run_native_cmd(name_cmd)\\n\",\n    \"        status_op = exec_cmd.stdout\\n\",\n    \"        status_details = json.loads(status_op)\\n\",\n    \"\\n\",\n    \"    if not namespace and not deployment:\\n\",\n    \"        name_cmd = \\\"kubectl get deployments --all-namespaces -o json\\\"\\n\",\n    \"        exec_cmd = handle.run_native_cmd(name_cmd)\\n\",\n    \"        status_op = exec_cmd.stdout\\n\",\n    \"        status_details = json.loads(status_op)\\n\",\n    \"\\n\",\n    \"    if namespace and not deployment:\\n\",\n    \"        name_cmd = \\\"kubectl get deployment -n \\\" + namespace + \\\" -o json\\\"\\n\",\n    \"        exec_cmd = handle.run_native_cmd(name_cmd)\\n\",\n    \"        status_op = exec_cmd.stdout\\n\",\n    \"        status_details = json.loads(status_op)\\n\",\n    \"\\n\",\n    
\"    if deployment and not namespace:\\n\",\n    \"        name_cmd = \\\"kubectl get deployment \\\" + deployment + \\\" -o json\\\"\\n\",\n    \"        exec_cmd = handle.run_native_cmd(name_cmd)\\n\",\n    \"        status_op = exec_cmd.stdout\\n\",\n    \"        status_details = json.loads(status_op)\\n\",\n    \"\\n\",\n    \"    if status_details:\\n\",\n    \"        if \\\"items\\\" in status_details:\\n\",\n    \"            for items in status_details[\\\"items\\\"]:\\n\",\n    \"                namespace_name = items[\\\"metadata\\\"][\\\"namespace\\\"]\\n\",\n    \"                deployment_name = items[\\\"metadata\\\"][\\\"name\\\"]\\n\",\n    \"                replica_details = items[\\\"status\\\"][\\\"conditions\\\"]\\n\",\n    \"                for i in replica_details:\\n\",\n    \"                    deployment_dict = {}\\n\",\n    \"                    if \\\"FailedCreate\\\" in i[\\\"reason\\\"] and \\\"ReplicaFailure\\\" in i[\\\"type\\\"] and \\\"True\\\" in i[\\\"status\\\"]:\\n\",\n    \"                        deployment_dict[\\\"namespace\\\"] = namespace_name\\n\",\n    \"                        deployment_dict[\\\"deployment_name\\\"] = deployment_name\\n\",\n    \"                        result.append(deployment_dict)\\n\",\n    \"                    if \\\"ProgressDeadlineExceeded\\\" in i[\\\"reason\\\"] and \\\"Progressing\\\" in i[\\\"type\\\"] and \\\"False\\\" in i[\\\"status\\\"]:\\n\",\n    \"                        deployment_dict[\\\"namespace\\\"] = namespace_name\\n\",\n    \"                        deployment_dict[\\\"deployment_name\\\"] = deployment_name\\n\",\n    \"                        result.append(deployment_dict)\\n\",\n    \"        else:\\n\",\n    \"            namespace_name = status_details[\\\"metadata\\\"][\\\"namespace\\\"]\\n\",\n    \"            deployment_name = status_details[\\\"metadata\\\"][\\\"name\\\"]\\n\",\n    \"            replica_details = 
status_details[\\\"status\\\"][\\\"conditions\\\"]\\n\",\n    \"            for i in replica_details:\\n\",\n    \"                deployment_dict = {}\\n\",\n    \"                if \\\"FailedCreate\\\" in i[\\\"reason\\\"] and \\\"ReplicaFailure\\\" in i[\\\"type\\\"] and \\\"True\\\" in i[\\\"status\\\"]:\\n\",\n    \"                    deployment_dict[\\\"namespace\\\"] = namespace_name\\n\",\n    \"                    deployment_dict[\\\"deployment_name\\\"] = deployment_name\\n\",\n    \"                    result.append(deployment_dict)\\n\",\n    \"                if \\\"ProgressDeadlineExceeded\\\" in i[\\\"reason\\\"] and \\\"Progressing\\\" in i[\\\"type\\\"] and \\\"False\\\" in i[\\\"status\\\"]:\\n\",\n    \"                    deployment_dict[\\\"namespace\\\"] = namespace_name\\n\",\n    \"                    deployment_dict[\\\"deployment_name\\\"] = deployment_name\\n\",\n    \"                    result.append(deployment_dict)\\n\",\n    \"\\n\",\n    \"    if len(result) != 0:\\n\",\n    \"        return (False, result)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"namespace\\\": \\\"iter_item\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"namespace_data\\\",\\n\",\n    \"    \\\"iter_parameter\\\": \\\"namespace\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(namespace_data)>0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"deployment_data\\\")\\n\",\n    
\"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_get_deployment_rollout_status, lego_printer=k8s_get_deployment_rollout_status_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"6e16ec0b-c128-4584-9d4a-9f3b0d89c176\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Output\\\">Modify Output</h3>\\n\",\n    \"<p>In this action, we modify the output which collects from step 1 and return a list of dictionaries for the failed deployments.</p>\\n\",\n    \"<ul>\\n\",\n    \"<li><strong>Output variable:</strong>&nbsp; <code>rollout_deployment</code></li>\\n\",\n    \"</ul>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"id\": \"19ed40e0-40ec-4b9c-9fd5-62021027aa78\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-13T12:17:38.989Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Output\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"rollout_deployment = []\\n\",\n    \"if deployment_data:\\n\",\n    \"    for k, v in deployment_data.items():\\n\",\n    \"        if v[0] == False:\\n\",\n    \"            for deployments in v[1]:\\n\",\n    \"                rollout_deployment.append(deployments)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"538cfd33-2a50-40d5-b70c-dac2d3aa66de\",\n   \"metadata\": {\n    \"jupyter\": {\n     
\"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Kubectl-rollout-deployment\\\">Kubectl rollout deployment</h3>\\n\",\n    \"<p>Here we will use the unSkript <strong>Kubectl rollout deployment</strong> action. This action is used to roll back the deployment to a stable version.</p>\\n\",\n    \"<ul>\\n\",\n    \"<li><strong>Input parameters:</strong>&nbsp; <code>k8s_cli_string, deployment,&nbsp;namespace</code></li>\\n\",\n    \"<li><strong>Output variable:</strong>&nbsp; <code>rollback_status</code></li>\\n\",\n    \"</ul>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"f1d67467-4d0d-415f-9f27-14ff9910e073\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"25f595b493ccd2b7502f48145c1ae8c50b52ba726f0639ae6164dc3d668789f5\",\n    \"condition_enabled\": true,\n    \"continueOnError\": true,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Kubectl rollout deployment history\",\n    \"id\": 39,\n    \"index\": 39,\n    \"inputData\": [\n     {\n      \"deployment\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"deployment_name\\\\\\\\\\\")\\\"\"\n      },\n      \"k8s_cli_string\": {\n       \"constant\": false,\n       \"value\": \"kubectl rollout undo deploy {deployment} -n {namespace}\"\n      },\n      \"namespace\": {\n       \"constant\": false,\n       \"value\": \"\\\"iter.get(\\\\\\\\\\\"namespace\\\\\\\\\\\")\\\"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"deployment\": {\n        
\"description\": \"Deployment Name\",\n        \"title\": \"Deployment Name\",\n        \"type\": \"string\"\n       },\n       \"k8s_cli_string\": {\n        \"default\": \"\\\"kubectl rollout history deployment {deployment} -n {namespace}\\\"\",\n        \"description\": \"kubectl rollout deployment history\",\n        \"title\": \"Kubectl Command\",\n        \"type\": \"string\"\n       },\n       \"namespace\": {\n        \"description\": \"Namespace\",\n        \"title\": \"Namespace\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"deployment\",\n       \"namespace\"\n      ],\n      \"title\": \"k8s_kubectl_rollout_deployment\",\n      \"type\": \"object\"\n     }\n    ],\n    \"iterData\": [\n     {\n      \"iter_enabled\": true,\n      \"iter_item\": {\n       \"deployment\": \"deployment_name\",\n       \"namespace\": \"namespace\"\n      },\n      \"iter_list\": {\n       \"constant\": false,\n       \"objectItems\": true,\n       \"value\": \"deployment_data\"\n      }\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_K8S\",\n    \"name\": \"Kubectl rollout deployment\",\n    \"nouns\": [\n     \"deployment\"\n    ],\n    \"orderProperties\": [\n     \"k8s_cli_string\",\n     \"deployment\",\n     \"namespace\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"rollback_status\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"len(rollout_deployment)>0\",\n    \"tags\": [\n     \"k8s_kubectl_rollout_deployment\"\n    ],\n    \"title\": \"Kubectl rollout deployment\",\n    \"verbs\": [\n     \"rollout\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_rollout_deployment_printer(data: 
str):\\n\",\n    \"    if data is None:\\n\",\n    \"        print(\\\"Error while executing command\\\")\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    print (data)\\n\",\n    \"\\n\",\n    \"@beartype\\n\",\n    \"def k8s_kubectl_rollout_deployment(handle, k8s_cli_string: str, deployment: str, namespace: str) -> str:\\n\",\n    \"    k8s_cli_string = k8s_cli_string.format(deployment, namespace)\\n\",\n    \"    result = handle.run_native_cmd(k8s_cli_string)\\n\",\n    \"    if result is None or hasattr(result, \\\"stderr\\\") is False or result.stderr is None:\\n\",\n    \"        return None\\n\",\n    \"    return result.stdout\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(continueOnError=True)\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"deployment\\\": \\\"iter.get(\\\\\\\\\\\"deployment_name\\\\\\\\\\\")\\\",\\n\",\n    \"    \\\"k8s_cli_string\\\": \\\"kubectl rollout undo deploy {deployment} -n {namespace}\\\",\\n\",\n    \"    \\\"namespace\\\": \\\"iter.get(\\\\\\\\\\\"namespace\\\\\\\\\\\")\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(iterJson='''{\\n\",\n    \"    \\\"iter_enabled\\\": true,\\n\",\n    \"    \\\"iter_list_is_const\\\": false,\\n\",\n    \"    \\\"iter_list\\\": \\\"deployment_data\\\",\\n\",\n    \"    \\\"iter_parameter\\\": [\\\"deployment\\\",\\\"namespace\\\"]\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"len(rollout_deployment)>0\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"rollback_status\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(k8s_kubectl_rollout_deployment, 
lego_printer=k8s_kubectl_rollout_deployment_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"381064a9-f652-4f8d-a3a2-331d8c642f26\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-3\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-3\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Change-JIRA-Issue-Status\\\">Change JIRA Issue Status</h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Change JIRA Issue Status</strong> action. This action is used to update the status of the Jira issue. It will only execute if the issue id is given.</p>\\n\",\n    \"<ul>\\n\",\n    \"<li><strong>Input parameters:</strong>&nbsp; <code>issue_id, status, transition</code></li>\\n\",\n    \"<li><strong>Output variable:</strong>&nbsp; <code>update_status</code></li>\\n\",\n    \"</ul>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 26,\n   \"id\": \"e3fd68fc-2830-46f5-bff5-836534b79ca7\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"751877b0836cf1f8884a2fc0186e2e73024b59494dc71d372d244d3c93468c7a\",\n    \"condition_enabled\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Change JIRA Issue Status to given status\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-08T13:53:15.915Z\"\n    },\n    \"id\": 68,\n    \"index\": 68,\n    \"inputData\": [\n     {\n      \"issue_id\": {\n       \"constant\": false,\n       \"value\": \"issue_id\"\n      },\n      \"status\": {\n       \"constant\": false,\n       \"value\": 
\"\\\"Done\\\"\"\n      },\n      \"transition\": {\n       \"constant\": false,\n       \"value\": \"\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"issue_id\": {\n        \"description\": \"Issue ID\",\n        \"title\": \"Issue ID\",\n        \"type\": \"string\"\n       },\n       \"status\": {\n        \"description\": \"New Status for the JIRA issue\",\n        \"title\": \"New Status\",\n        \"type\": \"string\"\n       },\n       \"transition\": {\n        \"description\": \"Transition to use for status change for the JIRA issue\",\n        \"title\": \"Transition ID\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"issue_id\",\n       \"status\"\n      ],\n      \"title\": \"jira_issue_change_status\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_JIRA\",\n    \"name\": \"Change JIRA Issue Status\",\n    \"nouns\": [\n     \"jira\",\n     \"issue\",\n     \"status\"\n    ],\n    \"orderProperties\": [\n     \"issue_id\",\n     \"status\",\n     \"transition\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"update_status\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"startcondition\": \"issue_id\",\n    \"tags\": [\n     \"jira_issue_change_status\"\n    ],\n    \"title\": \"Change JIRA Issue Status\",\n    \"verbs\": [\n     \"change\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"from jira.client import JIRA\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from typing import Optional\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=4)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import 
beartype\\n\",\n    \"def legoPrinter(func):\\n\",\n    \"    def Printer(*args, **kwargs):\\n\",\n    \"        output = func(*args, **kwargs)\\n\",\n    \"        print('\\\\n')\\n\",\n    \"        return output\\n\",\n    \"    return Printer\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@legoPrinter\\n\",\n    \"@beartype\\n\",\n    \"def jira_issue_change_status(hdl: JIRA, issue_id: str, status: str, transition: str = \\\"\\\"):\\n\",\n    \"    \\\"\\\"\\\"jira_get_issue_status gets the status of a given Jira issue.\\n\",\n    \"        :type issue_id: str\\n\",\n    \"        :param issue_id: ID of the issue whose status we want to fetch (eg ENG-14)\\n\",\n    \"        :rtype: String with issue status fetched from JIRA API\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    # Input param validation.\\n\",\n    \"    issue = hdl.issue(issue_id)\\n\",\n    \"\\n\",\n    \"    if transition:\\n\",\n    \"        transitions = hdl.transitions(issue)\\n\",\n    \"        t = [t for t in transitions if t.get('name') == status]\\n\",\n    \"\\n\",\n    \"        if len(t) > 1:\\n\",\n    \"            print(\\\"Multiple transitions possible for JIRA issue. 
Please select transition number to use\\\", [\\n\",\n    \"                t.get('id') for t in transitions if t.get('name') == status])\\n\",\n    \"            return\\n\",\n    \"        else:\\n\",\n    \"            transition = t[0].get('id')\\n\",\n    \"\\n\",\n    \"    hdl.transition_issue(issue, transition)\\n\",\n    \"    return\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def unskript_default_printer(output):\\n\",\n    \"    if isinstance(output, (list, tuple)):\\n\",\n    \"        for item in output:\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    elif isinstance(output, dict):\\n\",\n    \"        for item in output.items():\\n\",\n    \"            print(f'item: {item}')\\n\",\n    \"    else:\\n\",\n    \"        print(f'Output for {task.name}')\\n\",\n    \"        print(output)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"issue_id\\\": \\\"issue_id\\\",\\n\",\n    \"    \\\"status\\\": \\\"\\\\\\\\\\\"Done\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(conditionsJson='''{\\n\",\n    \"    \\\"condition_enabled\\\": true,\\n\",\n    \"    \\\"condition_cfg\\\": \\\"issue_id\\\",\\n\",\n    \"    \\\"condition_result\\\": true\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"update_status\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(jira_issue_change_status, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n   ]\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"4e38c5da-4714-48af-bdb4-5e1e59f25651\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Conclusion\"\n   },\n   \"source\": [\n    \"### Conclusion\\n\",\n   
 \"<p>In this Runbook, we demonstrated the use of unSkript's AWS and Jira actions to roll back the Kubernetes deployment to the previous stable version and update the issue status in jira. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\" target=\\\"_blank\\\" rel=\\\"noopener\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Rollback Kubernetes Deployment\",\n   \"parameters\": [\n    \"issue_id\",\n    \"namespace\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"issue_id\": {\n     \"description\": \"Jira Issue ID. e.g. EN-123\",\n     \"title\": \"issue_id\",\n     \"type\": \"string\"\n    },\n    \"namespace\": {\n     \"description\": \"Namespace\",\n     \"title\": \"namespace\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null,\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"e8899eb02dfbc033aab5733bdae1bd213fa031d40331094008e8673d99ebab63\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "Kubernetes/Rollback_k8s_Deployment_and_Update_Jira.json",
    "content": "{\n  \"name\": \"Rollback Kubernetes Deployment\",\n  \"description\": \"This runbook can be used to roll back a Kubernetes Deployment\",\n  \"uuid\": \"65afc892db3d7ef487fe2353282bf94351e4674a34f56cd0349a2ad920897ddd\",\n  \"icon\": \"CONNECTOR_TYPE_K8S\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\", \"CATEGORY_TYPE_TROUBLESHOOTING\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_K8S\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "Kubernetes/__init__.py",
    "content": "#\n# unSkript (c) 2022\n"
  },
  {
    "path": "Kubernetes/legos/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>unSkript </h1>\n\n\n# K8S Lego Collection\n\nThis directory contains all of the `k8s` Legos. Each Lego has its own sub-directory with details about it. You can use the index below to navigate.\n\n1. [Kubectl Execute Command](./k8s_kubectl_command/README.md)"
  },
  {
    "path": "Kubernetes/legos/__init__.py",
    "content": "# 2022 (c) unSkript.com\n"
  },
  {
    "path": "Kubernetes/legos/k8s_add_node_to_cluster/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Add Node in a Kubernetes Cluster </h1>\r\n\r\n## Description\r\nThis Lego adds a Node to a Kubernetes Cluster.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_add_node_to_cluster(handle, \r\n                            node_name: str, \r\n                            cluster_name: str, \r\n                            provider_id: str, \r\n                            node_info: dict, \r\n                            capacity: dict)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        node_name: k8s node name\r\n        cluster_name: k8s cluster Name\r\n        provider_id: k8s node spec provider ID. Eg aws:///us-west-2a/{instance_type}\r\n        node_info: Node system info like architecture, boot_id, etc.\r\n        capacity: Node Parameters, like cpu, storage, memory.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes six inputs: handle, node_name, cluster_name, provider_id, node_info and capacity.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_add_node_to_cluster/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_add_node_to_cluster/k8s_add_node_to_cluster.json",
    "content": "{\r\n    \"action_title\": \"Add Node in a Kubernetes Cluster\",\r\n    \"action_description\": \"Add Node in a Kubernetes Cluster\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_add_node_to_cluster\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_CLUSTER\",\"CATEGORY_TYPE_K8S_NODE\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_add_node_to_cluster/k8s_add_node_to_cluster.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport pprint\nfrom typing import List, Tuple\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\npp = pprint.PrettyPrinter(indent=2)\n\n\nclass InputSchema(BaseModel):\n    node_name: str = Field(\n        title='Node',\n        description='k8s node name')\n    cluster_name: str = Field(\n        title='Cluster',\n        description='k8s cluster Name')\n    provider_id: str = Field(\n        title='Node Spec Provider',\n        description='k8s node spec provider ID. Eg aws:///us-west-2a/{instance_type}')\n    node_info: dict = Field(\n        title='Node Info',\n        description='Node system info like architecture, boot_id, etc. '\n                    'Allowed key names are: '\n                    'architecture, '\n                    'boot_id, '\n                    'container_runtime_version, '\n                    'kernel_version, '\n                    'kube_proxy_version, '\n                    'kubelet_version, '\n                    'machine_id, '\n                    'operating_system, '\n                    'os_image, '\n                    'system_uuid.'\n    )\n    capacity: dict = Field(\n        title='Node Capacity',\n        description='Node Parameters, like cpu, storage, memory. 
'\n                    'For eg: attachable-volumes-aws-ebs=25 in gb, '\n                    'cpu=1 core, memory=7935036Ki, '\n                    'ephemeral-storage:104845292Ki, hugepages-1Gi:0, '\n                    'hugepages-2Mi:0, pods:29'\n    )\n\ndef k8s_add_node_to_cluster_printer(output):\n    if output is None:\n        return None\n\n    (v1node, data) = output\n    if v1node is not None:\n        pp.pprint(f\"Creating Node {v1node}\")\n    else:\n        pp.pprint(\"Error Creating Node\")\n    if data is not None:\n        pp.pprint(f\"Node Created {data}\")\n    else:\n        pp.pprint(\"Node Creation Error\")\n    return data\n\n\ndef k8s_add_node_to_cluster(handle,\n                            node_name: str,\n                            cluster_name: str,\n                            provider_id: str,\n                            node_info: dict,\n                            capacity: dict) -> Tuple:\n\n\n    \"\"\"k8s_add_node_to_cluster add node to cluster\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type node_name: str\n        :param node_name: k8s node name\n\n        :type cluster_name: str\n        :param cluster_name: k8s cluster Name\n\n        :type provider_id: str\n        :param provider_id: k8s node spec provider ID. 
Eg aws:///us-west-2a/{instance_type}\n\n        :type node_info: str\n        :param node_info: Node system info like architecture, boot_id, etc.\n\n        :type capacity: dict\n        :param capacity: Node Parameters, like cpu, storage, memory.\n\n\n        :rtype: None\n    \"\"\"\n\n\n    coreApiClient = client.CoreV1Api(handle)\n\n    try:\n        v1Node = client.V1Node()\n        metadata = client.V1ObjectMeta()\n        metadata.name = node_name\n        metadata.cluster_name = cluster_name\n        v1Node.metadata = metadata\n\n        v1NodeSpec = client.V1NodeSpec()\n        v1NodeSpec.provider_id = provider_id\n        v1Node.spec = v1NodeSpec\n\n        v1NodeStatus = client.V1NodeStatus()\n        if capacity:\n            v1NodeStatus.capacity = capacity\n\n        if node_info:\n            v1NodeSystemInfo = client.V1NodeSystemInfo(\n                architecture=node_info.get(\"architecture\", None),\n                boot_id=node_info.get(\"boot_id\", None),\n                container_runtime_version=node_info.get(\"container_runtime_version\", None),\n                kernel_version=node_info.get(\"kernel_version\", None),\n                kube_proxy_version=node_info.get(\"kube_proxy_version\", None),\n                kubelet_version=node_info.get(\"kubelet_version\", None),\n                machine_id=node_info.get(\"machine_id\", None),\n                operating_system=node_info.get(\"operating_system\", None),\n                os_image=node_info.get(\"os_image\", None),\n                system_uuid=node_info.get(\"system_uuid\", None)\n                )\n            v1NodeStatus.node_info = v1NodeSystemInfo\n\n        v1Node.status = v1NodeStatus\n        resp = coreApiClient.create_node(body=v1Node, pretty=True)\n        return (v1Node, resp)\n\n    except ApiException as e:\n        error = f'An Exception occured while executing the command :{e}'\n        pp.pprint(error)\n        return (None, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_change_pvc_size/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>K8S Change PVC Size</h2>\n\n<br>\n\n## Description\nThis Lego uses the `unSkript` internal k8s API to resize a given K8S PVC. \n\n\n## Lego Details\n\n    k8s_change_pvc_size(handle: object, \n                        namespace: str,\n                        name: str,\n                        resize_option: SizingOption,\n                        resize_value: float)\n\n        handle: Object of type unSkript K8S Connector\n        namespace: String, K8S Namespace\n        name: String, Name of the PVC\n        resize_option: Enum, SizingOption (Add or Multiply)\n        resize_value: Float, Value that is used to resize.\n\n## Lego Input\nThis Lego takes four input values: `namespace` (string), `name` (string), `resize_option` (Enum of type SizingOption) and `resize_value` (float).\n\nLike all unSkript Legos, this Lego relies on the information provided by the unSkript K8S Connector. \n\n\n## Lego Output\nHere is a sample output. \n\n    Current size 100Gi, new Size 200Gi\n    PVC data-pvc size changed to 200Gi successfully\n\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_change_pvc_size/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_change_pvc_size/k8s_change_pvc_size.json",
    "content": "{\n  \"action_title\": \"Change size of Kubernetes PVC\",\n  \"action_description\": \"Change size of Kubernetes PVC\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_change_pvc_size\",\n  \"action_needs_credential\": true,\n  \"action_supports_poll\": true,\n  \"action_supports_iteration\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\"]\n\n}\n"
  },
  {
    "path": "Kubernetes/legos/k8s_change_pvc_size/k8s_change_pvc_size.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\nimport pprint\nfrom typing import Optional\nfrom pydantic import BaseModel, Field\nfrom unskript.enums.aws_k8s_enums import SizingOption\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title=\"Namespace\",\n        description=\"Namespace of the PVC.\"\n    )\n    name: str = Field(\n        title=\"PVC Name\",\n        description=\"Name of the PVC.\"\n    )\n    resize_option: Optional[SizingOption] = Field(\n        default=SizingOption.Add,\n        title=\"Resize option\",\n        description='''\n            Option to resize the volume. 2 options supported:\n            1. Add - Use this option to resize by an amount.\n            2. Multiple - Use this option if you want to resize by a multiple of the current volume size.\n        '''\n    )\n    resize_value: float = Field(\n        title=\"Value\",\n        description='''\n            Based on the resize option chosen, specify the value. For eg, if you chose Add option, this\n            value will be a value in Gi (like 100). If you chose Multiple option, this value will be a multiplying factor\n            to the current volume size. 
So, if you want to double, you specify 2 here.\n        '''\n    )\n\ndef k8s_change_pvc_size_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\n\ndef k8s_change_pvc_size(\n        handle,\n        namespace: str,\n        name: str,\n        resize_option: SizingOption,\n        resize_value: float) -> str:\n    \"\"\"k8s_change_pvc_size change pvc size\n\n        :type name: str\n        :param name: Name of the PVC.\n\n        :type resize_option: SizingOption\n        :param resize_option: Option to resize the volume.\n\n        :type resize_value: float\n        :param resize_value: Based on the resize option chosen, specify the value.\n\n        :type namespace: str\n        :param namespace: Namespace of the PVC.\n\n        :rtype: string\n    \"\"\"\n    # Get the current size.\n    kubectl_command = f'kubectl get pvc {name} -n {namespace}  -o jsonpath={{.status.capacity.storage}}'\n    result = handle.run_native_cmd(kubectl_command)\n    if result is None:\n        print(\n            f\"Error while executing command ({kubectl_command}) (empty response)\")\n        return \"\"\n\n    if result.stderr:\n        raise ApiException(\n            f\"Error occurred while executing command {kubectl_command} {result.stderr}\")\n\n    currentSize = result.stdout\n    currentSizeInt = int(currentSize.rstrip(\"Gi\"))\n    if resize_option == SizingOption.Add:\n        newSizeInt = currentSizeInt + resize_value\n    else:\n        newSizeInt = currentSizeInt * resize_value\n    newSize = str(newSizeInt) + \"Gi\"\n    print(f'Current size {currentSize}, new Size {newSize}')\n    kubectl_command = f'kubectl patch pvc {name} -n {namespace} -p \\'{{\"spec\":{{\"resources\":{{\"requests\": {{\"storage\": \"{newSize}\"}}}}}}}}\\''\n    result = handle.run_native_cmd(kubectl_command)\n    if result.stderr:\n        print(\n            f\"Error while executing command ({kubectl_command}): {result.stderr}\")\n        return str(f\"Error 
Changing PVC Size {kubectl_command}: {result.stderr}\")\n    print(f'PVC {name} size changed to {newSize} successfully')\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_check_cronjob_pod_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Checks the status of CronJob pods</h1>\n\n## Description\nThis action checks the status of CronJob pods.\n\n## Lego Details\n\tk8s_check_cronjob_pod_status(handle, namespace: str=\"\", time_interval_to_check: int=24)\n\t\thandle: Object of type unSkript K8S Connector.\n\t\tnamespace: Namespace where the CronJob is deployed.\n\t\ttime_interval_to_check: Time window in hours used to check if a CronJob pod was stuck in the Pending state. Default is 24 hours.\n\n\n## Lego Input\nThis Lego takes three inputs: handle, namespace (optional) and time_interval_to_check (optional).\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n<img src=\"./2.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_check_cronjob_pod_status/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_check_cronjob_pod_status/k8s_check_cronjob_pod_status.json",
    "content": "{\n  \"action_title\": \"Check the status of K8s CronJob pods\",\n  \"action_description\": \"This action checks the status of CronJob pods\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_check_cronjob_pod_status\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_check_cronjob_pod_status/k8s_check_cronjob_pod_status.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom datetime import datetime, timezone\nfrom kubernetes import client\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom croniter import croniter\nfrom datetime import datetime, timezone, timedelta\nimport json\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(..., description='k8s Namespace', title='Namespace')\n    time_interval_to_check: int = Field(\n        24,\n        description='Time interval in hours. This time window is used to check if pod in a cronjob was in Pending state. Default is 24 hours.',\n        title=\"Time Interval\"\n    )\n\n\ndef k8s_check_cronjob_pod_status_printer(output):\n    status, issues = output\n    if status:\n        print(\"CronJobs are running as expected.\")\n    else:\n        for issue in issues:\n            print(f\"CronJob '{issue['cronjob_name']}' in namespace '{issue['namespace']}' has issues.\")\n\ndef format_datetime(dt):\n    return dt.strftime(\"%Y-%m-%d %H:%M:%S %Z\")\n\ndef k8s_check_cronjob_pod_status(handle, namespace: str='', time_interval_to_check=24) -> Tuple:\n    \"\"\"\n    Checks the status of the CronJob pods.\n\n    :type handle: object\n    :param handle: The Kubernetes client handle.\n\n    :type name: str\n    :param namespace: Namespace where the CronJob is deployed.\n\n    :return: A tuple where the first item has the status if the check and second has a list of failed objects.\n    \"\"\"\n    # Initialize the K8s API clients\n    batch_v1 = client.BatchV1Api(api_client=handle)\n    core_v1 = client.CoreV1Api(api_client=handle)\n\n    issues = []\n    current_time = datetime.now(timezone.utc)\n    interval_time_to_check = current_time - timedelta(hours=time_interval_to_check)\n    interval_time_to_check = interval_time_to_check.replace(tzinfo=timezone.utc)\n\n    # Get namespaces to check\n    if namespace:\n        namespaces = [namespace]\n    else:\n        
ns_obj = core_v1.list_namespace()\n        namespaces = [ns.metadata.name for ns in ns_obj.items]\n\n    for ns in namespaces:\n        # Fetch all CronJobs in the namespace using kubectl\n        get_cronjob_command = f\"kubectl get cronjobs -n {ns} -o=jsonpath='{{.items[*].metadata.name}}'\"\n        response = handle.run_native_cmd(get_cronjob_command)\n\n        if not response or response.stderr:\n            raise Exception(f\"Error fetching CronJobs for namespace {ns}: {response.stderr if response else 'empty response'}\")\n\n        cronjob_names = response.stdout.split()\n        for cronjob_name in cronjob_names:\n            get_cronjob_details_command = f\"kubectl get cronjob {cronjob_name} -n {ns} -o=json\"\n            try:\n                response = handle.run_native_cmd(get_cronjob_details_command)\n                if response.stderr:\n                    raise Exception(f\"Error fetching details for CronJob {cronjob_name} in namespace {ns}: {response.stderr}\")\n            except Exception as e:\n                print(f\"Failed to fetch details for CronJob {cronjob_name} in namespace {ns}: {str(e)}\")\n                continue\n            cronjob = json.loads(response.stdout)\n\n            # Fetch the most recent Job associated with the CronJob\n            jobs = batch_v1.list_namespaced_job(ns)  # Fetch all jobs, and then filter by prefix.\n            associated_jobs = [job for job in jobs.items if job.metadata.name.startswith(cronjob['metadata']['name'])]\n            if not associated_jobs:\n                # If no associated jobs, that means the job is not scheduled.\n                continue\n\n            latest_job = sorted(associated_jobs, key=lambda x: x.status.start_time, reverse=True)[0]\n\n            # Check job's pods for any issues\n            pods = core_v1.list_namespaced_pod(ns, label_selector=f\"job-name={latest_job.metadata.name}\")\n            for pod in pods.items:\n                if pod.status.phase == 'Pending':\n   
                 start_time = pod.status.start_time\n                    if start_time and start_time >= interval_time_to_check:\n                        issues.append({\n                            \"cronjob_name\": cronjob_name, \n                            \"namespace\": ns, \n                            \"pod_name\": pod.metadata.name, \n                            \"start_time\": format_datetime(start_time)\n                        })\n                        break\n                elif pod.status.phase not in ['Running', 'Succeeded','Completed']:\n                    issues.append({\n                        \"cronjob_name\": cronjob_name, \n                        \"namespace\": ns, \n                        \"pod_name\": pod.metadata.name, \n                        \"state\": pod.status.phase\n                    })\n                    break\n\n    return (not issues, issues if issues else None)"
  },
  {
    "path": "Kubernetes/legos/k8s_check_service_pvc_utilization/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Check K8s service PVC utilization</h1>\n\n## Description\nThis check fetches the PVC associated with a given service, determines its utilized size, and then compares it to its total capacity. If the used percentage exceeds the provided threshold, it triggers an alert.\n\n## Lego Details\n\tk8s_check_service_pvc_utilization(handle, core_services: list, namespace: str, threshold: int = 80)\n\t\thandle: Object of type unSkript K8S Connector.\n    \tcore_services: List of services to check PVC utilization.\n    \tthreshold: Percentage threshold for utilized PVC disk size. E.g., an 80% threshold checks if the utilized space exceeds 80% of the total PVC capacity.\n\t\tnamespace: The namespace in which the service resides.\n\n\n## Lego Input\nThis Lego takes inputs handle, core_services, namespace, and threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_check_service_pvc_utilization/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_check_service_pvc_utilization/k8s_check_service_pvc_utilization.json",
    "content": "{\n  \"action_title\": \"Check K8s service PVC utilization \",\n  \"action_description\": \"This check fetches the PVC associated with a given service, determines its utilized size, and then compares it to its total capacity. If the used percentage exceeds the provided threshold, it triggers an alert.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_check_service_pvc_utilization\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_check_service_pvc_utilization/k8s_check_service_pvc_utilization.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport re\nimport json\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        ...,\n        description=\"The namespace in which the service resides.\",\n        title=\"Namespace\",\n    )\n    core_services: list = Field(\n        ...,\n        description=\"List of services for which the used PVC size needs to be checked.\",\n        title=\"K8s Service name\",\n    )\n    threshold: Optional[int] = Field(\n        80,\n        description=\"Percentage threshold for utilized PVC disk size. E.g., an 80% threshold checks if the utilized space exceeds 80% of the total PVC capacity.\",\n        title=\"Threshold (in %)\",\n    )\n\ndef k8s_check_service_pvc_utilization(\n    handle, core_services: list, namespace: str, threshold: int = 80\n) -> Tuple:\n    \"\"\"\n    k8s_check_service_pvc_utilization checks the utilized disk size of a service's PVC against a given threshold.\n\n    This function fetches the PVC associated with a given service, determines its utilized size,\n    and then compares it to its total capacity.
If the used percentage exceeds the provided threshold,\n    it triggers an alert.\n\n    :type handle: object\n    :param handle: Handle object to execute the kubectl command.\n\n    :type core_services: list\n    :param core_services: List of services for which the used PVC size needs to be checked.\n\n    :type threshold: int\n    :param threshold: Percentage threshold for utilized PVC disk size.\n                        E.g., an 80% threshold checks if the utilized space exceeds 80% of the total PVC capacity.\n\n    :type namespace: str\n    :param namespace: The namespace in which the service resides.\n\n    :return: Status and dictionary with PVC name and its size information if the PVC's disk size exceeds threshold.\n    \"\"\"\n\n    alert_pvcs_all_services = []\n    services_without_pvcs = []\n\n    # Keep track of processed PVCs to avoid duplicates\n    processed_pvcs = set()\n\n    for svc in core_services:\n        # Get label associated with the service\n        get_service_labels_command = f\"kubectl get services {svc} -n {namespace} -o=jsonpath='{{.spec.selector}}'\"\n        response = handle.run_native_cmd(get_service_labels_command)\n        if not response.stdout.strip():\n            # No labels found for a particular service. Skipping...\n            continue\n        labels_dict = json.loads(response.stdout.replace(\"'\", '\"'))\n        label_selector = \",\".join([f\"{k}={v}\" for k, v in labels_dict.items()])\n\n        # Fetch the pod attached to this service.\n        # The safer option is to try with the * option.
Having a specific index like 0 or 1\n        # will lead to ApiException.\n        get_pod_command = f\"kubectl get pods -n {namespace} -l {label_selector} -o=jsonpath='{{.items[*].metadata.name}}'\"\n        response = handle.run_native_cmd(get_pod_command)\n        if not response or response.stderr:\n            raise ApiException(\n                f\"Error while executing command ({get_pod_command}): {response.stderr if response else 'empty response'}\"\n            )\n\n        # pod_names stores the output from the above kubectl command, which is a list of pod_names separated by space\n        pod_names = response.stdout.strip()\n        if not pod_names:\n            # No pods found for service {svc} in namespace {namespace} with labels {label_selector}\n            continue\n\n        # Fetch PVCs attached to the pod\n        # The Above kubectl command would return a string that is space separated name(s) of the pod.\n        # Given such a string, lets find out if we have one or more than one pod name in the string.\n        # If there are more than one pod name in the output, we need to iterate over all items[] array.\n        # Else we can directly access the persistentVolumeClaim name\n        # Lets also associate the pod_name along with the claim name (PVC Name) in the format of\n        # pod_name:pv_claim_name\n\n        if len(pod_names.split()) > 1:\n            json_path_cmd = '{range .items[*]}{.metadata.name}:{range .spec.volumes[*].persistentVolumeClaim}{.claimName} {end}{\"\\\\n\"}{end}'\n        else:\n            json_path_cmd = \"{.metadata.name}:{range .spec.volumes[*].persistentVolumeClaim}{.claimName}{end}\"\n\n        get_pvc_names_command = f\"kubectl get pod {pod_names} -n {namespace} -o=jsonpath='{json_path_cmd}'\"\n\n        response = handle.run_native_cmd(get_pvc_names_command)\n        if not response or response.stderr:\n            raise ApiException(\n                f\"Error while executing command 
({get_pvc_names_command}): {response.stderr if response else 'empty response'}\"\n            )\n        # Example: ['lightbeam-elasticsearch-master-0:data-lightbeam-elasticsearch-master-0']\n        pod_and_pvc_names = response.stdout.strip().split()\n\n        # If no pod:PVC pairs were found, record the service as having no PVCs attached.\n        if not pod_and_pvc_names:\n            services_without_pvcs.append(svc)\n            continue\n\n        pvc_mounts = []\n        alert_pvcs = []\n\n        for element in pod_and_pvc_names:\n            pod_name, claim_name = element.split(\":\")\n            if not claim_name:\n                # Skip if Volume Claim name is empty.\n                continue\n\n            # Fetch the Pod JSON\n            # We need to get the container name (if any) from the Pod's JSON. This is needed\n            # if we want to exec into the POD that is within a container. The JSON data that\n            # we obtain is used to fill the pvc_mounts list, which is a list of dictionaries.\n            # We use this pvc_mounts to find out the used_space percentage. We compare that with\n            # the threshold to flag if the utilization is above threshold.\n            # df -kh is the command used to get the disk utilization.
This is accurate as we get\n            # the disk utilization from the POD directly, rather than checking the resource limit\n            # and resource request from the deployment / stateful YAML file.\n            get_pod_json_command = (\n                f\"kubectl get pod {pod_name} -n {namespace} -o json\"\n            )\n            pod_json_output = handle.run_native_cmd(get_pod_json_command)\n            if not pod_json_output or pod_json_output.stderr:\n                raise ApiException(\n                    f\"Error fetching pod json for {pod_name}: {pod_json_output.stderr if pod_json_output else 'empty response'}\"\n                )\n            pod_data = json.loads(pod_json_output.stdout)\n\n            # Dictionary .get() method with default value is way of error handling\n            for container in pod_data.get(\"spec\", {}).get(\"containers\", {}):\n                for mount in container.get(\"volumeMounts\", {}):\n                    for volume in pod_data.get(\"spec\", {}).get(\"volumes\", {}):\n                        if \"persistentVolumeClaim\" in volume and volume.get(\n                            \"name\"\n                        ) == mount.get(\"name\"):\n                            try:\n                                claim_name = volume[\"persistentVolumeClaim\"][\n                                    \"claimName\"\n                                ]\n                                print(f\"ClaimName: {claim_name}: MountName: {mount['name']} ContainerName: {container['name']}\")\n                                \n                                # Add mount info if not already added\n                                mount_info = {\n                                    \"container_name\": container[\"name\"],\n                                    \"mount_path\": mount[\"mountPath\"],\n                                    \"pvc_name\": claim_name if claim_name else None,\n                                    \"pod_name\": pod_name\n           
                     }\n                                \n                                # Only add if this specific mount combination hasn't been processed yet\n                                mount_key = f\"{pod_name}:{container['name']}:{mount['mountPath']}:{claim_name}\"\n                                if mount_key not in processed_pvcs:\n                                    pvc_mounts.append(mount_info)\n                                    processed_pvcs.add(mount_key)\n                                    \n                            except KeyError as e:\n                                # Handle the KeyError (e.g., log the error, skip this iteration, etc.)\n                                print(f\"KeyError: {e}. Skipping this entry.\")\n                            except IndexError as e:\n                                # Handle the IndexError (e.g., log the error, skip this iteration, etc.)\n                                print(f\"IndexError: {e}. Skipping this entry.\")\n\n        # Create a dictionary to store processed PVC info\n        pvc_info_dict = {}\n            \n        # Process each mount separately with a single df command\n        for mount in pvc_mounts:\n            container_name = mount[\"container_name\"]\n            mount_path = mount[\"mount_path\"]\n            pvc_name = mount[\"pvc_name\"]\n            pod_name = mount[\"pod_name\"]\n            \n            # Skip if we've already processed this PVC\n            if pvc_name in pvc_info_dict:\n                continue\n                \n            du_command = f\"kubectl exec -n {namespace} {pod_name} -c {container_name} -- df -kh {mount_path} | grep -v Filesystem\"\n            du_output = handle.run_native_cmd(du_command)\n\n            if du_output and not du_output.stderr:\n                # Process each line of df output separately\n                df_lines = du_output.stdout.strip().split(\"\\n\")\n\n                for df_line in df_lines:\n                    if not 
df_line.strip():\n                        continue\n\n                    # Split line into columns\n                    columns = re.split(r\"\\s+\", df_line.strip())\n\n                    # Find the percentage column (contains '%')\n                    percent_col = None\n                    for i, col in enumerate(columns):\n                        if \"%\" in col:\n                            percent_col = i\n                            break\n\n                    if percent_col is None or len(columns) < 2:\n                        print(f\"Warning: Unexpected df output format: {df_line}\")\n                        continue\n\n                    # Extract percentage and capacity\n                    used_percentage = int(columns[percent_col].replace(\"%\", \"\"))\n                    total_capacity = columns[1] if len(columns) > 1 else \"Unknown\"\n                    pvc_info = {\n                        \"pvc_name\": pvc_name,\n                        \"mount_path\": mount_path,\n                        \"used\": used_percentage,\n                        \"capacity\": total_capacity,\n                    }\n                    \n                    # Store in dictionary to prevent duplicates\n                    pvc_info_dict[pvc_name] = pvc_info\n\n                    # Check if usage exceeds threshold\n                    if used_percentage > threshold:\n                        alert_pvcs.append(pvc_info)\n\n        # Add unique alert PVCs to the main list\n        for pvc_info in alert_pvcs:\n            if pvc_info not in alert_pvcs_all_services:\n                alert_pvcs_all_services.append(pvc_info)\n\n    if services_without_pvcs:\n        print(\"Following services do not have any PVCs attached:\")\n        for service in services_without_pvcs:\n            print(f\"- {service}\")\n\n    if alert_pvcs_all_services:\n        print(json.dumps(alert_pvcs_all_services, indent=4))\n\n    return (not bool(alert_pvcs_all_services), 
alert_pvcs_all_services)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_check_service_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Check K8s services endpoint and SSL certificate health</h1>\n\n## Description\nChecks the health status of the provided list of endpoints and their SSL certificate status.\n\n## Lego Details\n\tk8s_check_service_status(handle, endpoints:list=[], threshold: int = 30)\n\t\thandle: Object of type unSkript K8S Connector.\n\t\tendpoints: The URLs of the endpoints whose SSL certificates are to be checked. Eg: [\"https://www.google.com\", \"https://expired.badssl.com/\"]\n    \tthreshold: The number of days within which, if the certificate is set to expire, it is considered a potential issue.\n\n\n## Lego Input\nThis Lego takes inputs handle, endpoints, and threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_check_service_status/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_check_service_status/k8s_check_service_status.json",
    "content": "{\n  \"action_title\": \"Check K8s services endpoint and SSL certificate health\",\n  \"action_description\": \"Checks the health status of the provided list of endpoints and their SSL certificate status.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_check_service_status\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_check_service_status/k8s_check_service_status.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Tuple, List, Optional\nfrom requests.packages.urllib3.exceptions import InsecureRequestWarning\nimport requests\nfrom pydantic import BaseModel, Field\nfrom datetime import datetime, timedelta\nimport ssl\nimport socket\n\n# Disabling insecure request warnings\nrequests.packages.urllib3.disable_warnings(InsecureRequestWarning)\n\n\nclass InputSchema(BaseModel):\n    endpoints: list = Field(\n        ..., description='The URLs of the endpoints whose SSL certificates are to be checked. Eg: [\"https://www.google.com\", \"https://expired.badssl.com/\"]', title='List of URLs'\n    )\n    threshold: Optional[int] = Field(\n        30,\n        description='The number of days within which, if the certificate is set to expire, it is considered a potential issue.',\n        title='Threshold (in days)',\n    )\n\n\ndef k8s_check_service_status_printer(output):\n    status, results = output\n    if status:\n        print(\"All services are healthy.\")\n        return\n\n    if \"Error\" in results[0]:\n        print(f\"Error: {results[0]['Error']}\")\n        return\n    print(\"\\n\" + \"=\" * 100)\n\n    for result in results:\n        print(f\"Service:\\t{result['endpoint']}\")\n        print(\"-\" * 100)\n        print(f\"Status: {result['status']}\\n\")\n        print(\"=\" * 100)\n\n\ndef check_ssl_expiry(endpoint, threshold):\n    hostname = endpoint.split(\"//\")[-1].split(\"/\")[0]\n    ssl_date_fmt = r'%b %d %H:%M:%S %Y %Z'\n\n    # Create an SSL context that restricts to secure versions of TLS\n    context = ssl.create_default_context()\n    context.check_hostname = True\n    context.verify_mode = ssl.CERT_REQUIRED\n\n    # Ensure that only TLSv1.2 and later are used (disabling TLSv1.0 and TLSv1.1) as TLS versions 1.0 and 1.1 are known to be vulnerable to attacks\n    context.options |= ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1\n\n    try:\n        with
socket.create_connection((hostname, 443), timeout=10) as sock:\n            with context.wrap_socket(sock, server_hostname=hostname) as ssl_sock:\n                ssl_info = ssl_sock.getpeercert()\n                \n        expiry_date = datetime.strptime(ssl_info['notAfter'], ssl_date_fmt).date()\n        days_remaining = (expiry_date - datetime.utcnow().date()).days\n        if days_remaining <= threshold:\n            return (days_remaining, False)\n        else:\n            return (days_remaining, True)\n    except Exception as e:\n        raise e\n\n\ndef k8s_check_service_status(handle, endpoints:list, threshold: int = 30) -> Tuple:\n    \"\"\"\n    k8s_check_service_status Checks the health status of the provided endpoints.\n\n    :param endpoints: The URLs of the endpoint whose SSL certificate is to be checked. Eg: [\"https://www.google.com\", \"https://expired.badssl.com/\"]\n    :param threshold: The number of days within which, if the certificate is set to expire, \n                      is considered a potential issue.\n    :return: Tuple with a boolean indicating if all services are healthy, and a list of dictionaries \n             with individual service status.\n    \"\"\"\n    failed_endpoints = []\n\n    for endpoint in endpoints:\n        status_info = {\"endpoint\": endpoint}\n\n        # Check if the endpoint is HTTPS or not\n        if endpoint.startswith(\"https://\"):\n            try:\n                response = requests.get(endpoint, verify=True, timeout=5)\n                days_remaining, is_healthy = check_ssl_expiry(endpoint, threshold)\n                if not (response.status_code == 200 and is_healthy):\n                    status_info[\"status\"] = 'unhealthy'\n                    reason = f'SSL expiring in {days_remaining} days.' 
if not is_healthy else f'Status code: {response.status_code}'\n                    status_info[\"Reason\"] = reason\n                    failed_endpoints.append(status_info)\n            except requests.RequestException as e:\n                status_info[\"status\"] = 'unhealthy'\n                reason = f'SSL error: {str(e)}' if 'CERTIFICATE_VERIFY_FAILED' in str(e) else f'Error: {str(e)}'\n                status_info[\"Reason\"] = reason\n                failed_endpoints.append(status_info)\n        else:\n            # For non-HTTPS endpoints\n            try:\n                response = requests.get(endpoint, timeout=5)\n                if response.status_code != 200:\n                    status_info[\"status\"] = 'unhealthy'\n                    status_info[\"Reason\"] = f'Status code: {response.status_code}'\n                    failed_endpoints.append(status_info)\n            except requests.RequestException as e:\n                status_info[\"status\"] = 'unhealthy'\n                status_info[\"Reason\"] = f'Error: {str(e)}'\n                failed_endpoints.append(status_info)\n\n    if failed_endpoints:\n        return (False, failed_endpoints)\n    else:\n        return (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_check_worker_cpu_utilization/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Check K8s worker CPU Utilization</h1>\n\n## Description\nRetrieves the CPU utilization for all worker nodes in the cluster and compares it to a given threshold.\n\n## Lego Details\n\tk8s_check_worker_cpu_utilization(handle, threshold: float=70.0)\n\t\thandle: Object of type unSkript K8S Connector.\n\t\tthreshold: Threshold for CPU utilization in percentage.\n\n\n## Lego Input\nThis Lego takes inputs handle and threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_check_worker_cpu_utilization/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_check_worker_cpu_utilization/k8s_check_worker_cpu_utilization.json",
    "content": "{\n  \"action_title\": \"Check K8s worker CPU Utilization\",\n  \"action_description\": \"Retrieves the CPU utilization for all worker nodes in the cluster and compares it to a given threshold.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_check_worker_cpu_utilization\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_NODE\"]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_check_worker_cpu_utilization/k8s_check_worker_cpu_utilization.py",
    "content": "from __future__ import annotations\n\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Tuple\n\n\n\nclass InputSchema(BaseModel):\n    threshold: Optional[float] = Field(\n        70.0,\n        description='Threshold for CPU utilization in percentage.',\n        title='Threshold (in %)',\n    )\n\n\ndef k8s_check_worker_cpu_utilization_printer(output):\n    status, nodes_info = output\n    if status:\n        print(\"All nodes are within the CPU utilization threshold.\")\n        return\n\n    print(\"ALERT: Nodes exceeding CPU utilization threshold:\")\n    print(\"-\" * 40)\n    for node_info in nodes_info:\n        print(f\"Node: {node_info['node']} - CPU Utilization: {node_info['cpu']}%\")\n    print(\"-\" * 40)\n\ndef k8s_check_worker_cpu_utilization(handle, threshold: float=70.0) -> Tuple:\n    \"\"\"\n    k8s_check_worker_cpu_utilization Retrieves the CPU utilization for all worker nodes in the cluster and compares it to a given threshold.\n\n    :type handle: object\n    :param handle: Handle object to execute the kubectl command.\n\n    :type threshold: int\n    :param threshold: Threshold for CPU utilization in percentage.\n\n    :return: Status and dictionary with node names and their CPU information if any node's CPU utilization exceeds the threshold.\n    \"\"\"\n    exceeding_nodes = []\n    kubectl_command = \"kubectl top nodes --no-headers\"\n    response = handle.run_native_cmd(kubectl_command)\n\n    if response is None or response.stderr:\n        raise Exception(f\"Error while executing command ({kubectl_command}): {response.stderr if response else 'empty response'}\")\n\n    # Ensure response.stdout is processed only once and correctly\n    lines = response.stdout.strip().split('\\n')\n    seen_nodes = set()  # Keep track of nodes that have already been processed\n\n    for line in lines:\n        parts = line.split()\n        if len(parts) < 5: 
 # Check for correct line format\n            continue\n        node_name, cpu_percentage_str = parts[0], parts[2].rstrip('%')\n        if node_name in seen_nodes:\n            print(f\"Duplicate entry detected for node {node_name}, skipping.\")\n            continue\n        seen_nodes.add(node_name)\n\n        cpu_percentage = float(cpu_percentage_str)\n        if cpu_percentage > threshold:\n            exceeding_nodes.append({\"node\": node_name, \"cpu\": cpu_percentage})\n\n    if exceeding_nodes:\n        return (False, exceeding_nodes)\n    return (True, None)\n\n\n\n"
  },
  {
    "path": "Kubernetes/legos/k8s_delete_pod/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Delete a Kubernetes POD</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego deletes a Kubernetes POD in a given Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_delete_pod(handle: object, namespace: str, podname: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace\r\n        podname: K8S Pod Name\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, namespace, and podname.\r\n\r\n## Lego Output\r\nHere is a sample output. For the command `kubectl describe pod {unhealthyPod} -n {namespace} | grep -A 10`\r\n\r\n    Events:\r\n    Type     Reason   Age                     From     Message\r\n    ----     ------   ----                    ----     -------\r\n    Normal   BackOff  33m (x437 over 133m)    kubelet  Back-off pulling image \"diebian\"\r\n    Warning  Failed   3m16s (x569 over 133m)  kubelet  Error: ImagePullBackOff\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_delete_pod/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_delete_pod/k8s_delete_pod.json",
    "content": "{\r\n    \"action_title\": \"Delete a Kubernetes POD in a given Namespace\",\r\n    \"action_description\": \"Delete a Kubernetes POD in a given Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_delete_pod\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_delete_pod/k8s_delete_pod.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport pprint \nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace')\n    podname: str = Field(\n        title='Podname',\n        description='K8S Pod Name')\n\n\ndef k8s_delete_pod_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\ndef k8s_delete_pod(handle, namespace: str, podname: str):\n    \"\"\"k8s_delete_pod delete a Kubernetes POD in a given Namespace\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace\n\n        :type podname: str\n        :param podname: K8S Pod Name\n\n        :rtype: Dict of POD info\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    try:\n        resp = coreApiClient.delete_namespaced_pod(\n            name=podname, namespace=namespace)\n    except ApiException as e:\n        resp = 'An Exception occurred while executing the command ' + e.reason\n        raise e\n\n    return resp\n"
  },
  {
    "path": "Kubernetes/legos/k8s_delete_pvc/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Delete Kubernetes PVC</h1>\n\n## Description\nThis action force deletes a list of Kubernetes PVCs in a given Namespace.\n\n## Lego Details\n\tk8s_delete_pvc(handle, namespace: str, pvc_names: list)\n\t\thandle: Object of type unSkript K8S Connector.\n\t\tnamespace: Kubernetes namespace\n\t\tpvc_names: List of K8S PVC Names. Eg: [\"data-dir-1\", \"data-dir-2\"]\n\n\n## Lego Input\nThis Lego takes inputs handle, namespace, and pvc_names.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_delete_pvc/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_delete_pvc/k8s_delete_pvc.json",
    "content": "{\n  \"action_title\": \"Delete Kubernetes PVC\",\n  \"action_description\": \"This action force deletes a list of Kubernetes PVCs in a given Namespace.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_delete_pvc\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_delete_pvc/k8s_delete_pvc.py",
    "content": "from __future__ import annotations\n\n##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Dict\nfrom kubernetes import client, config\nfrom kubernetes.client.exceptions import ApiException\nimport pprint\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(..., description='Kubernetes namespace', title='K8s namespace')\n    pvc_names: list = Field(..., description='List of K8S PVC Names. Eg: [\"data-dir-1\", \"data-dir-2\"]', title='List of PVC names')\n\n\n\ndef k8s_delete_pvc_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\ndef k8s_delete_pvc(handle, namespace: str, pvc_names: list) -> Dict:\n    \"\"\"\n    k8s_delete_pvc force deletes one or more Kubernetes PVCs in a given Namespace.\n\n    :type handle: object\n    :param handle: Object returned from the Task validate method or Kubernetes client configuration\n\n    :type namespace: str\n    :param namespace: Kubernetes namespace\n\n    :type pvc_names: list\n    :param pvc_names: List of K8S PVC Names. Eg: [\"data-dir-1\", \"data-dir-2\"]\n\n    :rtype: Dict or str with information about the deletion or error.\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    responses = {}\n    for pvc_name in pvc_names:\n        try:\n            resp = coreApiClient.delete_namespaced_persistent_volume_claim(\n                name=pvc_name,\n                namespace=namespace,\n                body=client.V1DeleteOptions(propagation_policy='Foreground')  # This forces the deletion\n            )\n            responses[pvc_name] = resp.status\n        except ApiException as e:\n            resp = 'An Exception occurred while executing the command ' + e.reason\n            responses[pvc_name] = resp\n            raise e\n\n    return responses\n\n\n\n"
  },
  {
    "path": "Kubernetes/legos/k8s_describe_node/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Describe Kubernetes Node</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego describe Kubernetes Node.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_describe_node(handle: object, node_name: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        node_name: Kubernetes namespace\r\n\r\n## Lego Input\r\nThis Lego take two input handle and node_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_describe_node/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_describe_node/k8s_describe_node.json",
    "content": "{\r\n    \"action_title\": \"Describe Kubernetes Node\",\r\n    \"action_description\": \"Describe a Kubernetes Node\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_describe_node\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_NODE\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_describe_node/k8s_describe_node.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    node_name: str = Field(\n        title='Node',\n        description='Kubernetes Node name'\n    )\n\ndef k8s_desribe_node_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\ndef k8s_describe_node(handle, node_name: str):\n    \"\"\"k8s_describe_node get nodes details\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type node_name: str\n        :param node_name: Kubernetes Node name.\n\n        :rtype: Dict of nodes details\n    \"\"\"\n    coreApiClient = client.CoreV1Api(handle)\n\n    try:\n        resp = coreApiClient.read_node(node_name, pretty=True)\n\n    except ApiException as e:\n        resp = 'An Exception occured while executing the command' + e.reason\n\n    return resp\n"
  },
  {
    "path": "Kubernetes/legos/k8s_describe_pod/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Describe a Kubernetes POD</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego describe a Kubernetes POD in a given Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_describe_pod(handle: object, namespace: str, podname: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n        podname: K8S Pod Name.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, namespace and podname.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_describe_pod/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_describe_pod/k8s_describe_pod.json",
    "content": "{\r\n    \"action_title\": \"Describe a Kubernetes POD in a given Namespace\",\r\n    \"action_description\": \"Describe a Kubernetes POD in a given Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_describe_pod\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_describe_pod/k8s_describe_pod.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport collections\nfrom typing import Dict\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace')\n    podname: str = Field(\n        title='Pod',\n        description='K8S Pod Name')\n\n\ndef k8s_desribe_pod_printer(output):\n    if output is None:\n        return \n\n    pprint.pprint(output)\n\ndef k8s_describe_pod(handle, namespace: str, podname: str) -> Dict:\n    \"\"\"k8s_describe_pod get Kubernetes POD details\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :type podname: str\n        :param podname: K8S Pod Name.\n\n        :rtype: Dict of POD details\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    def cleanNullTerms(_dict):\n        \"\"\"Delete None values recursively from all of the dictionaries\"\"\"\n        for key, value in list(_dict.items()):\n            if isinstance(value, dict):\n                cleanNullTerms(value)\n            elif value is None:\n                del _dict[key]\n            elif isinstance(value, list):\n                for v_i in value:\n                    if isinstance(v_i, dict):\n                        cleanNullTerms(v_i)\n\n        return _dict\n\n    data = {}\n    try:\n        resp = coreApiClient.read_namespaced_pod(\n            name=podname, namespace=namespace)\n        resp = resp.to_dict()\n        del resp['metadata']['managed_fields']\n        resp = cleanNullTerms(resp)\n        data[\"Name\"] = resp['metadata']['name']\n        data[\"Namespace\"] = namespace\n        data[\"Priority\"] = resp['spec']['priority']\n        data[\"Node\"] = 
resp['spec']['node_name']\n        data[\"Start Time\"] = resp['status']['start_time']\n        data[\"Labels\"] = resp['metadata']['labels']\n        if \"annotations\" in resp['metadata']:\n            data[\"Annotations\"] = resp['metadata']['annotations']\n        data[\"Status\"] = resp['status']['phase']\n        data[\"IP\"] = resp['status']['pod_ip']\n        data[\"IPS\"] = resp['status'].get('pod_i_ps')\n        data[\"Controlled By\"] = resp['metadata']['owner_references'][0]['kind'] + \\\n            \"/\" + resp['metadata']['owner_references'][0]['name']\n        data[\"Containers\"] = ''\n        ####\n        for container in resp['spec']['containers']:\n            data['  ' + container['name']] = ''\n            for c in container:\n                data['      ' + c] = container[c]\n        # Container Index Represents the Number of containers in a given POD\n        container_index = 0\n        msglist = []\n        for c in resp['status']['container_statuses']:\n            data[' ' + c['name']] = ''\n            data['   ' + 'Container ID'] = c['container_id']\n            data['   ' + 'Image'] = c['image']\n            data['   ' + 'Image ID'] = c['image_id']\n            data['   ' + 'Port'] = resp['spec']['containers'][container_index]['ports']\n            if 'command' in resp['spec']['containers'][container_index]:\n                data['   ' + 'Command'] = resp['spec']['containers'][container_index]['command']\n            if 'args' in resp['spec']['containers'][container_index]:\n                data['   ' + 'Args'] = resp['spec']['containers'][container_index]['args']\n            data['   ' + 'State'] = ''\n            if c['state']['running'] is None and c['state']['waiting'] is not None:\n                data['     ' + 'Reason'] = c['state']['waiting']['reason']\n                if c['last_state']['terminated'] is not None:\n                    msglist.append(c['last_state']['terminated']['message'])\n            container_index += 
1\n        data['Conditions'] = ''\n        for c in resp[\"status\"][\"conditions\"]:\n            data[\"Type\"] = \"Status\"\n            # Condition status is the string \"True\"/\"False\", not a bool\n            data[c[\"type\"]] = c[\"status\"] == \"True\"\n\n        data['Volumes:'] = ''\n        for container in resp[\"spec\"][\"volumes\"]:\n            for c in container:\n                data['      ' + c] = container[c]\n        data['QoS Class:'] = resp['status'].get('qos_class')\n        tolerations = []\n        for toleration in resp['spec']['tolerations']:\n            tolerations.append(toleration[\"key\"] + \":\" + toleration[\"effect\"] + \" op=\" + \\\n                         toleration[\"operator\"] + \" for \" + str(toleration[\"toleration_seconds\"]))\n        data['Tolerations'] = tolerations\n        data['Events'] = msglist\n    except ApiException as e:\n        resp = 'An Exception occurred while executing the command: ' + e.reason\n        raise e\n\n    print('\\n')\n    data = collections.OrderedDict(data)\n    return data\n"
  },
  {
    "path": "Kubernetes/legos/k8s_detect_service_crashes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Detect K8s services crashes</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis action detects service crashes by checking the logs of each pod for specific error messages.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_detect_service_crashes(handle, namespace: str, core_services: list, tail_lines: int = 100)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace\r\n        core_services: List of services to detect service crashes\r\n        tail_lines: Number of log lines to fetch from each container. Defaults to 100.\r\n\r\n## Lego Input\r\nThis Lego take 4 inputs handle, namespace, tail_lines, core_services.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_detect_service_crashes/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_detect_service_crashes/k8s_detect_service_crashes.json",
    "content": "{\n    \"action_title\": \"Detect K8s service crashes\",\n    \"action_description\": \"Detects service crashes by checking the logs of each pod for specific error messages.\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_detect_service_crashes\",\n    \"action_is_check\": true,\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_TROUBLESHOOTING\",\"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\" ],\n    \"action_next_hop\": [],\n    \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_detect_service_crashes/k8s_detect_service_crashes.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport json\nimport re\n\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate \n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        description='K8S Namespace',\n        title='K8S Namespace'\n    )\n    tail_lines: Optional[int] = Field(\n        100,\n        description='Number of log lines to fetch from each container. Defaults to 100.',\n        title='No. of lines (Default: 100)'\n    )\n    core_services: list = Field(\n        description='List of services to detect service crashes on.'\n    )\n\ndef k8s_detect_service_crashes_printer(output):\n    status, data = output\n\n    if status:\n        print(\"No detected errors in the logs of the pods.\")\n    else:\n        headers = [\"Pod\", \"Namespace\", \"Error\", \"Timestamp\"]\n        table_data = [(entry[\"pod\"], entry[\"namespace\"], entry[\"error\"], entry[\"timestamp\"]) for entry in data]\n        print(tabulate(table_data, headers=headers, tablefmt=\"grid\"))\n\n\n\ndef k8s_detect_service_crashes(handle, namespace: str, core_services:list, tail_lines: int = 100) -> Tuple:\n    \"\"\"\n    k8s_detect_service_crashes detects service crashes by checking the logs of each pod for specific error messages.\n\n    :type handle: object\n    :param handle: Object returned from the task.validate(...)\n\n    :type namespace: str\n    :param namespace: (Optional) String, K8S Namespace as python string\n\n    :type tail_lines: int\n    :param tail_lines: Number of log lines to fetch from each container. 
Defaults to 100.\n\n    :rtype: Status, List of objects of pods, namespaces that might have crashed along with the timestamp\n    \"\"\"\n    ERROR_PATTERNS = [\n        \"Worker exiting\",\n        \"Exception\",\n        # Add more error patterns here as necessary\n    ]\n    crash_logs = []\n\n    # Retrieve all services and pods in the namespace just once\n    kubectl_cmd = f\"kubectl -n {namespace} get services,pods -o json\"\n    try:\n        response = handle.run_native_cmd(kubectl_cmd)\n        services_and_pods = json.loads(response.stdout.strip())[\"items\"]\n    except json.JSONDecodeError as json_err:\n        print(f\"Error parsing JSON response: {str(json_err)}\")\n        return (True, None)  # Return early if we can't parse the JSON at all\n    except Exception as e:\n        print(f\"Unexpected error while fetching services and pods: {str(e)}\")\n        return (True, None)\n\n    for service_name_to_check in core_services:\n        service_found = False\n        for item in services_and_pods:\n            if item.get(\"kind\") == \"Service\" and item.get(\"metadata\", {}).get(\"name\") == service_name_to_check:\n                service_found = True\n                pod_labels = item.get('spec', {}).get(\"selector\", None)\n                if pod_labels:\n                    pod_selector = \",\".join([f\"{key}={value}\" for key, value in pod_labels.items()])\n                    try:\n                        kubectl_logs_cmd = f\"kubectl -n {namespace} logs --selector {pod_selector} --tail={tail_lines}\"\n                        pod_logs = handle.run_native_cmd(kubectl_logs_cmd).stdout.strip()\n\n                        for error_pattern in ERROR_PATTERNS:\n                            if re.search(error_pattern, pod_logs):\n                                crash_logs.append({\n                                    \"service\": service_name_to_check,\n     
                               \"pod\": item.get('metadata', {}).get('name', 'N/A'),\n                                    \"namespace\": item.get('metadata', {}).get('namespace', 'N/A'),\n                                    \"error\": error_pattern,\n                                    \"timestamp\": re.findall(r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\", pod_logs)[-1] if re.search(error_pattern, pod_logs) else \"Unknown Time\"\n                                })\n                    except Exception as e:\n                        # Log the error but don't stop execution\n                        print(f\"Error fetching logs for service {service_name_to_check}: {str(e)}\")\n                        pass\n\n        if not service_found:\n            print(f\"Service {service_name_to_check} not found in namespace {namespace}. Continuing with next service.\")\n\n    return (False, crash_logs) if crash_logs else (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_exec_command_on_pod/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Execute a command on a Kubernetes POD</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego execute a command on a Kubernetes POD in a given Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_exec_command_on_pod(handle: object, namespace: str, podname: str, command: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace\r\n        podname: Kubernetes namespace\r\n        command: Kubernetes namespace\r\n\r\n## Lego Input\r\nThis Lego take four input handle, namespace, podname and command.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_exec_command_on_pod/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_exec_command_on_pod/k8s_exec_command_on_pod.json",
    "content": "{\r\n    \"action_title\": \"Execute a command on a Kubernetes POD in a given Namespace\",\r\n    \"action_description\": \"Execute a command on a Kubernetes POD in a given Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_exec_command_on_pod\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_exec_command_on_pod/k8s_exec_command_on_pod.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.stream import stream\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace.')\n    podname: str = Field(\n        title='Pod',\n        description='Kubernetes Pod Name')\n    command: str = Field(\n        title='Command',\n        description='Commands to execute on the Pod. Eg \"df -k\"')\n\ndef k8s_exec_command_on_pod_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\ndef k8s_exec_command_on_pod(handle, namespace: str, podname: str, command: str) -> str:\n    \"\"\"k8s_exec_command_on_pod executes the given kubectl command on the pod\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :type podname: str\n        :param podname: Kubernetes Pod Name.\n\n        :type command: str\n        :param command: Commands to execute on the Pod.\n\n        :rtype: String, Output of the command in python string format \n        or Empty String in case of Error.\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    try:\n        resp = stream(coreApiClient.connect_get_namespaced_pod_exec,\n                      podname,\n                      namespace,\n                      command=command.split(),\n                      stderr=True,\n                      stdin=True,\n                      stdout=True,\n                      tty=False\n                      )\n    except Exception as e:\n        resp = f'An Exception occured while executing the command {e}'\n\n    return resp\n"
  },
  {
    "path": "Kubernetes/legos/k8s_exec_command_on_pods_and_filter/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubernetes Execute a command on a POD</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego execute a command on Kubernetes POD in a given namespace and filter output.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_exec_command_on_pods_and_filter(handle: object, namespace: str, pods: List, match: str, command: List)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n        pods: Kubernetes Pod Name(s).\r\n        match: Matching String for Command response.\r\n        command: List of Commands to Execute on the Pod.\r\n\r\n## Lego Input\r\nThis Lego take five input handle, namespace, pods, match and command.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_exec_command_on_pods_and_filter/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_exec_command_on_pods_and_filter/k8s_exec_command_on_pods_and_filter.json",
    "content": "{\r\n    \"action_title\": \"Kubernetes Execute a command on a POD in a given namespace and filter\",\r\n    \"action_description\": \"Execute a command on Kubernetes POD in a given namespace and filter output\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_exec_command_on_pods_and_filter\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_exec_command_on_pods_and_filter/k8s_exec_command_on_pods_and_filter.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport pprint\nimport re\nfrom typing import List, Dict\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.stream import stream\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace.')\n    pods: list = Field(\n        title='Pod(s)',\n        description='Kubernetes Pod Name(s)')\n    match: str = Field(\n        title='Match String',\n        description='Matching String for Command response'\n    )\n    command: list = Field(\n        title='Command',\n        description='List of Commands to Execute on the Pod, '\n        'ex: [\"/bin/sh\",\"-c\",\"nslookup google.com\"]')\n\ndef legoPrinter(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\ndef k8s_exec_command_on_pods_and_filter(\n        handle,\n        namespace: str,\n        pods: List,\n        match: str,\n        command: List\n        ) -> Dict:\n\n    \"\"\"k8s_exec_command_on_pods_and_filter executes the given kubectl command on the pod\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :type pods: List\n        :param pods: Kubernetes Pod Name(s).\n\n        :type match: str\n        :param match: Matching String for Command response.\n\n        :type command: List\n        :param command: List of Commands to Execute on the Pod.\n\n        :rtype: String, Output of the command in python string \n        format or Empty String in case of Error.\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    result = {}\n    try:\n        for pod in pods:\n            resp = stream(coreApiClient.connect_get_namespaced_pod_exec,\n                          pod,\n                          namespace,\n                         
 command=list(command),\n                          stderr=True,\n                          stdin=True,\n                          stdout=True,\n                          tty=False\n                          )\n            res = re.search(f'({match})', resp)\n            if res is not None:\n                result['name'] = pod\n                result['output'] = res.group(1)\n                result['status'] = 'SUCCESS'\n    except Exception as e:\n        result['name'] = 'N/A'\n        result['output'] = str(e)\n        result['status'] = 'ERROR'\n\n    return result\n"
  },
  {
    "path": "Kubernetes/legos/k8s_execute_helm_command/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Execute helm command</h2>\n\n<br>\n\n## Description\nThis Lego Executes helm command in a given k8s cluster.\n\n\n## Lego Details\n\n    k8s_execute_helm_command(handle: object, helm_command: str)\n        handle: Object of type unSkript K8S Connector\n        helm_command: helm command to execute on the k8s cluster\n\n\n## Lego Input\nThis Lego take two inputs: handle and helm_command\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_execute_helm_command/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_execute_helm_command/k8s_execute_helm_command.json",
    "content": "{\n    \"action_title\": \"Helm command\",\n    \"action_description\": \"Execute helm command in K8S Cluster\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_execute_helm_command\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\"]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_execute_helm_command/k8s_execute_helm_command.py",
    "content": "#\n# Copyright (c) 2024 unSkript.com\n# All rights reserved.\n#\nimport subprocess\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    helm_command: str = Field(\n        title='Helm Command',\n        description='Helm command to execute in the K8s Cluster'\n    )\n\ndef k8s_execute_helm_command_printer(data: str):\n    if not data:\n        return \n    \n    print(data)\n\n\ndef k8s_execute_helm_command(handle, helm_command: str) -> str:\n    \"\"\"k8s_execute_helm_command executes the given helm command in the k8s cluster\n\n       :type handle: object\n       :param handle: Object returned from the Task validate method\n\n       :type helm_command: str\n       :param helm_command: Helm Command that need to be executed \n\n       :rtype: String, Output of the given helm command. Empty string in case of error\n    \"\"\"\n    retval = None \n    if handle.client_side_validation is not True:\n        print(f\"K8S Connector is invalid: {handle}\")\n        return str()\n    \n    if not helm_command:\n        print(f\"Given helm command is empty, cannot proceed further!\")\n        return str()\n    \n    config_file = None\n    try:\n        config_file = handle.temp_config_file \n    except Exception as e:\n        print(f\"ERROR: {str(e)}\")\n        return str()\n    \n    if config_file:\n        if not '--kubeconfig' in helm_command:\n            helm_command = helm_command.replace('helm',\n                                                f'helm --kubeconfig {config_file}')\n    else:\n        # Incluster configuration, so need not have any kubeconfig \n        pass \n\n    try:\n        result = subprocess.run(helm_command,\n                                check=True,\n                                shell=True,\n                                capture_output=True,\n                                text=True)\n        retval = result.stdout \n        \n        # If error is set, then lets dump the error code\n        
if result.stderr and result.returncode != 0:\n            print(result.stderr)\n\n    except subprocess.CalledProcessError as e:\n        # stderr is already a str because subprocess.run was called with text=True\n        error_message = f\"Error running command: {e}\\n{e.stderr}\" \\\n                    if e.stderr else f\"Error running command: {e}\"\n        print(error_message)\n\n    return retval\n"
  },
  {
    "path": "Kubernetes/legos/k8s_execute_local_script_on_a_pod/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h2>Execute Local Script on a Pod</h2>\n\n<br>\n\n## Description\nExecute a given script on a pod in a namespace\n\n## Lego Details\n\n    k8s_execute_local_script_on_a_pod(handle: object, pod_name: str, namespace: str, file_name:str)\n\n        handle: Object of type unSkript K8S Connector\n        pod_name: String, Name of the POD (Mandatory parameter)\n        namespace: String, Namespace where the POD exists\n        file_name: String, Local script file that needs to be run on the pod.\n\n## Lego Input\nThis Lego takes four mandatory inputs. Handle (K8S) object returned from the task.validator(...),\nPOD Name and Namespace where the POD exists, local script file that needs to be run on the pod.\n\n## Lego Output\nHere is a sample output-\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_execute_local_script_on_a_pod/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_execute_local_script_on_a_pod/k8s_execute_local_script_on_a_pod.json",
    "content": "{\n    \"action_title\": \"Execute local script on a pod\",\n    \"action_description\": \"Execute local script on a pod in a namespace\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_execute_local_script_on_a_pod\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_POD\"]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_execute_local_script_on_a_pod/k8s_execute_local_script_on_a_pod.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nimport pprint\nfrom pydantic import BaseModel, Field\nimport os\n#import subprocess\n\nclass InputSchema(BaseModel):\n    pod_name: str = Field(\n        title=\"Pod Name\",\n        description=\"K8S Pod Name\"\n    )\n    namespace: str = Field(\n        title=\"Namespace\",\n        description=\"K8S Namespace where the POD exists\"\n    )\n    file_name: str = Field(\n        title=\"Script filename with the full path\",\n        description=\"Script filename with the full path. \"\n    )\n\ndef k8s_execute_local_script_on_a_pod_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef k8s_execute_local_script_on_a_pod(handle, namespace: str, pod_name:str, file_name:str)->str:\n    \"\"\"k8s_execute_local_script_on_a_pod executes a given script on a pod\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Namespace to get the pods from. 
Eg:\"logging\"\n\n        :type pod_name: str\n        :param pod_name: Pod name to to run the script.\n\n        :type file_name: str\n        :param file_name: Script file name.\n\n        :rtype: String of the result of the script that was run on the pod\n    \"\"\"\n    # Step 2: Copy the script to the pod using kubectl cp command\n    tmp_script_path = \"/tmp/script.sh\"\n    handle.run_native_cmd(f'kubectl cp {file_name} {namespace}/{pod_name}:{tmp_script_path}')\n\n    # Step 3: Make the script executable on the pod\n    handle.run_native_cmd(f'kubectl exec -n {namespace} {pod_name} -- chmod +x {tmp_script_path}')\n\n    # Step 4: Execute the script on the pod and get the output\n    command = f'kubectl exec -n {namespace} {pod_name} -- sh -c {tmp_script_path}'\n\n    result = handle.run_native_cmd(command)\n    # Remove the temporary script file\n    handle.run_native_cmd(f'kubectl exec -n {namespace} {pod_name} -- rm -f {tmp_script_path}')\n\n    if result.stderr not in ('', None):\n        raise result.stderr\n    return result.stdout"
  },
  {
    "path": "Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Gather Data to Troubleshoot POD</h2>\n\n<br>\n\n## Description\nThis Action can be used to gather all relevant data to troubleshoot a POD in error state.\nThis Action gathers \n* POD Status\n* POD Logs\n* POD Events \n\n## Lego Details\n\n    k8s_gather_data_for_pod_troubleshoot(handle: object, pod_name: str, namespace: str)\n\n        handle: Object of type unSkript K8S Connector\n        pod_name: String, Name of the POD (Mandatory parameter)\n        namespace: String, Namespace where the POD exists\n\n## Lego Input\nThis Lego takes three mandatory inputs. Handle (K8S) object returned from the task.validator(...),\nPOD Name and Namespace where the POD exists. \n\n## Lego Output\nThis Action outputs a Dict with `describe` and `logs` as keys. Here is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/k8s_gather_data_for_pod_troubleshoot.json",
    "content": "{\n    \"action_title\": \"Gather Data for POD Troubleshoot\",\n    \"action_description\": \"Gather Data for POD Troubleshoot\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_gather_data_for_pod_troubleshoot\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_POD\"]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/k8s_gather_data_for_pod_troubleshoot.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    pod_name: str = Field(\n        title=\"Pod Name\",\n        description=\"K8S Pod Name\"\n    )\n    namespace: str = Field(\n        title=\"Namespace\",\n        description=\"K8S Namespace where the POD exists\"\n    )\n\ndef k8s_gather_data_for_pod_troubleshoot_printer(output):\n    if not output:\n        return\n\n    pprint.pprint(output)\n\n\ndef k8s_gather_data_for_pod_troubleshoot(handle, pod_name: str, namespace: str) -> dict:\n    \"\"\"k8s_gather_data_for_pod_troubleshoot This function gathers data from the k8s namespace\n       to assist in troubleshooting of a pod. The gathered data are returned in the form of a\n       Dictionary with `logs`, `events` and `details` keys. \n       \n       :type handle: Object\n       :param handle: Object returned from task.validate(...) 
routine\n\n       :type pod_name: str\n       :param pod_name: Name of the K8S POD (Mandatory parameter)\n\n       :type namespace: str \n       :param namespace: Namespace where the above K8S POD is found (Mandatory parameter)\n\n       :rtype: Output of in the form of dictionary with `describe` and `logs` keys\n    \"\"\"\n    if not pod_name or not namespace:\n        raise TypeError(\"POD Name and Namespace are mandatory parameters, cannot be None\")\n\n    retval = {}\n    # Get Describe POD details\n    kubectl_client = f'kubectl describe pod {pod_name} -n {namespace}'\n    result = handle.run_native_cmd(kubectl_client)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({kubectl_client}) (empty response)\")\n        return {}\n\n    if result.stderr:\n        raise ApiException(\n            f\"Error occurred while executing command {kubectl_client} {result.stderr}\")\n\n    # Get Logs for the POD\n    kubectl_client = f'kubectl logs {pod_name} -n {namespace}'\n    result = handle.run_native_cmd(kubectl_client)\n    if not result.stderr:\n        retval['logs'] =  result.stdout\n    else:\n        retval['error'] = result.stderr\n    return retval\n"
  },
  {
    "path": "Kubernetes/legos/k8s_gather_data_for_service_troubleshoot/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Gather data for K8S Service troubleshooting </h1>\n\n## Description\nThis Action can be used to gather data to aid in troubleshooting k8s service in a namespace. \n\n\n## Lego Details\n\n    k8s_gather_data_for_service_troubleshoot(handle, namespace: str)\n\n        handle: Object of type unSkript K8S Connector\n        namespace: k8s namespace.\n\n## Lego Input\n\nThis Lego take two inputs handle, and namespace.\n\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_gather_data_for_service_troubleshoot/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_gather_data_for_service_troubleshoot/k8s_gather_data_for_service_troubleshoot.json",
    "content": "{\n    \"action_title\": \"Gather Data for K8S Service Troubleshoot\",\n    \"action_description\": \"Gather Data for K8S Service Troubleshoot\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_gather_data_for_service_troubleshoot\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ]\n\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_gather_data_for_service_troubleshoot/k8s_gather_data_for_service_troubleshoot.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport pprint\nimport json\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    service_name: str = Field(\n        title=\"Service Name\",\n        description=\"K8S Service Name to gather data\"\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='k8s Namespace')\n\n\ndef k8s_gather_data_for_service_troubleshoot_printer(output):\n    if not output:\n        return\n    pprint.pprint(output)\n\ndef k8s_gather_data_for_service_troubleshoot(handle, servicename: str, namespace: str) -> dict:\n    \"\"\"k8s_gather_data_for_service_troubleshoot This utility function can be used to gather data\n       for a given service in a namespace. \n\n       :type handle: object\n       :param handle: Object returned from task.validate(...) function\n\n       :type servicename: str\n       :param servicename: Service Name that needs gathering data\n\n       :type namespace: str\n       :param namespace: K8S Namespace\n\n       :rtype: Dictionary containing the result\n    \"\"\"\n    if not namespace or not servicename :\n        raise Exception(\"Namespace and Servicename are mandatory parameter\")\n\n    # Get Service Detail\n    describe_cmd = f'kubectl describe svc {servicename} -n {namespace}'\n    describe_output = handle.run_native_cmd(describe_cmd)\n\n    if describe_output is None:\n        print(\n            f\"Error while executing command ({describe_cmd}) (empty response)\")\n        return {}\n\n    if describe_output.stderr:\n        raise ApiException(\n            f\"Error occurred while executing command {describe_cmd} {describe_output.stderr}\")\n\n    retval = {}\n    if not describe_output.stderr:\n        retval['describe'] = describe_output.stdout\n\n    # To Get the Ingress rule, we first find out the name of the ingress\n    # Find out the ingress rules in the given 
namespace, find out the\n    # Matching rule in the ingress that matches the service name and append it\n    # to the `ingress` key.\n    rule_name = ''\n    ingress_rules_for_service = []\n    ingress_rule_name_cmd = f\"kubectl get ingress -n {namespace} -o name\"\n    ingress_rule_name_output = handle.run_native_cmd(ingress_rule_name_cmd)\n    if not ingress_rule_name_output.stderr:\n        rule_name = ingress_rule_name_output.stdout\n\n    ingress_rules_cmd = f\"kubectl get ingress -n {namespace}\" + \\\n        ' -o jsonpath=\"{.items[*].spec.rules}\"'\n    ingress_rules_output = handle.run_native_cmd(ingress_rules_cmd)\n    if not ingress_rules_output.stderr:\n        rules = json.loads(ingress_rules_output.stdout)\n        for r in rules:\n            h = r.get('host')\n            for s_p in r.get('http').get('paths'):\n                if s_p.get('backend').get('service').get('name') == servicename:\n                    ingress_rules_for_service.append([\n                        h,\n                        s_p.get('backend').get('service').get('port'),\n                        s_p.get('path')\n                        ])\n\n    if ingress_rules_for_service:\n        retval['ingress'] = []\n        for ir in ingress_rules_for_service:\n            if len(ir) >= 3:\n                retval['ingress'].append({'name': rule_name,\n                            'namespace': namespace, \n                            'host': ir[0],\n                            'port': ir[1],\n                            'path': ir[-1]})\n\n\n    return retval\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get All Evicted PODS From Namespace </h1>\r\n\r\n## Description\r\nThis Lego get all evicted PODS from given namespace. If namespace not given it will get all the pods from all namespaces.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_all_evicted_pods_from_namespace(handle, namespace: str = \"\")\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: k8s namespace.\r\n\r\n## Lego Input\r\n\r\nThis Lego take two inputs handle, and namespace.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/k8s_get_all_evicted_pods_from_namespace.json",
    "content": "{\r\n    \"action_title\": \"Get All Evicted PODS From Namespace\",\r\n    \"action_description\": \"This action get all evicted PODS from given namespace. If namespace not given it will get all the pods from all namespaces.\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_all_evicted_pods_from_namespace\",\r\n    \"action_is_check\": true,\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" , \"CATEGORY_TYPE_K8S\", \"CATEGORY_TYPE_K8S_POD\"],\r\n    \"action_next_hop\": [\"a9b8a0c8ecdb5ef76f01e81689319f16095d6136620a4c7f78d57e81ba9a3ba0\"],\r\n    \"action_next_hop_parameter_mapping\": {}\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/k8s_get_all_evicted_pods_from_namespace.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\nimport pprint\nimport json\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default='',\n        title='Namespace',\n        description='k8s Namespace')\n\n\ndef k8s_get_all_evicted_pods_from_namespace_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\ndef k8s_get_all_evicted_pods_from_namespace(handle, namespace: str = \"\") -> Tuple:\n    \"\"\"k8s_get_all_evicted_pods_from_namespace returns all evicted pods\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n        \n        :type namespace: str\n        :param namespace: k8s namespace.\n\n        :rtype: Tuple of status result and list of evicted pods\n    \"\"\"\n    if handle.client_side_validation is not True:\n        raise Exception(f\"K8S Connector is invalid: {handle}\")\n\n    # Define the kubectl command based on the namespace input\n    kubectl_command = \"kubectl get pods --all-namespaces -o json\"\n    if namespace:\n        kubectl_command = \"kubectl get pods -n \" + namespace + \" -o json\"\n\n    try:\n        response = handle.run_native_cmd(kubectl_command)\n    except Exception as e:\n        print(f\"Error occurred while executing command {kubectl_command}: {str(e)}\")\n        raise\n\n    if response is None:\n        print(f\"Error while executing command ({kubectl_command}) (empty response)\")\n        raise Exception(\"Empty response from kubectl command\")\n\n    if response.stderr:\n        raise Exception(f\"Error occurred while executing command {kubectl_command} {response.stderr}\")\n\n    result = []\n    try:\n        pod_details = json.loads(response.stdout)\n        for pod in pod_details.get('items', []):\n            if pod['status']['phase'] == 'Failed' and any(cs.get('reason') == 'Evicted' for cs 
in pod['status'].get('conditions', [])):\n                pod_dict = {\n                    \"pod_name\": pod[\"metadata\"][\"name\"],\n                    \"namespace\": pod[\"metadata\"][\"namespace\"]\n                }\n                result.append(pod_dict)\n    except Exception:\n        raise\n\n    if result:\n        return (False, result)\n    return (True, None)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get All Kubernetes PODS with state</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get all Kubernetes PODS with state in a given Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_all_pods(handle: object, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: k8s namespace\r\n\r\n## Lego Input\r\nThis Lego take two input handle and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_pods/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_pods/k8s_get_all_pods.json",
    "content": "{\r\n    \"action_title\": \" Get All Kubernetes PODS with state in a given Namespace\",\r\n    \"action_description\": \" Get All Kubernetes PODS with state in a given Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_version\": \"2.0.0\",\r\n    \"action_entry_function\": \"k8s_get_all_pods\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_pods/k8s_get_all_pods.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nfrom kubernetes import client\n\npp = pprint.PrettyPrinter(indent=2)\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default='all',\n        title='Namespace',\n        description='k8s namespace')\n\n\ndef k8s_get_all_pods_printer(output):\n    (healthy_pods, unhealthy_pods, data) = output\n\n    if len(healthy_pods) > 0:\n        print(\"\\n Healthy PODS \\n\")\n        print(tabulate(healthy_pods, headers=[\n            \"NAME\", \"READY\", \"STATUS\", \"RESTARTS\", \"Age\"]))\n\n    if len(unhealthy_pods) > 0:\n        print(\"\\n UnHealthy PODS \\n\")\n        print(tabulate(unhealthy_pods, headers=[\n            \"NAME\", \"READY\", \"STATUS\", \"RESTARTS\", \"Age\"]))\n\n\ndef k8s_get_all_pods(handle, namespace: str = \"all\") -> Tuple:\n\n    \"\"\"k8s_get_all_pods get all pods\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: k8s namespace.\n\n        :rtype: Tuple\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    healthy_pods = []\n    unhealthy_pods = []\n    data = coreApiClient.list_namespaced_pod(namespace=namespace, pretty=True)\n    for i in data.items:\n        for container_status in i.status.container_statuses:\n            if container_status.ready is False:\n                waiting_state = container_status.state.waiting\n                status = waiting_state.reason\n                unhealthy_pods.append([i.metadata.name,\n                                       str(0) + \"/\" + str(len(i.status.container_statuses)),\n                                       status,\n                                       container_status.restart_count,\n                                       
i.status.start_time\n                                       ])\n            else:\n                healthy_pods.append([\n                    i.metadata.name,\n                    str(len(i.status.container_statuses)) + \"/\" + str(len(i.status.container_statuses)),\n                    i.status.phase,\n                    container_status.restart_count,\n                    i.status.start_time\n                    ])\n\n\n    return (healthy_pods, unhealthy_pods, data)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_resources_utilization_info/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get K8s all resource utilization info</h1>\n\n## Description\nThis action gets the pod status and resource utilization of various Kubernetes resources like jobs, services, persistent volumes.\n\n## Lego Details\n\tk8s_get_all_resources_utilization_info(handle, namespace:str=\"\")\n\t\thandle: Object of type unSkript K8S Connector.\n\t\tnamespace: Namespace in which to look for the resources. If not provided, all namespaces are considered\n\n## Lego Input\nThis Lego takes inputs handle, namespace(optional)\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n<img src=\"./2.png\">\n<img src=\"./3.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_resources_utilization_info/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_resources_utilization_info/k8s_get_all_resources_utilization_info.json",
    "content": "{\n  \"action_title\": \"Get K8s pods status and resource utilization info\",\n  \"action_description\": \"This action gets the pod status and resource utilization of various Kubernetes resources like jobs, services, persistent volumes.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_get_all_resources_utilization_info\",\n  \"action_needs_credential\": \"true\",\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": \"false\",\n  \"action_supports_iteration\": \"true\",\n  \"action_supports_poll\": \"true\",\n  \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" ]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_all_resources_utilization_info/k8s_get_all_resources_utilization_info.py",
    "content": "\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom typing import Optional, Dict\nfrom tabulate import tabulate\nimport json\nfrom kubernetes.client.rest import ApiException\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field('', description='k8s Namespace', title='Namespace')\n\n\ndef k8s_get_all_resources_utilization_info_printer(data):\n    namespace = data['namespace']\n    for resource, rows in data.items():\n        if resource == 'namespace':  # Skip the namespace key-value pair\n            continue\n\n        print(f\"\\n{resource.capitalize()}:\")\n        if not rows:  # Check if the resource list is empty\n            # print(f\"No {resource} found in {namespace} namespace.\")\n            continue  # Skip to the next resource\n\n        if resource == 'pods':\n            headers = ['Namespace', 'Name', 'Status', 'CPU Usage (m)', 'Memory Usage (Mi)']\n        else:\n            headers = ['Name', 'Status']\n        print(tabulate(rows, headers, tablefmt='pretty'))\n\n\ndef k8s_get_all_resources_utilization_info(handle, namespace: str = \"\") -> Dict:\n    \"\"\"\n    k8s_get_all_resources_utilization_info fetches the pod status and resource utilization of various Kubernetes resources like jobs, services, persistent volumes.\n\n    :type handle: object\n    :param handle: Object returned from the Task validate method\n\n    :type namespace: string\n    :param namespace: Namespace in which to look for the resources. 
If not provided, all namespaces are considered\n\n    :rtype: Status, Message\n    \"\"\"\n    if handle.client_side_validation is not True:\n        print(f\"K8S Connector is invalid: {handle}\")\n        return False, \"Invalid Handle\"\n\n    namespace_option = f\"--namespace={namespace}\" if namespace else \"--all-namespaces\"\n\n    resources = ['pods', 'jobs' \n    # 'persistentvolumeclaims'\n    ]\n    data = {resource: [] for resource in resources}\n    data['namespace'] = namespace  # Store namespace in data dict\n\n    # Fetch current utilization of pods\n    pod_utilization_cmd = f\"kubectl top pods {namespace_option} --no-headers\"\n    pod_utilization = handle.run_native_cmd(pod_utilization_cmd)\n    if pod_utilization.stderr:\n        pass\n\n    pod_utilization_lines = pod_utilization.stdout.split('\\n')\n    utilization_map = {}\n    for line in pod_utilization_lines:\n        parts = line.split()\n        if len(parts) < 3:  # Skip if line doesn't contain enough parts\n            continue\n        pod_name, cpu_usage, memory_usage = parts[:3]\n        # Use a tuple of (namespace, pod_name) as the key to ensure uniqueness across namespaces\n        key = (namespace, pod_name) if namespace else (parts[0], pod_name)\n        utilization_map[key] = (cpu_usage, memory_usage)\n\n    for resource in resources:\n        cmd = f\"kubectl get {resource} -o json {namespace_option}\"\n        result = handle.run_native_cmd(cmd)\n\n        if result.stderr:\n            print(f\"Error occurred while executing command {cmd}: {result.stderr}\")\n            continue \n\n        items = json.loads(result.stdout)['items']\n        if not items:  # If no items found, continue to ensure message is printed by printer function\n            continue\n\n        for item in items:\n            name = item['metadata']['name']\n            ns = item['metadata'].get('namespace', 'default')\n            status = 'Unknown'\n\n            if resource == 'pods':\n               
 status = item['status']['phase']\n                 # Skip pods in Succeeded or Completed state as they dont have any utilization\n                if status in ['Succeeded', 'Completed', 'Failed','Pending']:\n                    continue\n                key = (ns, name)\n                cpu_usage, memory_usage = utilization_map.get(key, ('N/A', 'N/A'))\n                data[resource].append([ns, name, status, cpu_usage, memory_usage])\n            else:\n                status = None\n                if resource == 'jobs':\n                    conditions = item['status'].get('conditions', [])\n                    if conditions:\n                        status = conditions[-1]['type']\n                        if status in ['Complete']:\n                            continue\n                # elif resource == 'persistentvolumeclaims':\n                #     status = item['status']['phase']\n                if status is not None:\n                    data[resource].append([ns, name, status])\n\n    # If resource has no objects to display, filter it out\n    data = {k: v for k, v in data.items() if v}\n\n    return data\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_candidate_nodes_for_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get candidate k8s nodes</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get candidate k8s nodes for given configuration.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_candidate_nodes_for_pods(handle: object, attachable_volumes_aws_ebs: int = 0, \r\n                                     cpu_limit: int, \r\n                                     memory_limit: str, \r\n                                     pod_limit: int)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        attachable_volumes_aws_ebs: EBS Volume limit in Gb.\r\n        cpu_limit: CPU Limit.\r\n        memory_limit: Limits and requests for memory are measured in bytes.\r\n        pod_limit: Pod Limit.\r\n\r\n## Lego Input\r\nThis Lego take five input handle, attachable_volumes_aws_ebs, cpu_limit, memory_limit and pod_limit.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_candidate_nodes_for_pods/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_candidate_nodes_for_pods/k8s_get_candidate_nodes_for_pods.json",
    "content": "{\r\n    \"action_title\": \"Get candidate k8s nodes for given configuration\",\r\n    \"action_description\": \"Get candidate k8s nodes for given configuration\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_candidate_nodes_for_pods\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\",\"CATEGORY_TYPE_K8S_NODE\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_candidate_nodes_for_pods/k8s_get_candidate_nodes_for_pods.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nfrom kubernetes import client\n\npp = pprint.PrettyPrinter(indent=2)\n\nclass InputSchema(BaseModel):\n    cpu_limit: Optional[int] = Field(\n        default=0,\n        title='CPU Limit',\n        description='CPU Limit. Eg 2')\n    memory_limit: Optional[str] = Field(\n        default=\"\",\n        title='Memory Limit (Mi)',\n        description='Limits and requests for memory are measured in bytes. '\n                    'Accept the store in Mi. Eg 123Mi')\n    pod_limit: Optional[int] = Field(\n        default=0,\n        title='Number of Pods to attach',\n        description='Pod Limit. Eg 2')\n\n\ndef k8s_get_candidate_nodes_for_pods_printer(output):\n    if output is None:\n        return\n\n    data = output[0]\n    print(\"\\n\")\n    print(tabulate(data, tablefmt=\"grid\", headers=[\n        \"Name\",\n        \"cpu\",\n        \"ephemeral-storage\",\n        \"hugepages-1Gi\",\n        \"hugepages-2Mi\",\n        \"memory\",\n        \"pods\"\n        ]))\n\ndef k8s_get_candidate_nodes_for_pods(handle,\n                                     cpu_limit: int = 0,\n                                     memory_limit: str = \"\",\n                                     pod_limit: int = 0) -> Tuple:\n\n    \"\"\"k8s_get_candidate_nodes_for_pods get nodes for pod\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type cpu_limit: int\n        :param cpu_limit: CPU Limit.\n\n        :type memory_limit: string\n        :param memory_limit: Limits and requests for memory are measured in bytes.\n\n        :type pod_limit: int\n        :param pod_limit: Pod Limit.\n\n        :rtype: Tuple of nodes for pod\n    \"\"\"\n\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    nodes = 
coreApiClient.list_node()\n    match_nodes = [node for node in nodes.items if\n                   (cpu_limit < int(node.status.capacity.get(\"cpu\", 0))) and\n                   (pod_limit < int(node.status.capacity.get(\"pods\", 0))) and\n                   int(memory_limit.split(\"Mi\")[0]) < (int(node.status.capacity.get(\"memory\").split(\"Ki\")[0]) / 1024)]\n\n    if len(match_nodes) > 0:\n        data = []\n        for node in match_nodes:\n            node_capacity = []\n            node_capacity.append(node.metadata.name)\n            for capacity in node.status.capacity.values():\n                node_capacity.append(capacity)\n            data.append(node_capacity)\n\n        return (data, match_nodes)\n\n    pp.pprint(\"No Matching Nodes Found for this spec\")\n    return (None, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_cluster_health/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Get k8s cluster health</h2>\n\n<br>\n\n## Description\nThis Action returns the health of K8S cluster. This Action checks the following in a cluster\n1. Abnormal Events that were reported\n2. Node Resource Utilization \n3. Pod Resource Utilization\n4. API Server readiness, liveness and health\n\nIf all these checks a boolean True value is returned, if not False and the reason for the failure is returned\n\n\n## Lego Details\n\n    k8s_get_cluster_health(handle: object, threshold: int)\n\n        handle: Object of type unSkript K8S Connector\n        threshold: int CPU / Memory Threshold %age\n\n## Lego Input\nThis Lego takes two parameters handle & threshold. Handle (K8S) object returned from the task.validator(...), CPU/Memory Threshold %age. \n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_cluster_health/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_cluster_health/k8s_get_cluster_health.json",
    "content": "{\n    \"action_title\": \"Get K8S Cluster Health\",\n    \"action_description\": \"Get K8S Cluster Health\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_get_cluster_health\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_is_check\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\",\"CATEGORY_TYPE_K8S_CLUSTER\"],\n    \"action_next_hop\": [\"\"],\n    \"action_next_hop_parameter_mapping\": {}\n}\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_cluster_health/k8s_get_cluster_health.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport json\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\n\nclass InputSchema(BaseModel):\n    core_services: Optional[list] = Field(\n        default=[],\n        title=\"Core Services\",\n        description=\"List of core services names to check for health. If empty, checks all services.\"\n    )\n    namespace: Optional[str] = Field(\n        default=\"\",\n        title=\"Namespace\",\n        description=\"Namespace of the core services. If empty, checks all namespaces.\"\n    )\n\ndef k8s_get_cluster_health_printer(output):\n    status, health_issues = output\n    if status:\n        print(\"Cluster Health: OK\\n\")\n    else:\n        print(\"Cluster Health: NOT OK\\n\")\n        for issue in health_issues:\n            print(f\"Type: {issue['type']}\")\n            print(f\"Name: {issue['name']}\")\n            print(f\"Namespace: {issue.get('namespace', 'N/A')}\")\n            print(f\"Issue: {issue['issue']}\")\n            print(\"-\" * 40)\n\ndef execute_kubectl_command(handle, command: str):\n    response = handle.run_native_cmd(command)\n    if response.stderr.lower():\n        print(f\"Warning: {response.stderr}\")\n        if \"not found\" in response.stderr.lower():\n            return None  # Service not found in the given namespace, skip this service\n    if response:\n        if response.stdout:\n            return response.stdout.strip()\n        else:\n            print(f\"No output for command: {command}\")\n            return None\n\ndef get_namespaces(handle):\n    command = \"kubectl get ns -o=jsonpath='{.items[*].metadata.name}'\"\n    namespaces_str = execute_kubectl_command(handle, command)\n    if namespaces_str:\n        return namespaces_str.split()\n    return []\n\ndef get_label_selector_for_service(handle, namespace: str, service_name: str):\n    command = f\"kubectl get svc 
{service_name} -n {namespace} -o=jsonpath='{{.spec.selector}}'\"\n    label_selector_json = execute_kubectl_command(handle, command)\n    if label_selector_json:\n        labels_dict = json.loads(label_selector_json.replace(\"'\", \"\\\"\"))\n        return \",\".join([f\"{k}={v}\" for k, v in labels_dict.items()])\n    return ''\n\ndef check_node_health(node_api):\n    health_issues = []\n    nodes = node_api.list_node()\n    for node in nodes.items:\n        ready_condition = next((condition for condition in node.status.conditions if condition.type == \"Ready\"), None)\n        if not ready_condition or ready_condition.status != \"True\":\n            health_issues.append({\n                \"type\": \"Node\",\n                \"name\": node.metadata.name,\n                \"issue\": f\"Node is not ready. Condition: {ready_condition.type if ready_condition else 'None'}, Status: {ready_condition.status if ready_condition else 'None'}\"\n            })\n    return health_issues\n\ndef check_pod_health(handle, core_services, namespace):\n    health_issues = []\n    namespaces = [namespace] if namespace else get_namespaces(handle)\n\n    for ns in namespaces:\n        if core_services:\n            for service in core_services:\n                label_selector = get_label_selector_for_service(handle, ns, service)\n                if label_selector:\n                    # Get all pods for the service\n                    command_pods = f\"kubectl get pods -n {ns} -l {label_selector} -o=json\"\n                    pods_info = execute_kubectl_command(handle, command_pods)\n                    if pods_info:\n                        pods_data = json.loads(pods_info)\n                        total_pods = len(pods_data['items'])\n                        running_pods = sum(1 for item in pods_data['items'] if item['status']['phase'] == \"Running\")\n\n                        # Check if at least 70% of pods are running\n                        if total_pods > 0:\n               
             running_percentage = (running_pods / total_pods) * 100\n                            if running_percentage < 70:\n                                health_issues.append({\n                                    \"type\": \"Pod\",\n                                    \"name\": service,\n                                    \"namespace\": ns,\n                                    \"issue\": f\"Insufficient running pods. Only {running_pods} out of {total_pods} are running.\"\n                                })\n                    else:\n                        print(f\"No pods found for service {service} in namespace {ns}.\")\n                else:\n                    print(f\"No label selector found for service {service} in namespace {ns}. Skipping...\")\n        else:\n            # Check all pods in the namespace if no specific services are given\n            command = f\"kubectl get pods -n {ns} -o=jsonpath='{{.items[?(@.status.phase!=\\\"Running\\\")].metadata.name}}'\"\n            pods_not_running = execute_kubectl_command(handle, command)\n            if pods_not_running:\n                for pod_name in pods_not_running.split():\n                    health_issues.append({\"type\": \"Pod\", \"name\": pod_name, \"namespace\": ns, \"issue\": \"Pod is not running.\"})\n\n    return health_issues\n\ndef check_deployment_health(handle, core_services, namespace):\n    health_issues = []\n    namespaces = [namespace] if namespace else get_namespaces(handle)\n\n    for ns in namespaces:\n        if core_services:\n            for service in core_services:\n                label_selector = get_label_selector_for_service(handle, ns, service)\n                if label_selector:\n                    command = f\"kubectl get deployments -n {ns} -l {label_selector} -o=jsonpath='{{.items[?(@.status.readyReplicas!=@.status.replicas)].metadata.name}}'\"\n                    deployments_not_ready = execute_kubectl_command(handle, command)\n                    if 
deployments_not_ready:\n                        for deployment_name in deployments_not_ready.split():\n                            health_issues.append({\"type\": \"Deployment\", \"name\": deployment_name, \"namespace\": ns, \"issue\": \"Deployment has replicas mismatch or is not available/progressing.\"})\n                else:\n                    print(f\"Service {service} not found or has no selectors in namespace {ns}. Skipping...\")\n        else:\n            # Check all deployments in the namespace if no specific services are given\n            command = f\"kubectl get deployments -n {ns} -o=jsonpath='{{.items[?(@.status.readyReplicas!=@.status.replicas)].metadata.name}}'\"\n            deployments_not_ready = execute_kubectl_command(handle, command)\n            if deployments_not_ready:\n                for deployment_name in deployments_not_ready.split():\n                    health_issues.append({\"type\": \"Deployment\", \"name\": deployment_name, \"namespace\": ns, \"issue\": \"Deployment has replicas mismatch or is not available/progressing.\"})\n\n    return health_issues\n\ndef k8s_get_cluster_health(handle, core_services: list = [], namespace: str = \"\") -> Tuple:\n    node_api = client.CoreV1Api(api_client=handle)\n    health_issues = check_node_health(node_api) + check_pod_health(handle, core_services, namespace) + check_deployment_health(handle, core_services, namespace)\n    if health_issues:\n        return (False, health_issues)\n    else:\n        return (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_config_map_kube_system/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get k8s kube system config map</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the k8s kube system config map.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_config_map_kube_system(handle: object, config_map_name: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        config_map_name: Kubernetes Config Map Name. (Optional)\r\n        namespace: Kubernetes namespace. (Optional)\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, config_map_name and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_config_map_kube_system/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_config_map_kube_system/k8s_get_config_map_kube_system.json",
    "content": "{\r\n    \"action_title\": \"Get k8s kube system config map\",\r\n    \"action_description\": \"Get k8s kube system config map\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_config_map_kube_system\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_config_map_kube_system/k8s_get_config_map_kube_system.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nfrom unskript.legos.kubernetes.k8s_kubectl_command.k8s_kubectl_command import k8s_kubectl_command\nfrom kubernetes import client\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default=\"\",\n        title='Namespace',\n        description='Kubernetes namespace')\n    config_map_name: str = Field(\n        default=\"\",\n        title='Config Map',\n        description='Kubernetes Config Map Name')\n\n\ndef k8s_get_config_map_kube_system_printer(output):\n    if output is None:\n        return\n    for x in output:\n        for k, v in x.items():\n            if k == 'details':\n                for config in v:\n                    data_set_1 = []\n                    data_set_1.append(\"Name:\")\n                    data_set_1.append(config.metadata.name)\n\n                    data_set_2 = []\n                    data_set_2.append(\"Namespace:\")\n                    data_set_2.append(config.metadata.namespace)\n\n                    data_set_3 = []\n                    data_set_3.append(\"Labels:\")\n                    data_set_3.append(config.metadata.labels)\n\n                    data_set_4 = []\n                    data_set_4.append(\"Annotations:\")\n                    data_set_4.append(config.metadata.annotations)\n\n                    data_set_5 = []\n                    data_set_5.append(\"Data:\")\n                    data_set_5.append(config.data)\n\n                    tabular_config_map = []\n                    tabular_config_map.append(data_set_1)\n                    tabular_config_map.append(data_set_2)\n                    tabular_config_map.append(data_set_3)\n                    tabular_config_map.append(data_set_4)\n                    tabular_config_map.append(data_set_5)\n\n                    
print(tabulate(tabular_config_map, tablefmt=\"github\"))\n\n\ndef k8s_get_config_map_kube_system(handle, config_map_name: str = '', namespace: str = '') -> List:\n    \"\"\"k8s_get_config_map_kube_system get kube system config map\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type config_map_name: str\n        :param config_map_name: Kubernetes Config Map Name.\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :rtype: List of system kube config maps for a given namespace\n    \"\"\"\n    all_namespaces = [namespace]\n    cmd = \"kubectl get ns  --no-headers -o custom-columns=':metadata.name'\"\n    if namespace is None or len(namespace) == 0:\n        kubernetes_namespaces = k8s_kubectl_command(\n            handle=handle, kubectl_command=cmd)\n        replaced_str = kubernetes_namespaces.replace(\"\\n\", \" \")\n        stripped_str = replaced_str.strip()\n        all_namespaces = stripped_str.split(\" \")\n    result = []\n    coreApiClient = client.CoreV1Api(api_client=handle)\n    for n in all_namespaces:\n        config_map_dict = {}\n        res = coreApiClient.list_namespaced_config_map(\n            namespace=n, pretty=True)\n        if len(res.items) > 0:\n            if config_map_name:\n                config_maps = list(\n                    filter(lambda x: (x.metadata.name == config_map_name), res.items))\n            else:\n                config_maps = res.items\n            config_map_dict[\"namespace\"] = n\n            config_map_dict[\"details\"] = config_maps\n            result.append(config_map_dict)\n    return result\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_deployment/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Kubernetes Deployment For a Pod</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the Kubernetes Deployment for a Pod in a Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_deployment(handle: object, namespace: str, deployment_name: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n        deployment_name: Kubernetes deployment name\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, namespace and deployment_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_deployment/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_deployment/k8s_get_deployment.json",
    "content": "{\r\n    \"action_title\": \"Get Kubernetes Deployment For a Pod in a Namespace\",\r\n    \"action_description\": \"Get Kubernetes Deployment for a POD in a Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_deployment\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\":[ \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_deployment/k8s_get_deployment.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace.')\n    deployment_name: str = Field(\n        title='Deployment',\n        description='Kubernetes deployment name'\n    )\n\ndef k8s_get_deployment_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\ndef k8s_get_deployment(handle, namespace: str, deployment_name: str) -> str:\n    \"\"\"k8s_get_deployment gets the Kubernetes Deployment for a Pod\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :type deployment_name: str\n        :param deployment_name: Kubernetes deployment name\n\n        :rtype: string\n    \"\"\"\n    coreApiClient = client.AppsV1Api(handle)\n\n    try:\n        field_selector = \"metadata.name=\" + deployment_name\n        resp = coreApiClient.list_namespaced_deployment(\n            namespace, pretty=True, field_selector=field_selector)\n\n    except ApiException as e:\n        resp = 'An Exception occurred while executing the command: ' + e.reason\n\n    return resp\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_deployment_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get All Deployment Status From Namespace </h1>\r\n\r\n## Description\r\nThis Lego gets the deployment status for the given inputs. If namespace and deployment name are not given, it gets all failed deployments from all namespaces.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_deployment_status(handle, deployment: str = \"\", namespace: str = \"\")\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Optional - k8s namespace.\r\n        deployment: Optional - k8s deployment name.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes three inputs: handle, deployment and namespace.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_deployment_status/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_deployment_status/k8s_get_deployment_status.json",
    "content": "{\r\n    \"action_title\": \"Get Deployment Status\",\r\n    \"action_description\": \"This action searches for failed deployments and returns a list.\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_deployment_status\",\r\n    \"action_is_check\": true,\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\"],\r\n    \"action_next_hop\": [\"65afc892db3d7ef487fe2353282bf94351e4674a34f56cd0349a2ad920897ddd\"],\r\n    \"action_next_hop_parameter_mapping\": {}\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_deployment_status/k8s_get_deployment_status.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nimport pprint\nimport json\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default='',\n        title='Namespace',\n        description='k8s Namespace')\n    deployment: Optional[str] = Field(\n        default='',\n        title='Deployment',\n        description='k8s Deployment')\n\n\ndef k8s_get_deployment_status_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef k8s_get_deployment_status(handle, deployment: str = \"\", namespace: str = \"\") -> Tuple:\n    \"\"\"k8s_get_deployment_status executes the command and gives a list of failed deployments\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type deployment: str\n        :param deployment: Deployment Name.\n\n        :type namespace: str\n        :param namespace: Kubernetes Namespace.\n\n        :rtype: Tuple with status result and list of failed deployments.\n    \"\"\"\n    result = []\n    if handle.client_side_validation is not True:\n        print(f\"K8S Connector is invalid: {handle}\")\n        raise Exception(\"K8S Connector is invalid\")\n\n    status_details = \"\"\n    if namespace and deployment:\n        name_cmd = \"kubectl get deployment \" + deployment + \" -n \" + namespace + \" -o json\"\n        exec_cmd = handle.run_native_cmd(name_cmd)\n        status_op = exec_cmd.stdout\n        status_details = json.loads(status_op)\n\n    if not namespace and not deployment:\n        name_cmd = \"kubectl get deployments --all-namespaces -o json\"\n        exec_cmd = handle.run_native_cmd(name_cmd)\n        status_op = exec_cmd.stdout\n        status_details = json.loads(status_op)\n\n    if namespace and not deployment:\n        name_cmd = \"kubectl get deployment -n \" + namespace + \" -o json\"\n        
exec_cmd = handle.run_native_cmd(name_cmd)\n        status_op = exec_cmd.stdout\n        status_details = json.loads(status_op)\n\n    if deployment and not namespace:\n        name_cmd = \"kubectl get deployment \" + deployment + \" -o json\"\n        exec_cmd = handle.run_native_cmd(name_cmd)\n        status_op = exec_cmd.stdout\n        status_details = json.loads(status_op)\n\n    if status_details:\n        if \"items\" in status_details:\n            for items in status_details[\"items\"]:\n                namespace_name = items[\"metadata\"][\"namespace\"]\n                deployment_name = items[\"metadata\"][\"name\"]\n                replica_details = items[\"status\"][\"conditions\"]\n                for i in replica_details:\n                    deployment_dict = {}\n                    if (\"FailedCreate\" in i[\"reason\"] and \"ReplicaFailure\" in i[\"type\"] and\n                        \"True\" in i[\"status\"]):\n                        deployment_dict[\"namespace\"] = namespace_name\n                        deployment_dict[\"deployment_name\"] = deployment_name\n                        result.append(deployment_dict)\n                    if (\"ProgressDeadlineExceeded\" in i[\"reason\"] and \"Progressing\" in i[\"type\"] and\n                        \"False\" in i[\"status\"]):\n                        deployment_dict[\"namespace\"] = namespace_name\n                        deployment_dict[\"deployment_name\"] = deployment_name\n                        result.append(deployment_dict)\n        else:\n            namespace_name = status_details[\"metadata\"][\"namespace\"]\n            deployment_name = status_details[\"metadata\"][\"name\"]\n            replica_details = status_details[\"status\"][\"conditions\"]\n            for i in replica_details:\n                deployment_dict = {}\n                if (\"FailedCreate\" in i[\"reason\"] and \"ReplicaFailure\" in i[\"type\"] and\n                    \"True\" in i[\"status\"]):\n                  
  deployment_dict[\"namespace\"] = namespace_name\n                    deployment_dict[\"deployment_name\"] = deployment_name\n                    result.append(deployment_dict)\n                if (\"ProgressDeadlineExceeded\" in i[\"reason\"] and \"Progressing\" in i[\"type\"] and\n                    \"False\" in i[\"status\"]):\n                    deployment_dict[\"namespace\"] = namespace_name\n                    deployment_dict[\"deployment_name\"] = deployment_name\n                    result.append(deployment_dict)\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_error_pods_from_all_jobs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Kubernetes Error PODs from All Jobs</h1>\r\n\r\n## Description\r\nThis Lego gets all failed or error pods from all jobs for a given namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_error_pods_from_all_jobs(handle, namespace: str = '') \r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: k8s namespace (Optional)\r\n\r\n## Lego Input\r\n\r\nThis Lego takes two inputs: handle and namespace (Optional).\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_error_pods_from_all_jobs/__init__.py",
    "content": "\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_error_pods_from_all_jobs/k8s_get_error_pods_from_all_jobs.json",
    "content": "{\n    \"action_title\": \"Get Kubernetes Error PODs from All Jobs\",\n    \"action_description\": \"Get Kubernetes Error PODs from All Jobs\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_get_error_pods_from_all_jobs\",\n    \"action_is_check\": true,\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_TROUBLESHOOTING\",\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\" ],\n    \"action_next_hop\": [\"88e97c46ad944d2f0541cd1f87e3ec5b8a4619f6093e89b55cec53b2a47e45aa\"],\n    \"action_next_hop_parameter_mapping\": {\"88e97c46ad944d2f0541cd1f87e3ec5b8a4619f6093e89b55cec53b2a47e45aa\": {\"name\": \"IP Exhaustion Mitigation: Failing K8s Pod Deletion from Jobs\",\"namespace\":\".[0].namespace\",\"pod_names\":\"map(.pod_name)\"}}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_error_pods_from_all_jobs/k8s_get_error_pods_from_all_jobs.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n\nimport pprint\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nimport json\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default='',\n        title='Namespace',\n        description='k8s Namespace')\n\n\ndef k8s_get_error_pods_from_all_jobs_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef k8s_get_error_pods_from_all_jobs(handle, namespace:str=\"\") -> Tuple:\n    \"\"\"k8s_get_error_pods_from_all_jobs This check function uses the handle's native command\n       method to execute a pre-defined kubectl command and returns the output of list of error pods\n       from all jobs.\n\n       :type handle: Object\n       :param handle: Object returned from the task.validate(...) function\n\n       :rtype: Tuple Result in tuple format.\n    \"\"\"\n    result = []\n    # Fetch jobs for a particular namespace or if not given all namespaces\n    ns_cmd = f\"-n {namespace}\" if namespace else \"--all-namespaces\"\n    kubectl_cmd = f\"kubectl get jobs {ns_cmd} -o json\"\n    response = handle.run_native_cmd(kubectl_cmd)\n    \n    if response.stderr:\n        raise Exception(f\"Error occurred while executing command {kubectl_cmd}: {response.stderr}\")\n    jobs = {}\n    try:\n        if response.stdout:\n            jobs = json.loads(response.stdout)\n    except json.JSONDecodeError:\n        raise Exception(\"Failed to parse JSON output from kubectl command.\")\n\n    for job in jobs.get(\"items\", []):\n        job_name = job[\"metadata\"][\"name\"]\n        job_namespace = job[\"metadata\"][\"namespace\"]\n        # Fetch pods for each job\n        pod_kubectl_cmd = f\"kubectl get pods -n {job_namespace} -l job-name={job_name} -o json\"\n        pod_response = handle.run_native_cmd(pod_kubectl_cmd)\n        \n        if pod_response.stderr:\n            print(f\"Error occurred while 
fetching pods for job {job_name}: {pod_response.stderr}\")\n            continue\n        pods = {}\n        try:\n            if pod_response.stdout:\n                pods = json.loads(pod_response.stdout)\n        except json.JSONDecodeError:\n            print(f\"Failed to parse JSON pod response output for kubectl command: {pod_kubectl_cmd}\")\n        for pod in pods.get(\"items\", []):\n            if pod[\"status\"][\"phase\"] not in [\"Succeeded\", \"Running\"]:\n                result.append({\"pod_name\": pod[\"metadata\"][\"name\"],\n                                \"job_name\": job_name,\n                                \"namespace\": pod[\"metadata\"][\"namespace\"]\n                                })\n    if result:\n        return (False, result)\n    else:\n        return (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_expiring_cluster_certificate/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Check the validity of the K8s certificate for a cluster.</h1>\r\n\r\n## Description\r\nThis action checks if the certificate is expiring for a K8s cluster.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_expiring_cluster_certificate(handle, expiring_threshold: int = 7)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        expiring_threshold (int): The threshold (in days) for considering a certificate as expiring soon.\r\n\r\n## Lego Input\r\n\r\nThis Lego takes two inputs: handle and expiring_threshold.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_expiring_cluster_certificate/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_expiring_cluster_certificate/k8s_get_expiring_cluster_certificate.json",
    "content": "{\n    \"action_title\": \"Check expiry of K8s cluster certificate\",\n    \"action_description\": \"Check expiry of K8s cluster certificate\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_get_expiring_cluster_certificate\",\n    \"action_is_check\": true,\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\"],\n    \"action_next_hop\": [\"\"],\n    \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_expiring_cluster_certificate/k8s_get_expiring_cluster_certificate.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Tuple\nimport base64\nimport datetime\nfrom cryptography import x509\nfrom cryptography.hazmat.backends import default_backend\n\n\nclass InputSchema(BaseModel):\n    expiring_threshold: Optional[int] = Field(\n        default=7,\n        title='Expiration Threshold (in days)',\n        description='Expiration Threshold of certificates (in days). Default: 7 days')\n\ndef k8s_get_expiring_cluster_certificate_printer(output):\n    if output is None:\n        return\n    success, data = output\n    if not success:\n        print(data)\n    else:\n        print(\"K8s certificate is valid.\")\n\ndef get_expiry_date(pem_data: str) -> datetime.datetime:\n    cert = x509.load_pem_x509_certificate(pem_data.encode(), default_backend())\n    return cert.not_valid_after\n\ndef k8s_get_expiring_cluster_certificate(handle, expiring_threshold:int=7) -> Tuple:\n    \"\"\"\n    Check the validity of a K8s cluster certificate.\n\n    Args:\n        handle: Object of type unSkript K8S Connector\n        expiring_threshold (int): The threshold (in days) for considering a certificate as expiring soon.\n\n    Returns:\n        tuple: Status, details of the certificate.\n    \"\"\"\n    result = []\n    try:\n        # Fetch cluster CA certificate\n        ca_cert = handle.run_native_cmd(\"kubectl get secret -o jsonpath=\\\"{.items[?(@.type=='kubernetes.io/service-account-token')].data['ca\\\\.crt']}\\\" --all-namespaces\")\n        if ca_cert.stderr:\n            raise Exception(f\"Error occurred while fetching cluster CA certificate: {ca_cert.stderr}\")\n\n        # Decode and check expiry date of the cluster's CA certificate\n        ca_cert_decoded = base64.b64decode(ca_cert.stdout.strip()).decode(\"utf-8\")\n        ca_cert_exp = get_expiry_date(ca_cert_decoded)\n        days_remaining = (ca_cert_exp - 
datetime.datetime.now()).days\n        if days_remaining < 0:\n            # Certificate has already expired\n            result.append({\n                \"certificate\": \"Kubeconfig Cluster certificate\",\n                \"days_remaining\": days_remaining,\n                \"status\": \"Expired\"\n            })\n        elif ca_cert_exp < datetime.datetime.now() + datetime.timedelta(days=expiring_threshold):\n            result.append({\n                \"certificate\": \"Kubeconfig Cluster certificate\",\n                \"days_remaining\": days_remaining,\n                \"status\": \"Expiring Soon\"\n            })\n    except Exception as e:\n        print(f\"Error occurred while checking cluster CA certificate: {e}\")\n        raise e\n    \n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n"
  },
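The expiry check above reduces to comparing a certificate's `not_valid_after` timestamp against "now" and a threshold window. A minimal stdlib-only sketch of that classification logic (the helper name `classify_certificate` and the injectable `now` parameter are illustrative, not part of the lego):

```python
import datetime

def classify_certificate(not_valid_after: datetime.datetime,
                         expiring_threshold: int = 7,
                         now: datetime.datetime = None) -> str:
    """Classify a certificate the way the check does: 'Expired' if past
    not_valid_after, 'Expiring Soon' if inside the threshold window, else 'OK'."""
    now = now or datetime.datetime.now()
    days_remaining = (not_valid_after - now).days
    if days_remaining < 0:
        return "Expired"
    if not_valid_after < now + datetime.timedelta(days=expiring_threshold):
        return "Expiring Soon"
    return "OK"

now = datetime.datetime(2024, 1, 1)
print(classify_certificate(now + datetime.timedelta(days=3), 7, now))   # Expiring Soon
print(classify_certificate(now - datetime.timedelta(days=1), 7, now))   # Expired
print(classify_certificate(now + datetime.timedelta(days=30), 7, now))  # OK
```

Passing `now` explicitly keeps the classification deterministic and easy to test; the lego itself uses `datetime.datetime.now()` directly.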
  {
    "path": "Kubernetes/legos/k8s_get_expiring_tls_secret_certificates/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get the expiring TLS secret certificates for a K8s cluster. </h1>\r\n\r\n## Description\r\nThis action gets the expiring certificates for a K8s cluster.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_expiring_tls_secret_certificates(handle, namespace:str='', expiring_threshold:int=7)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace (str) : Optional - k8s namespace.\r\n        expiration_threshold (int): The threshold (in days) for considering a certificate as expiring soon.\r\n\r\n## Lego Input\r\n\r\nThis Lego take three inputs handle, namespace and expiration_threshold.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_expiring_tls_secret_certificates/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_expiring_tls_secret_certificates/k8s_get_expiring_tls_secret_certificates.json",
    "content": "{\n    \"action_title\": \"Get expiring secret certificates\",\n    \"action_description\": \"Get the expiring secret certificates for a K8s cluster.\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_get_expiring_tls_secret_certificates\",\n    \"action_is_check\": true,\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\"],\n    \"action_next_hop\": [\"\"],\n    \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_expiring_tls_secret_certificates/k8s_get_expiring_tls_secret_certificates.py",
    "content": "##\n# Copyright (c) 2024 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom typing import Optional, Tuple\nimport base64\nimport datetime\nfrom cryptography import x509\nfrom cryptography.hazmat.backends import default_backend\nfrom kubernetes import client, watch\nfrom kubernetes.client.rest import ApiException\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default='',\n        title='Namespace',\n        description='K8s Namespace. Default- all namespaces')\n    expiring_threshold: Optional[int] = Field(\n        default=7,\n        title='Expiration Threshold (in days)',\n        description='Expiration Threshold of certificates (in days). Default- 90 days')\n\n\ndef k8s_get_expiring_tls_secret_certificates_printer(output):\n    if output is None:\n        return\n    success, data = output\n    if not success:\n        headers = ['Secret Name', 'Namespace']\n        table = [[item['secret_name'], item['namespace']] for item in data]\n        print(tabulate(table, headers=headers, tablefmt='grid'))\n    else:\n        print(\"No expiring certificates found.\")\n\ndef get_expiry_date(pem_data: str) -> datetime.datetime:\n    cert = x509.load_pem_x509_certificate(pem_data.encode(), default_backend())\n    return cert.not_valid_after\n\n\ndef k8s_get_expiring_tls_secret_certificates(handle, namespace:str='', expiring_threshold:int=7) -> Tuple:\n    \"\"\"\n    Get the expiring TLS secret certificates for a K8s cluster.\n\n    Args:\n        handle: Object of type unSkript K8S Connector\n        namespace (str): The Kubernetes namespace where the certificates are stored.\n        expiration_threshold (int): The threshold (in days) for considering a certificate as expiring soon.\n\n    Returns:\n        tuple: Status, a list of expiring certificate names.\n    \"\"\"\n    result = []\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    
try:\n        if namespace:\n            # Check if namespace exists and has secrets\n            secrets = coreApiClient.list_namespaced_secret(namespace, watch=False, limit=1).items\n            if not secrets:\n                return (True, None)  # No secrets in the namespace\n            all_namespaces = [namespace]\n        else:\n            all_namespaces = [ns.metadata.name for ns in coreApiClient.list_namespace().items]\n\n    except ApiException as e:\n        print(f\"Error occurred while accessing Kubernetes API: {e}\")\n        return False, None\n\n    for n in all_namespaces:\n        secrets = coreApiClient.list_namespaced_secret(n, watch=False, limit=200).items\n\n        for secret in secrets:\n            # Check if the secret contains a certificate\n            if secret.type == \"kubernetes.io/tls\":\n                # Get the certificate data\n                cert_data = secret.data.get(\"tls.crt\")\n                if cert_data:\n                    # Decode the certificate data\n                    cert_data_decoded = base64.b64decode(cert_data).decode(\"utf-8\")\n                    # Parse the certificate expiration date\n                    cert_exp = get_expiry_date(cert_data_decoded)\n                    days_remaining = (cert_exp - datetime.datetime.now()).days\n                    if days_remaining < 0:\n                        # Certificate has already expired\n                        result.append({\n                            \"secret_name\": secret.metadata.name,\n                            \"namespace\": n,\n                            \"days_remaining\": days_remaining,\n                            \"status\": \"Expired\"\n                        })\n                    elif cert_exp and cert_exp < datetime.datetime.now() + datetime.timedelta(days=expiring_threshold):\n                        result.append({\n                        \"secret_name\": secret.metadata.name,\n                        \"namespace\": n,\n            
            \"days_remaining\": days_remaining,\n                        \"status\": \"Expiring Soon\"  # Indicating the certificate is close to expiring\n                            })\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n    "
  },
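The TLS scan above only considers secrets of type `kubernetes.io/tls` and base64-decodes their `tls.crt` entry before parsing. A stdlib-only sketch of that filter-and-decode step, using hypothetical secret objects trimmed to the fields the scan reads:

```python
import base64

# Hypothetical secrets, shaped like the fields the scan reads.
secrets = [
    {"type": "kubernetes.io/tls", "name": "ingress-cert",
     "data": {"tls.crt": base64.b64encode(b"-----BEGIN CERTIFICATE-----").decode()}},
    {"type": "Opaque", "name": "db-password", "data": {}},
]

tls_certs = {}
for secret in secrets:
    if secret["type"] != "kubernetes.io/tls":
        continue  # only TLS secrets carry a certificate in tls.crt
    cert_data = secret["data"].get("tls.crt")
    if cert_data:
        # Secret data is base64-encoded; decode to get the PEM text
        tls_certs[secret["name"]] = base64.b64decode(cert_data).decode("utf-8")
print(tls_certs)  # {'ingress-cert': '-----BEGIN CERTIFICATE-----'}
```

In the lego itself the decoded PEM is then handed to `get_expiry_date` for the expiry comparison.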
  {
    "path": "Kubernetes/legos/k8s_get_failed_deployments/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Get K8S Failed Deployment</h2>\n\n<br>\n\n## Description\nThis Lego Returns Failed Deployment in all namespaces\n\n\n## Lego Details\n\n    k8s_get_failed_deployments(handle: object)\n\n        handle: Object of type unSkript K8S Connector\n\n## Lego Input\nThis Lego take just one input, the Handle\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_failed_deployments/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_failed_deployments/k8s_get_failed_deployments.json",
    "content": "{\n    \"action_title\": \"Get Kubernetes Failed Deployments\",\n    \"action_description\": \"Get Kubernetes Failed Deployments\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_is_check\": true,\n    \"action_entry_function\": \"k8s_get_failed_deployments\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\"],\n    \"action_next_hop\": [],\n    \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_failed_deployments/k8s_get_failed_deployments.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport json\nfrom typing import Tuple\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        '',\n        description=\"K8S Namespace\",\n        title=\"K8S Namespace\"\n    )\n\ndef k8s_get_failed_deployments_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef k8s_get_failed_deployments(handle, namespace: str = '') -> Tuple:\n    \"\"\"k8s_get_failed_deployments Returns all failed deployments across all namespaces\n    or within a specific namespace if provided. The deployments are considered\n    failed if the 'Available' condition is set to 'False'.\n\n    :type handle: Object\n    :param handle: Object returned from task.validate(...) function\n\n    :type namespace: str\n    :param namespace: The specific namespace to filter the deployments. Defaults to ''.\n\n    :rtype: Status of result, list of dictionaries, each containing the 'name' and 'namespace' of the failed deployments.\n    \"\"\"\n    # Construct the kubectl command based on whether a namespace is provided\n    kubectl_command = \"kubectl get deployments --all-namespaces -o json\"\n    if namespace:\n        kubectl_command = \"kubectl get deployments -n \" + namespace + \" -o json\"\n    # Execute kubectl command\n    response = handle.run_native_cmd(kubectl_command)\n    # Check if the response is None, which indicates an error\n    if response is None:\n        print(f\"Error while executing command ({kubectl_command}) (empty response)\")\n    if response.stderr:\n        raise Exception(f\"Error occurred while executing command {kubectl_command} {response.stderr}\")\n\n    result = []\n    try:\n        deployments = json.loads(response.stdout)\n        # Iterate over each item in the deployments\n        for item in deployments[\"items\"]:\n            # Check each condition of the deployment\n            for condition in 
item[\"status\"][\"conditions\"]:\n                # If the 'Available' condition is set to 'False', add the deployment to the result\n                if condition[\"type\"] == \"Available\" and condition[\"status\"] == \"False\":\n                    result.append({\n                        'name': item[\"metadata\"][\"name\"],\n                        'namespace': item[\"metadata\"][\"namespace\"]\n                    })\n    except Exception as e:\n        raise e\n\n    return (False, result) if result else (True, None)"
  },
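The failure test in this lego is a scan of `kubectl get deployments -o json` output for items whose `Available` condition reports `False`. A stdlib-only sketch against a hypothetical, trimmed-down payload:

```python
# Hypothetical kubectl output, trimmed to the fields the check reads.
deployments = {
    "items": [
        {"metadata": {"name": "api", "namespace": "prod"},
         "status": {"conditions": [{"type": "Available", "status": "False"}]}},
        {"metadata": {"name": "web", "namespace": "prod"},
         "status": {"conditions": [{"type": "Available", "status": "True"}]}},
    ]
}

# Collect deployments whose 'Available' condition is 'False'
failed = [
    {"name": item["metadata"]["name"], "namespace": item["metadata"]["namespace"]}
    for item in deployments["items"]
    for condition in item["status"].get("conditions", [])
    if condition["type"] == "Available" and condition["status"] == "False"
]
print(failed)  # [{'name': 'api', 'namespace': 'prod'}]
```

Note the `.get("conditions", [])`: a freshly created Deployment may not have conditions yet, so defaulting to an empty list avoids a `KeyError`.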
  {
    "path": "Kubernetes/legos/k8s_get_frequently_restarting_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get frequently restarting K8s pods</h1>\n\n## Description\nGet Kubernetes pods from all namespaces that are restarting too often.\n\n## Lego Details\n\tk8s_get_frequently_restarting_pods(handle, restart_threshold:int=90)\n\t\thandle: Object of type unSkript K8S Connector.\n\t\trestart_threshold: Threshold number of times for which a pod should be restarting\n\n\n## Lego Input\nThis Lego takes inputs handle, restart_threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_frequently_restarting_pods/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_frequently_restarting_pods/k8s_get_frequently_restarting_pods.json",
    "content": "{\n  \"action_title\": \"Get frequently restarting K8s pods\",\n  \"action_description\": \"Get Kubernetes pods from all namespaces that are restarting too often.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_get_frequently_restarting_pods\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\"],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_frequently_restarting_pods/k8s_get_frequently_restarting_pods.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport json\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\n\n\n\nclass InputSchema(BaseModel):\n    restart_threshold: Optional[int] = Field(\n        default = 90,\n        description='Threshold number of times for which a pod should be restarting. Default is 90 times.',\n        title='Restart threshold',\n    )\n\n\ndef k8s_get_frequently_restarting_pods_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef k8s_get_frequently_restarting_pods(handle, restart_threshold:int=90) -> Tuple:\n    \"\"\"k8s_get_frequently_restarting_pods finds any K8s pods that have restarted more number of times than a given threshold\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type restart_threshold: int\n        :param restart_threshold: Threshold number of times for which a pod should be restarting\n\n        :rtype: Tuple of status and list of namespaces and pods that have restarted more than the threshold number of times.\n    \"\"\"\n    result = []\n    cmd = \"kubectl get pods --all-namespaces --sort-by='.status.containerStatuses[0].restartCount' -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,RESTART_COUNT:.status.containerStatuses[0].restartCount' -o json\"\n    response = handle.run_native_cmd(cmd)\n    if response is None:\n        print(\n            f\"Error while executing command ({cmd}) (empty response)\")\n\n    if response.stderr:\n        raise Exception(\n            f\"Error occurred while executing command {cmd} {response.stderr}\")\n\n    all_pods_data = json.loads(response.stdout)\n    for pod_data in all_pods_data['items']:\n        pod = pod_data['metadata']['name']\n        nmspace = pod_data['metadata']['namespace']\n\n        # Check if 'containerStatuses' is present and if it's not empty\n        if 'containerStatuses' in 
pod_data['status'] and pod_data['status']['containerStatuses']:\n            restart_count = pod_data['status']['containerStatuses'][0]['restartCount']\n            if restart_count > restart_threshold:\n                pods_dict = {\n                    'pod': pod,\n                    'namespace': nmspace\n                }\n                result.append(pods_dict)\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n\n"
  },
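The core of the restart check is filtering pods whose first container's `restartCount` exceeds the threshold, while skipping pods that have no `containerStatuses` yet (e.g. pods still Pending). A stdlib-only sketch with hypothetical pod objects:

```python
pods = {
    "items": [
        {"metadata": {"name": "worker-1", "namespace": "jobs"},
         "status": {"containerStatuses": [{"restartCount": 120}]}},
        {"metadata": {"name": "worker-2", "namespace": "jobs"},
         "status": {"containerStatuses": [{"restartCount": 3}]}},
        {"metadata": {"name": "pending-pod", "namespace": "jobs"},
         "status": {}},  # no containerStatuses yet
    ]
}

restart_threshold = 90
flagged = []
for pod_data in pods["items"]:
    # Guard against pods that have no container statuses yet
    statuses = pod_data["status"].get("containerStatuses") or []
    if statuses and statuses[0]["restartCount"] > restart_threshold:
        flagged.append({"pod": pod_data["metadata"]["name"],
                        "namespace": pod_data["metadata"]["namespace"]})
print(flagged)  # [{'pod': 'worker-1', 'namespace': 'jobs'}]
```

Like the lego, this only inspects the first container of each pod; a multi-container pod would need a loop over all entries of `containerStatuses`.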
  {
    "path": "Kubernetes/legos/k8s_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Kubernetes Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get Kubernetes Handle. \r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_handle/k8s_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Kubernetes Handle\",\r\n    \"action_description\": \"Get Kubernetes Handle\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_handle/k8s_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\ndef k8s_get_handle(handle):\n  \"\"\"kubernetes_get_handle returns the kubernetes handle.\n\n     :rtype: kubernetes Handle.\n  \"\"\"\n  return handle\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_healthy_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get All Kubernetes Healthy PODS</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get All Kubernetes Healthy PODS in a given Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_healthy_pods(handle: object, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_healthy_pods/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_healthy_pods/k8s_get_healthy_pods.json",
    "content": "{\r\n    \"action_title\": \"Get All Kubernetes Healthy PODS in a given Namespace\",\r\n    \"action_description\": \"Get All Kubernetes Healthy PODS in a given Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_healthy_pods\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\" ]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_healthy_pods/k8s_get_healthy_pods.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace')\n\n\ndef k8s_get_healthy_pods_printer(data: list):\n    if data is None:\n        return\n\n    print(\"POD List:\")\n\n    for pod in data:\n        print(f\"\\t {pod}\")\n\ndef k8s_get_healthy_pods(handle, namespace: str) -> List:\n    \"\"\"k8s_get_healthy_pods get healthy pods\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :rtype: List\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n    try:\n        coreApiClient.read_namespace_status(namespace, pretty=True)\n    except ApiException as e:\n        #print(\"Exception when calling CoreV1Api->read_namespace_status: %s\\n\" % e)\n        raise e\n\n    all_healthy_pods = []\n    ret = coreApiClient.list_namespaced_pod(namespace=namespace)\n    for i in ret.items:\n        phase = i.status.phase\n        if phase in (\"Running\", \"Succeeded\"):\n            all_healthy_pods.append(i.metadata.name)\n    return all_healthy_pods\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_memory_utilization_of_services/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get K8s services exceeding memory utilizations</h1>\n\n## Description\nThis action executes the given kubectl commands to find the memory utilization of the specified services in a particular namespace and compares it with a given threshold.\n\n## Lego Details\n\tk8s_get_memory_utilization_of_services(handle, services: list, namespace: str, threshold=80:float)\n\t\thandle: Object of type unSkript K8S Connector.\n\t\tservices: List of pod names of the services for which memory utilization is to be fetched.\n\t\tnamespace: Namespace in which the services are running.\n\t\tthreshold: Threshold for memory utilization percentage. Default is 80%.\n\n\n## Lego Input\nThis Lego takes inputs handle, services, namespace, threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n<img src=\"./2.png\">\n<img src=\"./3.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_memory_utilization_of_services/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_memory_utilization_of_services/k8s_get_memory_utilization_of_services.json",
    "content": "{\n  \"action_title\": \"Get K8s services exceeding memory utilization\",\n  \"action_description\": \"This action executes the given kubectl commands to find the memory utilization of the specified services in a particular namespace and compares it with a given threshold.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_get_memory_utilization_of_services\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_memory_utilization_of_services/k8s_get_memory_utilization_of_services.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport os \nimport json \n\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\n\n\nclass InputSchema(BaseModel):\n    services: list = Field(\n        description='List of pod names of the services for which memory utilization is to be fetched.',\n        title='List of pod names (as services)',\n    )\n    namespace: str = Field(\n        description='Namespace in which the services are running.',\n        title='K8s Namespace',\n    )\n    threshold: Optional[float] = Field(\n        80,\n        description='Threshold for memory utilization percentage. Default is 80%.',\n        title='Threshold (in %)',\n    )\n        \n    \ndef k8s_get_memory_utilization_of_services_printer(output):\n    status, data = output\n    if status:\n        print(\"All services are within memory utilization threshold\")\n    else:\n        headers = [\"Service\", \"Pod\", \"Namespace\", \"Container\", \"Utilization %\"]\n        table_data = []\n\n        for entry in data:\n            service = entry.get('service', \"N/A\")\n            pod = entry.get('pod', \"N/A\")\n            namespace = entry.get('namespace', \"N/A\")\n            container = entry.get('container_name', \"N/A\")\n            utilization_percentage = entry.get('utilization_percentage', \"N/A\")\n\n            table_data.append([service, pod, namespace, container, utilization_percentage])\n        \n        # Using tabulate to format the output as a grid table\n        print(tabulate(table_data, headers=headers, tablefmt=\"grid\"))\n\n\ndef convert_memory_to_bytes(memory_value) -> int:\n    if not memory_value:\n        return 0\n    units = {\n        'K': 1000,\n        'M': 1000 * 1000,\n        'G': 1000 * 1000 * 1000,\n        'T': 1000 * 1000 * 1000 * 1000,\n        'Ki': 1024,\n        'Mi': 1024 * 1024,\n        'Gi': 1024 * 1024 * 1024,\n        'Ti': 1024 * 1024 * 
1024 * 1024,\n    }\n\n    for unit, multiplier in units.items():\n        if memory_value.endswith(unit):\n            return int(memory_value[:-len(unit)]) * multiplier\n\n    return int(memory_value)\n\ndef k8s_get_memory_utilization_of_services(handle, namespace: str = \"\", threshold:float=80, services: list=[]) -> Tuple:\n    \"\"\"\n    k8s_get_memory_utilization_of_services executes the given kubectl commands\n    to find the memory utilization of the specified services in a particular namespace\n    and compares it with a given threshold.\n\n    :param handle: Object returned from the Task validate method, must have client-side validation enabled.\n    :param namespace: Namespace in which the services are running.\n    :param threshold: Threshold for memory utilization percentage. Default is 80%.\n    :param services: List of pod names of the services for which memory utilization is to be fetched.\n    :return: Status, list of exceeding services if any service has exceeded the threshold.\n    \"\"\"\n    if handle.client_side_validation is False:\n        raise Exception(f\"K8S Connector is invalid: {handle}\")\n\n    if services and not namespace:\n        raise ValueError(\"Namespace must be provided if services are specified.\")\n\n    if not namespace:\n        namespace = 'default'\n\n    exceeding_services = []\n\n    # Main Idea:\n    # 1. Given namespace, lets get current memory utilization for top pods\n    # 2. Filter the list of pods to check from the service list\n    # 3. For the pods get the memory request\n    # 4. Calculate utilization as (mem_usage / mem_request) * 100\n    # 5. 
Construct list of pods which has  Utilization > threshold  and return the list\n\n    try:\n\n        top_pods_command = f\"kubectl top pods -n {namespace} --containers --no-headers\"\n        response = handle.run_native_cmd(top_pods_command)\n        top_pods_output = response.stdout.strip()\n        if not top_pods_output:\n            return (True, None)\n        \n        service_pods_containers = {}  # Dictionary to hold pod and container names for each service\n        if services:\n            # If services specified, lets iterate over it and get pods corresponding to them.\n            # If service pod not found in the top pod list, which means the memory\n            # utilization is not significant, so dont need to check\n            for svc in services:\n                kubectl_cmd = f\"kubectl get service {svc} -n {namespace} -o=jsonpath={{.spec.selector}}\"\n                response = handle.run_native_cmd(kubectl_cmd)\n                svc_labels = None \n                if response.stderr:\n                    print(f\"Error occurred while executing command {kubectl_cmd}: {response.stderr}\")\n                    continue\n                try:\n                    if response.stdout.strip():\n                        svc_labels = json.loads(response.stdout.strip())\n                except:\n                    # If json.loads returns error, which means the output of the kubectl command returned invalid output.\n                    # since there is invalid output, no service label output. 
the next if check should return back\n                    pass \n\n                if not svc_labels:\n                    continue\n                _labels = \", \".join([f\"{key}={value}\" for key, value in svc_labels.items()])\n                svc_pod_cmd = f\"kubectl get pods -n {namespace} -l \\\"{_labels}\\\" -o=jsonpath={{.items[*].metadata.name}}\"\n                response = handle.run_native_cmd(svc_pod_cmd)\n                svc_pods = response.stdout.strip()\n                if not svc_pods:\n                    # No pods attached to the given service\n                    continue\n\n                # For each pod, fetch containers and their memory usage\n                for svc_pod in svc_pods.split():\n                    for line in top_pods_output.split('\\n'):\n                        if svc_pod in line:\n                            parts = line.split()\n                            if len(parts) >= 3:  # Ensure line has enough parts to parse\n                                container_name = parts[1]\n                                mem_usage = parts[-1]\n                            else:\n                                print(f\"Incorrect top pods output for pod:{svc_pod} namespace: {namespace}.\")\n                                continue\n\n                            # Key: Service, Pod, Container; Value: Memory Usage\n                            service_pods_containers[(svc, svc_pod, container_name)] = mem_usage\n        else:\n            for line in top_pods_output.split('\\n'):\n                parts = line.split()\n                if len(parts) >= 3:\n                    pod_name, container_name, mem_usage = parts[0], parts[1], parts[-1]\n                else:\n                    print(f\"Incorrect top pods output for namespace: {namespace}.\")\n                    continue\n\n                # Key: Service: None, Pod, Container; Value: Memory Usage (when services are not specified)\n                service_pods_containers[(None, pod_name, 
container_name)] = mem_usage\n\n        # Now, for each service's pod and container, fetch memory request and calculate utilization\n        for (service_key, pod, container), mem_usage in service_pods_containers.items():\n                # Check if the service name exists or use a placeholder\n                service_name = service_key if service_key else \"N/A\"\n                # Kubernetes pod must have at least one container. The container is the smallest deployable unit in \n                # Kubernetes. A pod encapsulates one or more containers, storage resources, a unique network IP, \n                # and options that govern how the container(s) should run. When you define a pod manifest in Kubernetes, \n                # you define one or more containers within it. Each container has its own image, environment variables, \n                # resources, and other configuration settings. It's the containers within the pod that execute the actual application \n                # code or processes. 
Without at least one container, there would be no workloads running within the pod, and \n                # it would essentially be an empty entity without any purpose in the Kubernetes ecosystem.\n                # The below command takes the container name that was obtained earlier and uses it to get the memory request\n                kubectl_command = f\"kubectl get pod {pod} -n {namespace} -o=jsonpath='{{.spec.containers[?(@.name==\\\"{container}\\\")].resources.requests.memory}}'\"\n                response = handle.run_native_cmd(kubectl_command)\n                mem_request = response.stdout.strip()\n\n                if not mem_request:\n                     # Memory limit is not set, dont calculate utilization\n                    continue\n\n                mem_request_bytes = convert_memory_to_bytes(mem_request)\n                mem_usage_bytes = convert_memory_to_bytes(mem_usage)\n\n                if mem_request_bytes > 0:\n                    utilization = (mem_usage_bytes / mem_request_bytes) * 100\n                    utilization = round(utilization, 2)\n\n                    if utilization > threshold:\n                        exceeding_services.append({\n                            \"service\": service_name,\n                            \"pod\": pod,\n                            \"container_name\": container,\n                            \"namespace\": namespace,\n                            \"utilization_percentage\": utilization,\n                            \"memory_request_bytes\": mem_request_bytes,\n                            \"memory_usage_bytes\": mem_usage_bytes,\n                        })\n                else:\n                    print(f\"Memory request for pod: {pod}, container: {container} is 0 or not set. Skipping...\")\n                    continue\n\n    except Exception as e:\n        raise e\n\n    return (False, exceeding_services) if exceeding_services else (True, None)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_node_status_and_resource_utilization/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get K8s Node Status and Resource Utilization Info</h1>\n\n## Description\nThis action gathers Kubernetes node status and CPU and memory utilization information.\n\n## Lego Details\n\tk8s_get_node_status_and_resource_utilization(handle)\n\t\thandle: Object of type unSkript K8S Connector.\n\n## Lego Input\nThis Lego takes one input: handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_node_status_and_resource_utilization/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_node_status_and_resource_utilization/k8s_get_node_status_and_resource_utilization.json",
    "content": "{\n  \"action_title\": \"Get K8s node status and CPU utilization\",\n  \"action_description\": \"This action gathers Kubernetes node status and resource utilization information.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_get_node_status_and_resource_utilization\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_poll\": true,\n  \"action_supports_iteration\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" , \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_NODE\" ]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_node_status_and_resource_utilization/k8s_get_node_status_and_resource_utilization.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport json\nfrom typing import List\nfrom tabulate import tabulate\nfrom kubernetes.client.rest import ApiException\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef k8s_get_node_status_and_resource_utilization_printer(output):\n    if not output:\n        print(\"No Data to Display\")\n    else:\n        headers = ['Node Name', 'Status', 'CPU Usage (%)', 'Memory Usage (%)']\n        print(tabulate(output, headers, tablefmt='pretty'))\n\n\ndef k8s_get_node_status_and_resource_utilization(handle) -> List:\n    if handle.client_side_validation is not True:\n        print(f\"K8S Connector is invalid: {handle}\")\n        return []\n\n    # Command to fetch node resource utilization\n    node_utilization_cmd = \"kubectl top nodes --no-headers\"\n    node_utilization = handle.run_native_cmd(node_utilization_cmd)\n    if node_utilization.stderr:\n        raise ApiException(f\"Error occurred while executing command {node_utilization_cmd} {node_utilization.stderr}\")\n\n    utilization_lines = node_utilization.stdout.split('\\n')\n\n    # Command to fetch node status\n    node_status_cmd = \"kubectl get nodes -o json\"\n    node_status = handle.run_native_cmd(node_status_cmd)\n    if node_status.stderr:\n        raise ApiException(f\"Error occurred while executing command {node_status_cmd} {node_status.stderr}\")\n\n    nodes_info = json.loads(node_status.stdout)\n\n    data = []\n    for item, utilization_line in zip(nodes_info['items'], utilization_lines):\n        node_name = item['metadata']['name']\n        # Report the node's Ready condition rather than whichever condition happens to be listed last\n        ready_condition = next((c for c in item['status']['conditions'] if c['type'] == 'Ready'), None)\n        node_status = 'Ready' if ready_condition and ready_condition['status'] == 'True' else 'NotReady'\n        utilization_parts = utilization_line.split()\n        cpu_usage_percent = utilization_parts[2].rstrip('%')\n        memory_usage_percent = utilization_parts[4].rstrip('%')\n\n        data.append([node_name, node_status, cpu_usage_percent, memory_usage_percent])\n\n    return data\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Kubernetes Nodes</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets Kubernetes Nodes.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_nodes(handle: object)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes/k8s_get_nodes.json",
    "content": "{\r\n    \"action_title\": \"Get Kubernetes Nodes\",\r\n    \"action_description\": \"Get Kubernetes Nodes\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_nodes\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_NODE\" ]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes/k8s_get_nodes.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport datetime\nfrom typing import Tuple\nfrom pydantic import BaseModel\nfrom tabulate import tabulate\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    pass\n\ndef k8s_get_nodes_printer(result):\n    if result is None:\n        return\n\n    tabular_config_map = result[0]\n    print(\"\\n\")\n    print(tabulate(tabular_config_map, tablefmt=\"github\",\n                headers=['name', 'status', 'age', 'version', 'labels']))\n\ndef k8s_get_nodes(handle) -> Tuple:\n    \"\"\"k8s_get_nodes get nodes\n\n        :rtype: Tuple\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    try:\n        resp = coreApiClient.list_node(pretty=True)\n    except ApiException as e:\n        raise e\n\n    output = []\n    tabular_config_map = []\n    for node in resp.items:\n        labels = [f\"{label}={value}\"\n                  for label, value in node.metadata.labels.items()]\n        # Report the condition whose status is True (typically \"Ready\") rather than whichever is listed last\n        types = \"\"\n        for condition in node.status.conditions:\n            if condition.status == \"True\":\n                types = condition.type\n\n        name = node.metadata.name\n        status = types\n        age = (datetime.datetime.now() -\n               node.metadata.creation_timestamp.replace(tzinfo=None)).days\n        version = node.status.node_info.kubelet_version\n        labels = \",\".join(labels)\n        tabular_config_map.append([name, status, age, version, labels])\n\n        output.append({\n            \"name\": name,\n            \"status\": types,\n            \"age\": f\"{age}d\",\n            \"version\": version, \"labels\": labels})\n\n    return (tabular_config_map, output)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes_pressure/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get K8s nodes disk and memory pressure</h1>\n\n## Description\nThis action fetches the memory and disk pressure status of each node in the cluster.\n\n## Lego Details\n\tk8s_get_nodes_pressure(handle)\n\t\thandle: Object of type unSkript K8S Connector.\n\n## Lego Input\nThis Lego takes one input: handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes_pressure/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes_pressure/k8s_get_nodes_pressure.json",
    "content": "{\n  \"action_title\": \"Get K8s nodes disk and memory pressure\",\n  \"action_description\": \"This action fetches the memory and disk pressure status of each node in the cluster\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_get_nodes_pressure\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_poll\": true,\n  \"action_supports_iteration\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" , \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_NODE\" ]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes_pressure/k8s_get_nodes_pressure.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel\nfrom tabulate import tabulate\nimport json\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\n\ndef k8s_get_nodes_pressure_printer(output):\n    if output is None:\n        return\n\n    status, data = output\n\n    if status:\n        print(\"No nodes have memory or disk pressure issues.\")\n        return\n\n    headers = ['Node', 'Type', 'Status']\n    formatted_data = [[item['node'], item['type'], item['status']] for item in data]\n    print(tabulate(formatted_data, headers=headers, tablefmt='pretty'))\n\n\n\ndef k8s_get_nodes_pressure(handle) -> Tuple:\n    \"\"\"\n    k8s_get_nodes_pressure fetches the memory and disk pressure status of each node in the cluster\n    \n    :type handle: object\n    :param handle: Object returned from the Task validate method\n    \n    :rtype: List of memory and disk pressure status of each node in the cluster\n    \"\"\"\n\n    if handle.client_side_validation is not True:\n        print(f\"K8S Connector is invalid: {handle}\")\n        return \"Invalid Handle\"\n\n    # Getting nodes details in json format\n    cmd = \"kubectl get nodes -o json\"\n    result = handle.run_native_cmd(cmd)\n\n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {cmd} {result.stderr}\")\n\n    nodes = json.loads(result.stdout)['items']\n    pressure_nodes = []\n\n    for node in nodes:\n        name = node['metadata']['name']\n        conditions = node['status']['conditions']\n\n        memory_pressure = next((item for item in conditions if item[\"type\"] == \"MemoryPressure\"), None)\n        disk_pressure = next((item for item in conditions if item[\"type\"] == \"DiskPressure\"), None)\n\n        # Check for pressure conditions being False\n        if memory_pressure and memory_pressure['status'] != \"False\":\n 
           pressure_nodes.append({\"node\": name, \"type\": \"MemoryPressure\", \"status\": memory_pressure['status']})\n\n        if disk_pressure and disk_pressure['status'] != \"False\":\n            pressure_nodes.append({\"node\": name, \"type\": \"DiskPressure\", \"status\": disk_pressure['status']})\n\n    if len(pressure_nodes) != 0:\n        return (False, pressure_nodes)\n    return (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes_with_insufficient_resources/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Get K8S Nodes with Insufficient Resources</h2>\n\n<br>\n\n## Description\nThis Lego returns nodes that have insufficient resources.\n\n\n## Lego Details\n\n    k8s_get_nodes_with_insufficient_resources(handle: object, threshold: int)\n\n        handle: Object of type unSkript K8S Connector\n        threshold: Optional int. Threshold (in %) for CPU, memory, and storage utilization. Default is 85.\n\n## Lego Input\nThis Lego takes two inputs: handle and an optional threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes_with_insufficient_resources/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes_with_insufficient_resources/k8s_get_nodes_with_insufficient_resources.json",
    "content": "{\n    \"action_title\": \"Get Kubernetes Nodes that have insufficient resources\",\n    \"action_description\": \"Get Kubernetes Nodes that have insufficient resources\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_is_check\": true,\n    \"action_entry_function\": \"k8s_get_nodes_with_insufficient_resources\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\"]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_nodes_with_insufficient_resources/k8s_get_nodes_with_insufficient_resources.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport pprint\nfrom typing import Tuple\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\ntry:\n    from unskript.legos.kubernetes.k8s_utils import normalize_cpu, normalize_memory, normalize_storage\nexcept Exception:\n    pass\n\n\nclass InputSchema(BaseModel):\n    threshold: int = Field(\n        85,\n        title='Threshold',\n        description='Threshold in %age. Default is 85%'\n    )\n\ndef k8s_get_nodes_with_insufficient_resources_printer(output):\n    if output is None:\n        return\n\n    res_hdr = [\"Name\", \"Resource\"]\n    data = []\n    for o in output[1]:\n        if isinstance(o, dict) is True:\n            res_hdr = [\"Name\", \"Allocatable\", \"Capacity\"]\n            data.append([\n                o.get('name'),\n                pprint.pformat(o.get('allocatable')),\n                pprint.pformat(o.get('capacity'))\n                ])\n    print(tabulate(data, headers=res_hdr, tablefmt='fancy_grid'))\n\n\ndef k8s_get_nodes_with_insufficient_resources(handle, threshold: int = 85) -> Tuple:\n    \"\"\"k8s_get_nodes_with_insufficient_resources returns the list of nodes that have insufficient resources\n\n    :type handle: Object\n    :param handle: Object returned from task.validate(...) function\n\n    :type threshold: int\n    :param threshold: Threshold in Percentage. 
Default value is 85. Any node resource usage\n                      exceeding the threshold is flagged as insufficient.\n\n    :rtype: Tuple of the result\n    \"\"\"\n    if handle.client_side_validation is not True:\n        raise ApiException(f\"K8S Connector is invalid {handle}\")\n\n    api_client = client.CoreV1Api(api_client=handle)\n    retval = []\n    nodes = api_client.list_node().items\n    for node in nodes:\n        cpu_allocatable = normalize_cpu(node.status.allocatable.get('cpu'))\n        cpu_capacity = normalize_cpu(node.status.capacity.get('cpu'))\n        mem_allocatable = normalize_memory(node.status.allocatable.get('memory'))\n        mem_capacity = normalize_memory(node.status.capacity.get('memory'))\n        storage_allocatable = normalize_storage(node.status.allocatable.get('ephemeral-storage'))\n        storage_capacity = normalize_storage(node.status.capacity.get('ephemeral-storage'))\n        cpu_usage_percent = (cpu_capacity - cpu_allocatable)/cpu_capacity * 100\n        mem_usage_percent = (mem_capacity - mem_allocatable)/mem_capacity * 100\n        storage_usage_percent = (storage_capacity - storage_allocatable)/storage_capacity * 100\n        if cpu_usage_percent >= threshold \\\n            or mem_usage_percent >= threshold \\\n            or storage_usage_percent >= threshold:\n            retval.append({\n                'name': node.metadata.name,\n                'allocatable': node.status.allocatable,\n                'capacity': node.status.capacity\n                })\n\n    if retval:\n        return (False, retval)\n\n    return (True, [])\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_offline_nodes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get K8s offline nodes</h1>\n\n## Description\nThis action checks if any node in the Kubernetes cluster is offline.\n\n## Lego Details\n\tk8s_get_offline_nodes(handle)\n\t\thandle: Object of type unSkript K8S Connector.\n\n## Lego Input\nThis Lego takes one input: handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_offline_nodes/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_offline_nodes/k8s_get_offline_nodes.json",
    "content": "{\n  \"action_title\": \"Get K8s offline nodes\",\n  \"action_description\": \"This action checks if any node in the Kubernetes cluster is offline.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_get_offline_nodes\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_poll\": true,\n  \"action_supports_iteration\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_NODE\" ],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_offline_nodes/k8s_get_offline_nodes.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport json\nfrom typing import Tuple\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\n\ndef k8s_get_offline_nodes_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef k8s_get_offline_nodes(handle) -> Tuple:\n    \"\"\"\n    k8s_get_offline_nodes checks if any node in the Kubernetes cluster is offline.\n\n    :type handle: object\n    :param handle: Object returned from the Task validate method\n\n    :rtype: tuple\n    :return: Status, List of offline nodes\n    \"\"\"\n\n    if handle.client_side_validation is not True:\n        print(f\"K8S Connector is invalid: {handle}\")\n        return (False, [\"Invalid Handle\"])\n\n    # Getting nodes details in json format\n    cmd = \"kubectl get nodes -o json\"\n    result = handle.run_native_cmd(cmd)\n\n    if result.stderr:\n        raise Exception(f\"Error occurred while executing command {cmd} {result.stderr}\")\n\n    nodes = json.loads(result.stdout)['items']\n    offline_nodes = []\n\n    for node in nodes:\n        name = node['metadata']['name']\n        conditions = node['status']['conditions']\n\n        node_ready = next((item for item in conditions if item[\"type\"] == \"Ready\"), None)\n\n        if node_ready and node_ready['status'] == \"False\":\n            offline_nodes.append(name)\n\n    if len(offline_nodes) != 0:\n        return (False, offline_nodes)\n    return (True, None)\n\n\n\n\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_oomkilled_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get K8S OOMKilled Pods</h1>\n\n## Description\nGet K8S Pods which are OOMKilled from the container last states.\n\n## Lego Details\n\tk8s_get_oomkilled_pods(handle, namespace: str = \"\")\n\t\thandle: Object of type unSkript K8S Connector.\n\t\tnamespace: String, K8S Namespace as a Python string\n\n\n## Lego Input\nThis Lego takes two inputs: handle and namespace.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_oomkilled_pods/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_oomkilled_pods/k8s_get_oomkilled_pods.json",
    "content": "{\n  \"action_title\": \"Get K8S OOMKilled Pods\",\n  \"action_description\": \"Get K8S Pods which are OOMKilled from the container last states.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_get_oomkilled_pods\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_TROUBLESHOOTING\",\"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\" ],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_oomkilled_pods/k8s_get_oomkilled_pods.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport pprint\nimport datetime\nfrom datetime import timezone\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        '',\n        description='Kubernetes Namespace Where the Service exists',\n        title='K8S Namespace',\n    )\n    time_interval_to_check: int = Field(\n        24,\n        description='Time interval in hours. This time window is used to check if the POD got OOMKilled. Default is 24 hours.',\n        title=\"Time Interval\"\n    )\n\n\n\ndef k8s_get_oomkilled_pods_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef format_datetime(dt):\n    # Format datetime to a string 'YYYY-MM-DD HH:MM:SS UTC'\n    return dt.strftime('%Y-%m-%d %H:%M:%S UTC')\n\n\ndef k8s_get_oomkilled_pods(handle, namespace: str = \"\", time_interval_to_check: int = 24) -> Tuple:\n    \"\"\"k8s_get_oomkilled_pods This function returns the pods that have an OOMKilled event in the container last states\n\n    :type handle: Object\n    :param handle: Object returned from the task.validate(...)
function\n\n    :type namespace: str\n    :param namespace: (Optional)String, K8S Namespace as python string\n\n    :type time_interval_to_check: int\n    :param time_interval_to_check: (Optional) Integer, in hours, the interval within which the\n            state of the POD should be checked.\n\n    :rtype: Status, List of objects of pods, namespaces, and containers that are in OOMKilled state\n    \"\"\"\n    result = []\n\n    if handle.client_side_validation is not True:\n        raise ApiException(f\"K8S Connector is invalid {handle}\")\n\n    v1 = client.CoreV1Api(api_client=handle)\n\n    # Check whether a namespace is provided, if not fetch all namespaces\n    try:\n        if namespace:\n            response = v1.list_namespaced_pod(namespace)\n        else:\n            response = v1.list_pod_for_all_namespaces()\n        \n        if response is None or not hasattr(response, 'items'):\n            raise ApiException(\"Unexpected response from the Kubernetes API. 'items' not found in the response.\")\n\n        pods = response.items\n\n    except ApiException as e:\n        raise e\n\n    # Check if pods is None or not\n    if pods is None:\n        raise ApiException(\"No pods returned from the Kubernetes API.\")\n\n    # Get Current Time in UTC\n    current_time = datetime.datetime.now(timezone.utc)\n    # Get time interval to check (or 24 hour) reference and convert to UTC\n    interval_time_to_check = current_time - datetime.timedelta(hours=time_interval_to_check)\n    interval_time_to_check = interval_time_to_check.replace(tzinfo=timezone.utc)\n\n    \n    for pod in pods:\n        pod_name = pod.metadata.name\n        namespace = pod.metadata.namespace\n        \n        # Ensure container_statuses is not None before iterating\n        container_statuses = pod.status.container_statuses\n        if container_statuses is None:\n            continue\n        \n        # Check each pod for OOMKilled state\n        for container_status in 
container_statuses:\n            container_name = container_status.name\n            last_state = container_status.last_state\n            if last_state and last_state.terminated and last_state.terminated.reason == \"OOMKilled\":\n                termination_time = last_state.terminated.finished_at\n                if termination_time is None:\n                    continue\n                termination_time = termination_time.replace(tzinfo=timezone.utc)\n                # If termination time is greater than interval_time_to_check, the POD\n                # was OOMKilled within the interval, so flag it\n                if termination_time >= interval_time_to_check:\n                    formatted_termination_time = format_datetime(termination_time)\n                    formatted_interval_time_to_check = format_datetime(interval_time_to_check)\n                    result.append({\"pod\": pod_name, \"namespace\": namespace, \"container\": container_name, \"termination_time\": formatted_termination_time, \"interval_time_to_check\": formatted_interval_time_to_check})\n\n    return (False, result) if result else (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pending_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get K8s Pending Pods</h1>\n\n## Description\nThis action checks if any pod in the Kubernetes cluster is in 'Pending' status.\n\n## Lego Details\n\tk8s_get_pending_pods(handle, namespace: str = \"\", time_interval_to_check: int = 24)\n\t\thandle: Object of type unSkript K8S Connector.\n\t\tnamespace: Namespace in which to look for the resources. If not provided, all namespaces are considered\n\t\ttime_interval_to_check: Time window in hours within which Pending pods are reported. Default is 24.\n\n## Lego Input\nThis Lego takes three inputs: handle, namespace (optional), and time_interval_to_check (optional).\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pending_pods/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_pending_pods/k8s_get_pending_pods.json",
    "content": "{\n  \"action_title\": \"Get K8s pending pods\",\n  \"action_description\": \"This action checks if any pod in the Kubernetes cluster is in 'Pending' status.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_get_pending_pods\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_poll\": true,\n  \"action_supports_iteration\": true,\n  \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\" ],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pending_pods/k8s_get_pending_pods.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nimport json\nfrom tabulate import tabulate\nfrom datetime import datetime, timedelta, timezone\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field('', description='k8s Namespace', title='Namespace')\n    time_interval_to_check: int = Field(\n        24,\n        description='Time interval in hours. This time window is used to check if POD was in Pending state. Default is 24 hours.',\n        title=\"Time Interval\"\n    )\n\n\n\ndef k8s_get_pending_pods_printer(output):\n    status, data = output\n\n    if status:\n        print(\"There are no pending pods.\")\n        return\n    else:\n        headers = [\"Pod Name\", \"Namespace\"]\n        print(tabulate(data, headers=headers, tablefmt=\"grid\"))\n\n\ndef format_datetime(dt):\n    return dt.strftime(\"%Y-%m-%d %H:%M:%S %Z\")\n\ndef k8s_get_pending_pods(handle, namespace: str = \"\", time_interval_to_check=24) -> Tuple:\n    \"\"\"\n    k8s_get_pending_pods checks if any pod in the Kubernetes cluster is in 'Pending' status within the specified time interval.\n\n    :type handle: object\n    :param handle: Object returned from the Task validate method\n\n    :type namespace: string\n    :param namespace: Namespace in which to look for the resources. 
If not provided, all namespaces are considered\n\n    :type time_interval_to_check: int\n    :param time_interval_to_check: (Optional) Integer, in hours, the interval within which the\n            state of the POD should be checked.\n\n    :rtype: tuple\n    :return: Status, list of pending pods with their namespace and the time they became pending\n    \"\"\"\n    if handle.client_side_validation is not True:\n        print(f\"K8S Connector is invalid: {handle}\")\n        return False, \"Invalid Handle\"\n\n    namespace_option = f\"--namespace={namespace}\" if namespace else \"--all-namespaces\"\n\n    # Getting pods details in json format\n    cmd = f\"kubectl get pods -o json {namespace_option}\"\n    result = handle.run_native_cmd(cmd)\n\n    if result.stderr:\n        raise Exception(f\"Error occurred while executing command {cmd}: {result.stderr}\")\n\n    pods = json.loads(result.stdout)['items']\n    pending_pods = []\n\n    current_time = datetime.now(timezone.utc)\n    interval_time_to_check = current_time - timedelta(hours=time_interval_to_check)\n    interval_time_to_check = interval_time_to_check.replace(tzinfo=timezone.utc)\n\n    for pod in pods:\n        name = pod['metadata']['name']\n        status = pod['status']['phase']\n        pod_namespace = pod['metadata']['namespace']\n\n        if status == 'Pending':\n            # Check if the pod entered the Pending state within the specified time interval\n            start_time = pod['status'].get('startTime')\n            if start_time:\n                start_time = datetime.strptime(start_time, \"%Y-%m-%dT%H:%M:%SZ\").replace(tzinfo=timezone.utc)\n                if start_time >= interval_time_to_check:\n                    formatted_start_time = format_datetime(start_time)\n                    formatted_interval_time_to_check = format_datetime(interval_time_to_check)\n                    pending_pods.append({\n                        \"pod\": name,\n                        \"namespace\":
pod_namespace,\n                        \"start_time\": formatted_start_time,\n                        \"interval_time_to_check\": formatted_interval_time_to_check\n                    })\n\n    if pending_pods:\n        return (False, pending_pods)\n    return (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_config/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Kubernetes POD Configuration</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets Kubernetes POD Configuration.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_pod_config(handle: object, namespace: str, pod: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n        pod: Kubernetes Pod Name.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, namespace, and pod.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_config/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_config/k8s_get_pod_config.json",
    "content": "{\r\n    \"action_title\": \"Get Kubernetes POD Configuration\",\r\n    \"action_description\": \"Get Kubernetes POD Configuration\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_pod_config\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_config/k8s_get_pod_config.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace')\n    pod: str = Field(\n        title=\"Pod\",\n        description='Kubernetes Pod Name. eg ngix-server')\n\n\ndef k8s_get_pod_config_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\ndef k8s_get_pod_config(handle, namespace: str, pod: str) -> str:\n    \"\"\"k8s_get_pod_config get pod config\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :type pod: str\n        :param pod: Kubernetes Pod Name.\n\n        :rtype: string\n    \"\"\"\n    coreApiClient = client.AppsV1Api(api_client=handle)\n\n    field_selector = \"metadata.name=\" + pod\n    res = coreApiClient.list_namespaced_deployment(\n        namespace=namespace, pretty=True, field_selector=field_selector)\n    return res\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_logs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Kubernetes Logs for a given POD</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get Kubernetes Logs for a given POD in a Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_pod_logs(handle: object, namespace: str, pod_name: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n        pod_name: Name of the pod.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, namespace and pod_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_logs/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_logs/k8s_get_pod_logs.json",
    "content": "{\r\n    \"action_title\": \"Get Kubernetes Logs for a given POD in a Namespace\",\r\n    \"action_description\": \"Get Kubernetes Logs for a given POD in a Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_pod_logs\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_logs/k8s_get_pod_logs.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace.')\n    pod_name: str = Field(\n        title='Pod',\n        description='Name of the pod')\n\n\ndef k8s_get_pods_logs_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\ndef k8s_get_pod_logs(handle, namespace: str, pod_name: str) -> str:\n    \"\"\"k8s_get_pod_logs get pod logs\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :type pod_name: str\n        :param pod_name: Name of the pod.\n\n        :rtype: String, Output of the command in python string\n        format or Empty String in case of Error.\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    res = coreApiClient.read_namespaced_pod_log(\n        namespace=namespace, name=pod_name)\n    return res\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_logs_and_filter/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Kubernetes Logs for a list of PODs</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get Kubernetes Logs for a list of PODs and Filter in a Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_pod_logs_and_filter(handle: object, namespace: str, pods: List, matchstr: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: k8s namespace.\r\n        pods: Name of pods.\r\n        matchstr: String to Match in the Logs.\r\n\r\n## Lego Input\r\nThis Lego take four input handle, namespace, pods and matchstr.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_logs_and_filter/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_logs_and_filter/k8s_get_pod_logs_and_filter.json",
    "content": "{\r\n    \"action_title\": \"Get Kubernetes Logs for a list of PODs & Filter in a Namespace\",\r\n    \"action_description\": \"Get Kubernetes Logs for a list of PODs and Filter in a Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_pod_logs_and_filter\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_logs_and_filter/k8s_get_pod_logs_and_filter.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport re\nimport pprint\nfrom typing import List, Dict\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='k8s namespace')\n    pods: list = Field(\n        title='Pods',\n        description='Name of pods')\n    matchstr: str = Field(\n        title='Match String',\n        description='String to Match in the Logs')\n\ndef k8s_get_pod_logs_and_filter_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\ndef k8s_get_pod_logs_and_filter(handle, namespace: str, pods: List, matchstr: str) -> Dict:\n    \"\"\"k8s_get_pod_logs_and_filter get pod logs\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: k8s namespace.\n\n        :type pods: List\n        :param pods: Name of pods.\n\n        :type matchstr: str\n        :param matchstr: String to Match in the Logs.\n\n        :rtype: Dict\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    result = {}\n    try:\n        for pod in pods:\n            resp = coreApiClient.read_namespaced_pod_log(\n                namespace=namespace, name=pod, pretty=True, timestamps=True)\n            res = re.search(f'({matchstr})', resp)\n            if res is not None:\n                result[pod] = res\n\n    except Exception:\n        print(\"Unable to Read Logs from the Pods\")\n\n    return result\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Kubernetes Status for a POD</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get Kubernetes Status for a POD in a given Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_pod_status(handle: object, namespace: str, pod_name: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n        pod_name: Name of the pod.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, namespace and pod_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_status/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_status/k8s_get_pod_status.json",
    "content": "{\r\n    \"action_title\": \"Get Kubernetes Status for a POD in a given Namespace\",\r\n    \"action_description\": \"Get Kubernetes Status for a POD in a given Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_pod_status\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_pod_status/k8s_get_pod_status.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace')\n    pod_name: str = Field(\n        title='Pod',\n        description='Name of the pod')\n\n\ndef k8s_get_pod_status_printer(data):\n    if data is None:\n        return \n    pprint.pprint(data)\n\ndef k8s_get_pod_status(handle, namespace: str, pod_name: str) -> Dict:\n    \"\"\"k8s_get_pod_status get pod status\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :type pod_name: str\n        :param pod_name: Name of the pod.\n\n        :rtype: Dict\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    status = coreApiClient.read_namespaced_pod_status(\n        namespace=namespace, name=pod_name)\n\n    res = {}\n\n    ready_containers_number = 0\n    containers_number = 0\n    restarts_number = 0\n\n    for container in status.status.container_statuses:\n        if container.ready:\n            ready_containers_number += 1\n        if container.restart_count:\n            restarts_number = restarts_number + container.restart_count\n        containers_number += 1\n    res[\"NAME\"] = pod_name\n    res['READY'] = f\"Ready {ready_containers_number}/{containers_number}\"\n    res['STATUS'] = status.status.phase\n    res['RESTARTS'] = restarts_number\n    res['START_TIME'] = status.status.start_time.strftime(\"%m/%d/%Y, %H:%M:%S\")\n    return res\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_attached_to_pvc/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get pods attached to Kubernetes PVC</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get pods attached to Kubernetes PVC.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_pods_attached_to_pvc(handle: object, namespace: str, pvc: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n        pvc: Name of the PVC.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, namespace and pvc.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_attached_to_pvc/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_attached_to_pvc/k8s_get_pods_attached_to_pvc.json",
    "content": "{\r\n    \"action_title\": \"Get pods attached to Kubernetes PVC\",\r\n    \"action_description\": \"Get pods attached to Kubernetes PVC\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_pods_attached_to_pvc\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\",\"CATEGORY_TYPE_K8S_PVC\" ]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_attached_to_pvc/k8s_get_pods_attached_to_pvc.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title=\"Namespace\",\n        description=\"Namespace of the PVC.\"\n    )\n    pvc: str = Field(\n        title=\"PVC Name\",\n        description=\"Name of the PVC.\"\n    )\n\ndef k8s_get_pods_attached_to_pvc_printer(output):\n    if output is None:\n        return \n        \n    print(output)\n\n\n\ndef k8s_get_pods_attached_to_pvc(handle, namespace: str, pvc: str) -> str:\n    \"\"\"k8s_get_pods_attached_to_pvc get pods attached to pvc\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Namespace of the PVC.\n\n        :type pvc: str\n        :param pvc: Name of the PVC.\n\n        :rtype: string\n    \"\"\"\n    kubectl_command = f\"kubectl describe pvc {pvc} -n {namespace} | awk \\'/Used By/ {{print $3}}\\'\"\n    result = handle.run_native_cmd(kubectl_command)\n    if result is None:\n        print(\n            f\"Error while executing command ({kubectl_command}) (empty response)\")\n        return \"\"\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {kubectl_command} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get all K8s pods in ImagePullBackOff State </h1>\r\n\r\n## Description\r\nThis Lego get all evicted pods in CrashLoopBackOff State from given namespace. If namespace not given it will get all the pods from all namespaces.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_pods_in_crashloopbackoff_state(handle, namespace: str = None)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: k8s namespace.(Optional)\r\n\r\n## Lego Input\r\n\r\nThis Lego take two inputs handle, and namespace (Optional).\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/__init__.py",
    "content": "\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/k8s_get_pods_in_crashloopbackoff_state.json",
    "content": "{\n    \"action_title\": \"Get all K8s Pods in CrashLoopBackOff State\",\n    \"action_description\": \"Get all K8s pods in CrashLoopBackOff State\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_get_pods_in_crashloopbackoff_state\",\n    \"action_is_check\": true,\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_TROUBLESHOOTING\",\"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\" ],\n    \"action_next_hop\": [\"1d3a64b3c396be6d27b260606aa5570f61e79f3b7adcda457e026da657edc079\"],\n    \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/k8s_get_pods_in_crashloopbackoff_state.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\nfrom tabulate import tabulate\nimport datetime\nfrom datetime import timezone \n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default='',\n        title='Namespace',\n        description='k8s Namespace')\n    time_interval_to_check: int = Field(\n        24,\n        description='Time interval in hours. This time window is used to check if POD was in Crashloopback. Default is 24 hours.',\n        title=\"Time Interval\"\n    )\n\n\ndef k8s_get_pods_in_crashloopbackoff_state_printer(output):\n    status, data = output\n\n    if status:\n        print(\"No pods are in CrashLoopBackOff state.\")\n    else:\n        headers = [\"Pod Name\", \"Namespace\", \"Container Name\"]\n        table_data = [(entry[\"pod\"], entry[\"namespace\"], entry[\"container\"]) for entry in data]\n        print(tabulate(table_data, headers=headers, tablefmt=\"grid\"))\n\ndef format_datetime(dt):\n    # Format datetime to a string 'YYYY-MM-DD HH:MM:SS UTC'\n    return dt.strftime('%Y-%m-%d %H:%M:%S UTC')\n\ndef k8s_get_pods_in_crashloopbackoff_state(handle, namespace: str = '', time_interval_to_check=24) -> Tuple:\n    \"\"\"\n    k8s_get_pods_in_crashloopbackoff_state returns the pods that have CrashLoopBackOff state in their container statuses within the specified time interval.\n\n    :type handle: Object\n    :param handle: Object returned from the task.validate(...) 
function\n\n    :type namespace: str\n    :param namespace: (Optional) String, K8S Namespace as python string\n\n    :type time_interval_to_check: int\n    :param time_interval_to_check: (Optional) Integer, in hours, the interval within which the\n            state of the POD should be checked.\n\n    :rtype: Status, List of objects of pods, namespaces, and containers that are in CrashLoopBackOff state\n    \"\"\"\n    result = []\n    if handle.client_side_validation is not True:\n        raise ApiException(f\"K8S Connector is invalid {handle}\")\n\n    v1 = client.CoreV1Api(api_client=handle)\n\n    try:\n        if namespace:\n            response = v1.list_namespaced_pod(namespace)\n        else:\n            response = v1.list_pod_for_all_namespaces()\n\n        if response is None or not hasattr(response, 'items'):\n            raise ApiException(\"Unexpected response from the Kubernetes API. 'items' not found in the response.\")\n\n        pods = response.items\n\n    except ApiException as e:\n        raise e\n\n    if pods is None:\n        raise ApiException(\"No pods returned from the Kubernetes API.\")\n\n    current_time = datetime.datetime.now(timezone.utc)\n    interval_time_to_check = current_time - datetime.timedelta(hours=time_interval_to_check)\n    interval_time_to_check = interval_time_to_check.replace(tzinfo=timezone.utc)\n\n    for pod in pods:\n        pod_name = pod.metadata.name\n        namespace = pod.metadata.namespace\n        container_statuses = pod.status.container_statuses\n        if container_statuses is None:\n            continue\n        for container_status in container_statuses:\n            container_name = container_status.name\n            if container_status.state and container_status.state.waiting and container_status.state.waiting.reason == \"CrashLoopBackOff\":\n                # Check if the last transition time to CrashLoopBackOff is within the specified interval\n                if container_status.last_state and 
container_status.last_state.terminated:\n                    last_transition_time = container_status.last_state.terminated.finished_at\n                    if last_transition_time:\n                        last_transition_time = last_transition_time.replace(tzinfo=timezone.utc)\n                        if last_transition_time >= interval_time_to_check:\n                            formatted_transition_time = format_datetime(last_transition_time)\n                            formatted_interval_time_to_check = format_datetime(interval_time_to_check)\n                            result.append({\n                                \"pod\": pod_name,\n                                \"namespace\": namespace,\n                                \"container\": container_name,\n                                \"last_transition_time\": formatted_transition_time,\n                                \"interval_time_to_check\": formatted_interval_time_to_check\n                            })\n\n    return (False, result) if result else (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get all K8s pods in ImagePullBackOff State </h1>\r\n\r\n## Description\r\nThis Lego get all evicted pods in ImagePullBackOff State from given namespace. If namespace not given it will get all the pods from all namespaces.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_pods_in_imagepullbackoff_state(handle, namespace: str = None)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: k8s namespace (Optional)\r\n\r\n## Lego Input\r\n\r\nThis Lego take two inputs handle, and namespace (Optional).\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/__init__.py",
    "content": "\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/k8s_get_pods_in_imagepullbackoff_state.json",
    "content": "{\n    \"action_title\": \"Get all K8s Pods in ImagePullBackOff State\",\n    \"action_description\": \"Get all K8s pods in ImagePullBackOff State\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_get_pods_in_imagepullbackoff_state\",\n    \"action_is_check\": true,\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_TROUBLESHOOTING\",\"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\" ],\n    \"action_next_hop\": [\"a53b5860500e142aa387ce55d5e85f139596c521dfb5c920cc2bc47c38fc0b11\"],\n    \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/k8s_get_pods_in_imagepullbackoff_state.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\n\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\nfrom tabulate import tabulate\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default='',\n        title='Namespace',\n        description='k8s Namespace')\n\n\ndef k8s_get_pods_in_imagepullbackoff_state_printer(output):\n    status, data = output\n\n    if status:\n        print(\"No pods are in ImagePullBackOff or ErrImagePull state.\")\n    else:\n        headers = [\"Pod Name\", \"Namespace\", \"Container Name\"]\n        table_data = [(entry[\"pod\"], entry[\"namespace\"], entry[\"container\"]) for entry in data]\n        print(tabulate(table_data, headers=headers, tablefmt=\"grid\"))\n\n\ndef k8s_get_pods_in_imagepullbackoff_state(handle, namespace: str = '') -> Tuple:\n    \"\"\"\n    k8s_get_pods_in_imagepullbackoff_state returns the pods that have ImagePullBackOff or ErrImagePull state in their container statuses.\n\n    :type handle: Object\n    :param handle: Object returned from the task.validate(...) 
function\n\n    :type namespace: str\n    :param namespace: (Optional) String, K8S Namespace as python string\n\n    :rtype: Status, List of objects of pods, namespaces, and containers in ImagePullBackOff or ErrImagePull state\n    \"\"\"\n    result = []\n    if handle.client_side_validation is not True:\n        raise ApiException(f\"K8S Connector is invalid {handle}\")\n\n    v1 = client.CoreV1Api(api_client=handle)\n\n    try:\n        if namespace:\n            pods = v1.list_namespaced_pod(namespace).items\n            if not pods:\n                return (True, None)\n        else:\n            pods = v1.list_pod_for_all_namespaces().items\n    except ApiException as e:\n        raise e\n\n    for pod in pods:\n        pod_name = pod.metadata.name\n        namespace = pod.metadata.namespace\n        container_statuses = pod.status.container_statuses\n        if container_statuses is None:\n            continue\n        for container_status in container_statuses:\n            container_name = container_status.name\n            if container_status.state and container_status.state.waiting:\n                reason = container_status.state.waiting.reason\n                if reason in [\"ImagePullBackOff\", \"ErrImagePull\"]:\n                    result.append({\"pod\": pod_name, \"namespace\": namespace, \"container\": container_name})\n\n    return (False, result) if result else (True, None)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_not_running_state/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Get Pods in Not Running State</h2>\n\n<br>\n\n## Description\nThis Lego Returns PODS that are in Not Running state from all namespaces\n\n\n## Lego Details\n\n    k8s_get_pods_in_not_running_state(handle: object)\n\n        handle: Object of type unSkript K8S Connector\n\n## Lego Input\nThis Lego take just one input, the Handle\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_not_running_state/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_not_running_state/k8s_get_pods_in_not_running_state.json",
    "content": "{\n    \"action_title\": \"Get Kubernetes PODs in not Running State\",\n    \"action_description\": \"Get Kubernetes PODs in not Running State\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_is_check\": true,\n    \"action_entry_function\": \"k8s_get_pods_in_not_running_state\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\"]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_not_running_state/k8s_get_pods_in_not_running_state.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom typing import Tuple\nfrom pydantic import BaseModel, Field\nimport json\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        '',\n        description='K8S Namespace',\n        title='K8S Namespace'\n    )\n\ndef k8s_get_pods_in_not_running_state_printer(output):\n    if output is None:\n        return\n\n    print(output)\n\n\ndef k8s_get_pods_in_not_running_state(handle, namespace: str = '') -> Tuple:\n    \"\"\"k8s_get_pods_in_not_running_state this check function checks for pods not in \"Running\" state and status.phase is not \"Succeeded\" \n       and returns the output of list of pods. It does not consider \"Completed\" status as an errored state.\n\n       :type handle: Object\n       :param handle: Object returned from the task.validate(...) function\n\n       :rtype: Tuple Result in tuple format.  \n    \"\"\"\n    if handle.client_side_validation is not True:\n        raise Exception(f\"K8S Connector is invalid {handle}\")\n\n    cmd_base = \"kubectl get pods\"\n    ns_arg = f\"-n {namespace}\" if namespace else \"--all-namespaces\"\n    field_selector = \"--field-selector=status.phase!=Running,status.phase!=Succeeded\"\n    output_format = \"-o json\"\n\n    kubectl_command = f\"{cmd_base} {ns_arg} {field_selector} {output_format}\"\n    result = handle.run_native_cmd(kubectl_command)\n\n    if result.stderr:\n        raise Exception(f\"Error occurred while executing command {kubectl_command}: {result.stderr}\")\n    \n    failed_pods = []\n    if result.stdout:\n        pods = json.loads(result.stdout).get(\"items\", [])\n        if pods:\n            failed_pods = [{'name': pod['metadata']['name'], \n                            'namespace': pod['metadata']['namespace'], \n                            'status': pod['status']['phase']} for pod in pods]\n            return (False, failed_pods)\n\n    return (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_terminating_state/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get all K8s pods in ImagePullBackOff State </h1>\r\n\r\n## Description\r\nThis Lego get all evicted pods in Terminating State from given namespace. If namespace not given it will get all the pods from all namespaces.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_pods_in_terminating_state(handle, namespace: str = None)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: k8s namespace (Optional)\r\n\r\n## Lego Input\r\n\r\nThis Lego take two inputs handle, and namespace (Optional).\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_terminating_state/__init__.py",
    "content": "\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_terminating_state/k8s_get_pods_in_terminating_state.json",
    "content": "{\n    \"action_title\": \"Get all K8s Pods in Terminating State\",\n    \"action_description\": \"Get all K8s pods in Terminating State\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_get_pods_in_terminating_state\",\n    \"action_is_check\": true,\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_POD\" ],\n    \"action_next_hop\": [\"7108717393788c2d76687490938faffe5e6e2a46f05405f180e089a166761173\"],\n    \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_in_terminating_state/k8s_get_pods_in_terminating_state.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nimport pprint\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default='',\n        title='Namespace',\n        description='k8s Namespace')\n\n\ndef k8s_get_pods_in_terminating_state_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef k8s_get_pods_in_terminating_state(handle, namespace: str = '') -> Tuple:\n    \"\"\"\n    This function returns the pods that are in the Terminating state.\n\n    :type handle: Object\n    :param handle: Object returned from the task.validate(...) function\n\n    :type namespace: str\n    :param namespace: (Optional) String, K8S Namespace as python string\n\n    :rtype: Status, List of objects of pods, namespaces, and containers that are in Terminating state\n    \"\"\"\n    result = []\n    if handle.client_side_validation is not True:\n        raise ApiException(f\"K8S Connector is invalid {handle}\")\n\n    v1 = client.CoreV1Api(api_client=handle)\n\n    # Check whether a namespace is provided, if not fetch all namespaces\n    try:\n        if namespace:\n            pods = v1.list_namespaced_pod(namespace).items\n        else:\n            pods = v1.list_pod_for_all_namespaces().items\n    except ApiException as e:\n        raise e\n\n    for pod in pods:\n        pod_name = pod.metadata.name\n        namespace = pod.metadata.namespace\n        # Check each pod for Terminating state\n        if pod.metadata.deletion_timestamp is not None:\n            result.append({\"pod\": pod_name, \"namespace\": namespace})\n\n    return (False, result) if result else (True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_with_high_restart/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Get K8S Pods with High Restart Count </h2>\n\n<br>\n\n## Description\nThis Lego Returns Pods that have high restart counts\n\n\n## Lego Details\n\n    k8s_get_pods_with_high_restart(handle: object)\n\n        handle: Object of type unSkript K8S Connector\n\n## Lego Input\nThis Lego takes just one input, the Handle\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_with_high_restart/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_with_high_restart/k8s_get_pods_with_high_restart.json",
    "content": "{\n    \"action_title\": \"Get Kubernetes PODS with high restart\",\n    \"action_description\": \"Get Kubernetes PODS with high restart\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_is_check\": true,\n    \"action_entry_function\": \"k8s_get_pods_with_high_restart\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\"]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_pods_with_high_restart/k8s_get_pods_with_high_restart.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport datetime \nfrom datetime import timezone\nfrom typing import Tuple\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\n# Constants used in this file\nINTERVAL_TO_CHECK = 24  # In hours\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        '',\n        description='K8S Namespace',\n        title='K8S Namespace'\n    )\n    threshold: int = Field(\n        25,\n        description='Restart Threshold Value',\n        title='Restart Threshold'\n    )\n\ndef k8s_get_pods_with_high_restart_printer(output):\n    if output is None:\n        return\n\n    print(output)\n\ndef format_datetime(dt):\n    # Format datetime to a string 'YYYY-MM-DD HH:MM:SS UTC'\n    return dt.strftime('%Y-%m-%d %H:%M:%S UTC')\n\ndef k8s_get_pods_with_high_restart(handle, namespace: str = '', threshold: int = 25) -> Tuple:\n    \"\"\"k8s_get_pods_with_high_restart This function finds out PODS that have\n       high restart count and returns them as a list of dictionaries\n\n       :type handle: Object\n       :param handle: Object returned from the task.validate(...) function\n\n       :type namespace: str\n       :param namespace: K8S Namespace \n\n       :type threshold: int \n       :param threshold: int Restart Threshold Count value\n\n       :rtype: Tuple Result in tuple format.  
\n    \"\"\"\n    if handle.client_side_validation is not True:\n        raise Exception(f\"K8S Connector is invalid {handle}\")\n\n    v1 = client.CoreV1Api(api_client=handle)\n    \n    try:\n        pods = v1.list_namespaced_pod(namespace).items if namespace else v1.list_pod_for_all_namespaces().items\n        if not pods:\n            return (True, None)  # No pods in the namespace\n    except ApiException as e:\n        raise Exception(f\"Error occurred while accessing Kubernetes API: {e}\")\n\n    retval = []\n    \n    # It is not enough to check if the restart count is more than the threshold \n    # we should check if the last time the pod got restarted is not within the 24 hours.\n    # If it is, then we need to flag it. If not, it could be that the pod restarted at \n    # some time, but have been stable since then. \n\n    # Lets take current time and reference time that is 24 hours ago.\n    current_time = datetime.datetime.now(timezone.utc)\n    interval_time_to_check = current_time - datetime.timedelta(hours=INTERVAL_TO_CHECK)\n    interval_time_to_check = interval_time_to_check.replace(tzinfo=timezone.utc)\n\n    for pod in pods:\n        for container_status in pod.status.container_statuses or []:\n            restart_count = container_status.restart_count\n            last_state = container_status.last_state\n\n            if restart_count > threshold:\n                if last_state and last_state.terminated:\n                    termination_time = last_state.terminated.finished_at\n                    termination_time = termination_time.replace(tzinfo=timezone.utc)\n                    # We compare if the termination time is within the last 24 hours, if yes\n                    # then we need to add it to the retval and return the list back\n                    if termination_time and termination_time >= interval_time_to_check:\n                        formatted_termination_time = format_datetime(termination_time)\n                        
formatted_interval_time_to_check = format_datetime(interval_time_to_check)\n                        retval.append({\"pod\": pod.metadata.name, \"namespace\": pod.metadata.namespace, \"termination_time\":formatted_termination_time,\"interval_time_to_check\": formatted_interval_time_to_check})\n\n    return (False, retval) if retval else (True, None)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_service_images/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Get images of K8s services</h2>\n\n<br>\n\n## Description\nCollect images of running services in the provided namespace.\n\n\n## Lego Details\n\n    k8s_get_service_images(handle, namespace:str = \"\")\n\n        handle: Object of type unSkript K8S Connector\n        namespace: Kubernetes namespace.\n\n## Lego Input\nThis Lego take three input handle, namespace (Optional).\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_service_images/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_service_images/k8s_get_service_images.json",
    "content": "{\n    \"action_title\": \"Get images of K8s services\",\n    \"action_description\": \"Collect images of running services in the provided namespace.\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_get_service_images\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" , \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ,\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_SERVICE\"]\n}\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_service_images/k8s_get_service_images.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\nfrom tabulate import tabulate \nimport json\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        '',\n        description='K8S Namespace',\n        title='K8S Namespace'\n    )\n\n\ndef k8s_get_service_images_printer(output):\n    table_data = []\n    if len(output) == 0:\n        print(\"No data available\")\n        return\n    for service, images in output.items():\n        shortened_images = list(images)\n        if not shortened_images:\n            table_data.append([service, \"No images found\"])\n        else:\n            # Join multiple shortened images into a single string\n            table_data.append([service, \"\\n\".join(shortened_images)])\n\n    headers = [\"Service (Namespace)\", \"Images\"]\n    table = tabulate(table_data, headers=headers, tablefmt='grid')\n    print(table)\n\n\n\ndef k8s_get_service_images(handle, namespace:str = \"\") -> Dict:\n    \"\"\"\n    k8s_get_service_images collects the images of running services in the provided namespace.\n\n    :type handle: Object\n    :param handle: Object returned from the task.validate(...) function\n\n    :type namespace: str, optional\n    :param namespace: The namespace in which the services reside. 
If not provided, images from all namespaces are fetched.\n\n    :return: Dictionary with service names as keys and lists of image names as values.\n    \"\"\"\n\n    if not namespace:\n        get_namespaces_command = \"kubectl get ns -o=jsonpath='{.items[*].metadata.name}'\"\n        response = handle.run_native_cmd(get_namespaces_command)\n        if not response or response.stderr:\n            raise ApiException(f\"Error while executing command ({get_namespaces_command}): {response.stderr if response else 'empty response'}\")\n        namespaces = response.stdout.strip().split()\n    else:\n        namespaces = [namespace]\n\n    service_images = {}\n\n    for ns in namespaces:\n        # Get the names of all services in the namespace\n        get_services_command = f\"kubectl get svc -n {ns} -o=jsonpath='{{.items[*].metadata.name}}'\"\n        response = handle.run_native_cmd(get_services_command)\n        if not response or response.stderr:\n            raise ApiException(f\"Error while executing command ({get_services_command}): {response.stderr if response else 'empty response'}\")\n\n        service_names = response.stdout.strip().split()\n\n        for service_name in service_names:\n            # Get the labels associated with the service to identify its pods\n            get_service_labels_command = f\"kubectl get service {service_name} -n {ns} -o=jsonpath='{{.spec.selector}}'\"\n            response = handle.run_native_cmd(get_service_labels_command)\n            if not response.stdout.strip():\n                print(f\"No labels found for service {service_name} in namespace {ns}. 
Skipping...\")\n                continue\n            labels_dict = json.loads(response.stdout.replace(\"'\", \"\\\"\"))\n            label_selector = \",\".join([f\"{k}={v}\" for k, v in labels_dict.items()])\n\n            # Get the images from the pods associated with this service\n            get_images_command = f\"kubectl get pods -n {ns} -l {label_selector} -o=jsonpath='{{.items[*].spec.containers[*].image}}'\"\n            response = handle.run_native_cmd(get_images_command)\n            if response and not response.stderr:\n                # Deduplicate images and replace 'docker.io' with 'docker_io'\n                images = list(set(response.stdout.strip().split()))\n                images = [image.replace('docker.io', 'docker_io') for image in images]\n                service_key = f\"{service_name} ({ns})\"\n                service_images[service_key] = images\n            else:\n                service_key = f\"{service_name} ({ns})\"\n                service_images[service_key] = []\n\n    return service_images\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_service_with_no_associated_endpoints/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Get Service with no associated endpoints </h2>\n\n<br>\n\n## Description\nThis Lego Returns services that has no associated endpoints\n\n\n## Lego Details\n\n    k8s_get_service_with_no_associated_endpoints(handle: object, namespace:str = \"\")\n\n        handle: Object of type unSkript K8S Connector\n        namespace: String, Name of K8S Namespace\n\n## Lego Input\nThis Lego takes just two inputs- handle, and namespace.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_service_with_no_associated_endpoints/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_service_with_no_associated_endpoints/k8s_get_service_with_no_associated_endpoints.json",
    "content": "{\n    \"action_title\": \"Get K8S Service with no associated endpoints\",\n    \"action_description\": \"Get K8S Service with no associated endpoints\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_is_check\": true,\n    \"action_entry_function\": \"k8s_get_service_with_no_associated_endpoints\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\"]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_service_with_no_associated_endpoints/k8s_get_service_with_no_associated_endpoints.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\n\n\nclass InputSchema(BaseModel):\n    namespace:str = Field(\n        title = \"K8S Namespace\",\n        description = \"Kubernetes Namespace Where the Service exists\"\n    )\n    core_services: list = Field(\n        title = \"Names of whitelisted services\",\n        description = \"List of services\"\n    )\n\ndef k8s_get_service_with_no_associated_endpoints_printer(output):\n    status, data = output\n    if status:\n        print(\"No services with missing endpoints found !\")\n    else:\n        table_headers = [\"Namespace\", \"Service Name\"]\n        table_data = [(entry[\"namespace\"], entry[\"name\"]) for entry in data]\n\n        print(tabulate(table_data, headers=table_headers, tablefmt = \"grid\"))\n\ndef k8s_get_service_with_no_associated_endpoints(handle, namespace: str , core_services:list) -> Tuple:\n    \"\"\"k8s_get_service_with_no_associated_endpoints This function returns Services that\n       do not have any associated endpoints.\n\n       :type handle: Object\n       :param handle: Object returned from the task.validate(...) 
function\n\n       :type namespace: str\n       :param namespace: String, K8S Namespace as python string\n\n       :rtype: Tuple Result in tuple format.\n    \"\"\"\n    if handle.client_side_validation is not True:\n        raise ApiException(f\"K8S Connector is invalid {handle}\")\n\n    v1 = client.CoreV1Api(api_client=handle)\n\n    retval = []\n\n    for service_name in core_services:\n        try:\n            service = v1.read_namespaced_service(name=service_name, namespace=namespace)\n            ep = v1.read_namespaced_endpoints(name=service_name, namespace=namespace)\n            if not ep.subsets:\n                retval.append({\"name\": service.metadata.name, \"namespace\": service.metadata.namespace})\n        except ApiException as e:\n            if e.status == 404:\n                print(f\"Service {service_name} not found in namespace {namespace}.\")\n                continue\n            else:\n                raise e\n    if retval:\n        return (False, retval)\n\n    return(True, None)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_services/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Kubernetes Services</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get Kubernetes Services for a given Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_services(handle: object, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_services/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_services/k8s_get_services.json",
    "content": "{\r\n    \"action_title\": \"Get Kubernetes Services for a given Namespace\",\r\n    \"action_description\": \"Get Kubernetes Services for a given Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_get_services\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_get_services/k8s_get_services.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace')\n\ndef k8s_get_services_printer(output):\n    if output is None:\n        return\n\n    print(output)\n\n\ndef k8s_get_services(handle, namespace: str) -> str:\n    \"\"\"k8s_get_services get services\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :rtype: string\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    try:\n        resp = coreApiClient.list_namespaced_service(namespace)\n\n    except ApiException as e:\n        resp = 'An Exception occured while executing the command' + e.reason\n\n    return resp\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_unbound_pvcs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Get Unbound PVCs </h2>\n\n<br>\n\n## Description\nThis Lego Returns List of unbound PVCs\n\n\n## Lego Details\n\n    k8s_get_unbound_pvcs(handle: object)\n\n        handle: Object of type unSkript K8S Connector\n\n## Lego Input\nThis Lego takes just one input, the Handle\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_unbound_pvcs/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_unbound_pvcs/k8s_get_unbound_pvcs.json",
    "content": "{\n    \"action_title\": \"Get Kubernetes Unbound PVCs\",\n    \"action_description\": \"Get Kubernetes Unbound PVCs\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_is_check\": true,\n    \"action_entry_function\": \"k8s_get_unbound_pvcs\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\"]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_unbound_pvcs/k8s_get_unbound_pvcs.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom typing import Tuple\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    namespace: str = Field (\n        '',\n        description='K8S Namespace',\n        title=\"K8S Namespace\"\n    )\n\ndef k8s_get_unbound_pvcs_printer(output):\n    if output is None:\n        return\n    print(output)\n\ndef k8s_get_unbound_pvcs(handle, namespace:str = '') -> Tuple:\n    \"\"\"k8s_get_unbound_pvcs This function all unbound PVCS and returns them back\n\n       :type handle: Object\n       :param handle: Object returned from the task.validate(...) function\n\n       :type namespace: str\n       :param namespace: Kubernetes Namespace \n\n       :rtype: Tuple Result in tuple format.  \n    \"\"\"\n    if handle.client_side_validation is not True:\n        raise ApiException(f\"K8S Connector is invalid {handle}\")\n\n    v1 = client.CoreV1Api(api_client=handle)\n\n    # Get all PVCs in the cluster\n    if not namespace:\n        pvc_list = v1.list_persistent_volume_claim_for_all_namespaces().items\n        pod_list = v1.list_pod_for_all_namespaces().items\n    else:\n        pvc_list = v1.list_namespaced_persistent_volume_claim(namespace).items\n        pod_list = v1.list_namespaced_pod(namespace).items\n\n    retval = []\n    mounted_volume = []\n    list_all_volumes = []\n    # Iterate through each PVC\n    for pvc in pvc_list:\n        list_all_volumes.append([pvc.metadata.name, pvc.metadata.namespace])\n\n    for pod in pod_list:\n        for volume in pod.spec.volumes:\n                if volume.persistent_volume_claim is not None:\n                    mounted_volume.append([\n                        volume.persistent_volume_claim.claim_name,\n                        pod.metadata.namespace\n                        ])\n\n    if len(mounted_volume) != len(list_all_volumes):\n        
unmounted_volumes = {x[0] for x in list_all_volumes} - {x[0] for x in mounted_volume}\n        for um in unmounted_volumes:\n            n = [x for x in list_all_volumes if x[0] == um][0]\n            unmounted_pvc_name = n[0]\n            unmounted_pvc_namespace = n[1]\n            retval.append({'name': unmounted_pvc_name, 'namespace': unmounted_pvc_namespace})\n\n    if retval:\n        return (False, retval)\n\n    return (True, [])\n"
  },
  {
    "path": "Kubernetes/legos/k8s_get_versioning_info/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get versioning info</h1>\r\n\r\n## Description\r\nThis action gets the kubectl, Kubernetes cluster, and Docker version if available.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_get_versioning_info(handle)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n\r\n## Lego Input\r\n\r\nThis Lego take two inputs handle.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_get_versioning_info/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_get_versioning_info/k8s_get_versioning_info.json",
    "content": "{\n    \"action_title\": \"Get versioning info\",\n    \"action_description\": \"This action gets the kubectl, Kubernetes cluster, and Docker version if available.\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_get_versioning_info\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" , \"CATEGORY_TYPE_K8S\", \"CATEGORY_TYPE_DEVOPS\"]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_get_versioning_info/k8s_get_versioning_info.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport subprocess\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef k8s_get_versioning_info_printer(output):\n    print(\"Versions:\")\n    for key, value in output.items():\n        print(f\"{key}: {value}\")\n\n\ndef k8s_get_versioning_info(handle):\n    \"\"\"\n    k8s_get_versioning_info returns the kubectl, Kubernetes cluster, and Docker version if available.\n\n    :type handle: Object\n    :param handle: Object returned from the task.validate(...) function\n\n    :rtype: Dict of version results.\n    \"\"\"\n    versions = {}\n\n    try:\n        # Getting kubectl version\n        kubectl_version_command = [\"kubectl\", \"version\", \"--client\", \"--short\"]\n        result = subprocess.run(kubectl_version_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        if result.returncode == 0:\n            versions['kubectl'] = result.stdout.decode('utf-8').strip()\n    except FileNotFoundError:\n        versions['kubectl'] = \"Not found\"\n\n    try:\n        # Getting Kubernetes cluster version\n        k8s_version_command = [\"kubectl\", \"version\", \"--short\"]\n        result = subprocess.run(k8s_version_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        if result.returncode == 0:\n            versions['kubernetes'] = result.stdout.decode('utf-8').strip()\n    except FileNotFoundError:\n        versions['kubernetes'] = \"Not found\"\n\n    try:\n        # Getting Docker version\n        docker_version_command = [\"docker\", \"version\", \"--format\", \"'{{.Server.Version}}'\"]\n        result = subprocess.run(docker_version_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        if result.returncode == 0:\n            versions['docker'] = result.stdout.decode('utf-8').strip()\n    except FileNotFoundError:\n        versions['docker'] = \"Not found\"\n\n    return versions"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_command/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Kubectl in python syntax</h2>\n\n<br>\n\n## Description\nThis Lego takes the `kubectl` command to be executed and returns the output of the command. This lego can be treated like a wrapper around the `kubectl` command.\n\n\n## Lego Details\n\n    k8s_kubectl_command(handle: object, kubectl_command: str)\n\n        handle: Object of type unSkript MongoDB Connector\n        kubectl_command: Kubectl command, eg: \"kubectl get pods -A\", \"kubectl get ns\"\n\n## Lego Input\nThis Lego takes the actual kubectl command to be executed as input, as python string.\n\nLike all unSkript Legos this lego relies on the information provided in unSkript K8S Connector. \n\n>Note: The input for the command should start with keyword `kubectl` \n\n## Lego Output\nHere is a sample output. For the command `kubectl describe pod {unhealthyPod} -n {namespace} | grep -A 10`\n\n    Events:\n    Type     Reason   Age                     From     Message\n    ----     ------   ----                    ----     -------\n    Normal   BackOff  33m (x437 over 133m)    kubelet  Back-off pulling image \"diebian\"\n    Warning  Failed   3m16s (x569 over 133m)  kubelet  Error: ImagePullBackOff\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_command/__init__.py",
    "content": "# 2022 (c) unSkript.com\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_command/k8s_kubectl_command.json",
    "content": "{\n  \"action_title\": \"Kubectl command\",\n  \"action_description\": \"Execute kubectl command.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_kubectl_command\",\n  \"action_needs_credential\": true,\n  \"action_supports_poll\": true,\n  \"action_supports_iteration\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_categories\": [\"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\"]\n}\n  \n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_command/k8s_kubectl_command.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    kubectl_command: str = Field(\n        title='Kubectl Command',\n        description='kubectl command '\n                    'eg \"kubectl get pods --all-namespaces\"'\n    )\n\n\ndef k8s_kubectl_command_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef k8s_kubectl_command(handle, kubectl_command: str) -> str:\n    \"\"\"k8s_kubectl_command executes the given kubectl command on the pod\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type kubectl_command: str\n        :param kubectl_command: The Actual kubectl command, like kubectl get ns, etc..\n\n        :rtype: String, Output of the command in python string format or Empty String\n        in case of Error.\n    \"\"\"\n    if handle.client_side_validation is not True:\n        print(f\"K8S Connector is invalid: {handle}\")\n        return str()\n\n    result = handle.run_native_cmd(kubectl_command)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({kubectl_command}) (empty response)\")\n        return \"\"\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {kubectl_command} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_config_set_context/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Kubectl set context entry in kubeconfig</h2>\n\n<br>\n\n## Description\nThis Lego used to set Kubectl context entry in kubeconfig.\n\n\n## Lego Details\n\n    k8s_kubectl_config_set_context(handle: object, k8s_cli_string: str, namespace: str)\n\n        handle: Object of type unSkript MongoDB Connector\n        k8s_cli_string: kubectl sets a context entry in kubeconfig\n        namespace: Namespace\n\n## Lego Input\nThis Lego take three inputs handle, k8s_cli_string and namespace.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_config_set_context/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_config_set_context/k8s_kubectl_config_set_context.json",
    "content": "{\r\n    \"action_title\": \"Kubectl set context entry in kubeconfig\",\r\n    \"action_description\": \"Kubectl set context entry in kubeconfig\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_config_set_context\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\"]\r\n\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_config_set_context/k8s_kubectl_config_set_context.py",
    "content": "from pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl sets a context entry in kubeconfig',\n        default='kubectl config set-context --current --namespace={namespace}'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\ndef k8s_kubectl_config_set_context_printer(data: list):\n    if data is None:\n        return\n\n    print (data)\n\ndef k8s_kubectl_config_set_context(handle, k8s_cli_string: str, namespace: str) -> str:\n    \"\"\"k8s_kubectl_config_set_context \n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl sets a context entry in kubeconfig\n\n        :type namespace: str\n        :param namespace: Namespace\n\n        :rtype: str\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_config_view/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl display merged kubeconfig settings</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Kubectl display merged kubeconfig settings.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_config_view(handle: object, k8s_cli_string: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl Displays merged kubeconfig settings.\r\n        namespace: Namespace.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, k8s_cli_string and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_config_view/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_config_view/k8s_kubectl_config_view.json",
    "content": "{\r\n    \"action_title\": \"Kubectl display merged kubeconfig settings\",\r\n    \"action_description\": \"Kubectl display merged kubeconfig settings\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_config_view\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\"]\r\n\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_config_view/k8s_kubectl_config_view.py",
    "content": "from pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl Displays merged kubeconfig settings',\n        default='kubectl config view -n {namespace}'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\ndef k8s_kubectl_config_view_printer(data: str):\n    if data is None:\n        print(\"Error while executing command\")\n        return\n\n    print (data)\n\ndef k8s_kubectl_config_view(handle, k8s_cli_string: str, namespace: str) -> str:\n    \"\"\"k8s_kubectl_config_view executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl Displays merged kubeconfig settings.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_delete_pod/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl delete a pod</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Kubectl delete a pod.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_delete_pod(handle: object, k8s_cli_string: str, pod_name: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl delete pod {pod_name} -n {namespace}.\r\n        pod_name: Pod Name.\r\n        namespace: Namespace.\r\n\r\n## Lego Input\r\nThis Lego take four input handle, k8s_cli_string, pod_name and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_delete_pod/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_delete_pod/k8s_kubectl_delete_pod.json",
    "content": "{\r\n    \"action_title\": \"Kubectl delete a pod\",\r\n    \"action_description\": \"Kubectl delete a pod\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_delete_pod\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_POD\"]\r\n\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_delete_pod/k8s_kubectl_delete_pod.py",
    "content": "from pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl delete a pod',\n        default='kubectl delete pod {pod_name} -n {namespace}'\n    )\n    pod_name: str = Field(\n        title='Pod Name',\n        description='Pod Name'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\ndef k8s_kubectl_delete_pod_printer(data: str):\n    if data is None:\n        print(\"Error while executing command\")\n        return\n\n    print (data)\n\ndef k8s_kubectl_delete_pod(handle, k8s_cli_string: str, pod_name: str, namespace: str) -> str:\n    \"\"\"k8s_kubectl_delete_pod executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl delete pod {pod_name} -n {namespace}.\n\n        :type pod_name: str\n        :param pod_name: Pod Name.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(pod_name, namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_describe_node/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl describe a node</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Kubectl describe a node.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_describe_node(handle: object, node_name: str, k8s_cli_string: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl describe node {node_name}.\r\n        node_name: Node Name.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, k8s_cli_string and node_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_describe_node/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_describe_node/k8s_kubectl_describe_node.json",
    "content": "{\r\n    \"action_title\": \"Kubectl describe a node\",\r\n    \"action_description\": \"Kubectl describe a node\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_describe_node\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_NODE\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_describe_node/k8s_kubectl_describe_node.py",
    "content": "from pprint import pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    node_name: str = Field(\n        title='Node Name',\n        description='Node Name'\n    )\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl describe a node',\n        default='kubectl describe node {node_name}'\n    )\n\ndef k8s_kubectl_describe_node_printer(data: str):\n    if data is None:\n        return\n\n    print(\"Node Details:\")\n    pprint(data)\n\ndef k8s_kubectl_describe_node(handle, node_name: str, k8s_cli_string: str) -> str:\n    \"\"\"k8s_kubectl_describe_node executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl describe node {node_name}.\n\n         :type node_name: str\n        :param node_name: Node Name.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n\n    k8s_cli_string = k8s_cli_string.format(node_name=node_name)\n    result = handle.run_native_cmd(k8s_cli_string)\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n\n    data = result.stdout\n    return data\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_describe_pod/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl describe a pod</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Kubectl describe a pod.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_describe_pod(handle: object, pod_name: str, k8s_cli_string: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl describe pod {pod_name} -n {namespace}.\r\n        node_name: Node Name.\r\n        namespace:Namespace\r\n\r\n## Lego Input\r\nThis Lego take four input handle, k8s_cli_string, namespace and node_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_describe_pod/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_describe_pod/k8s_kubectl_describe_pod.json",
    "content": "{\r\n    \"action_title\": \"Kubectl describe a pod\",\r\n    \"action_description\": \"Kubectl describe a pod\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_describe_pod\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_describe_pod/k8s_kubectl_describe_pod.py",
    "content": "from pprint import pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    pod_name: str = Field(\n        title='Pod Name',\n        description='Pod Name'\n    )\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl describe a pod',\n        default='kubectl describe pod {pod_name} -n {namespace}'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\n\ndef k8s_kubectl_describe_pod_printer(data: str):\n    if data is None:\n        return\n    print(\"Pod Details:\")\n    pprint(data)\n\n\ndef k8s_kubectl_describe_pod(handle, pod_name: str, k8s_cli_string: str, namespace: str) -> str:\n    \"\"\"k8s_kubectl_describe_pod executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl describe pod {pod_name} -n {namespace}.\n\n        :type node_name: str\n        :param node_name: Node Name.\n\n        :type namespace: str\n        :param namespace:Namespace\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(\n        pod_name=pod_name, namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n    if result.stderr:\n        raise ApiException(\n            f\"Error occurred while executing command {result.stderr}\")\n\n    data = result.stdout\n    return data\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_drain_node/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl drain a node</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Kubectl drain a node.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_drain_node(handle: object, node_name: str, k8s_cli_string: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl drain {node_name}.\r\n        node_name: Node Name.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, k8s_cli_string and node_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_drain_node/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_drain_node/k8s_kubectl_drain_node.json",
    "content": "{\r\n    \"action_title\": \"Kubectl drain a node\",\r\n    \"action_description\": \"Kubectl drain a node\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_drain_node\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_NODE\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_drain_node/k8s_kubectl_drain_node.py",
    "content": "from pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl drain a node in preparation of a maintenance',\n        default='kubectl drain {node_name}'\n    )\n    node_name: str = Field(\n        title='Node Name',\n        description='Node Name'\n    )\n\ndef k8s_kubectl_drain_node_printer(data: str):\n    if data is None:\n        print(\"Error while executing command\")\n        return\n\n    print (data)\n\ndef k8s_kubectl_drain_node(handle, k8s_cli_string: str, node_name:str) -> str:\n    \"\"\"k8s_kubectl_drain_node executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl drain {node_name}.\n\n        :type node_name: str\n        :param node_name: Node Name.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(node_name=node_name)\n    result = handle.run_native_cmd(k8s_cli_string)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_exec_command/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl execute command</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Kubectl execute command in a given pod.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_exec_command(handle: object, k8s_cli_string: str, pod_name:str, command: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl exec {pod_name} {command} -n {namespace}.\r\n        pod_name: Pod Name.\r\n        command: Command.\r\n        namespace: Namespace.\r\n\r\n## Lego Input\r\nThis Lego take five input handle, k8s_cli_string, pod_name, command and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_exec_command/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_exec_command/k8s_kubectl_exec_command.json",
    "content": "{\r\n    \"action_title\": \"Execute command on a pod\",\r\n    \"action_description\": \"Execute command on a pod\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_exec_command\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\"]\r\n}\r\n    \r\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_exec_command/k8s_kubectl_exec_command.py",
    "content": "from pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl execute a command in pod',\n        default='kubectl exec {pod_name} {command} -n {namespace}'\n    )\n    pod_name: str = Field(\n        title='Pod Name',\n        description='Pod Name'\n    )\n    command: str = Field(\n        title='Command',\n        description='Command'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\ndef k8s_kubectl_exec_command_printer(data: str):\n    if data is None:\n        print(\"Error while executing command\")\n        return\n\n    print (data)\n\ndef k8s_kubectl_exec_command(\n        handle,\n        k8s_cli_string: str,\n        pod_name:str,\n        command: str,\n        namespace: str\n        ) -> str:\n    \"\"\"k8s_kubectl_exec_command executes the given kubectl command on the pod\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl exec {pod_name} {command} -n {namespace}.\n\n        :type pod_name: str\n        :param pod_name: Pod Name.\n\n        :type command: str\n        :param command: Command.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(pod_name=pod_name, command=command, namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command 
{k8s_cli_string} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_api_resources/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl get api resources</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Kubectl get api resources.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_get_api_resources(handle: object, k8s_cli_string: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl api-resources -o wide -n {namespace}.\r\n        namespace: Namespace.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, k8s_cli_string and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_api_resources/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_api_resources/k8s_kubectl_get_api_resources.json",
    "content": "{\r\n    \"action_title\": \"Kubectl get api resources\",\r\n    \"action_description\": \"Kubectl get api resources\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_get_api_resources\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_api_resources/k8s_kubectl_get_api_resources.py",
    "content": "from pprint import pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl get api resources',\n        default='kubectl api-resources -o wide -n {namespace}'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace',\n    )\n\ndef k8s_kubectl_get_api_resources_printer(data: str):\n    if data is None:\n        print(\"Error while executing command\")\n        return\n\n    pprint (data)\n\ndef k8s_kubectl_get_api_resources(handle, k8s_cli_string: str, namespace: str) -> str:\n    \"\"\"k8s_kubectl_get_api_resources executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl api-resources -o wide -n {namespace}.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_logs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl get logs</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Kubectl get logs for a given pod.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_delete_pod(handle: object, k8s_cli_string: str, pod_name: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl logs {pod_name} -n {namespace}.\r\n        pod_name: Pod Name.\r\n        namespace: Namespace.\r\n\r\n## Lego Input\r\nThis Lego take four input handle, k8s_cli_string, pod_name and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_logs/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_logs/k8s_kubectl_get_logs.json",
    "content": "{\r\n    \"action_title\": \"Kubectl get logs\",\r\n    \"action_description\": \"Kubectl get logs for a given pod\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_get_logs\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_logs/k8s_kubectl_get_logs.py",
    "content": "from pprint import pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl get logs for a given pod',\n        default='\"kubectl logs {pod_name} -n {namespace}\"'\n    )\n    pod_name: str = Field(\n        title='Pod Name',\n        description='Pod Name'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\ndef k8s_kubectl_get_logs_printer(data: str):\n    if data is None:\n        return\n\n    print(\"Logs:\")\n\n    pprint(data)\n\ndef k8s_kubectl_get_logs(handle, k8s_cli_string: str, pod_name: str, namespace:str) -> str:\n    \"\"\"k8s_kubectl_get_logs executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl logs {pod_name} -n {namespace}.\n\n        :type pod_name: str\n        :param pod_name: Pod Name.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(pod_name=pod_name, namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    data = result.stdout\n    return data\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_service_namespace/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl get services</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego runs `kubectl get services` in a given namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_get_service_namespace(handle: object, k8s_cli_string: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl get service -n {namespace}.\r\n        namespace: Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, k8s_cli_string and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_service_namespace/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_service_namespace/k8s_kubectl_get_service_namespace.json",
    "content": "{\r\n    \"action_title\": \"Kubectl get services\",\r\n    \"action_description\": \"Kubectl get services in a given namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_get_service_namespace\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_NAMESPACE\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_get_service_namespace/k8s_kubectl_get_service_namespace.py",
    "content": "import io\nimport pandas as pd\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl list services in current namespace',\n        default='kubectl get service -n {namespace}'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\ndef k8s_kubectl_get_service_namespace_printer(data: list):\n    if data is None:\n        return\n\n    print(\"Service List:\")\n\n    for service in data:\n        print(f\"\\t {service}\")\n\ndef k8s_kubectl_get_service_namespace(handle, k8s_cli_string: str, namespace: str) -> list:\n    \"\"\"k8s_kubectl_get_service_namespace executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl get service -n {namespace}.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: List of service names in the given namespace.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return []\n\n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    # Parse the fixed-width kubectl output and collect the NAME column\n    df = pd.read_fwf(io.StringIO(result.stdout))\n    all_services = []\n    for _, row in df.iterrows():\n        all_services.append(row['NAME'])\n    return all_services\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_list_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl list pods</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists pods in a given namespace using `kubectl`.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_list_pods(handle: object, k8s_cli_string: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl get pods -n {namespace}.\r\n        namespace: Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, k8s_cli_string and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_list_pods/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_list_pods/k8s_kubectl_list_pods.json",
    "content": "{\r\n    \"action_title\": \"Kubectl list pods\",\r\n    \"action_description\": \"Kubectl list pods in given namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_list_pods\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_list_pods/k8s_kubectl_list_pods.py",
    "content": "import io\nimport pandas as pd\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl List pods in given namespace',\n        default='kubectl get pods -n {namespace}'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\ndef k8s_kubectl_list_pods_printer(data: list):\n    if data is None:\n        return\n\n    print(\"POD List:\")\n\n    for pod in data:\n        print(f\"\\t {pod}\")\n\ndef k8s_kubectl_list_pods(handle, k8s_cli_string: str, namespace: str) -> list:\n    \"\"\"k8s_kubectl_list_pods executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl get pods -n {namespace}.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: List of pod names in the given namespace.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return []\n\n    if result.stderr:\n        raise ApiException(\n            f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    # Parse the fixed-width kubectl output and collect the NAME column\n    df = pd.read_fwf(io.StringIO(result.stdout))\n    all_pods = []\n    for _, row in df.iterrows():\n        all_pods.append(row['NAME'])\n    return all_pods\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_patch_pod/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl update field</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego updates a field of a resource using a strategic merge patch via `kubectl`.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_patch_pod(handle: object, k8s_cli_string: str, pod_name:str, patch: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl patch pod {pod_name} -p '{patch}' -n {namespace}.\r\n        pod_name: Pod Name.\r\n        patch: The patch to be applied to the resource.\r\n        namespace: Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes five inputs: handle, k8s_cli_string, pod_name, patch and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_patch_pod/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_patch_pod/k8s_kubectl_patch_pod.json",
    "content": "{\r\n    \"action_title\": \"Kubectl update field\",\r\n    \"action_description\": \"Kubectl update field of a resource using strategic merge patch\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_patch_pod\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_patch_pod/k8s_kubectl_patch_pod.py",
    "content": "from pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl update field of a resource using strategic merge patch',\n        default=\"kubectl patch pod {pod_name} -p '{patch}' -n {namespace}\"\n    )\n    pod_name: str = Field(\n        title='Pod Name',\n        description='Pod Name'\n    )\n    patch: str = Field(\n        title='Patch',\n        description='The patch to be applied to the resource'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\ndef k8s_kubectl_patch_pod_printer(data: str):\n    if data is None:\n        print(\"Error while executing command\")\n        return\n\n    print(data)\n\ndef k8s_kubectl_patch_pod(\n        handle,\n        k8s_cli_string: str,\n        pod_name:str,\n        patch: str,\n        namespace: str\n        ) -> str:\n    \"\"\"k8s_kubectl_patch_pod executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl patch pod {pod_name} -p '{patch}' -n {namespace}.\n\n        :type pod_name: str\n        :param pod_name: Pod Name.\n\n        :type patch: str\n        :param patch: The patch to be applied to the resource.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(pod_name=pod_name, patch=patch, namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n\n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_rollout_deployment/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Kubectl rollout deployment</h2>\n\n<br>\n\n## Description\nThis Lego takes the `kubectl` command to be executed and returns the output of the command. This Lego can be treated as a wrapper around the `kubectl` command.\n\n\n## Lego Details\n\n    k8s_kubectl_rollout_deployment(handle: object, k8s_cli_string: str, deployment: str, namespace: str)\n\n        handle: Object of type unSkript K8S Connector\n        k8s_cli_string: Kubectl command, eg: \"kubectl get pods -A\", \"kubectl get ns\"\n        deployment: Deployment Name\n        namespace: Namespace\n\n## Lego Input\nThis Lego takes the actual k8s_cli_string, deployment and namespace to be executed as inputs, each as a Python string.\n\nLike all unSkript Legos, this Lego relies on the information provided in the unSkript K8S Connector. \n\n>Note: The input for the command should start with keyword `kubectl` \n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_rollout_deployment/__init__.py",
    "content": "# 2022 (c) unSkript.com\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_rollout_deployment/k8s_kubectl_rollout_deployment.json",
    "content": "{ \"action_title\": \"Kubectl rollout deployment history\", \n  \"action_description\": \"Kubectl rollout deployment history\", \n  \"action_type\": \"LEGO_TYPE_K8S\", \n  \"action_entry_function\": \"k8s_kubectl_rollout_deployment\", \n  \"action_needs_credential\": true, \n  \"action_supports_poll\": true, \n  \"action_supports_iteration\": true, \n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\"]\n}\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_rollout_deployment/k8s_kubectl_rollout_deployment.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nfrom pydantic import BaseModel, Field\nfrom beartype import beartype\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl command '\n                    'eg \"kubectl get pods --all-namespaces\"'\n    )\n    deployment: str = Field(\n        title='Deployment Name',\n        description='Deployment Name'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\n\n@beartype\ndef k8s_kubectl_rollout_deployment_printer(data: str):\n    if data is None:\n        print(\"Error while executing command\")\n        return\n\n    print(data)\n\n@beartype\ndef k8s_kubectl_rollout_deployment(\n    handle,\n    k8s_cli_string: str,\n    deployment: str,\n    namespace: str\n    ) -> str:\n    \"\"\"k8s_kubectl_rollout_deployment executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: Kubectl command that may reference {deployment} and {namespace}.\n\n        :type deployment: str\n        :param deployment: Deployment Name.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(deployment=deployment, namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n\n    if result.stderr:\n        raise ApiException(\n            f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_scale_deployment/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl scale deployment</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego scales a given deployment using `kubectl`.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_scale_deployment(handle: object, k8s_cli_string: str, num: str, deployment: str, namespace:str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl scale --replicas={num} deployment {deployment} -n {namespace}.\r\n        num: Specified Size.\r\n        deployment: Deployment Name.\r\n        namespace: Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes five inputs: handle, k8s_cli_string, num, deployment and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_scale_deployment/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_scale_deployment/k8s_kubectl_scale_deployment.json",
    "content": "{\r\n    \"action_title\": \"Kubectl scale deployment\",\r\n    \"action_description\": \"Kubectl scale a given deployment\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_scale_deployment\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_scale_deployment/k8s_kubectl_scale_deployment.py",
    "content": "from pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl scale a given deployment',\n        default='kubectl scale --replicas={num} deployment {deployment} -n {namespace}'\n    )\n    num: str = Field(\n        title='Specified Size',\n        description='Specified Size'\n    )\n    deployment: str = Field(\n        title='Deployment Name',\n        description='Deployment Name'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\ndef k8s_kubectl_scale_deployment_printer(data: str):\n    if data is None:\n        print(\"Error while executing command\")\n        return\n\n    print(data)\n\ndef k8s_kubectl_scale_deployment(\n        handle,\n        k8s_cli_string: str,\n        num: str,\n        deployment: str,\n        namespace:str\n        ) -> str:\n    \"\"\"k8s_kubectl_scale_deployment executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl scale --replicas={num} deployment {deployment} -n {namespace}.\n\n        :type num: str\n        :param num: Specified Size.\n\n        :type deployment: str\n        :param deployment: Deployment Name.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: String, Output of the command in python string format or Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(num=num, deployment=deployment, namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n\n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_show_metrics_node/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl show metrics</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego shows metrics for a given node using `kubectl`.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_show_metrics_node(handle: object, k8s_cli_string: str, node_name:str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl top node {node_name}.\r\n        node_name: Node Name.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, k8s_cli_string and node_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_show_metrics_node/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_show_metrics_node/k8s_kubectl_show_metrics_node.json",
    "content": "{\r\n    \"action_title\": \"Kubectl show metrics\",\r\n    \"action_description\": \"Kubectl show metrics for a given node\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_show_metrics_node\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_NODE\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_show_metrics_node/k8s_kubectl_show_metrics_node.py",
    "content": "from pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl Show metrics for a given node',\n        default='kubectl top node {node_name}'\n    )\n    node_name: str = Field(\n        title='Node Name',\n        description='Node Name'\n    )\n\n\ndef k8s_kubectl_show_metrics_node_printer(data: str):\n    if data is None:\n        print(\"Error while executing command\")\n        return\n\n    print(data)\n\n\ndef k8s_kubectl_show_metrics_node(handle, k8s_cli_string: str, node_name: str) -> str:\n    \"\"\"k8s_kubectl_show_metrics_node executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl top node {node_name}.\n\n        :type node_name: str\n        :param node_name: Node Name.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(node_name=node_name)\n    result = handle.run_native_cmd(k8s_cli_string)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n\n    if result.stderr:\n        raise ApiException(\n            f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_show_metrics_pod/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Kubectl show metrics</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego shows metrics for a given pod using `kubectl`.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_kubectl_show_metrics_pod(handle: object, k8s_cli_string: str, pod_name:str, namespace:str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        k8s_cli_string: kubectl top pod {pod_name} -n {namespace}.\r\n        pod_name: Pod Name.\r\n        namespace: Namespace.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, k8s_cli_string, pod_name and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_show_metrics_pod/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_show_metrics_pod/k8s_kubectl_show_metrics_pod.json",
    "content": "{\r\n    \"action_title\": \"Kubectl show metrics\",\r\n    \"action_description\": \"Kubectl show metrics for a given pod\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_kubectl_show_metrics_pod\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_kubectl_show_metrics_pod/k8s_kubectl_show_metrics_pod.py",
    "content": "from pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\n\nclass InputSchema(BaseModel):\n    k8s_cli_string: str = Field(\n        title='Kubectl Command',\n        description='kubectl show metrics for a given pod',\n        default='kubectl top pod {pod_name} -n {namespace}'\n    )\n    pod_name: str = Field(\n        title='Pod Name',\n        description='Pod Name'\n    )\n    namespace: str = Field(\n        title='Namespace',\n        description='Namespace'\n    )\n\ndef k8s_kubectl_show_metrics_pod_printer(data: str):\n    if data is None:\n        print(\"Error while executing command\")\n        return\n\n    print(data)\n\ndef k8s_kubectl_show_metrics_pod(\n        handle,\n        k8s_cli_string: str,\n        pod_name:str,\n        namespace:str\n        ) -> str:\n    \"\"\"k8s_kubectl_show_metrics_pod executes the given kubectl command\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type k8s_cli_string: str\n        :param k8s_cli_string: kubectl top pod {pod_name} -n {namespace}.\n\n        :type pod_name: str\n        :param pod_name: Pod Name.\n\n        :type namespace: str\n        :param namespace: Namespace.\n\n        :rtype: String, Output of the command in python string format or\n        Empty String in case of Error.\n    \"\"\"\n    k8s_cli_string = k8s_cli_string.format(pod_name=pod_name, namespace=namespace)\n    result = handle.run_native_cmd(k8s_cli_string)\n    if result is None:\n        print(\n            f\"Error while executing command ({k8s_cli_string}) (empty response)\")\n        return \"\"\n\n    if result.stderr:\n        raise ApiException(\n            f\"Error occurred while executing command {k8s_cli_string} {result.stderr}\")\n\n    return result.stdout\n"
  },
  {
    "path": "Kubernetes/legos/k8s_list_all_matching_pods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>List matching name pods</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego List all pods matching a particular name string. The matching string can be a regular expression too.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_list_all_matching_pods(handle: object, matchstr: str, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        matchstr: Matching name string. The matching string can be a regular expression too.\r\n        namespace: Namespace\r\n\r\n## Lego Input\r\nThis Lego take three input handle, matchstr and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_list_all_matching_pods/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_list_all_matching_pods/k8s_list_all_matching_pods.json",
    "content": "{\r\n    \"action_title\": \"List matching name pods\",\r\n    \"action_description\": \"List all pods matching a particular name string. The matching string can be a regular expression too\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_list_all_matching_pods\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n  "
  },
  {
    "path": "Kubernetes/legos/k8s_list_all_matching_pods/k8s_list_all_matching_pods.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nimport re\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nfrom kubernetes import client\n\npp = pprint.PrettyPrinter(indent=2)\n\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default='all',\n        title='Namespace',\n        description='Kubernetes namespace')\n    matchstr: str = Field(\n        title='Match String',\n        description='''\n                Matching name string. The matching string can be a regular expression too.\n                For eg. ^[a-zA-Z0-9]+$ //string consists only of alphanumerics.\n            ''')\n\ndef k8s_list_all_matching_pods_printer(output):\n    if output is None:\n        return\n\n    (match_pods, data) = output\n    if len(match_pods) > 0:\n        print(\"\\n\")\n        print(tabulate(data, tablefmt=\"grid\", headers=[\n            \"Pod Ip\", \"Namespace\", \"Name\", \"Status\", \"Start Time\"]))\n    if not data:\n        pp.pprint(\"No Matching Pods !!!\")\n\n\ndef k8s_list_all_matching_pods(handle, matchstr: str, namespace: str = 'all') -> Tuple:\n    \"\"\"k8s_list_all_matching_pods lists all matching pods\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type matchstr: str\n        :param matchstr: Matching name string. The matching string can be a regular expression too.\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace. Use 'all' to search all namespaces.\n\n        :rtype: Tuple of (matching pod objects, tabular data about them)\n    \"\"\"\n    coreApiClient = client.CoreV1Api(api_client=handle)\n\n    data = []\n    match_pods = []\n    # 'all' (the default) means search across every namespace rather than a\n    # namespace literally named \"all\"\n    if namespace == 'all':\n        res = coreApiClient.list_pod_for_all_namespaces(pretty=True)\n    else:\n        res = coreApiClient.list_namespaced_pod(namespace=namespace, pretty=True)\n    if len(res.items) > 0:\n        match_pods = list(filter(lambda x: (\n            re.search(fr'({matchstr})', x.metadata.name) is not None), res.items))\n        for pod in match_pods:\n            data.append([pod.status.pod_ip, pod.metadata.namespace,\n                         pod.metadata.name, pod.status.phase, pod.status.start_time])\n\n    return (match_pods, data)\n"
  },
  {
    "path": "Kubernetes/legos/k8s_list_pvcs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>List pvcs</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego List pvcs by namespace. By default, it will list all pvcs in all namespaces.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_list_pvcs(handle: object, namespace: str)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and namespace.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_list_pvcs/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_list_pvcs/k8s_list_pvcs.json",
    "content": "{\r\n    \"action_title\": \"List pvcs\",\r\n    \"action_description\": \"List pvcs by namespace. By default, it will list all pvcs in all namespaces.\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_version\": \"2.0.0\",\r\n    \"action_entry_function\": \"k8s_list_pvcs\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_PVC\"]\r\n}\r\n  "
  },
  {
    "path": "Kubernetes/legos/k8s_list_pvcs/k8s_list_pvcs.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport pprint\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\nfrom kubernetes.client.rest import ApiException\n\nclass InputSchema(BaseModel):\n    namespace: Optional[str] = Field(\n        default='',\n        title='Namespace',\n        description='Kubernetes namespace')\n\ndef k8s_list_pvcs_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\ndef k8s_list_pvcs(handle, namespace: str = '') -> List:\n    \"\"\"k8s_list_pvcs list pvcs\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :rtype: List\n    \"\"\"\n    if namespace == '':\n        kubectl_command = ('kubectl get pvc -A --output=jsonpath=\\'{range .items[*]}'\n                           '{@.metadata.namespace}{\",\"}{@.metadata.name}{\"\\\\n\"}{end}\\'')\n    else:\n        kubectl_command = ('kubectl get pvc -n ' + namespace + ' --output=jsonpath=\\''\n                    '{range .items[*]}{@.metadata.namespace}{\",\"}{@.metadata.name}{\"\\\\n\"}{end}\\'')\n    result = handle.run_native_cmd(kubectl_command)\n\n    if result is None:\n        print(\n            f\"Error while executing command ({kubectl_command}) (empty response)\")\n        return []\n        \n    if result.stderr:\n        raise ApiException(f\"Error occurred while executing command {kubectl_command} {result.stderr}\")\n\n    names_list = [y for y in (x.strip() for x in result.stdout.splitlines()) if y]\n    output = []\n    for i in names_list:\n        ns, name = i.split(\",\")\n        output.append({\"Namespace\": ns, \"Name\":name})\n    return output\n"
  },
  {
    "path": "Kubernetes/legos/k8s_measure_worker_node_network_bandwidth/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Measure K8s worker node network bandwidth</h1>\n\n## Description\nMeasures the network bandwidth for each worker node using a DaemonSet and returns the results.\n\n## Lego Details\n\tk8s_measure_worker_node_network_bandwidth(handle, namespace: str)\n\t\thandle: Object of type unSkript K8S Connector.\n\t\tnamespace: The namespace where the DaemonSet will be deployed.\n\n\n## Lego Input\nThis Lego takes inputs handle, namespace.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_measure_worker_node_network_bandwidth/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_measure_worker_node_network_bandwidth/k8s_measure_worker_node_network_bandwidth.json",
    "content": "{\n  \"action_title\": \"Measure K8s worker node network bandwidth\",\n  \"action_description\": \"Measures the network bandwidth for each worker node using a DaemonSet and returns the results.\",\n  \"action_type\": \"LEGO_TYPE_K8S\",\n  \"action_entry_function\": \"k8s_measure_worker_node_network_bandwidth\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" ]\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_measure_worker_node_network_bandwidth/k8s_measure_worker_node_network_bandwidth.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom typing import List\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\nfrom tabulate import tabulate \nimport time\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    namespace_to_check_bandwidth: str = Field(description='The namespace where the DaemonSet will be deployed.', title='Namespace')\n\n\n\ndef pods_have_written_results(handle, core_v1, label_selector, namespace, timeout=150) -> bool:\n    \"\"\"Check if all pods with the given label selector have written their results.\"\"\"\n    end_time = time.time() + timeout\n    # Marks the beginning of polling\n    time.sleep(5)\n    while time.time() < end_time:\n        pods = core_v1.list_namespaced_pod(namespace=namespace, label_selector=label_selector).items\n        all_written_results = True\n        for pod in pods:\n            pod_name = pod.metadata.name\n            check_files_command = f\"kubectl exec -n {namespace} {pod_name} -- ls /results/\"\n            result = handle.run_native_cmd(check_files_command)\n\n            if \"time.txt\" in result.stdout:\n                continue\n            elif \"in_progress.txt\" in result.stdout:\n                all_written_results = False\n                break\n            else:\n                all_written_results = False\n                break\n\n        if all_written_results:\n            return True\n        # Retrying in 2 seconds...\n        time.sleep(2)\n\n    return False\n\n\ndef k8s_measure_worker_node_network_bandwidth_printer(output):\n    \"\"\"Print the network bandwidth results in tabular format.\"\"\"\n    if isinstance(output, list) and isinstance(output[0], str):\n        print(output[0])\n    elif output:\n        headers = [\"Node\", \"Bandwidth\"]\n        table_data = [[entry['Node'], entry['Bandwidth'].replace('Time taken: ', '')] for entry in output]\n        table = tabulate(table_data, 
headers=headers, tablefmt='grid')\n        print(table)\n    else:\n        print(\"No data available or access denied.\")\n\ndef k8s_measure_worker_node_network_bandwidth(handle, namespace_to_check_bandwidth: str) -> List:\n    \"\"\"\n     k8s_measure_worker_node_network_bandwidth measures the network bandwidth for each worker node using a DaemonSet and returns the results.\n\n    :type handle: object\n    :param handle: Object returned from the Task validate method\n\n    :type namespace: str\n    :param namespace: The namespace where the DaemonSet will be deployed.\n\n    :return: List containing node and bandwidth details.\n    \"\"\"\n\n    # DaemonSet spec to run our bandwidth test\n    daemonset = {\n        \"apiVersion\": \"apps/v1\",\n        \"kind\": \"DaemonSet\",\n        \"metadata\": {\"name\": \"bandwidth-tester\"},\n        \"spec\": {\n            \"selector\": {\"matchLabels\": {\"app\": \"bandwidth-tester\"}},\n            \"template\": {\n                \"metadata\": {\"labels\": {\"app\": \"bandwidth-tester\"}},\n                \"spec\": {\n                    \"containers\": [\n                        {\n                            \"name\": \"tester\",\n                            \"image\": \"appropriate/curl\",\n                             \"command\": [\n                                            \"sh\",\n                                            \"-c\",\n                                            (\"touch /results/in_progress.txt && \"\n                                            \"start_time=$(date +%s) && \"\n                                            \"curl -O https://speed.hetzner.de/100MB.bin && \"\n                                            \"end_time=$(date +%s) && \"\n                                            \"duration=$((end_time - start_time)) && \"\n                                            \"echo 'Time taken: '$duration' seconds' > /results/time.txt && \"\n                                            \"rm 
/results/in_progress.txt\")\n                                        ],\n                            \"volumeMounts\": [{\"name\": \"results\", \"mountPath\": \"/results\"}],\n                        }\n                    ],\n                    \"volumes\": [{\"name\": \"results\", \"emptyDir\": {}}],\n                },\n            },\n        },\n    }\n\n\n    v1 = client.AppsV1Api(api_client=handle)\n    core_v1 = client.CoreV1Api(api_client=handle)\n    try:\n        try:\n            v1.delete_namespaced_daemon_set(name=\"bandwidth-tester\", namespace=namespace_to_check_bandwidth, \n                                            propagation_policy=\"Foreground\", grace_period_seconds=0)\n        except ApiException as ae:\n            if ae.status == 404:  # Not Found error\n                print(f\"Checking for an existing DaemonSet 'bandwidth-tester' in namespace {namespace_to_check_bandwidth}...\")\n            elif ae.status == 403:\n                return [\"Forbidden: The service account does not have permission to create/delete daemonset.\"]\n            else:\n                raise ae\n        print(f\"Deploying DaemonSet 'bandwidth-tester' in namespace {namespace_to_check_bandwidth}...\")\n        v1.create_namespaced_daemon_set(namespace=namespace_to_check_bandwidth, body=daemonset)\n\n        print(\"Waiting for DaemonSet to run on all nodes...\")\n        if not pods_have_written_results(handle, core_v1, \"app=bandwidth-tester\", namespace_to_check_bandwidth):\n            print(\"Timeout waiting for pods to write results.\")\n            return []\n\n        # Collect results\n        pods = core_v1.list_namespaced_pod(namespace=namespace_to_check_bandwidth, label_selector=\"app=bandwidth-tester\").items\n        results = []\n        for pod in pods:\n            pod_name = pod.metadata.name\n            retry_count = 0\n            max_retries = 20\n            delay_between_retries = 5\n            while retry_count < max_retries:\n            
    print(f\"Fetching results from pod: {pod_name}, status: {pod.status.phase}\")\n\n                if pod.status.phase != \"Running\":\n                    time.sleep(delay_between_retries)\n                    retry_count += 1\n                    continue\n\n                fetch_results_command = f\"kubectl exec -n {namespace_to_check_bandwidth} {pod.metadata.name} -- cat /results/time.txt\"\n                fetch_output = handle.run_native_cmd(fetch_results_command)\n\n                if fetch_output and not fetch_output.stderr:\n                    bandwidth = fetch_output.stdout.strip()\n                    results.append({\"Node\": pod.spec.node_name, \"Bandwidth\": bandwidth})\n                    break\n                else:\n                    retry_count += 1\n\n                    print(f\"Retrying in {delay_between_retries} seconds...\")\n                    time.sleep(delay_between_retries)\n\n        print(\"\\nCleaning up: Deleting the DaemonSet after collecting results...\\n\")\n        v1.delete_namespaced_daemon_set(name=\"bandwidth-tester\", namespace=namespace_to_check_bandwidth, \n                                        propagation_policy=\"Foreground\", grace_period_seconds=0)\n\n        return results\n\n    except Exception as e:\n        print(\"An error occurred. Performing cleanup...\")\n        # Cleanup in case of exceptions: Ensure that DaemonSet is deleted\n        try:\n            v1.delete_namespaced_daemon_set(name=\"bandwidth-tester\", namespace=namespace_to_check_bandwidth, \n                                            propagation_policy=\"Foreground\", grace_period_seconds=0)\n        except Exception as cleanup_err:\n            print(f\"Error during cleanup: {cleanup_err}\")\n        raise e\n\n\n"
  },
  {
    "path": "Kubernetes/legos/k8s_remove_pod_from_deployment/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Remove POD From Deployment</h2>\n\n<br>\n\n## Description\nThis Action can be used to remove a POD from Kubernetes deployment. We do it by using the `patch` API to update the existing label with `-out-for-maintenance` appended to them. This would make sure that the POD is removed from the deployment.\n\n## Lego Details\n\n    k8s_remove_pod_from_deployment(handle: object, pod_name: str, namespace: str)\n\n        handle: Object of type unSkript K8S Connector\n        pod_name: String, Name of the POD (Mandatory parameter)\n        namespace: String, Namespace where the POD exists\n\n\n## Lego Input\nThis Lego takes three mandatory inputs. Handle (K8S) object returned from the `task.validator(...)`, POD Name and Namespace where the POD exists. \n\n## Lego Output\nHere are two sample outputs\n\n<img src=\"./1.png\">\n<img src=\"./2.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_remove_pod_from_deployment/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_remove_pod_from_deployment/k8s_remove_pod_from_deployment.json",
    "content": "{\n    \"action_title\": \"Remove POD from Deployment\",\n    \"action_description\": \"Remove POD from Deployment\",\n    \"action_type\": \"LEGO_TYPE_K8S\",\n    \"action_entry_function\": \"k8s_remove_pod_from_deployment\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_POD\"]\n\n}"
  },
  {
    "path": "Kubernetes/legos/k8s_remove_pod_from_deployment/k8s_remove_pod_from_deployment.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\n\nclass InputSchema(BaseModel):\n    pod_name: str = Field(\n        title=\"Pod Name\",\n        description=\"K8S Pod Name\"\n    )\n    namespace: str = Field(\n        title=\"Namespace\",\n        description=\"K8S Namespace where the POD exists\"\n    )\n\ndef k8s_remove_pod_from_deployment_printer(output):\n    if not output:\n        return\n\n    pprint.pprint(output)\n\n\ndef k8s_remove_pod_from_deployment(handle, pod_name: str, namespace: str):\n    \"\"\"k8s_remove_pod_from_deployment This action can be used to remove the given POD in a namespace\n       from a deployment. \n\n       :type handle: Object\n       :param handle: Object returned from task.validate(...) routine\n\n       :type pod_name: str\n       :param pod_name: Name of the K8S POD (Mandatory parameter)\n\n       :type namespace: str \n       :param namespace: Namespace where the above K8S POD is found (Mandatory parameter)\n\n       :rtype: None\n    \"\"\"\n    if not pod_name or not namespace:\n        raise Exception(\"Pod Name and Namespace are Mandatory fields\")\n\n    core_api = client.CoreV1Api(api_client=handle)\n    apps_api = client.AppsV1Api(api_client=handle)\n\n    # Labels are key-value pairs that can be attached to Kubernetes objects.\n    # Labels can be used to organize and group objects, and they can be used to\n    # select objects for operations such as deletion and updates.\n\n    # Selectors are used to select a group of objects for an operation. 
Selectors can be\n    # specified using labels, and they can be used to select all objects with a given\n    # label or all objects that match a certain pattern.\n\n    # Kubernetes deployment uses Labels and Selectors to select which pods need to be\n    # updated when a new version of a pod is deployed.\n\n    # Here by modifying the selector label for deployment, we are making sure the pod\n    # is removed from the deployment. We verify the same by listing the pod labels after\n    # doing a patch operation\n    try:\n        pod = core_api.read_namespaced_pod(name=pod_name, namespace=namespace)\n        owner_references = pod.metadata.owner_references\n        deployment_name = ''\n        if isinstance(owner_references, list):\n            owner_name = owner_references[0].name\n            owner_kind = owner_references[0].kind \n            if owner_kind == 'Deployment':\n                deployment_name = owner_name\n            else:\n                raise Exception(f\"Unexpected owner_references kind in pod metadata {pod.metadata.owner_references} Only Deployment is supported\")\n\n        if deployment_name != '':\n            deployment = apps_api.read_namespaced_deployment(\n                name=deployment_name,\n                namespace=namespace\n                )\n            deployment_labels= [key for key, value in deployment.spec.selector.match_labels.items()]\n\n            pod_labels = [key for key,value in pod.metadata.labels.items()]\n\n            common_labels = set(deployment_labels) & set(pod_labels)\n            new_label = {}\n            for label in common_labels:\n                new_label[label] = pod.metadata.labels.get(label) + '-out-for-maintenance'\n\n            pod.metadata.labels.update(new_label)\n            core_api.patch_namespaced_pod(pod_name, namespace, pod)\n        else:\n            print(f\"ERROR: Could not remove {pod_name} from its deployment in {namespace} \")\n    except Exception as e:\n        raise e\n"
  },
  {
    "path": "Kubernetes/legos/k8s_update_command_in_pod_spec/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Update Commands in a Kubernetes POD</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Update Commands in a Kubernetes POD in a given Namespace.\r\n\r\n\r\n## Lego Details\r\n\r\n    k8s_update_command_in_pod_spec(handle: object, namespace: str, deployment_name: str, command: List)\r\n\r\n        handle: Object of type unSkript K8S Connector\r\n        namespace: Kubernetes namespace.\r\n        deployment_name: Kubernetes Deployment Name.\r\n        command: List of Commands to update on the Pod Deployment.\r\n\r\n## Lego Input\r\nThis Lego take four input handle, namespace, deployment_name and command.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Kubernetes/legos/k8s_update_command_in_pod_spec/__init__.py",
    "content": ""
  },
  {
    "path": "Kubernetes/legos/k8s_update_command_in_pod_spec/k8s_update_command_in_pod_spec.json",
    "content": "{\r\n    \"action_title\": \"Update Commands in a Kubernetes POD in a given Namespace\",\r\n    \"action_description\": \"Update Commands in a Kubernetes POD in a given Namespace\",\r\n    \"action_type\": \"LEGO_TYPE_K8S\",\r\n    \"action_entry_function\": \"k8s_update_command_in_pod_spec\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_K8S\",\"CATEGORY_TYPE_K8S_KUBECTL\",\"CATEGORY_TYPE_K8S_POD\"]\r\n}\r\n    "
  },
  {
    "path": "Kubernetes/legos/k8s_update_command_in_pod_spec/k8s_update_command_in_pod_spec.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport pprint\nfrom typing import List\nfrom typing import Tuple\nfrom pydantic import BaseModel, Field\nfrom kubernetes import client\nfrom kubernetes.client.rest import ApiException\n\npp = pprint.PrettyPrinter(indent=2)\n\nclass InputSchema(BaseModel):\n    namespace: str = Field(\n        title='Namespace',\n        description='Kubernetes namespace')\n    deployment_name: str = Field(\n        title='Deployment Name',\n        description='Kubernetes Deployment Name')\n    command: list = Field(\n        title='Command',\n        description='''\n            List of Commands to update on the Pod Deployment\n            For eg: [\"nginx\" , \"-t\"]\n            ''')\n\n\ndef k8s_update_command_in_pod_spec_printer(output):\n    if output is None:\n        return\n    command = output[0]\n    pprint.pprint(command)\n\n\ndef k8s_update_command_in_pod_spec(\n        handle,\n        namespace: str,\n        deployment_name: str,\n        command: List\n        ) -> Tuple:\n    \"\"\"k8s_update_command_in_pod_spec updateb command in pod spec\n\n        :type handle: object\n        :param handle: Object returned from the Task validate method\n\n        :type namespace: str\n        :param namespace: Kubernetes namespace.\n\n        :type deployment_name: strdeployment_name: Kubernetes Deployment Name.\n        :param \n\n        :type command: List\n        :param command: List of Commands to update on the Pod Deployment.\n\n        :rtype: Tuple\n    \"\"\"\n    coreApiClient = client.AppsV1Api(api_client=handle)\n\n    try:\n        deployment = coreApiClient.read_namespaced_deployment(\n            deployment_name, namespace, pretty=True)\n        deployment.spec.template.spec.containers[0].command = list(command)\n        resp = coreApiClient.patch_namespaced_deployment(\n            name=deployment.metadata.name, namespace=namespace, body=deployment\n        )\n        return 
(resp.spec.template.spec.containers[0].command, resp)\n    except ApiException as e:\n        error = f'An Exception occured while executing the command :{e}'\n        pp.pprint(error)\n        return (None, error)\n"
  },
  {
    "path": "License",
    "content": "\r\n                                 Apache License\r\n                           Version 2.0, January 2004\r\n                        http://www.apache.org/licenses/\r\n\r\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\r\n\r\n   1. Definitions.\r\n\r\n      \"License\" shall mean the terms and conditions for use, reproduction,\r\n      and distribution as defined by Sections 1 through 9 of this document.\r\n\r\n      \"Licensor\" shall mean the copyright owner or entity authorized by\r\n      the copyright owner that is granting the License.\r\n\r\n      \"Legal Entity\" shall mean the union of the acting entity and all\r\n      other entities that control, are controlled by, or are under common\r\n      control with that entity. For the purposes of this definition,\r\n      \"control\" means (i) the power, direct or indirect, to cause the\r\n      direction or management of such entity, whether by contract or\r\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\r\n      outstanding shares, or (iii) beneficial ownership of such entity.\r\n\r\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\r\n      exercising permissions granted by this License.\r\n\r\n      \"Source\" form shall mean the preferred form for making modifications,\r\n      including but not limited to software source code, documentation\r\n      source, and configuration files.\r\n\r\n      \"Object\" form shall mean any form resulting from mechanical\r\n      transformation or translation of a Source form, including but\r\n      not limited to compiled object code, generated documentation,\r\n      and conversions to other media types.\r\n\r\n      \"Work\" shall mean the work of authorship, whether in Source or\r\n      Object form, made available under the License, as indicated by a\r\n      copyright notice that is included in or attached to the work\r\n      (an example is provided in the Appendix below).\r\n\r\n      
\"Derivative Works\" shall mean any work, whether in Source or Object\r\n      form, that is based on (or derived from) the Work and for which the\r\n      editorial revisions, annotations, elaborations, or other modifications\r\n      represent, as a whole, an original work of authorship. For the purposes\r\n      of this License, Derivative Works shall not include works that remain\r\n      separable from, or merely link (or bind by name) to the interfaces of,\r\n      the Work and Derivative Works thereof.\r\n\r\n      \"Contribution\" shall mean any work of authorship, including\r\n      the original version of the Work and any modifications or additions\r\n      to that Work or Derivative Works thereof, that is intentionally\r\n      submitted to Licensor for inclusion in the Work by the copyright owner\r\n      or by an individual or Legal Entity authorized to submit on behalf of\r\n      the copyright owner. For the purposes of this definition, \"submitted\"\r\n      means any form of electronic, verbal, or written communication sent\r\n      to the Licensor or its representatives, including but not limited to\r\n      communication on electronic mailing lists, source code control systems,\r\n      and issue tracking systems that are managed by, or on behalf of, the\r\n      Licensor for the purpose of discussing and improving the Work, but\r\n      excluding communication that is conspicuously marked or otherwise\r\n      designated in writing by the copyright owner as \"Not a Contribution.\"\r\n\r\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\r\n      on behalf of whom a Contribution has been received by Licensor and\r\n      subsequently incorporated within the Work.\r\n\r\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\r\n      this License, each Contributor hereby grants to You a perpetual,\r\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\r\n      copyright license to reproduce, prepare Derivative Works of,\r\n      publicly display, publicly perform, sublicense, and distribute the\r\n      Work and such Derivative Works in Source or Object form.\r\n\r\n   3. Grant of Patent License. Subject to the terms and conditions of\r\n      this License, each Contributor hereby grants to You a perpetual,\r\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\r\n      (except as stated in this section) patent license to make, have made,\r\n      use, offer to sell, sell, import, and otherwise transfer the Work,\r\n      where such license applies only to those patent claims licensable\r\n      by such Contributor that are necessarily infringed by their\r\n      Contribution(s) alone or by combination of their Contribution(s)\r\n      with the Work to which such Contribution(s) was submitted. If You\r\n      institute patent litigation against any entity (including a\r\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\r\n      or a Contribution incorporated within the Work constitutes direct\r\n      or contributory patent infringement, then any patent licenses\r\n      granted to You under this License for that Work shall terminate\r\n      as of the date such litigation is filed.\r\n\r\n   4. Redistribution. 
You may reproduce and distribute copies of the\r\n      Work or Derivative Works thereof in any medium, with or without\r\n      modifications, and in Source or Object form, provided that You\r\n      meet the following conditions:\r\n\r\n      (a) You must give any other recipients of the Work or\r\n          Derivative Works a copy of this License; and\r\n\r\n      (b) You must cause any modified files to carry prominent notices\r\n          stating that You changed the files; and\r\n\r\n      (c) You must retain, in the Source form of any Derivative Works\r\n          that You distribute, all copyright, patent, trademark, and\r\n          attribution notices from the Source form of the Work,\r\n          excluding those notices that do not pertain to any part of\r\n          the Derivative Works; and\r\n\r\n      (d) If the Work includes a \"NOTICE\" text file as part of its\r\n          distribution, then any Derivative Works that You distribute must\r\n          include a readable copy of the attribution notices contained\r\n          within such NOTICE file, excluding those notices that do not\r\n          pertain to any part of the Derivative Works, in at least one\r\n          of the following places: within a NOTICE text file distributed\r\n          as part of the Derivative Works; within the Source form or\r\n          documentation, if provided along with the Derivative Works; or,\r\n          within a display generated by the Derivative Works, if and\r\n          wherever such third-party notices normally appear. The contents\r\n          of the NOTICE file are for informational purposes only and\r\n          do not modify the License. 
You may add Your own attribution\r\n          notices within Derivative Works that You distribute, alongside\r\n          or as an addendum to the NOTICE text from the Work, provided\r\n          that such additional attribution notices cannot be construed\r\n          as modifying the License.\r\n\r\n      You may add Your own copyright statement to Your modifications and\r\n      may provide additional or different license terms and conditions\r\n      for use, reproduction, or distribution of Your modifications, or\r\n      for any such Derivative Works as a whole, provided Your use,\r\n      reproduction, and distribution of the Work otherwise complies with\r\n      the conditions stated in this License.\r\n\r\n   5. Submission of Contributions. Unless You explicitly state otherwise,\r\n      any Contribution intentionally submitted for inclusion in the Work\r\n      by You to the Licensor shall be under the terms and conditions of\r\n      this License, without any additional terms or conditions.\r\n      Notwithstanding the above, nothing herein shall supersede or modify\r\n      the terms of any separate license agreement you may have executed\r\n      with Licensor regarding such Contributions.\r\n\r\n   6. Trademarks. This License does not grant permission to use the trade\r\n      names, trademarks, service marks, or product names of the Licensor,\r\n      except as required for reasonable and customary use in describing the\r\n      origin of the Work and reproducing the content of the NOTICE file.\r\n\r\n   7. Disclaimer of Warranty. Unless required by applicable law or\r\n      agreed to in writing, Licensor provides the Work (and each\r\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\r\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n      implied, including, without limitation, any warranties or conditions\r\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\r\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\r\n      appropriateness of using or redistributing the Work and assume any\r\n      risks associated with Your exercise of permissions under this License.\r\n\r\n   8. Limitation of Liability. In no event and under no legal theory,\r\n      whether in tort (including negligence), contract, or otherwise,\r\n      unless required by applicable law (such as deliberate and grossly\r\n      negligent acts) or agreed to in writing, shall any Contributor be\r\n      liable to You for damages, including any direct, indirect, special,\r\n      incidental, or consequential damages of any character arising as a\r\n      result of this License or out of the use or inability to use the\r\n      Work (including but not limited to damages for loss of goodwill,\r\n      work stoppage, computer failure or malfunction, or any and all\r\n      other commercial damages or losses), even if such Contributor\r\n      has been advised of the possibility of such damages.\r\n\r\n   9. Accepting Warranty or Additional Liability. While redistributing\r\n      the Work or Derivative Works thereof, You may choose to offer,\r\n      and charge a fee for, acceptance of support, warranty, indemnity,\r\n      or other liability obligations and/or rights consistent with this\r\n      License. 
However, in accepting such obligations, You may act only\r\n      on Your own behalf and on Your sole responsibility, not on behalf\r\n      of any other Contributor, and only if You agree to indemnify,\r\n      defend, and hold each Contributor harmless for any liability\r\n      incurred by, or claims asserted against, such Contributor by reason\r\n      of your accepting any such warranty or additional liability.\r\n\r\n   END OF TERMS AND CONDITIONS\r\n\r\n   APPENDIX: How to apply the Apache License to your work.\r\n\r\n      To apply the Apache License to your work, attach the following\r\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\r\n      replaced with your own identifying information. (Don't include\r\n      the brackets!)  The text should be enclosed in the appropriate\r\n      comment syntax for the file format. We also recommend that a\r\n      file or class name and description of purpose be included on the\r\n      same \"printed page\" as the copyright notice for easier\r\n      identification within third-party archives.\r\n\r\n   Copyright [yyyy] [name of copyright owner]\r\n\r\n   Licensed under the Apache License, Version 2.0 (the \"License\");\r\n   you may not use this file except in compliance with the License.\r\n   You may obtain a copy of the License at\r\n\r\n       http://www.apache.org/licenses/LICENSE-2.0\r\n\r\n   Unless required by applicable law or agreed to in writing, software\r\n   distributed under the License is distributed on an \"AS IS\" BASIS,\r\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n   See the License for the specific language governing permissions and\r\n   limitations under the License.\r\n"
  },
  {
    "path": "Mantishub/README.md",
    "content": "\n# Mantishub Actions\n* [Get Mantishub handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mantishub/legos/mantishub_get_handle/README.md): Get Mantishub handle\n"
  },
  {
    "path": "Mantishub/__init__.py",
    "content": ""
  },
  {
    "path": "Mantishub/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Mantishub/legos/mantishub_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Get Mantishub handle</h1>\n\n## Description\nThis Lego returns the Mantishub handle.\n\n\n## Lego Details\n\n    mantishub_get_handle(handle)\n\n        handle: Object of type unSkript MANTISHUB Connector\n\n## Lego Input\nThis Lego takes only one input, handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mantishub/legos/mantishub_get_handle/mantishub_get_handle.json",
    "content": "{\n\"action_title\": \"Get Mantishub handle\",\n\"action_description\": \"Get Mantishub handle\",\n\"action_type\": \"LEGO_TYPE_MANTISHUB\",\n\"action_entry_function\": \"mantishub_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_supports_iteration\": false,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MANTISHUB\"]\n}\n"
  },
  {
    "path": "Mantishub/legos/mantishub_get_handle/mantishub_get_handle.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef mantishub_get_handle(handle):\n    \"\"\" mantishub_get_handle returns the Mantishub handle.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n        \n        :rtype: Mantishub handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Mongo/README.md",
    "content": "\n# Mongo Actions\n* [MongoDB add new field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_add_new_field_in_collections/README.md): MongoDB add new field in all collections\n* [MongoDB Aggregate Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_aggregate_command/README.md): MongoDB Aggregate Command\n* [MongoDB Atlas cluster cloud backup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_atlas_cluster_backup/README.md): Trigger on-demand Atlas cloud backup\n* [Get large MongoDB indices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_check_large_index_size/README.md): This action compares the size of each index with a given threshold and returns any indexes that exceed the threshold.\n* [Get MongoDB large databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_compare_disk_size_to_threshold/README.md): This action compares the total disk size used by MongoDB to a given threshold.\n* [MongoDB Count Documents](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_count_documents/README.md): MongoDB Count Documents\n* [MongoDB Create Collection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_collection/README.md): MongoDB Create Collection\n* [MongoDB Create Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_database/README.md): MongoDB Create Database\n* [Delete collection from MongoDB database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_collection/README.md): Delete collection from MongoDB database\n* [MongoDB Delete Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_database/README.md): 
MongoDB Delete Database\n* [MongoDB Delete Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_document/README.md): MongoDB Delete Document\n* [MongoDB Distinct Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_distinct_command/README.md): MongoDB Distinct Command\n* [MongoDB Find Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_document/README.md): MongoDB Find Document\n* [MongoDB Find One](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_one/README.md): MongoDB Find One returns a single entry that matches the query.\n* [Get MongoDB Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_handle/README.md): Get MongoDB Handle\n* [MongoDB get metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_metrics/README.md): This action retrieves various metrics such as index size, disk size per collection for all databases and collections.\n* [Get Mongo Server Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_server_status/README.md): Get Mongo Server Status and check for any abnormalities.\n* [MongoDB Insert Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_insert_document/README.md): MongoDB Insert Document\n* [MongoDB kill queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_kill_queries/README.md): MongoDB kill queries\n* [Get list of collections in MongoDB Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_collections/README.md): Get list of collections in MongoDB Database\n* [Get list of MongoDB 
Databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_databases/README.md): Get list of MongoDB Databases\n* [MongoDB list queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_queries/README.md): MongoDB list queries\n* [MongoDB Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_read_query/README.md): MongoDB Read Query\n* [MongoDB remove a field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_remove_field_in_collections/README.md): MongoDB remove a field in all collections\n* [MongoDB Rename Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_rename_database/README.md): MongoDB Rename Database\n* [MongoDB Update Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_update_document/README.md): MongoDB Update Document\n* [MongoDB Upsert Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_write_query/README.md): MongoDB Upsert Query\n"
  },
  {
    "path": "Mongo/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Mongo/legos/mongodb_add_new_field_in_collections/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB add new field in all collections</h1>\r\n\r\n## Description\r\nThis Lego adds new fields to every document in a MongoDB collection.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_add_new_field_in_collections(handle, database_name: str, collection_name: str, add_new_fields: dict, upsert: bool = True)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        add_new_fields: Add new fields to every document.\r\n        upsert: Allow creation of a new document, if one does not exist.\r\n\r\n\r\n## Lego Input\r\nThis Lego takes five inputs: handle, database_name, collection_name, add_new_fields and upsert.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_add_new_field_in_collections/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_add_new_field_in_collections/mongodb_add_new_field_in_collections.json",
    "content": "{\n\"action_title\": \"MongoDB add new field in all collections\",\n\"action_description\": \"MongoDB add new field in all collections\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_add_new_field_in_collections\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true ,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_add_new_field_in_collections/mongodb_add_new_field_in_collections.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database.'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection.'\n    )\n    add_new_fields: dict = Field(\n        title='Add new fields to every document',\n        description='''\n                The addition of fields apply in dictionary format.\n                For eg: {\"field\":\"value\"}.\n                '''\n    )\n    upsert: bool = Field(\n        True,\n        title='Upsert',\n        description='Allow creation of a new document, if one does not exist.'\n    )\n\n\ndef mongodb_add_new_field_in_collections_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    if isinstance(output, List):\n        for entry in output:\n            pprint.pprint(entry)\n    else:\n        pprint.pprint(output)\n\n\ndef mongodb_add_new_field_in_collections(\n        handle,\n        database_name: str,\n        collection_name: str,\n        add_new_fields: dict,\n        upsert: bool = True\n        ) -> List:\n    \"\"\"mongodb_add_new_field_in_collections Add new field to every document in a MongoDB collection.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :type add_new_fields: Dict\n        :param add_new_fields: Add new fields to every document.\n\n        :type upsert: bool\n        :param upsert: Allow creation of a new document, if one does not exist.\n\n        :rtype: List with the objectID.\n   
 \"\"\"\n    modifications = {\"$set\": add_new_fields}\n\n    try:\n        handle[database_name][collection_name].update_many(\n            {},\n            update=modifications,\n            upsert=upsert)\n        res = handle[database_name][collection_name].find()\n        result = []\n        for entry in res:\n            result.append(entry)\n        return result\n    except Exception as e:\n        return [e]\n"
  },
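The `$set` semantics this Action relies on can be illustrated without a live MongoDB. The sketch below is a plain-Python stand-in (no pymongo; `apply_set_to_all` is a hypothetical helper, not part of the Action) for what `update_many({}, {"$set": add_new_fields})` does to every document in a collection:

```python
def apply_set_to_all(documents, new_fields):
    # Mimic update_many({}, {"$set": new_fields}): merge new_fields
    # into every document, overwriting keys that already exist.
    for doc in documents:
        doc.update(new_fields)
    return documents

docs = [{"_id": 1, "name": "a"}, {"_id": 2, "name": "b"}]
apply_set_to_all(docs, {"status": "active"})
```

With `upsert=True`, MongoDB would additionally create a document if none matched; with the empty filter `{}` used here, every existing document is modified in place.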
  {
    "path": "Mongo/legos/mongodb_aggregate_command/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Aggregate Command</h1>\r\n\r\n## Description\r\nThis Lego runs aggregate commands on MongoDB.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_aggregate_command(handle, database_name: str, collection_name: str, pipeline: List)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        pipeline: A list of aggregation pipeline stages.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, database_name, collection_name and pipeline.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_aggregate_command/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_aggregate_command/mongodb_aggregate_command.json",
    "content": "{\n\"action_title\": \"MongoDB Aggregate Command\",\n\"action_description\": \"MongoDB Aggregate Command\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_aggregate_command\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_aggregate_command/mongodb_aggregate_command.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection'\n    )\n    pipeline: list = Field(\n        title='Pipeline',\n        description='''\n                A list of aggregation pipeline stages.\n                For eg: [  {\n                            \"$group\" :\n                                {\"_id\" : \"$user\", \"num_tutorial\" : {\"$sum\" : 1}}\n                            }\n                        ]\n                In the above example, the documents are grouped on the basis of the expression $user,\n                and the field num_tutorial uses the accumulator operator $sum to\n                calculate the number of tutorials for each user.\n            '''\n    )\n\n\ndef mongodb_aggregate_command_printer(output):\n    print(\"\\n\\n\")\n    if isinstance(output, List):\n        for entry in output:\n            pprint.pprint(entry)\n    else:\n        pprint.pprint(output)\n\n\ndef mongodb_aggregate_command(\n        handle,\n        database_name: str,\n        collection_name: str,\n        pipeline: List\n        ) -> List:\n    \"\"\"mongodb_aggregate_command runs an aggregation pipeline on the collection\n    and returns the resulting documents.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :type pipeline: List\n        :param pipeline: A list of aggregation pipeline stages.\n\n        :rtype: List of all the results of the query.\n    \"\"\"\n\n    try:\n        result = []\n        db = handle[database_name]\n        res = db[collection_name].aggregate(pipeline=pipeline)\n        for entry in res:\n            result.append(entry)\n        return result\n    except Exception as e:\n        return [e]\n"
  },
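The `$group`/`$sum` pipeline shown in the `pipeline` field description can be mimicked in plain Python, which makes its semantics easy to check. A minimal sketch (no MongoDB required; `group_sum_by_user` is a hypothetical helper, not part of the Action):

```python
from collections import Counter

def group_sum_by_user(documents):
    # Equivalent of [{"$group": {"_id": "$user", "num_tutorial": {"$sum": 1}}}]:
    # one output document per distinct user, counting that user's documents.
    counts = Counter(doc["user"] for doc in documents)
    return [{"_id": user, "num_tutorial": n} for user, n in counts.items()]

docs = [{"user": "amy"}, {"user": "bob"}, {"user": "amy"}]
result = group_sum_by_user(docs)
```

Against a real deployment, `mongodb_aggregate_command(handle, "mydb", "tutorials", pipeline)` would return the same shape of documents, computed server-side.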
  {
    "path": "Mongo/legos/mongodb_atlas_cluster_backup/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Atlas cluster cloud backup</h1>\r\n\r\n## Description\r\nThis Lego triggers an on-demand Atlas cloud backup.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_atlas_cluster_backup(handle, project_name: str, cluster_name: str, description: str, retention_in_days: int = 1)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        project_name : Atlas Project Name.\r\n        cluster_name : Atlas Cluster Name.\r\n        description : Description of the on-demand snapshot.\r\n        retention_in_days: Number of days that Atlas should retain the on-demand snapshot. Must be at least 1.\r\n\r\n## Lego Input\r\nThis Lego takes five inputs: handle, project_name, cluster_name, description and retention_in_days.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_atlas_cluster_backup/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_atlas_cluster_backup/mongodb_atlas_cluster_backup.json",
    "content": "{\n\"action_title\": \"MongoDB Atlas cluster cloud backup\",\n\"action_description\": \"Trigger on-demand Atlas cloud backup\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_atlas_cluster_backup\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_CLUSTER\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_atlas_cluster_backup/mongodb_atlas_cluster_backup.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nimport pprint\nfrom typing import Dict\nimport requests\nfrom pydantic import BaseModel, Field\nfrom requests.auth import HTTPDigestAuth\n\n\nclass InputSchema(BaseModel):\n    project_name: str = Field(\n        title='Project Name',\n        description='Atlas Project Name'\n    )\n    cluster_name: str = Field(\n        title='Cluster Name',\n        description='Atlas Cluster Name.'\n    )\n    description: str = Field(\n        title='Description',\n        description=\"Description of the on-demand snapshot.\"\n    )\n    retention_in_days: int = Field(\n        default=7,\n        title='Retention In Days',\n        description=('Number of days that Atlas should retain the '\n                     'on-demand snapshot. Must be at least 1.')\n    )\n\ndef mongodb_atlas_cluster_backup_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    pprint.pprint(output)\n\n\ndef mongodb_atlas_cluster_backup(\n        handle,\n        project_name: str,\n        cluster_name: str,\n        description: str,\n        retention_in_days: int = 1) -> Dict:\n    \"\"\"mongodb_atlas_cluster_backup Create backup of MongoDB Cluster.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type project_name: str\n        :param project_name: Atlas Project Name.\n\n        :type cluster_name: str\n        :param cluster_name: Atlas Cluster Name.\n\n        :type description: str\n        :param description: Description of the on-demand snapshot.\n\n        :type retention_in_days: int\n        :param retention_in_days: Retention In Days.\n\n        :rtype: Dict of SnapShot.\n    \"\"\"\n    atlas_base_url = handle.get_base_url()\n    public_key = handle.get_public_key()\n    private_key = handle.get_private_key()\n    auth = HTTPDigestAuth(public_key, private_key)\n\n    #Get Project ID from Project Name\n    url =  
atlas_base_url + f\"/groups/byName/{project_name}\"\n    try:\n        resp = requests.get(url, auth=auth)\n        resp.raise_for_status()\n    except Exception as e:\n        return {'Get project id failed': str(e)}\n\n    project_resp = resp.json()\n    group_id = project_resp.get(\"id\")\n\n    body = {\n        \"description\": description,\n        \"retentionInDays\" : retention_in_days\n    }\n    url =  atlas_base_url + (f\"/groups/{group_id}/clusters/{cluster_name}/backup\"\n                             \"/snapshots/?pretty=true\")\n    try:\n        response = requests.post(url, auth=auth, json=body)\n        response.raise_for_status()\n    except Exception as e:\n        return {'Start snapshot failed': str(e)}\n    return response.json()\n"
  },
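This Action issues two Atlas REST calls: a GET to resolve the project (group) id from its name, then a POST to start the snapshot. The URL and body construction can be sketched without touching the network (all values below are hypothetical, and `build_snapshot_request` is an illustration, not part of the Action):

```python
def build_snapshot_request(base_url, group_id, cluster_name, description, retention_in_days):
    # Assemble the on-demand snapshot endpoint and JSON body; nothing is sent here.
    url = f"{base_url}/groups/{group_id}/clusters/{cluster_name}/backup/snapshots/?pretty=true"
    body = {"description": description, "retentionInDays": retention_in_days}
    return url, body

url, body = build_snapshot_request(
    "https://cloud.mongodb.com/api/atlas/v1.0",
    "abc123", "Cluster0", "nightly snapshot", 1)
```

The real Action then POSTs `body` to `url` with HTTP digest auth (public/private API keys), matching the code above.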
  {
    "path": "Mongo/legos/mongodb_check_large_index_size/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get large MongoDB indices</h1>\n\n## Description\nThis action compares the size of each index with a given threshold and returns any indexes that exceed the threshold.\n\n## Action Details\n\tmongodb_check_large_index_size(handle, threshold: float = 2048000)\n\t\thandle: Object of type unSkript MONGODB Connector.\n\t\tthreshold: The threshold for index size in KB.\n\n## Action Input\nThis Action takes two inputs: handle and threshold.\n\n## Action Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_check_large_index_size/__init__.py",
    "content": ""
  },
  {
    "path": "Mongo/legos/mongodb_check_large_index_size/mongodb_check_large_index_size.json",
    "content": "{\n  \"action_title\": \"Get large MongoDB indices\",\n  \"action_description\": \"This action compares the size of each index with a given threshold and returns any indexes that exceed the threshold.\",\n  \"action_type\": \"LEGO_TYPE_MONGODB\",\n  \"action_entry_function\": \"mongodb_check_large_index_size\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\"],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Mongo/legos/mongodb_check_large_index_size/mongodb_check_large_index_size.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nDEFAULT_SIZE = 2048000  # 2 GB in KB\n\n\nclass InputSchema(BaseModel):\n    threshold: Optional[float] = Field(\n        DEFAULT_SIZE,\n        description='The threshold for total index size. Default is 2048000 KB (2 GB).',\n        title='Index threshold (in KB)',\n    )\n\n\ndef mongodb_check_large_index_size_printer(output):\n    success, alerts = output\n    if success:\n        print(\"Index sizes are within the threshold.\")\n        return\n\n    # Otherwise, print the alerts\n    for alert in alerts:\n        print(f\"Alert! Index size of {alert['indexSizeKB']} KB for database '{alert['db']}' in collection '{alert['collection']}' exceeds the threshold!\")\n\n\ndef mongodb_check_large_index_size(handle, threshold: float = DEFAULT_SIZE) -> Tuple:\n    \"\"\"\n    mongodb_check_large_index_size checks the index sizes for all databases and collections.\n    It compares the size of each index with a given threshold and returns any indexes that exceed the threshold.\n\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n\n    :type threshold: float\n    :param threshold: The threshold for index size in KB.\n\n    :rtype: Status, the list of the details of the indexes that exceeded the threshold.\n    \"\"\"\n\n    # List to hold alerts for indexes that exceed the threshold\n    alerts = []\n\n    try:\n        database_names = [db for db in handle.list_database_names() if db != 'local']\n        for db_name in database_names:\n            db = handle[db_name]\n            collection_names = db.list_collection_names()\n            # Iterate through each collection in the database\n            for coll_name in collection_names:\n                coll = db.get_collection(coll_name)\n                # Skip views\n                if coll.options().get('viewOn'):\n                    continue\n\n                stats = db.command(\"collstats\", coll_name)\n                # Check each index's size\n                for index_name, index_size in stats['indexSizes'].items():\n                    index_size_KB = index_size / 1024  # Convert to KB\n\n                    if index_size_KB > threshold:\n                        alerts.append({\n                            'db': db_name,\n                            'collection': coll_name,\n                            'index': index_name,\n                            'indexSizeKB': index_size_KB\n                        })\n    except Exception as e:\n        raise e\n\n    if len(alerts) != 0:\n        return (False, alerts)\n    return (True, None)\n"
  },
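`collstats` reports `indexSizes` in bytes, while the threshold is supplied in KB, so the check divides by 1024 before comparing. A self-contained sketch of that conversion (the sample numbers and the `indexes_over_threshold` helper are made up for illustration):

```python
def indexes_over_threshold(index_sizes_bytes, threshold_kb):
    # Convert each index size from bytes to KB and keep those above the threshold.
    return {name: size / 1024
            for name, size in index_sizes_bytes.items()
            if size / 1024 > threshold_kb}

index_sizes = {"_id_": 4096, "user_1": 3_000_000_000}  # bytes, like collstats["indexSizes"]
alerts = indexes_over_threshold(index_sizes, 2048000)  # 2048000 KB is roughly 2 GB
```

Here only `user_1` (about 2.9 million KB) crosses the roughly 2 GB threshold; the tiny `_id_` index does not.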
  {
    "path": "Mongo/legos/mongodb_compare_disk_size_to_threshold/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>Get MongoDB large databases</h1>\n\n## Description\nThis action compares the total disk size used by MongoDB to a given threshold.\n\n## Lego Details\n\tmongodb_compare_disk_size_to_threshold(handle, threshold: float = 83886080)\n\t\thandle: Object of type unSkript MONGODB Connector.\n\t\tthreshold: The threshold for disk size in KB.\n\n\n## Lego Input\nThis Lego takes two inputs: handle and threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_compare_disk_size_to_threshold/__init__.py",
    "content": ""
  },
  {
    "path": "Mongo/legos/mongodb_compare_disk_size_to_threshold/mongodb_compare_disk_size_to_threshold.json",
    "content": "{\n  \"action_title\": \"Get MongoDB large databases\",\n  \"action_description\": \"This action compares the total disk size used by MongoDB to a given threshold.\",\n  \"action_type\": \"LEGO_TYPE_MONGODB\",\n  \"action_entry_function\": \"mongodb_compare_disk_size_to_threshold\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Mongo/legos/mongodb_compare_disk_size_to_threshold/mongodb_compare_disk_size_to_threshold.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    threshold: Optional[float] = Field(\n        83886080 , # 80GB in KB\n        description='Threshold for disk size in KB.', \n        title='Threshold (in KB)'\n    )\n\n\ndef mongodb_compare_disk_size_to_threshold_printer(output):\n    success, alerts = output\n    if success:\n        print(\"Disk sizes are within the threshold.\")\n        return\n\n    for alert in alerts:\n        print(f\"Alert! Disk size of {alert['totalDiskSize']} KB for database {alert['db']} exceeds threshold of {alert['threshold']} KB.\")\n\n\ndef mongodb_compare_disk_size_to_threshold(handle, threshold: float=83886080) -> Tuple:\n    \"\"\"\n    mongodb_compare_disk_size_to_threshold compares the total disk size used by MongoDB to a given threshold.\n\n    :type handle: object\n    :param handle: Object returned from Task Validate\n\n    :type threshold: float\n    :param threshold: The threshold for disk size in KB.\n\n    :return: Status, a list of alerts if disk size is exceeded.\n    \"\"\"\n\n    # Initialize variables\n    total_disk_size = 0\n    result = []\n\n    # Get a list of database names\n    database_names = handle.list_database_names()\n\n    # Iterate through each database\n    for db_name in database_names:\n        db = handle[db_name]\n        stats = db.command(\"dbStats\")\n\n        # Add the dataSize and indexSize to get the total size for the database\n        total_disk_size = (stats['dataSize'] + stats['indexSize']) / (1024)\n\n        if total_disk_size > threshold:\n            # Append the database name, total disk size, and threshold to the result\n            result.append({'db': db_name, 'totalDiskSize': total_disk_size, 'threshold': threshold})\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n\n\n\n"
  },
  {
    "path": "Mongo/legos/mongodb_count_documents/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Count Documents</h1>\r\n\r\n## Description\r\nThis Lego counts the documents in a MongoDB collection.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_count_documents(handle, database_name: str, collection_name: str, filter: dict)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        filter: A query that matches the documents to count.\r\n\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, database_name, collection_name and filter.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_count_documents/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_count_documents/mongodb_count_documents.json",
    "content": "{\n\"action_title\": \"MongoDB Count Documents\",\n\"action_description\": \"MongoDB Count Documents\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_count_documents\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\",\"CATEGORY_TYPE_MONGODB_DOCUMENT\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_count_documents/mongodb_count_documents.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\n\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database.'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection.'\n    )\n    filter: dict = Field(\n        title='Filter Query',\n        description='''\n             A query document that selects which documents to count in the collection.\n             Can be an empty document to count all documents.\n             For eg: {\"foo\":\"bar\"}.\n            '''\n    )\n\n\ndef mongodb_count_documents_printer(output):\n    if output is None:\n        return\n    if isinstance(output, int):\n        pprint.pprint(f\"Total number of documents : {output}\")\n    else:\n        pprint.pprint(output)\n\n\ndef mongodb_count_documents(\n        handle,\n        database_name: str,\n        collection_name: str,\n        filter: dict\n        ):\n    \"\"\"mongodb_count_documents counts the documents in the collection\n    that match the given filter using count_documents().\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :type filter: Dict\n        :param filter: A query that matches the documents to count.\n\n        :rtype: Count of the documents matching the filter.\n    \"\"\"\n    try:\n        db = handle[database_name]\n        total_count = db[collection_name].count_documents(filter)\n        return total_count\n    except Exception as e:\n        return e\n"
  },
  {
    "path": "Mongo/legos/mongodb_create_collection/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Create Collection</h1>\r\n\r\n## Description\r\nThis Lego creates a MongoDB Collection.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_create_collection(handle, database_name: str, collection_name: str)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, database_name and collection_name. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_create_collection/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_create_collection/mongodb_create_collection.json",
    "content": "{\n\"action_title\": \"MongoDB Create Collection\",\n\"action_description\": \"MongoDB Create Collection\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_create_collection\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_create_collection/mongodb_create_collection.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection'\n    )\n\n\ndef mongodb_create_collection_printer(output):\n    if output[0] is None:\n        return\n    print(\"\\n\\n\")\n    if isinstance(output[0], Exception):\n        pprint.pprint(f\"Error : {output[0]}\")\n    else:\n        pprint.pprint(\"List of all collections after creating new one\")\n        pprint.pprint(output[0])\n        collection_name = output[1]\n        if collection_name in output[0]:\n            pprint.pprint(\"Collection created successfully !!!\")\n\n\ndef mongodb_create_collection(handle, database_name: str, collection_name: str) -> List:\n    \"\"\"mongodb_create_collection create collection in mongodb.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :rtype: List of all collections after creating new one.\n    \"\"\"\n    # Input param validation.\n\n    try:\n        db = handle[database_name]\n        db.create_collection(collection_name)\n        # Verification\n        collection_list = db.list_collection_names()\n        return [collection_list, collection_name]\n    except Exception as e:\n        return [e]\n"
  },
  {
    "path": "Mongo/legos/mongodb_create_database/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Create Database</h1>\r\n\r\n## Description\r\nThis Lego Creates a database in MongoDB.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_create_database(handle, database_name: str, collection_name: str)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database to be created.\r\n        collection_name: Name of the MongoDB collection to be created.\r\n## Lego Input\r\nThis Lego take three input handle, database_name and collection_name. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_create_database/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_create_database/mongodb_create_database.json",
    "content": "{\n\"action_title\": \"MongoDB Create Database\",\n\"action_description\": \"MongoDB Create Database\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_create_database\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_create_database/mongodb_create_database.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection'\n    )\n\n\ndef mongodb_create_database_printer(output):\n    if output[0] is None:\n        return\n    print(\"\\n\\n\")\n    if isinstance(output[0], Exception):\n        pprint.pprint(f\"Error : {output[0]}\")\n    else:\n        pprint.pprint(\"List of databases after creating new one\")\n        pprint.pprint(output[0])\n        collection_name = output[1]\n        if collection_name in output[0]:\n            pprint.pprint(\"Database created successfully !!!\")\n\n\ndef mongodb_create_database(handle, database_name: str, collection_name: str) -> List:\n    \"\"\"mongodb_create_database create database in mongodb.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n\n        :rtype: List of  Database after creating.\n    \"\"\"\n    # Input param validation.\n\n    try:\n        db = handle[database_name]\n        db.create_collection(collection_name)\n        # Verification\n        dblist = handle.list_database_names()\n        return [dblist, database_name]\n    except Exception as e:\n        return [e]\n"
  },
  {
    "path": "Mongo/legos/mongodb_delete_collection/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Delete collection from MongoDB database</h1>\r\n\r\n## Description\r\nThis Lego Deletes MongoDB Collection.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_delete_collection(handle, database_name: str, collection_name: str)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n\r\n## Lego Input\r\nThis Lego take three input handle, database_name and collection_name. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_delete_collection/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_delete_collection/mongodb_delete_collection.json",
    "content": "{\n\"action_title\": \"Delete collection from MongoDB database\",\n\"action_description\": \"Delete collection from MongoDB database\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_delete_collection\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_delete_collection/mongodb_delete_collection.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection'\n    )\n\n\n\ndef mongodb_delete_collection_printer(output):\n    if output is None:\n        return None\n    print(\"\\n\\n\")\n    if isinstance(output, Exception):\n        pprint.pprint(f\"Error : {output}\")\n        return output\n    collections_before_drop = output[0]\n    collections_after_drop = output[1]\n    pprint.pprint(f\"Collection count BEFORE drop:{len(collections_before_drop)}\")\n    pprint.pprint(f\"Collection count AFTER drop:{len(collections_after_drop)}\")\n    diff = len(collections_before_drop) - len(collections_after_drop)\n    if diff != 0:\n        pprint.pprint(\"Collection deleted successfully !!!\")\n    return None\n\n\ndef mongodb_delete_collection(handle, database_name: str, collection_name: str) -> List:\n    \"\"\"mongodb_delete_collection delete collection from mongodb database.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :rtype: List of the results of the delete query.\n    \"\"\"\n    # Input param validation.\n\n    try:\n        db = handle[database_name]\n\n        collections_before_drop = db.list_collection_names()\n        db.drop_collection(collection_name)\n        # Verification\n        collections_after_drop = db.list_collection_names()\n        return [collections_before_drop, collections_after_drop]\n    except 
Exception as e:\n        return [e]\n"
  },
  {
    "path": "Mongo/legos/mongodb_delete_database/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Delete Database</h1>\r\n\r\n## Description\r\nThis Lego Deletes MongoDB Database.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_delete_database(handle, database_name: str)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and database_name. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_delete_database/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_delete_database/mongodb_delete_database.json",
    "content": "{\n\"action_title\": \"MongoDB Delete Database\",\n\"action_description\": \"MongoDB Delete Database\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_delete_database\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_delete_database/mongodb_delete_database.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database'\n    )\n\n\ndef mongodb_delete_database_printer(output):\n    if output is None:\n        return None\n    print(\"\\n\\n\")\n    if isinstance(output, Exception):\n        pprint.pprint(f\"Error : {output}\")\n        return output\n    db_names_before_drop = output[0]\n    db_names_after_drop = output[1]\n    pprint.pprint(f\"db count BEFORE drop:{len(db_names_before_drop)}\")\n    pprint.pprint(f\"db count AFTER drop:{len(db_names_after_drop)}\")\n\n    diff = len(db_names_before_drop) - len(db_names_after_drop)\n    if diff != 0:\n        pprint.pprint(\"Database deleted successfully !!!\")\n    return None\n\n\ndef mongodb_delete_database(handle, database_name: str) -> List:\n    \"\"\"mongodb_delete_database delete database in mongodb.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :rtype: All the results of the query.\n    \"\"\"\n    # Input param validation.\n\n    try:\n        db_names_before_drop = handle.list_database_names()\n\n        handle.drop_database(database_name)\n        # Verification\n        db_names_after_drop = handle.list_database_names()\n        return [db_names_before_drop, db_names_after_drop]\n    except Exception as e:\n        return [e]\n"
  },
  {
    "path": "Mongo/legos/mongodb_delete_document/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Delete Document</h1>\r\n\r\n## Description\r\nThis Lego Deletes the mongodb document.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_delete_document(handle, database_name: str, collection_name: str, command: DeleteCommands, filter: dict)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        DeleteCommands: Enum for DeleteCommand Options are delete_one or delete_many\r\n        filter: Search Filter to perform the delete operation on.\r\n\r\n## Lego Input\r\nThis Lego take five inputs handle, database_name, collection_name, DeleteCommands and filter.\r\n \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_delete_document/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_delete_document/mongodb_delete_document.json",
    "content": "{\n\"action_title\": \"MongoDB Delete Document\",\n\"action_description\": \"MongoDB Delete Document\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_delete_document\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_INT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\",\"CATEGORY_TYPE_MONGODB_DOCUMENT\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_delete_document/mongodb_delete_document.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel, Field\nfrom unskript.enums.mongo_enums import DeleteCommands\nfrom pymongo.errors import AutoReconnect, ServerSelectionTimeoutError\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database.'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection.'\n    )\n    command: DeleteCommands = Field(\n        DeleteCommands.delete_one,\n        title='Command Name',\n        description='''\n                         Name of command\n                         for Eg. delete_one, delete_many\n                         Supported commands : delete_one and delete_many\n                    '''\n    )\n    filter: dict = Field(\n        title='Filter Query',\n        description='A query that matches the document to delete For eg: {\"foo\":\"bar\"}.'\n    )\n\ndef mongodb_delete_document_printer(output):\n    print(\"\\n\")\n    if output == 0:\n        print(\"No Documents were deleted\")\n    elif output > 1:\n        print(f\"{output.deleted_count} Documents Deleted\")\n    else:\n        print(\"Document Deleted\")\n\n\n\ndef mongodb_delete_document(\n        handle,\n        database_name: str,\n        collection_name: str,\n        command: DeleteCommands,\n        filter: dict\n        ) -> int:\n    \"\"\"mongodb_delete_document Runs mongo delete command with the provided parameters.\n\n        :type handle: object\n        :param handle: Handle returned from the Task validate command\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database\n\n        :type collection_name: str\n        :param collection_name: Name of the Collection to the delete the document from\n\n        :type command: DeleteCommands\n        :param command: Enum for DeleteCommand Options 
are delete_one or delete_many\n\n        :type filter: dict\n        :param filter: Search Filter to perform the delete operation on\n\n        :rtype: Count of deleted document\n    \"\"\"\n    # Lets make sure the handle that is returned is not stale\n    # and can connect to the MongoDB server\n    try:\n        handle.server_info()\n    except (AutoReconnect, ServerSelectionTimeoutError) as e:\n        print(\"[UNSKRIPT]: Reconnection / Server Selection Timeout Error: \", str(e))\n        raise e\n    except Exception as e:\n        print(\"[UNSKRIPT]: Error Connecting: \", str(e))\n        raise e\n\n    try:\n        result = None\n        db = handle[database_name]\n        if command == DeleteCommands.delete_one:\n            result = db[collection_name].delete_one(filter)\n            return result.deleted_count\n        if command == DeleteCommands.delete_many:\n            result = db[collection_name].delete_many(filter)\n            return result.deleted_count\n    except Exception as e:\n        raise e\n    return None\n"
  },
  {
    "path": "Mongo/legos/mongodb_distinct_command/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Distinct Command</h1>\r\n\r\n## Description\r\nThis Lego applys Distinct Command on query.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_distinct_command(handle, database_name: str, collection_name: str, key: str, filter=None)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        key: Name of the field for which we want to get the distinct values.\r\n        filter: A query that matches the document to filter.\r\n\r\n\r\n## Lego Input\r\nThis Lego take five inputs handle, database_name, collection_name, key, update  and filter.\r\n \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_distinct_command/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_distinct_command/mongodb_distinct_command.json",
    "content": "{\n\"action_title\": \"MongoDB Distinct Command\",\n\"action_description\": \"MongoDB Distinct Command\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_distinct_command\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_distinct_command/mongodb_distinct_command.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection'\n    )\n    key: str = Field(\n        title='Name of field',\n        description='''\n                Name of the field for which we want to get the distinct values\n            '''\n    )\n    filter: dict = Field(\n        None,\n        title='Filter Query',\n        description='''\n                A query document that specifies the documents from which to retrieve the distinct values.\n                For eg: {\"foo\":\"bar\"}.\n            '''\n    )\n\n\ndef mongodb_distinct_command_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    if isinstance(output, List):\n        for entry in output:\n            pprint.pprint(entry)\n    else:\n        pprint.pprint(output)\n\n\ndef mongodb_distinct_command(\n        handle,\n        database_name: str,\n        collection_name: str,\n        key: str,\n        filter=None\n        ) -> List:\n    \"\"\"mongodb_distinct_command Retrieves the documents present in the collection\n    and the count of the documents using count_documents().\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :type key: str\n        :param key: Name of the field for which we want to get the distinct values.\n\n        :type filter Dict\n        :param filter: A query that matches the document to filter.\n\n      
  :rtype: All the results of the query.\n    \"\"\"\n    # Input param validation.\n\n    if filter is None:\n        filter = {}\n    try:\n        result = []\n        db = handle[database_name]\n        res = db[collection_name].distinct(key, filter)\n        for entry in res:\n            result.append(entry)\n        return result\n    except Exception as e:\n        return [e]\n"
  },
  {
    "path": "Mongo/legos/mongodb_find_document/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Find Document</h1>\r\n\r\n## Description\r\nThis Lego Finds the document in collection.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_find_document(handle, database_name: str, collection_name: str, filter: dict, command: FindCommands, document: dict = {}, projection: dict = {}, sort: List = [])\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        filter: A query that matches the document to find.\r\n        command: Db command.\r\n        document:The modifications to apply in dictionary format.\r\n        projection: A list of field names that should be returned/excluded in the result.\r\n        sort: A list of {key:direction} pairs.\r\n\r\n## Lego Input\r\nThis Lego take eight inputs handle, database_name, collection_name, filter,command, document, projection  and sort. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_find_document/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_find_document/mongodb_find_document.json",
    "content": "{\n\"action_title\": \"MongoDB Find Document\",\n\"action_description\": \"MongoDB Find Document\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_find_document\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\",\"CATEGORY_TYPE_MONGODB_DOCUMENT\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_find_document/mongodb_find_document.py",
    "content": "##\n# Copyright (c) 2022 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List, Optional\nfrom pydantic import BaseModel, Field\nfrom unskript.enums.mongo_enums import FindCommands\nfrom pymongo import ReturnDocument\nfrom pymongo.errors import AutoReconnect, ServerSelectionTimeoutError\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database.'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection.'\n    )\n    command: Optional[FindCommands] = Field(\n        default=FindCommands.find,\n        title='Command',\n        description='''\n                        Name of command\n                        for Eg. find,  etc\n                        Supported commands : find, find_one_and_delete, find_one_and_replace, find_one_and_update\n                '''\n    )\n    filter: dict = Field(\n        title='Filter',\n        description=('A query that matches the document to update, delete, find '\n                     'and replace. For eg: {\"foo\":\"bar\"}.')\n    )\n    document: Optional[dict] = Field(\n        default=None,\n        title='Update/Replace Document',\n        description='''\n            The modifications to apply in dictionary format.\n            For eg: For update : {\"$set\":{\"field\":\"value\"}} to Replace : {\"field\":\"value\"}\n            Not applicable for find, find_one and find_one_and_delete\n        '''\n    )\n    projection: Optional[dict] = Field(\n        default=None,\n        title='Projection ',\n        description='''\n                A list of field names that should be\n                returned in the result document or a mapping specifying the fields\n                to include or exclude. If `projection` is a list \"_id\" will\n                always be returned. 
Use a mapping to exclude fields from\n                the result (e.g. {'_id': False})\n                ''')\n    sort: Optional[list] = Field(\n        default=None,\n        title='Sort',\n        description='''\n                a list of {key:direction} pairs\n                specifying the sort order for the query. If multiple documents\n                match the query, they are sorted and the first is updated.\n                (e.g. [{'age': '-1'}])\n                '''\n    )\n\ndef mongodb_find_document_printer(output):\n    if isinstance(output, List):\n        if len(output) == 0:\n            print(\"No Matching Documents.\")\n            return\n        for entry in output:\n            pprint.pprint(entry)\n\n\ndef mongodb_find_document(\n        handle,\n        database_name: str,\n        collection_name: str,\n        filter: dict,\n        command: FindCommands = FindCommands.find,\n        document: dict = None,\n        projection: dict = None,\n        sort: List = None) -> List:\n    \"\"\"mongodb_find_document Runs mongo find commands with the provided parameters.\n\n        :type handle: object\n        :param handle: Object returned from Task validate method\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB Collection to work on\n\n        :type filter: dict\n        :param filter: Filter in the dictionary form to work with\n\n        :type command: FindCommands\n        :param command: FindCommands Enum\n\n        :type document: dict\n        :param document: Document in the Dictionary form\n\n        :type projection: dict\n        :param projection: Projection in Dictionary form\n\n        :type sort: List\n        :param sort: Sort List to be used\n\n        :rtype: All the results of the query.\n    \"\"\"\n\n    # Lets make sure the handle that is returned is not stale\n    # and can connect to the 
MongoDB server\n    try:\n        handle.server_info()\n    except (AutoReconnect, ServerSelectionTimeoutError) as e:\n        print(f\"[UNSKRIPT]: Reconnection / Server Selection Timeout Error: {str(e)}\")\n        raise e\n    except Exception as e:\n        print(f\"[UNSKRIPT]: Error Connecting: {str(e)}\")\n        raise e\n\n    sort_by = sort\n    update = document\n    sort = []\n    if sort_by:\n        for val in sort_by:\n            for k, v in val.items():\n                sort.append((k, v))\n\n    result = []\n    try:\n        db = handle[database_name]\n        if command == FindCommands.find:\n            records = db[collection_name].find(\n                filter, projection=projection, sort=sort)\n            for record in records:\n                result.append(record)\n        elif command == FindCommands.find_one:\n            record = db[collection_name].find_one(\n                filter, projection=projection, sort=sort)\n            result.append(record)\n        elif command == FindCommands.find_one_and_delete:\n            record = db[collection_name].find_one_and_delete(\n                filter, projection=projection, sort=sort)\n            pprint.pprint(\"One matching document deleted\")\n            result.append(record)\n        elif command == FindCommands.find_one_and_replace:\n            record = db[collection_name].find_one_and_replace(\n                filter, replacement=update, projection=projection,\n                sort=sort, return_document=ReturnDocument.AFTER)\n            pprint.pprint(\"One matching document replaced\")\n            result.append(record)\n        elif command == FindCommands.find_one_and_update:\n            record = db[collection_name].find_one_and_update(\n                filter,\n                update=update,\n                projection=projection,\n                sort=sort,\n                return_document=ReturnDocument.AFTER\n                )\n            pprint.pprint(\"Document Updated\")\n            result.append(record)\n        return result\n    except Exception as e:\n        raise e\n"
  },
  {
    "path": "Mongo/legos/mongodb_find_one/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Find One</h1>\r\n\r\n## Description\r\nThis Lego MongoDB Finds One and returns a single entry that matches the query.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_find_one(handle, database_name: str, collection_name: str, filter: dict, projection: dict = {}, sort: List = [])\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        filter: A query that matches the document to find.\r\n        projection: A list of field names that should be returned/excluded in the result.\r\n        sort: A list of {key:direction} pairs.\r\n\r\n## Lego Input\r\nThis Lego take six inputs handle, database_name, collection_name, filter, projection  and sort.\r\n \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\"> \r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_find_one/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_find_one/mongodb_find_one.json",
    "content": "{\n\"action_title\": \"MongoDB Find One\",\n\"action_description\": \"MongoDB Find One returns a single entry that matches the query.\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_find_one\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\",\"CATEGORY_TYPE_MONGODB_DOCUMENT\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_find_one/mongodb_find_one.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List, Optional\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database.'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection.'\n    )\n    filter: dict = Field(\n        title='Filter Query',\n        description='A query that matches the document to find. For eg: { \"name\": \"mike\" }.'\n    )\n    projection: Optional[dict] = Field(\n        default=None,\n        title='Projection',\n        description='''\n                A list of field names that should be\n                returned in the result document or a mapping specifying the fields\n                to include or exclude. If `projection` is a list \"_id\" will\n                always be returned. Use a mapping to exclude fields from\n                the result (e.g. {'_id': false})\n                ''')\n    sort: Optional[list] = Field(\n        default=None,\n        title='Sort',\n        description='''\n                a list of {key:direction} pairs\n                specifying the sort order for the query. If multiple documents\n                match the query, they are sorted and the first is updated.\n                (e.g. 
[{'age': '-1'}])\n                '''\n    )\n\n\ndef mongodb_find_one_printer(func):\n    def Printer(*args, **kwargs):\n        output = func(*args, **kwargs)\n        print(\"\\n\\n\")\n        pprint.pprint(output)\n        return output\n    return Printer\n\n\n@mongodb_find_one_printer\ndef mongodb_find_one(\n        handle,\n        database_name: str,\n        collection_name: str,\n        filter: dict,\n        projection: dict = None,\n        sort: List = None) -> dict:\n    \"\"\"mongodb_find_one finds and returns a single document matching the query.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :type filter: Dict\n        :param filter: A query that matches the document to find.\n\n        :type projection: Dict\n        :param projection: A list of field names that should be returned/excluded in the result.\n\n        :type sort: list\n        :param sort: A list of {key:direction} pairs.\n\n        :rtype: Dict of matched query result.\n    \"\"\"\n    try:\n        db = handle[database_name]\n        r = db[collection_name].find_one(\n            filter, projection=projection, sort=sort if sort else None)\n        return r or {}\n\n    except Exception as e:\n        return {\"error\" : str(e)}\n"
  },
  {
    "path": "Mongo/legos/mongodb_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get MongoDB Handle</h1>\r\n\r\n## Description\r\nThis Lego Gets MongoDB Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_get_handle(handle):\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        \r\n\r\n## Lego Input\r\nThis Lego take one input handle. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_get_handle/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_get_handle/mongodb_get_handle.json",
    "content": "{\n\"action_title\": \"Get MongoDB Handle\",\n\"action_description\": \"Get MongoDB Handle\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_supports_iteration\": false\n\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_get_handle/mongodb_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef mongodb_get_handle(handle):\n    \"\"\"mongodb_get_handle returns the mongo client.\n    \n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :rtype: mongo client.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Mongo/legos/mongodb_get_metrics/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>MongoDB get metrics</h1>\n\n## Description\nThis action retrieves various metrics such as index size, disk size per collection for all databases and collections.\n\n## Lego Details\n\tmongodb_get_metrics(handle)\n\t\thandle: Object of type unSkript MONGODB Connector.\n\n## Lego Input\nThis Lego takes inputs handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_get_metrics/__init__.py",
    "content": ""
  },
  {
    "path": "Mongo/legos/mongodb_get_metrics/mongodb_get_metrics.json",
    "content": "{\n  \"action_title\": \"MongoDB get metrics\",\n  \"action_description\": \"This action retrieves various metrics such as index size, disk size per collection for all databases and collections.\",\n  \"action_type\": \"LEGO_TYPE_MONGODB\",\n  \"action_entry_function\": \"mongodb_get_metrics\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" , \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\"]\n}"
  },
  {
    "path": "Mongo/legos/mongodb_get_metrics/mongodb_get_metrics.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Tuple\nfrom pydantic import BaseModel\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\n\ndef mongodb_get_metrics_printer(output):\n    if not output:\n        return\n    total_memory, index_outputs = output\n    if total_memory:\n        print(f\"Total Memory: {total_memory[0].get('Memory (MB)')} MB\")\n\n    print(tabulate(index_outputs, headers=\"keys\"))\n\n\ndef mongodb_get_metrics(handle) -> Tuple:\n    \"\"\"\n    mongodb_get_metrics retrieves various metrics such as index size,\n    disk size per collection for all databases and collections.\n\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n\n    :rtype: list of dictionaries with index size, storage size metrics and total memory usage in MB\n    \n    \"\"\"\n    index_metrics = []\n    database_metrics = []\n    try:\n        database_names = handle.list_database_names()\n\n        server_status = handle.admin.command(\"serverStatus\")\n        total_memory_MB = server_status['mem']['resident']  # Get the total resident set size in memory\n\n        database_metrics.append({\n            'Database': 'ALL',\n            'Collection': 'ALL',\n            'Memory (MB)': total_memory_MB,\n        })\n\n        for db_name in database_names:\n            db = handle[db_name]\n            collection_names = [coll['name'] for coll in db.list_collections() if not coll['options'].get('viewOn')]\n            for coll_name in collection_names:\n                stats = db.command(\"collstats\", coll_name)\n\n                index_size_KB = sum(stats.get('indexSizes', {}).values())/ 1024 # Convert bytes to KB\n                storage_size_KB = stats.get('storageSize', 0)/ 1024 # Convert bytes to KB\n\n                index_metrics.append({\n                    'Database': db_name,\n                    'Collection': coll_name,\n                 
   'Index Size (KB)': index_size_KB,\n                    'Storage Size (KB)': storage_size_KB,\n                })\n\n    except Exception as e:\n        raise e\n    return database_metrics, index_metrics"
  },
  {
    "path": "Mongo/legos/mongodb_get_replica_set/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get MongoDB replica set</h1>\n\n## Description\nThis action retrieves the primary replica and a list of secondary replicas from a MongoDB replica set.\n\n## Lego Details\n\tmongodb_get_replica_set(handle)\n\t\thandle: Object of type unSkript MONGODB Connector.\n\n\n## Lego Input\nThis Lego takes inputs handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_get_replica_set/__init__.py",
    "content": ""
  },
  {
    "path": "Mongo/legos/mongodb_get_replica_set/mongodb_get_replica_set.json",
    "content": "{\n  \"action_title\": \"Get MongoDB replica set\",\n  \"action_description\": \"This action retrieves the primary replica and a list of secondary replicas from a MongoDB replica set.\",\n  \"action_type\": \"LEGO_TYPE_MONGODB\",\n  \"action_entry_function\": \"mongodb_get_replica_set\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" ]\n}"
  },
  {
    "path": "Mongo/legos/mongodb_get_replica_set/mongodb_get_replica_set.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef mongodb_get_replica_set_printer(output):\n    if output is None:\n        print(\"No data found\")\n        return\n    headers = [\"Replica Name\", \"Role\"]\n    table = [(o['name'], o['role']) for o in output]\n    print(tabulate(table, headers=headers, tablefmt='grid'))\n\n\ndef mongodb_get_replica_set(handle) -> List:\n    \"\"\"\n    mongodb_get_replica_set retrieves the primary replica and a list of secondary replicas from a MongoDB replica set.\n\n    :type handle: object\n    :param handle: Object of type unskript connector to connect to MongoDB client\n\n    :return: A list of dictionaries where each dictionary contains the name of the replica and its role.\n    \"\"\"\n    replica_status = handle.admin.command(\"replSetGetStatus\")\n    replicas = []\n\n    for member in replica_status['members']:\n        role = member['stateStr'] \n        replicas.append({\n            'name': member['name'],\n            'role': role\n        })\n\n    return replicas\n\n\n\n"
  },
  {
    "path": "Mongo/legos/mongodb_get_server_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\r\n<h1>Get Mongo Server Status</h1>\r\n\r\n## Description\r\nThis Lego Gets Mongo Server Status\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_get_server_status(handle)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n\r\n## Lego Input\r\nThis Lego take only one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_get_server_status/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_get_server_status/mongodb_get_server_status.json",
    "content": "{\n\"action_title\": \"Get Mongo Server Status\",\n\"action_description\": \"Status indicating server reachability of mongo server\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_get_server_status\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_is_check\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\"],\n\"action_next_hop\": [\"\"],\n\"action_next_hop_parameter_mapping\": {}\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_get_server_status/mongodb_get_server_status.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\n\ndef mongodb_get_server_status_printer(output):\n    if output[0]:\n        print(\"MongoDB Server Status: Reachable\")\n    else:\n        print(\"MongoDB Server Status: Unreachable\")\n        if output[1]:\n            print(f\"Error: {output[1]}\")\n\ndef mongodb_get_server_status(handle) -> Tuple:\n    \"\"\"Returns the status of the MongoDB instance.\n\n    :type handle: object\n    :param handle: MongoDB connection object\n\n    :return: Status indicating server reachability of mongo server\n    \"\"\"\n    try:\n        # Check server reachability\n        result = handle.admin.command(\"ping\")\n        if result and result.get(\"ok\"):\n            return (True, None)\n    except Exception as e:\n        return (False, str(e))\n    return (False, {\"message\":\"Unable to check Mongo server status\"})\n"
  },
  {
    "path": "Mongo/legos/mongodb_get_write_conflicts/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get MongoDB potential write conflicts</h1>\n\n## Description\nThis action retrieves potential write conflict metrics from the serverStatus command.\n\n## Lego Details\n\tmongodb_get_write_conflicts(handle)\n\t\thandle: Object of type unSkript MONGODB Connector.\n\n\n## Lego Input\nThis Lego takes inputs handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_get_write_conflicts/__init__.py",
    "content": ""
  },
  {
    "path": "Mongo/legos/mongodb_get_write_conflicts/mongodb_get_write_conflicts.json",
    "content": "{\n  \"action_title\": \"Get MongoDB potential write conflicts\",\n  \"action_description\": \"This action retrieves potential write conflict metrics from the serverStatus command.\",\n  \"action_type\": \"LEGO_TYPE_MONGODB\",\n  \"action_entry_function\": \"mongodb_get_write_conflicts\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" ]\n}"
  },
  {
    "path": "Mongo/legos/mongodb_get_write_conflicts/mongodb_get_write_conflicts.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Dict\nfrom pydantic import BaseModel\n\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef mongodb_get_write_conflicts_printer(output):\n    if output is None:\n        return\n    print(\"Potential Write Conflicts:\", output.get(\"totalWriteConflicts\", \"N/A\"))\n\n\ndef mongodb_get_write_conflicts(handle) -> Dict:\n    \"\"\"\n    mongodb_get_write_conflicts Retrieves potential write conflict metrics from the serverStatus command.\n\n    :type handle: object\n    :param handle: Object of type unskript connector to connect to MongoDB client\n\n    :return: A dictionary containing metrics related to potential write conflicts.\n    \"\"\"\n\n    server_status = handle.admin.command(\"serverStatus\")\n\n    write_conflict_metrics = {\n        \"totalWriteConflicts\": server_status.get(\"wiredTiger\", {}).get(\"concurrentTransactions\", {}).get(\"write\", {}).get(\"out\", 0)\n    }\n\n    return write_conflict_metrics\n\n\n\n"
  },
  {
    "path": "Mongo/legos/mongodb_insert_document/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Insert Document</h1>\r\n\r\n## Description\r\nThis Lego Inserts Documents in MongoDb Collection.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_insert_document(handle, database_name: str, collection_name: str,  documents: list)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        documents: List of documents to be inserted.\r\n\r\n## Lego Input\r\nThis Lego take four input handle, database_name and documents. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_insert_document/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_insert_document/mongodb_insert_document.json",
    "content": "{\n\"action_title\": \"MongoDB Insert Document\",\n\"action_description\": \"MongoDB Insert Document\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_insert_document\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\",\"CATEGORY_TYPE_MONGODB_DOCUMENT\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_insert_document/mongodb_insert_document.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nimport pymongo\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database.'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection.'\n    )\n    documents: list = Field(\n        title='Documents',\n        description='''\n            An array of documents to insert into the collection.\n            For eg. Fo [ {\"foo\": \"bar\"} ... ]\n            '''\n    )\n\n\ndef mongodb_insert_document_printer(output):\n    if output is None:\n        return\n    if isinstance(output, List):\n        if len(output) == 0:\n            print(\"No Documents Inserted.\")\n            return\n        print(f\"Inserted {len(output)} Documents with IDs: \")\n        for entry in output:\n            pprint.pprint(entry)\n\n\ndef mongodb_insert_document(\n        handle,\n        database_name: str,\n        collection_name: str, \n        documents: list\n        ) -> List:\n    \"\"\"mongodb_insert_document Runs mongo insert commands with the provided parameters.\n\n        :type handle: object\n        :param handle: Object returned from the Task Validate method\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database\n\n        :type collection_name: str\n        :param collection_name: Collection name in the MongoDB database\n\n        :type document: list\n        :param document: Document to be inserted in the MongoDB collection\n\n        :rtype: List containing Insert IDs\n    \"\"\"\n    # Input param validation.\n\n    # Lets make sure the handle that is returned is not stale\n    # and can connect to the MongoDB server\n    try:\n        handle.server_info()\n    except (pymongo.errors.AutoReconnect, 
pymongo.errors.ServerSelectionTimeoutError) as e:\n        print(\"[UNSKRIPT]: Reconnection / Server Selection Timeout Error: \", str(e))\n        raise e\n    except Exception as e:\n        print(\"[UNSKRIPT]: Error Connecting: \", str(e))\n        raise e\n\n\n\n    try:\n        db = handle[database_name]\n        res = db[collection_name].insert_many(documents)\n        return res.inserted_ids\n    except Exception as e:\n        print(\"[UNSKRIPT]: Error while Inserting Document(s): \", str(e))\n        raise e\n"
  },
  {
    "path": "Mongo/legos/mongodb_kill_queries/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB kill queries</h1>\r\n\r\n## Description\r\nThis Lego kills MongoDb running queries.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_kill_queries(handle, op_id: int)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        op_id: An operation ID of running query.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and op_id. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_kill_queries/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_kill_queries/mongodb_kill_queries.json",
    "content": "{\n\"action_title\": \"MongoDB kill queries\",\n\"action_description\": \"MongoDB kill queries\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_kill_queries\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\",\"CATEGORY_TYPE_MONGODB_QUERY\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_kill_queries/mongodb_kill_queries.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    op_id: int = Field(\n        title='An operation ID',\n        description='Kill the operation based on opid'\n    )\n\n\ndef mongodb_kill_queries_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    pprint.pprint(output)\n\n\n\ndef mongodb_kill_queries(handle, op_id: int) -> Dict:\n    \"\"\"mongodb_kill_queries can kill queries (read operations) that\n    are running on more than one shard in a cluster.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: An operation ID.\n\n        :rtype: All the results of the query.\n    \"\"\"\n    # Input param validation.\n\n    try:\n        resp = handle.admin.command(\"killOp\", op=op_id)\n        return resp\n    except Exception as e:\n        return {\"Error\": e}\n"
  },
  {
    "path": "Mongo/legos/mongodb_list_collections/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get list of collections in MongoDB Database</h1>\r\n\r\n## Description\r\nThis Lego Gets the list of collections in MongoDB Database.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_list_collections(handle, database_name: str)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and database_name. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_list_collections/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_list_collections/mongodb_list_collections.json",
    "content": "{\n\"action_title\": \"Get list of collections in MongoDB Database\",\n\"action_description\": \"Get list of collections in MongoDB Database\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_list_collections\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_list_collections/mongodb_list_collections.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom pymongo.errors import InvalidName\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database'\n    )\n\n\ndef mongodb_list_collections_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    if isinstance(output, Exception):\n        pprint.pprint(output._message)\n    else:\n        pprint.pprint(\"List of collections in DB\")\n        pprint.pprint(output)\n\n\ndef mongodb_list_collections(handle, database_name: str) -> List:\n    \"\"\"mongodb_list_collections Returns list of all collection in MongoDB\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :rtype: All the results of the query.\n    \"\"\"\n    # Input param validation.\n\n    try:\n\n        db = handle[database_name]\n        collection_list = db.list_collection_names()\n        return collection_list\n    except InvalidName as e:\n        return [e]\n"
  },
  {
    "path": "Mongo/legos/mongodb_list_databases/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get list of MongoDB Databases </h1>\r\n\r\n## Description\r\nThis Lego Gets the list of MongoDB Databases.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_list_databases(handle) \r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        \r\n\r\n## Lego Input\r\nThis Lego take only one input handle. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_list_databases/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_list_databases/mongodb_list_databases.json",
    "content": "{\n\"action_title\": \"Get list of MongoDB Databases\",\n\"action_description\": \"Get list of MongoDB Databases\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_list_databases\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_list_databases/mongodb_list_databases.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel\nfrom pymongo.errors import AutoReconnect, ServerSelectionTimeoutError\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef mongodb_list_databases_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    pprint.pprint(\"List of databases\")\n    pprint.pprint(output)\n\n\ndef mongodb_list_databases(handle) -> List:\n    \"\"\"mongodb_list_databases Returns list of all databases in MongoDB\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :rtype: List All the databases in mongodb.\n    \"\"\"\n    # Lets make sure the handle that is returned is not stale\n    # and can connect to the MongoDB server\n    try:\n        handle.server_info()\n    except (AutoReconnect, ServerSelectionTimeoutError) as e:\n        print(\"[UNSKRIPT]: Reconnection / Server Selection Timeout Error: \", str(e))\n        raise e\n    except Exception as e:\n        print(\"[UNSKRIPT]: Error Connecting: \", str(e))\n        raise e\n\n    dblist = handle.list_database_names()\n    return dblist\n"
  },
  {
    "path": "Mongo/legos/mongodb_list_queries/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB list queries </h1>\r\n\r\n## Description\r\nThis Lego lists the operations currently running on the MongoDB server.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_list_queries(handle)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        \r\n\r\n## Lego Input\r\nThis Lego takes only one input: handle. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_list_queries/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_list_queries/mongodb_list_queries.json",
    "content": "{\n\"action_title\": \"MongoDB list queries\",\n\"action_description\": \"MongoDB list queries\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_list_queries\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_QUERY\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_list_queries/mongodb_list_queries.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\ndef mongodb_list_queries_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    if isinstance(output, Exception):\n        pprint.pprint(f\"Error : {output}\")\n    else:\n        pprint.pprint(output['inprog'])\n\n\ndef mongodb_list_queries(handle) -> Dict:\n    \"\"\"mongodb_list_queries returns information on all the operations currently running.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n\n        :rtype: Dict All the results of the query.\n    \"\"\"\n    try:\n        resp = handle.admin.command({\"currentOp\": True})\n        return resp\n    except Exception as e:\n        return {\"Error\": e}\n"
  },
  {
    "path": "Mongo/legos/mongodb_read_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Read Query</h1>\r\n\r\n## Description\r\nThis Lego applies a read query to a MongoDB collection.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_read_query(handle, database_name: str, collection_name: str, query: dict)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        query: Read only query in dictionary format.\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, database_name, collection_name and query.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_read_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_read_query/mongodb_read_query.json",
    "content": "{\n\"action_title\": \"MongoDB Read Query\",\n\"action_description\": \"MongoDB Read Query\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_read_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_QUERY\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_read_query/mongodb_read_query.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database.'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection.'\n    )\n    query: dict = Field(\n        title='Read Query',\n        description='Read only query in dictionary format. For eg: {\"foo\":\"bar\"}.'\n    )\n\ndef mongodb_read_query_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    for entry in output:\n        pprint.pprint(entry)\n\n\ndef mongodb_read_query(handle, database_name: str, collection_name: str, query: dict) -> List:\n    \"\"\"mongodb_read_query Runs mongo query with the provided parameters.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :type query: Dict\n        :param query: Read only query in dictionary format.\n\n        :rtype: All the results of the query.\n    \"\"\"\n    try:\n        res = handle[database_name][collection_name].find(query)\n    except Exception as e:\n        return [e]\n    result = []\n    for entry in res:\n        result.append(entry)\n    return result\n"
  },
  {
    "path": "Mongo/legos/mongodb_remove_field_in_collections/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB remove a field in all collections</h1>\r\n\r\n## Description\r\nThis Lego removes a field from every document in a MongoDB collection.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_remove_field_in_collections(handle, database_name: str, collection_name: str, remove_fields: dict, upsert: bool = True)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        remove_fields: The fields to be removed from every document.\r\n        upsert: Allow creation of a new document, if one does not exist.\r\n\r\n## Lego Input\r\nThis Lego takes five inputs: handle, database_name, collection_name, remove_fields and upsert.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_remove_field_in_collections/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_remove_field_in_collections/mongodb_remove_field_in_collections.json",
    "content": "{\n\"action_title\": \"MongoDB remove a field in all collections\",\n\"action_description\": \"MongoDB remove a field in all collections\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_remove_field_in_collections\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_remove_field_in_collections/mongodb_remove_field_in_collections.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database.'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection.'\n    )\n    remove_fields: dict = Field(\n        title='Remove fields from every document',\n        description='''\n                The fields to remove, in dictionary format.\n                For eg: {\"field\":\"value\"}.\n                '''\n    )\n    upsert: bool = Field(\n        True,\n        title='Upsert',\n        description='Allow creation of a new document, if one does not exist.'\n    )\n\n\ndef mongodb_remove_field_in_collections_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    if isinstance(output, Exception):\n        pprint.pprint(f\"Error : {output}\")\n    else:\n        for entry in output:\n            pprint.pprint(entry)\n\n\ndef mongodb_remove_field_in_collections(\n        handle,\n        database_name: str,\n        collection_name: str,\n        remove_fields: dict,\n        upsert: bool = True\n        ) -> List:\n    \"\"\"mongodb_remove_field_in_collections Removes fields from every document in a MongoDB collection.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :type remove_fields: Dict\n        :param remove_fields: Fields to remove from every document.\n\n        :type upsert: bool\n        :param upsert: Allow creation of a new document, if one does not exist.\n\n        :rtype: List of documents in the collection after the field removal.\n    \"\"\"\n    # Input param validation.\n\n    modifications = {\"$unset\": remove_fields}\n\n    try:\n        handle[database_name][collection_name].update_many(\n            {},\n            update=modifications,\n            upsert=upsert)\n        res = handle[database_name][collection_name].find()\n        result = []\n        for entry in res:\n            result.append(entry)\n        return result\n    except Exception as e:\n        return [e]\n"
  },
  {
    "path": "Mongo/legos/mongodb_rename_database/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Rename Database</h1>\r\n\r\n## Description\r\nThis Lego renames a MongoDB database.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_rename_database(handle, old_database_name: str, new_database_name: str)\r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        old_database_name: Current name of the MongoDB database.\r\n        new_database_name: New name for the MongoDB database.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, old_database_name and new_database_name. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_rename_database/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_rename_database/mongodb_rename_database.json",
    "content": "{\n\"action_title\": \"MongoDB Rename Database\",\n\"action_description\": \"MongoDB Rename Database\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_rename_database\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_rename_database/mongodb_rename_database.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport os\nimport pprint\nfrom typing import List\nimport bson\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    old_database_name: str = Field(\n        title='Old Database Name',\n        description='''\n             Name of the MongoDB database that user want to change.\n             Warning : This solution is not suitable for big or complex databases\n            '''\n    )\n    new_database_name: str = Field(\n        title='New Database Name',\n        description='''\n        New name of the MongoDB database.\n        Warning : This solution is not suitable for big or complex databases\n        '''\n    )\n\n\ndef mongodb_rename_database_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    if isinstance(output, Exception):\n        pprint.pprint(f\"Error : {output}\")\n    else:\n        pprint.pprint(\"List of databases after renaming\")\n        pprint.pprint(output)\n\n\ndef mongodb_rename_database(handle, old_database_name: str, new_database_name: str) -> List:\n    \"\"\"mongodb_rename_database rename database in mongodb.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type old_database_name: str\n        :param old_database_name: Name of the MongoDB database that user want to change.\n\n        :type new_database_name: str\n        :param new_database_name: New name of the MongoDB database.\n\n        :rtype: All the results of the query.\n    \"\"\"\n\n    def dump(collections, conn, db_name, path):\n        \"\"\"\n        MongoDB Dump\n        :param collections: Database collections name\n        :param conn: MongoDB client connection\n        :param db_name: Database name\n        :param path:\n        :return:\n        \"\"\"\n        try:\n            db = conn[db_name]\n            for coll in collections:\n                with 
open(os.path.join(path, f'{coll}.bson'), 'wb+') as f:\n                    for doc in db[coll].find():\n                        f.write(bson.BSON.encode(doc))\n            return True\n        except Exception as e:\n            raise e\n\n    def restore(path, conn, db_name):\n        \"\"\"\n        MongoDB Restore\n        :param path: Database dumped path\n        :param conn: MongoDB client connection\n        :param db_name: Database name\n        :return:\n\n        \"\"\"\n        try:\n            db = conn[db_name]\n            for coll in os.listdir(path):\n                if coll.endswith('.bson'):\n                    with open(os.path.join(path, coll), 'rb+') as f:\n                        db[coll.split('.')[0]].insert_many(bson.decode_all(f.read()))\n            return True\n        except Exception as e:\n            raise e\n        finally:\n            for coll in os.listdir(path):\n                if coll.endswith('.bson'):\n                    os.remove(os.path.join(path, coll))\n\n    # Input param validation.\n\n    try:\n        db = handle[old_database_name]\n        collection_list = db.list_collection_names()\n        path = \"/tmp/\"\n        # Step 1 : Take a dump of the old db\n        is_backup = dump(collection_list, handle, old_database_name, path)\n        # Step 2 : Restore the same dump in the new db\n        is_restore = False\n        if is_backup:\n            is_restore = restore(path, handle, new_database_name)\n        # Step 3 : Drop the old db\n        if is_restore:\n            handle.drop_database(old_database_name)\n\n        # Verification\n        dblist = handle.list_database_names()\n        if new_database_name not in dblist:\n            return [Exception(\"Error Occurred !!!\")]\n        return dblist\n    except Exception as e:\n        return [e]\n"
  },
  {
    "path": "Mongo/legos/mongodb_update_document/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MongoDB Update Document</h1>\r\n\r\n## Description\r\nThis Lego updates a MongoDB document.\r\n\r\n\r\n## Lego Details\r\n\r\n    mongodb_update_document(handle, database_name: str, collection_name: str, filter: dict, new_values: dict, command: UpdateCommands = UpdateCommands.update_one, upsert: bool = True) \r\n\r\n        handle: Object of type unSkript Mongodb Connector.\r\n        database_name: Name of the MongoDB database.\r\n        collection_name: Name of the MongoDB collection.\r\n        filter: A query that matches the document to update.\r\n        new_values: New field values to apply to the matched documents.\r\n        command: Db command.\r\n        upsert: Allow creation of a new document, if one does not exist.\r\n\r\n## Lego Input\r\nThis Lego takes seven inputs: handle, database_name, collection_name, filter, new_values, command and upsert.\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_update_document/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_update_document/mongodb_update_document.json",
    "content": "{\n\"action_title\": \"MongoDB Update Document\",\n\"action_description\": \"MongoDB Update Document\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_update_document\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_INT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_COLLECTION\",\"CATEGORY_TYPE_MONGODB_DOCUMENT\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_update_document/mongodb_update_document.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nfrom pydantic import BaseModel, Field\nfrom unskript.enums.mongo_enums import UpdateCommands\nfrom pymongo.errors import AutoReconnect, ServerSelectionTimeoutError\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database.'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection.'\n    )\n    command: UpdateCommands = Field(\n        UpdateCommands.update_one,\n        title='Command',\n        description='''\n                         Db command\n                         for Eg. update_one, update_many\n                         Supported commands : update_one and update_many\n                    '''\n    )\n    filter: dict = Field(\n        title='Filter',\n        description='A query that matches the document to update. For eg: {\"foo\":\"bar\"}.'\n    )\n    new_values: dict = Field(\n        title='Update new fields to every document',\n        description='''\n                    The addition of fields apply in dictionary format.\n                    For eg: { \"$set\": { \"field\": \"value\" } }\n                    '''\n    )\n    upsert: bool = Field(\n        True,\n        title='Upsert',\n        description='Allow creation of a new document, if one does not exist.'\n    )\n\n\ndef mongodb_update_document_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    if output == 0:\n        print(\"No Documents Updated\")\n    elif output > 1:\n        print(f\"Updated {output} Documents\")\n    else:\n        print(\"Updated Given Document\")\n\n\ndef mongodb_update_document(\n        handle,\n        database_name: str,\n        collection_name: str,\n        filter: dict,\n        new_values: dict,\n        command: UpdateCommands = UpdateCommands.update_one,\n        upsert: bool = 
True) -> int:\n    \"\"\"mongodb_update_document Updates/creates an entry.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :type filter: Dict\n        :param filter: A query that matches the document to update.\n\n        :type new_values: Dict\n        :param new_values: New field values to apply to the matched documents.\n\n        :type command: UpdateCommands\n        :param command: Db command.\n\n        :type upsert: bool\n        :param upsert: Allow creation of a new document, if one does not exist.\n\n        :rtype: int count of updated documents.\n    \"\"\"\n    # Input param validation.\n\n    # Let's make sure the handle that is returned is not stale\n    # and can connect to the MongoDB server\n    try:\n        handle.server_info()\n    except (AutoReconnect, ServerSelectionTimeoutError) as e:\n        print(\"[UNSKRIPT]: Reconnection / Server Selection Timeout Error: \", str(e))\n        raise e\n    except Exception as e:\n        print(\"[UNSKRIPT]: Error Connecting: \", str(e))\n        raise e\n\n    try:\n        record = None\n        result = 0\n        db = handle[database_name]\n\n        if command == UpdateCommands.update_one:\n            record = db[collection_name].update_one(filter, new_values, upsert=upsert)\n        elif command == UpdateCommands.update_many:\n            record = db[collection_name].update_many(\n                filter, new_values, upsert=upsert)\n\n        if record is not None:\n            result = record.modified_count\n\n        return result\n    except Exception as e:\n        raise e\n"
  },
  {
    "path": "Mongo/legos/mongodb_write_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>MongoDB Upsert Query</h1>\n\n## Description\nMongoDB Upsert Query\n\n\n## Lego Details\n\n\tmongodb_write_query(handle, database_name: str, collection_name: str, match_query: dict, update: dict, upsert: bool = True)\n\n        handle: Object of type unSkript Mongodb Connector.\n        database_name: Name of the MongoDB database.\n        collection_name: Name of the MongoDB collection.\n        match_query: The selection criteria for the update in dictionary format.\n        update: The modifications to apply in dictionary format.\n        upsert: Allow creation of a new document, if one does not exist.\n\n## Lego Input\nThis Lego takes six inputs: handle, database_name, collection_name, match_query, update and upsert. \n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Mongo/legos/mongodb_write_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Mongo/legos/mongodb_write_query/mongodb_write_query.json",
    "content": "{\n\"action_title\": \"MongoDB Upsert Query\",\n\"action_description\": \"MongoDB Upsert Query\",\n\"action_type\": \"LEGO_TYPE_MONGODB\",\n\"action_entry_function\": \"mongodb_write_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MONGODB\", \"CATEGORY_TYPE_MONGODB_QUERY\"]\n}\n"
  },
  {
    "path": "Mongo/legos/mongodb_write_query/mongodb_write_query.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    database_name: str = Field(\n        title='Database Name',\n        description='Name of the MongoDB database.'\n    )\n    collection_name: str = Field(\n        title='Collection Name',\n        description='Name of the MongoDB collection.'\n    )\n    match_query: dict = Field(\n        title='Match Query',\n        description=('The selection criteria for the update in '\n                     'dictionary format. For eg: {\"foo\":\"bar\"}.')\n    )\n    update: dict = Field(\n        title='Update Document',\n        description='''The modifications to apply in dictionary format.\n        For eg: { \"$set\": { \"field\": \"value\" } }.'''\n    )\n    upsert: bool = Field(\n        True,\n        title='Upsert',\n        description='Allow creation of a new document, if one does not exist.'\n    )\n\n\ndef mongodb_write_query_printer(output):\n    if output is None:\n        return\n    print(\"\\n\\n\")\n    if \"error\" in output:\n        print(f'Error : {output[\"error\"]}')\n    print(\n        f'MatchedCount: {output[\"matched_count\"]}, ModifiedCount: {output[\"modified_count\"]}')\n\n\ndef mongodb_write_query(\n        handle,\n        database_name: str,\n        collection_name: str,\n        match_query: dict,\n        update: dict,\n        upsert: bool = True\n        ) -> Dict:\n    \"\"\"mongodb_write_query Updates/creates an entry.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type database_name: str\n        :param database_name: Name of the MongoDB database.\n\n        :type collection_name: str\n        :param collection_name: Name of the MongoDB collection.\n\n        :type match_query: Dict\n        :param match_query: The selection criteria for the update in dictionary format.\n\n        
:type update: Dict\n        :param update: The modifications to apply in dictionary format.\n\n        :type upsert: bool\n        :param upsert: Allow creation of a new document, if one does not exist.\n\n        :rtype: Dict of Updated/created entry object.\n    \"\"\"\n    # Input param validation.\n    result = {}\n    try:\n        res = handle[database_name][collection_name].update_many(\n            filter=match_query,\n            update=update,\n            upsert=upsert)\n        result[\"matched_count\"] = res.matched_count\n        result[\"modified_count\"] = res.modified_count\n    except Exception as e:\n        raise e\n    # this is an object\n    return result\n"
  },
  {
    "path": "MsSQL/README.md",
    "content": "\n# MsSQL Actions\n* [Get MS-SQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_get_handle/README.md): Get MS-SQL Handle\n* [MS-SQL Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_read_query/README.md): MS-SQL Read Query\n* [MS-SQL Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_write_query/README.md): MS-SQL Write Query\n"
  },
  {
    "path": "MsSQL/__init__.py",
    "content": ""
  },
  {
    "path": "MsSQL/legos/__init__.py",
    "content": ""
  },
  {
    "path": "MsSQL/legos/mssql_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get MS-SQL Handle</h1>\r\n\r\n## Description\r\nThis Lego returns an MS-SQL handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    mssql_get_handle(handle)\r\n\r\n        handle: Object of type unSkript MSSQL Connector\r\n        \r\n\r\n## Lego Input\r\nThis Lego takes only one input: handle. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "MsSQL/legos/mssql_get_handle/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "MsSQL/legos/mssql_get_handle/mssql_get_handle.json",
    "content": "{\n\"action_title\": \"Get MS-SQL Handle\",\n\"action_description\": \"Get MS-SQL Handle\",\n\"action_type\": \"LEGO_TYPE_MSSQL\",\n\"action_entry_function\": \"mssql_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_supports_iteration\": false\n}\n"
  },
  {
    "path": "MsSQL/legos/mssql_get_handle/mssql_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef mssql_get_handle(handle):\n    \"\"\"mssql_get_handle returns the handle of MSSQL.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :rtype: handle of MSSQL.\n      \"\"\"\n    return handle\n"
  },
  {
    "path": "MsSQL/legos/mssql_read_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MS-SQL Read Query</h1>\r\n\r\n## Description\r\nThis Lego runs an MS-SQL read query.\r\n\r\n\r\n## Lego Details\r\n\r\n    mssql_read_query(handle, query: str, params: Tuple = ())\r\n\r\n        handle: Object of type unSkript MSSQL Connector\r\n        query: MSSQL Query to execute.\r\n        params: Parameters to the query in Tuple format.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, query and params. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "MsSQL/legos/mssql_read_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "MsSQL/legos/mssql_read_query/mssql_read_query.json",
    "content": "{\n\"action_title\": \"MS-SQL Read Query\",\n\"action_description\": \"MS-SQL Read Query\",\n\"action_type\": \"LEGO_TYPE_MSSQL\",\n\"action_entry_function\": \"mssql_read_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MSSQL\", \"CATEGORY_TYPE_MSSQL_QUERY\"]\n}\n"
  },
  {
    "path": "MsSQL/legos/mssql_read_query/mssql_read_query.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import Tuple, List\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\nclass InputSchema(BaseModel):\n    query: str = Field(\n        title='Read Query',\n        description='Read query eg: select * from test;')\n    params: Tuple = Field(\n        None,\n        title='Parameters',\n        description='Parameters to the query in tuple format. For eg: (1, 2, \"abc\")')\n\n\ndef mssql_read_query_printer(output):\n    if output is None:\n        return\n    print('\\n')\n    print(tabulate(output))\n\n\ndef mssql_read_query(handle, query: str, params: Tuple = ()) -> List:\n    \"\"\"mssql_read_query Runs mssql query with the provided parameters.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type query: str\n          :param query: MSSQL read query.\n\n          :type params: Tuple\n          :param params: Parameters to the query in Tuple format.\n\n          :rtype: List result of the query.\n      \"\"\"\n    cur = handle.cursor()\n    if params:\n        cur.execute(query, params)\n    else:\n        cur.execute(query)\n\n    res = cur.fetchall()\n\n    cur.close()\n    handle.close()\n    return res\n"
  },
  {
    "path": "MsSQL/legos/mssql_write_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MS-SQL Write Query</h1>\r\n\r\n## Description\r\nThis Lego Writes MS-SQL Query.\r\n\r\n\r\n## Lego Details\r\n\r\n    mssql_write_query(handle, query: str, params: List = List[Any])\r\n\r\n        handle: Object of type unSkript MSSQL Connector\r\n        query: MS-SQL Query to execute.\r\n        params: Parameters to the query in list format.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, query and params. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "MsSQL/legos/mssql_write_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "MsSQL/legos/mssql_write_query/mssql_write_query.json",
    "content": "{\n\"action_title\": \"MS-SQL Write Query\",\n\"action_description\": \"MS-SQL Write Query\",\n\"action_type\": \"LEGO_TYPE_MSSQL\",\n\"action_entry_function\": \"mssql_write_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MSSQL\", \"CATEGORY_TYPE_MSSQL_QUERY\"]\n}\n"
  },
  {
    "path": "MsSQL/legos/mssql_write_query/mssql_write_query.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List, Any\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    query: str = Field(\n        title='Write Query',\n        description='Query to insert/update')\n    params: List = Field(\n        None,\n        title='Parameters',\n        description='Parameters to the query in list format. For eg: [1, 2, \"abc\"]')\n\n\ndef mssql_write_query(handle, query: str, params: List = List[Any]) -> None:\n    \"\"\"mssql_write_query Runs mssql query with the provided parameters.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type query: str\n          :param query: MSSQL insert/update query.\n\n          :type params: List\n          :param params: Parameters to the query in list format.\n\n          :rtype: None if success. Exception on error.\n      \"\"\"\n    cur = handle.cursor()\n    if params:\n        cur.execute(query, params)\n    else:\n        cur.execute(query)\n    handle.commit()\n    cur.close()\n    handle.close()\n"
  },
  {
    "path": "MySQL/README.md",
    "content": "\n# MySQL Actions\n* [Get MySQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_handle/README.md): Get MySQL Handle\n* [MySQl Get Long Running Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_long_run_queries/README.md): MySQl Get Long Running Queries\n* [MySQl Kill Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_kill_query/README.md): MySQl Kill Query\n* [Run MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_read_query/README.md): Run MySQL Query\n* [Create a MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_write_query/README.md): Create a MySQL Query\n"
  },
  {
    "path": "MySQL/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "MySQL/legos/__init__.py",
    "content": ""
  },
  {
    "path": "MySQL/legos/mysql_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get MySQL Handle</h1>\r\n\r\n## Description\r\nThis Lego Retuns MySQL Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    mysql_get_handle(handle)\r\n\r\n        handle: Object of type unSkript MYSQL Connector\r\n        \r\n\r\n## Lego Input\r\nThis Lego take only one inputs handle. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "MySQL/legos/mysql_get_handle/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "MySQL/legos/mysql_get_handle/mysql_get_handle.json",
    "content": "{\n\"action_title\": \"Get MySQL Handle\",\n\"action_description\": \"Get MySQL Handle\",\n\"action_type\": \"LEGO_TYPE_MYSQL\",\n\"action_entry_function\": \"mysql_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_supports_iteration\": false\n}\n"
  },
  {
    "path": "MySQL/legos/mysql_get_handle/mysql_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef mysql_get_handle(handle):\n    \"\"\"mysql_get_handle returns the mysql connection handle.\n\n       :rtype: mysql Handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "MySQL/legos/mysql_get_long_run_queries/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MySQl Get Long Running Queries</h1>\r\n\r\n## Description\r\nThis Lego gets long running query in MYSQL.\r\n\r\n\r\n## Lego Details\r\n\r\n    mysql_get_long_run_queries(handle, interval: int = 5)\r\n\r\n        handle: Object of type unSkript MYSQL Connector\r\n        interval: Integer value to filter queries which runs above interval time.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and interval. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "MySQL/legos/mysql_get_long_run_queries/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "MySQL/legos/mysql_get_long_run_queries/mysql_get_long_run_queries.json",
    "content": "{\r\n\"action_title\": \"MySQl Get Long Running Queries\",\r\n\"action_description\": \"MySQl Get Long Running Queries\",\r\n\"action_type\": \"LEGO_TYPE_MYSQL\",\r\n\"action_entry_function\": \"mysql_get_long_run_queries\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MYSQL\", \"CATEGORY_TYPE_MYSQL_QUERY\"]\r\n}\r\n"
  },
  {
    "path": "MySQL/legos/mysql_get_long_run_queries/mysql_get_long_run_queries.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nfrom typing import List\r\nfrom pydantic import BaseModel, Field\r\nfrom tabulate import tabulate\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    interval: int = Field(\r\n        default=5,\r\n        title='Interval(in seconds)',\r\n        description='Return queries running longer than this interval')\r\n\r\n\r\ndef mysql_read_query_printer(output):\r\n    if output is None:\r\n        return\r\n    print('\\n')\r\n    print(tabulate(output))\r\n\r\n\r\ndef mysql_get_long_run_queries(handle, interval: int = 5) -> List:\r\n    \"\"\"mysql_get_long_run_queries Runs returns information on all the MySQL long running queries.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned from task.validate(...).\r\n\r\n        :type interval: int\r\n        :param interval: Integer value to filter queries which runs above interval time.\r\n\r\n        :rtype: Result of the query in the List form.\r\n      \"\"\"\r\n    # Get long running queries\r\n    try:\r\n        query = (\"SELECT PROCESSLIST_ID, PROCESSLIST_INFO FROM performance_schema.threads \"\r\n                 f\"WHERE PROCESSLIST_COMMAND = 'Query' AND PROCESSLIST_TIME >= {interval};\")\r\n\r\n        cur = handle.cursor()\r\n        cur.execute(query)\r\n\r\n        res = cur.fetchall()\r\n\r\n        cur.close()\r\n        handle.close()\r\n        return res\r\n\r\n    except Exception as e:\r\n        return {\"Error\": e}\r\n"
  },
  {
    "path": "MySQL/legos/mysql_kill_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>MySQl Kill Query</h1>\r\n\r\n## Description\r\nThis Lego kills the query process.\r\n\r\n\r\n## Lego Details\r\n\r\n    mysql_kill_query(handle, processId: int)\r\n\r\n        handle: Object of type unSkript MYSQL Connector\r\n        processId: Process ID as integer that needs to be killed.\r\n        \r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and processId. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "MySQL/legos/mysql_kill_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "MySQL/legos/mysql_kill_query/mysql_kill_query.json",
    "content": "{\r\n\"action_title\": \"MySQl Kill Query\",\r\n\"action_description\": \"MySQl Kill Query\",\r\n\"action_type\": \"LEGO_TYPE_MYSQL\",\r\n\"action_entry_function\": \"mysql_kill_query\",\r\n\"action_needs_credential\": true,\r\n\"action_supports_poll\": true,\r\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n\"action_supports_iteration\": true,\r\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MYSQL\", \"CATEGORY_TYPE_MYSQL_QUERY\"]\r\n}\r\n"
  },
  {
    "path": "MySQL/legos/mysql_kill_query/mysql_kill_query.py",
    "content": "##\r\n##  Copyright (c) 2021 unSkript, Inc\r\n##  All rights reserved.\r\n##\r\nimport pprint\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass InputSchema(BaseModel):\r\n    processId: int = Field(\r\n        title='An processId',\r\n        description='Kill the process based on processId'\r\n    )\r\n\r\ndef mysql_kill_query_printer(output):\r\n    if output is None:\r\n        return\r\n    print(\"\\n\\n\")\r\n    pprint.pprint(output)\r\n\r\ndef mysql_kill_query(handle, processId: int) -> str:\r\n    \"\"\"mysql_kill_query can kill queries (read process) that are running more or\r\n    equal than given interval.\r\n\r\n        :type handle: object\r\n        :param handle: Object returned by task.validate(...).\r\n        \r\n        :type processId: int\r\n        :param processId: Process ID as integer that needs to be killed\r\n\r\n        :rtype: Result of the kill %d process for the given processId in a str form.\r\n    \"\"\"\r\n    # Kill long running queries using processId\r\n    try:\r\n        query = f\"kill {processId};\"\r\n        cur = handle.cursor()\r\n        cur.execute(query)\r\n        res = cur.fetchall()\r\n        cur.close()\r\n        handle.close()\r\n        return res\r\n    except Exception as e:\r\n        return {\"Error\": e}\r\n"
  },
  {
    "path": "MySQL/legos/mysql_read_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Run MySQL Query</h1>\r\n\r\n## Description\r\nThis Lego Runs MySQL Read Query.\r\n\r\n\r\n## Lego Details\r\n\r\n    mysql_read_query(handle, query: str, params: List = List[Any])\r\n\r\n        handle: Object of type unSkript MYSQL Connector\r\n        query: MYSQL Query to execute.\r\n        params: Parameters to the query in list format.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, query and params. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "MySQL/legos/mysql_read_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "MySQL/legos/mysql_read_query/mysql_read_query.json",
    "content": "{\n\"action_title\": \"Run MySQL Query\",\n\"action_description\": \"Run MySQL Query\",\n\"action_type\": \"LEGO_TYPE_MYSQL\",\n\"action_entry_function\": \"mysql_read_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MYSQL\", \"CATEGORY_TYPE_MYSQL_QUERY\"]\n}\n"
  },
  {
    "path": "MySQL/legos/mysql_read_query/mysql_read_query.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List, Any\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    query: str = Field(\n        title='Read Query',\n        description='MySQL get query.')\n    params: List = Field(\n        None,\n        title='Parameters',\n        description='Parameters to the query in list format. For eg: [1, 2, \"abc\"]')\n\n\ndef mysql_read_query_printer(output):\n    if output is None:\n        return\n    print('\\n')\n    pprint.pprint(tabulate(output))\n\n\ndef mysql_read_query(handle, query: str, params: List = List[Any]) -> List:\n    \"\"\"mysql_read_query Runs mysql query with the provided parameters.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type query: str\n          :param query: MySQL get query.\n\n          :type params: List\n          :param params: Parameters to the query in list format.\n\n          :rtype: List of  the results of the query.\n      \"\"\"\n    # Input param validation.\n\n    cur = handle.cursor()\n    cur.execute(query, params)\n\n    res = cur.fetchall()\n\n    cur.close()\n    handle.close()\n    return res\n"
  },
  {
    "path": "MySQL/legos/mysql_write_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Create a MySQL Query</h1>\r\n\r\n## Description\r\nThis Lego Creates a MySQL write Query.\r\n\r\n\r\n## Lego Details\r\n\r\n    mysql_write_query(handle, query: str, params: List = List[Any])\r\n\r\n        handle: Object of type unSkript MYSQL Connector\r\n        query: MYSQL Insert/Update  Query to execute.\r\n        params: Parameters to the query in list format.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, query and params. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n```\r\nNone if success. Exception on error.\r\n\r\n```\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "MySQL/legos/mysql_write_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "MySQL/legos/mysql_write_query/mysql_write_query.json",
    "content": "{\n\"action_title\": \"Create a MySQL Query\",\n\"action_description\": \"Create a MySQL Query\",\n\"action_type\": \"LEGO_TYPE_MYSQL\",\n\"action_entry_function\": \"mysql_write_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_MYSQL\", \"CATEGORY_TYPE_MYSQL_QUERY\"]\n}\n"
  },
  {
    "path": "MySQL/legos/mysql_write_query/mysql_write_query.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List, Any\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    query: str = Field(\n        title='Write Query',\n        description='MySQL insert/update query.')\n    params: List = Field(\n        None,\n        title='Parameters',\n        description='Parameters to the query in list format. For eg: [1, 2, \"abc\"]')\n\n\ndef mysql_write_query(handle, query: str, params: List = List[Any]) -> None:\n    \"\"\"mysql_write_query Runs mysql query with the provided parameters.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type query: str\n          :param query: MySQL insert/update query.\n\n          :type params: List\n          :param params: Parameters to the query in list format.\n\n          :rtype: None if success. Exception on error.\n      \"\"\"\n    # Input param validation.\n\n    cur = handle.cursor()\n    cur.execute(query, params)\n    handle.commit()\n    cur.close()\n    handle.close()\n"
  },
  {
    "path": "Netbox/README.md",
    "content": "\n# Netbox Actions\n* [Netbox Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_get_handle/README.md): Get Netbox Handle\n* [Netbox List Devices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_list_devices/README.md): List all Netbox devices\n"
  },
  {
    "path": "Netbox/__init__.py",
    "content": ""
  },
  {
    "path": "Netbox/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Netbox/legos/netbox_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Netbox Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Get Netbox Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    netbox_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Netbox Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Netbox/legos/netbox_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Netbox/legos/netbox_get_handle/netbox_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Netbox Get Handle\",\r\n    \"action_description\": \"Get Netbox Handle\",\r\n    \"action_type\": \"LEGO_TYPE_NETBOX\",\r\n    \"action_entry_function\": \"netbox_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\"\r\n}\r\n    "
  },
  {
    "path": "Netbox/legos/netbox_get_handle/netbox_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef netbox_get_handle(handle):\n    \"\"\"netbox_get_handle returns the nomad handle.\n\n          :rtype: Nomad handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Netbox/legos/netbox_list_devices/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>List Netbox Devices</h2>\n\n<br>\n\n## Description\nThis Lego Lists all Netbox devices\n\n\n## Lego Details\n\n    netbox_list_devices(handle: object)\n\n        handle: Object of type unSkript Netbox Connector\n\n## Lego Input\nThis Lego take one input handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Netbox/legos/netbox_list_devices/__init__.py",
    "content": ""
  },
  {
    "path": "Netbox/legos/netbox_list_devices/netbox_list_devices.json",
    "content": "{\n    \"action_title\": \"Netbox List Devices\",\n    \"action_description\": \"List all Netbox devices\",\n    \"action_type\": \"LEGO_TYPE_NETBOX\",\n    \"action_entry_function\": \"netbox_list_devices\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": false,\n    \"action_supports_iteration\": false,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_NETBOX\"]\n\n}\n    "
  },
  {
    "path": "Netbox/legos/netbox_list_devices/netbox_list_devices.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\ndef netbox_list_devices_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef netbox_list_devices(handle):\n    \"\"\"netbox_list_devices returns the Netbox devices.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n          :rtype: List of netbox devices.\n    \"\"\"\n    result = handle.dcim.devices.all()\n    return result\n"
  },
  {
    "path": "Nomad/README.md",
    "content": "\n# Nomad Actions\n* [Nomad Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_get_handle/README.md): Get Nomad Handle\n* [Nomad List Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_list_jobs/README.md): List all Nomad jobs\n"
  },
  {
    "path": "Nomad/__init__.py",
    "content": ""
  },
  {
    "path": "Nomad/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Nomad/legos/nomad_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Nomad Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Get Nomad Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    nomad_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Nomad Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Nomad/legos/nomad_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Nomad/legos/nomad_get_handle/nomad_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Nomad Get Handle\",\r\n    \"action_description\": \"Get Nomad Handle\",\r\n    \"action_type\": \"LEGO_TYPE_NOMAD\",\r\n    \"action_entry_function\": \"nomad_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false\r\n}\r\n    "
  },
  {
    "path": "Nomad/legos/nomad_get_handle/nomad_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef nomad_get_handle(handle):\n    \"\"\"nomad_get_handle returns the nomad handle.\n\n          :rtype: Nomad handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Nomad/legos/nomad_list_jobs/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>List Nomad Jobs</h2>\n\n<br>\n\n## Description\nThis Lego Lists Nomad Jobs.\n\n\n## Lego Details\n\n    nomad_list_jobs(handle: object)\n\n        handle: Object of type unSkript Nomad Connector\n\n## Lego Input\nThis Lego take one input handle.\n\n## Lego Output\nHere is a sample output.\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Nomad/legos/nomad_list_jobs/__init__.py",
    "content": ""
  },
  {
    "path": "Nomad/legos/nomad_list_jobs/nomad_list_jobs.json",
    "content": "{\n    \"action_title\": \"Nomad List Jobs\",\n    \"action_description\": \"List all Nomad jobs\",\n    \"action_type\": \"LEGO_TYPE_NOMAD\",\n    \"action_entry_function\": \"nomad_list_jobs\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": false,\n    \"action_supports_iteration\": false,\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_NOMAD\"]\n}\n    "
  },
  {
    "path": "Nomad/legos/nomad_list_jobs/nomad_list_jobs.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\ndef nomad_list_jobs_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef nomad_list_jobs(handle):\n    \"\"\"nomad_list_jobs returns the nomad jobs.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n          :rtype: List of Nomad jobs.\n    \"\"\"\n    result = handle.jobs.get_jobs()\n    return result\n"
  },
  {
    "path": "Opsgenie/README.md",
    "content": "\n# Opsgenie Actions\n* [Get Opsgenie Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Opsgenie/legos/opsgenie_get_handle/README.md): Get Opsgenie Handle\n"
  },
  {
    "path": "Opsgenie/__init__.py",
    "content": ""
  },
  {
    "path": "Opsgenie/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Opsgenie/legos/opsgenie_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get Opsgenie Handle</h1>\n\n## Description\nGet Opsgenie Handle\n\n## Lego Details\n\topsgenie_get_handle(handle)\n\t\thandle: Object of type unSkript Opsgenie Connector.\n\n## Lego Input\nThis Lego takes inputs handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Opsgenie/legos/opsgenie_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Opsgenie/legos/opsgenie_get_handle/opsgenie_get_handle.json",
    "content": "{\n  \"action_title\": \"Get Opsgenie Handle\",\n  \"action_description\": \"Get Opsgenie Handle\",\n  \"action_type\": \"LEGO_TYPE_OPSGENIE\",\n  \"action_entry_function\": \"opsgenie_get_handle\",\n  \"action_needs_credential\": true,\n  \"action_supports_iteration\": false,\n  \"action_supports_poll\": false\n}"
  },
  {
    "path": "Opsgenie/legos/opsgenie_get_handle/opsgenie_get_handle.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\n\ndef opsgenie_get_handle_printer(output):\n    if output is None:\n        return\n    print(output)\n\ndef opsgenie_get_handle(handle):\n    \"\"\"opsgenie_get_handle returns the nomad handle.\n\n          :rtype: Opsgenie handle.\n    \"\"\"\n    return handle\n\n\n\n"
  },
  {
    "path": "Pingdom/README.md",
    "content": "\n# Pingdom Actions\n* [Create new maintenance window.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_create_new_maintenance_window/README.md): Create new maintenance window.\n* [Perform Pingdom single check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_do_single_check/README.md): Perform Pingdom Single Check\n* [Get Pingdom Analysis Results for a specified Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_analysis/README.md): Get Pingdom Analysis Results for a specified Check\n* [Get list of checkIDs given a hostname](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids/README.md): Get list of checkIDs given a hostname. If no hostname provided, it lists all checkIDs.\n* [Get list of checkIDs given a name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids_by_name/README.md): Get list of checkIDS given a name. If name is not given, it gives all checkIDs. 
If transaction is set to true, it returns transaction checkIDs\n* [Get Pingdom Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_handle/README.md): Get Pingdom Handle\n* [Pingdom Get Maintenance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_maintenance/README.md): Pingdom Get Maintenance\n* [Get Pingdom Results](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_results/README.md): Get Pingdom Results\n* [Get Pingdom TMS Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_tmscheck/README.md): Get Pingdom TMS Check\n* [Pingdom lego to pause/unpause checkids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_pause_or_unpause_checkids/README.md): Pingdom lego to pause/unpause checkids\n* [Perform Pingdom Traceroute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_traceroute/README.md): Perform Pingdom Traceroute\n"
  },
  {
    "path": "Pingdom/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#"
  },
  {
    "path": "Pingdom/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Pingdom/legos/pingdom_create_new_maintenance_window/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Create new maintenance window.</h1>\r\n\r\n## Description\r\nThis Lego Creates a new maintenance window.\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_create_new_maintenance_window(handle, description: str, duration: int,tmsids=None, uptimeids=None)\r\n\r\n        handle: Object of type unSkript PINGDOM Connector\r\n        description: Description for the maintenance window.\r\n        duration: Duration of window in minutes.\r\n        tmsids: Transaction checks Ids.\r\n        uptimeids: Uptime checks Ids to assign to the maintenance window.\r\n\r\n## Lego Input\r\nThis Lego take five inputs handle, description, duration, tmsids  and uptimeids. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_create_new_maintenance_window/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_create_new_maintenance_window/pingdom_create_new_maintenance_window.json",
    "content": "{\n\"action_title\": \"Create new maintenance window.\",\n\"action_description\": \"Create new maintenance window.\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_create_new_maintenance_window\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_INT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PINGDOM\"]\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_create_new_maintenance_window/pingdom_create_new_maintenance_window.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, List\nfrom datetime import datetime as dt, timedelta\nfrom pydantic import BaseModel, Field\nfrom unskript.thirdparty.pingdom import swagger_client as pingdom_client\n\npp = pprint.PrettyPrinter(indent=4)\n\nclass InputSchema(BaseModel):\n    description: str = Field(\n        title='Description',\n        description='Description for the maintenance window.')\n    duration: int = Field(\n        title='duration',\n        description='Select a duration in minutes eg: 60.')\n    tmsids: Optional[List[int]] = Field(\n        default=None,\n        title='Transaction checks Ids',\n        description=('Transaction checks Ids to assign to the maintenance '\n                     'window eg: [120824,1208233].')\n        )\n    uptimeids: Optional[List[int]] = Field(\n        default=None,\n        title='Uptime Ids',\n        description=('Uptime checks Ids to assign to the maintenance window eg: '\n                     '[11061762,11061787].')\n                     )\n\n\ndef pingdom_create_new_maintenance_window_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pp.pprint(\n        (f'Successfully created maintenance window {output}',\n        f'starting time {dt.now().strftime(\"%H:%M:%S\")}'))\n\n\ndef pingdom_create_new_maintenance_window(handle,\n                                          description: str,\n                                          duration: int,\n                                          tmsids=None,\n                                          uptimeids=None) -> int:\n    \"\"\"pingdom_create_new_maintenance_window .\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type description: str\n        :param description: Description for the maintenance window.\n\n        :type duration: int\n        :param duration: Duration in minutes 
eg: 60.\n        \n        :type tmsids: list\n        :param tmsids: Transaction checks Ids.\n\n        :type uptimeids: list\n        :param uptimeids: Uptime checks Ids to assign to the maintenance window.\n\n        :rtype:success message with window id.\n    \"\"\"\n    if uptimeids is None:\n        uptimeids = []\n    if tmsids is None:\n        tmsids = []\n    obj = {}\n    obj['description'] = description\n    start_time = dt.now()\n    to_time = (start_time + timedelta(minutes=duration)).strftime(\"%s\")\n\n    obj['from'] = start_time.strftime(\"%s\")\n    obj['to'] = to_time\n\n    if tmsids is not None:\n        obj['tmsids'] = tmsids\n\n    if uptimeids is not None:\n        obj['uptimeids'] = uptimeids\n\n    maintenance = pingdom_client.MaintenanceApi(api_client=handle)\n    result = maintenance.maintenance_post_with_http_info(_return_http_data_only=True, body=obj)\n    return result.maintenance.id\n"
  },
  {
    "path": "Pingdom/legos/pingdom_do_single_check/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Perform Pingdom single check</h1>\r\n\r\n## Description\r\nThis Lego Performs Pingdom single check.\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_do_single_check(handle, host: str, type: str = 'http')\r\n\r\n        handle: Object of type unSkript PINGDOM Connector\r\n        host: Target host.\r\n        type: Target host type.\r\n\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, host, and type. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_do_single_check/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_do_single_check/pingdom_do_single_check.json",
    "content": "{\n\"action_title\": \"Perform Pingdom single check\",\n\"action_description\": \"Perform Pingdom Single Check\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_do_single_check\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PINGDOM\"]\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_do_single_check/pingdom_do_single_check.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.thirdparty.pingdom import swagger_client as pingdom_client\n\npp = pprint.PrettyPrinter(indent=4)\n\nclass InputSchema(BaseModel):\n    host: str = Field(\n        title='Host',\n        description='Target Host')\n    type: Optional[str] = Field(\n        'http',\n        title=\"Type\",\n        description='Target host type. Possible values: http, smtp, pop3, imap')\n\n\ndef pingdom_do_single_check_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef pingdom_do_single_check(handle, host: str, type: str = 'http') -> Dict:\n    \"\"\"pingdom_do_single_check performs a single test using a\n    specified Pingdom probe against a specified target\n        \n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type host: str\n        :param host: Target Host.\n\n        :type type: str\n        :param type: Target host type.\n\n        :rtype: Returns the results for a given single check.\n    \"\"\"\n    # Input param validation.\n    params = {}\n    params['host'] = host\n    params['type'] = type\n    check = pingdom_client.SingleApi(api_client=handle)\n    result = check.single_get(_return_http_data_only=True, host=host, type=type)\n    return result\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_analysis/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Pingdom Analysis Results for a specified Check</h1>\r\n\r\n## Description\r\nThis Lego Returns Pingdom Analysis Results for a specified Check.\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_get_analysis(handle, checkid: int, from_timestamp: int = 0, limit: int = 100, offset: int = 0, to_timestamp: int = 0)\r\n\r\n        handle: Object of type unSkript PINGDOM Connector\r\n        checkid: Pingdom Check ID.\r\n        limit: Number of results to get.\r\n        from_timestamp: Start Time Timestamp in the UNIX Format date.\r\n        offset:Offset of returned checks.\r\n        to_timestamp: End Time Timestamp in the UNIX Format date.\r\n\r\n\r\n## Lego Input\r\nThis Lego take six inputs handle, checkid, limit,from_timestamp, to_timestamp and offset. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_get_analysis/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_get_analysis/pingdom_get_analysis.json",
    "content": "{\n\"action_title\": \"Get Pingdom Analysis Results for a specified Check\",\n\"action_description\": \"Get Pingdom Analysis Results for a specified Check\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_get_analysis\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PINGDOM\"]\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_analysis/pingdom_get_analysis.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.thirdparty.pingdom import swagger_client as pingdom_client\n\npp = pprint.PrettyPrinter(indent=4)\n\nclass InputSchema(BaseModel):\n    checkid: int = Field(\n        title='Check ID',\n        description='Pingdom Check ID')\n    from_timestamp: Optional[int] = Field(\n        0,\n        title=\"Start Time\",\n        description='Timestamp in the UNIX Format date +%s')\n    limit: Optional[int] = Field(\n        100,\n        title=\"Number of Results\",\n        description=\"Number of Results to Return\")\n    offset: Optional[int] = Field(\n        0,\n        title=\"Offset\",\n        description='Offset for Listing (requires limit to be specified)')\n    to_timestamp: Optional[int] = Field(\n        0,\n        title=\"End Time\",\n        description='Timestamp in the UNIX Format date +%s')\n\n\n\ndef pingdom_get_analysis_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef pingdom_get_analysis(\n        handle,\n        checkid: int,\n        from_timestamp: int = 0,\n        limit: int = 100,\n        offset: int = 0,\n        to_timestamp: int = 0\n        ) -> Dict:\n    \"\"\"pingdom_get_analysis returns the list of latest root cause analysis\n    results for a specified check.\n        \n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type checkid: int\n        :param checkid: Pingdom Check ID.\n\n        :type limit: int\n        :param limit: Number of returned checks.\n\n        :type  offset int\n        :param offset: Offset of returned checks.\n\n        :type from_timestamp: int\n        :param from_timestamp: Start Time Timestamp in the UNIX Format date\n\n        :type to_timestamp: int \n        :param to_timestamp: End Time Timestamp in the UNIX Format 
date\n\n        :rtype: Returns the list of latest RCA results for a given check.\n    \"\"\"\n\n    check = pingdom_client.AnalysisApi(api_client=handle)\n    result = check.analysis_checkid_get_with_http_info(\n        _return_http_data_only=True,\n        checkid=checkid,\n        _from=from_timestamp if from_timestamp != 0 else None,\n        to=to_timestamp if to_timestamp != 0 else None,\n        limit=limit,\n        offset=offset\n        )\n    return result\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_checkids/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get list of checkIDs given a hostname</h1>\r\n\r\n## Description\r\nThis Lego Gets the list of checkIDs given a hostname..\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_get_checkids(handle, host_name: str = \"\")\r\n\r\n        handle: Object of type unSkript PINGDOM Connector\r\n        host_name: Name of the target host.\r\n\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and host_name. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_get_checkids/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_get_checkids/pingdom_get_checkids.json",
    "content": "{\n\"action_title\": \"Get list of checkIDs given a hostname\",\n\"action_description\": \"Get list of checkIDs given a hostname. If no hostname provided, it lists all checkIDs.\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_get_checkids\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PINGDOM\"]\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_checkids/pingdom_get_checkids.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\nfrom unskript.thirdparty.pingdom import swagger_client as pingdom_client\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    host_name: Optional[str] = Field(\n        default=None,\n        title='Hostname',\n        description='Name of the target host.')\n\n\ndef pingdom_get_checkids_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef pingdom_get_checkids(handle, host_name: str = \"\") -> List[int]:\n    \"\"\"pingdom_get_checkids.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type host_name: str\n        :param host_name: Name of the target host.\n\n        :rtype: list of checkids.\n    \"\"\"\n    check = pingdom_client.ChecksApi(api_client=handle)\n    result = check.checks_get_with_http_info(_return_http_data_only=True)\n    res = result.checks\n    if host_name:\n        res = [check.id for check in res if check.hostname == host_name]\n    else:\n        res = [check.id for check in res]\n\n    return res\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_checkids_by_name/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get list of checkIDs given a name</h1>\r\n\r\n## Description\r\nThis Lego Returns list of checkIDs given a name.\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_get_checkids_by_name(handle, checkNames=None, transaction: bool = False)\r\n\r\n        handle: Object of type unSkript PINGDOM Connector\r\n        checkNames: Name of the checks.\r\n        transaction: Set to true if the checks are transaction checks. Default is false. \r\n\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, checkNames  and transaction. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_get_checkids_by_name/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_get_checkids_by_name/pingdom_get_checkids_by_name.json",
    "content": "{\n\"action_title\": \"Get list of checkIDs given a name\",\n\"action_description\": \"Get list of checkIDS given a name. If name is not given, it gives all checkIDs. If transaction is set to true, it returns transaction checkIDs\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_get_checkids_by_name\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PINGDOM\"]\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_checkids_by_name/pingdom_get_checkids_by_name.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\nfrom unskript.thirdparty.pingdom import swagger_client as pingdom_client\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    checkNames: Optional[List[str]] = Field(\n        default=None,\n        title='Check names',\n        description='''Name of the checks, . Eg: [\"Google\", \"app\"]''')\n    transaction: Optional[bool] = Field(\n        default=False,\n        title='Transaction',\n        description='''Set to true if the checks are transaction checks. Default is false''')\n\n\ndef pingdom_get_checkids_by_name_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef pingdom_get_checkids_by_name(handle, checkNames=None, transaction: bool = False) -> List[int]:\n    \"\"\"pingdom_get_checkids_by_name .\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type checkNames: str\n        :param checkNames: Name of the checks.\n\n        :type transaction: bool\n        :param transaction: Set to true if the checks are transaction checks. 
Default is false.\n\n        :rtype: list of checknames.\n    \"\"\"\n    if checkNames is None:\n        checkNames = []\n    if transaction:\n        check = pingdom_client.TMSChecksApi(api_client=handle)\n        result = check.get_all_checks_with_http_info(_return_http_data_only=True)\n        res = result.checks\n        if checkNames:\n            res = [check.id for check in res if check.name in checkNames]\n        else:\n            res = [check.id for check in res]\n\n    else:\n        check = pingdom_client.ChecksApi(api_client=handle)\n        result = check.checks_get_with_http_info(_return_http_data_only=True)\n        res = result.checks\n        if checkNames:\n            res = [check.id for check in res if check.name in checkNames]\n        else:\n            res = [check.id for check in res]\n\n    return res\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Pingdom Handle</h1>\r\n\r\n## Description\r\nThis Lego Returns Pingdom Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_get_handle(handle)\r\n\r\n        handle: Object of type unSkript PINGDOM Connector\r\n        \r\n\r\n\r\n## Lego Input\r\nThis Lego take only one input. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_get_handle/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_get_handle/pingdom_get_handle.json",
    "content": "{\n\"action_title\": \"Get Pingdom Handle\",\n\"action_description\": \"Get Pingdom Handle\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_supports_iteration\": false\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_handle/pingdom_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef pingdom_get_handle(handle):\n    \"\"\"pingdom_get_handle returns the Pingdom handle.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :rtype: Pingdom Handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_maintenance/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Pingdom Get Maintenance</h1>\r\n\r\n## Description\r\nThis Lego returns maintenace Pingdom Maintenance windows.\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_get_maintenance(handle, limit: int = 0, offset: int = 0, order: str = 'asc',orderby: str = 'description')\r\n\r\n        handle: Object of type unSkript PINGDOM Connector\r\n        limit: Number of results to get.\r\n        offset: Offset of returned checks..\r\n        order: Display ascending/descending order.\r\n        orderby: Order by the specific property.\r\n\r\n\r\n## Lego Input\r\nThis Lego take five inputs handle, limit, offset, order  and orderby. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_get_maintenance/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_get_maintenance/pingdom_get_maintenance.json",
    "content": "{\n\"action_title\": \"Pingdom Get Maintenance\",\n\"action_description\": \"Pingdom Get Maintenance\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_get_maintenance\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PINGDOM\"]\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_maintenance/pingdom_get_maintenance.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.thirdparty.pingdom import swagger_client as pingdom_client\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    limit: Optional[int] = Field(\n        title='Number of Results',\n        description='Number of Results to return')\n    offset: Optional[int] = Field(\n        title=\"Offset\",\n        description='Offset of the list')\n    order: Optional[str] = Field(\n        'asc',\n        title=\"Order\",\n        description=(\"Display ascending/descending order. Possible values: \"\n        \"asc, desc. NOTE: This needs to specify Order By field\")\n        )\n    orderby: Optional[str] = Field(\n        'description',\n        title=\"Order By\",\n        description=\"Order by the specific property. Eg: description\"\n    )\n\n\ndef pingdom_get_maintenance_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef pingdom_get_maintenance(handle, limit: int = 0, offset: int = 0, order: str = 'asc',\n                            orderby: str = 'description') -> Dict:\n    \"\"\"pingdom_get_maintenance Returns a list of Maintenance Windows\n        \n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type limit: int\n        :param limit: Number of returned checks.\n\n        :type  offset int\n        :param offset: Offset of returned checks.\n\n        :type order: str\n        :param order: Display ascending/descending order.\n\n        :type orderby: str\n        :param orderby:Order by the specific property.\n\n\n        :rtype: Returns the list of maintenance windows\n    \"\"\"\n    check = pingdom_client.MaintenanceApi(api_client=handle)\n    result = check.maintenance_get_with_http_info(\n        _return_http_data_only=True,\n        order=order,\n     
   orderby=orderby,\n        limit=limit if limit is not None else None,\n        offset=offset if offset is not None else None\n        )\n    return result\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_results/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Pingdom Results</h1>\r\n\r\n## Description\r\nThis Lego returns Pingdom Results.\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_get_results(handle, checkid: int, status: str = 'down', limit: int = 10)\r\n\r\n        handle: Object of type unSkript PINGDOM Connector\r\n        checkid: Pingdom Check ID.\r\n        status: Filter to only show specified results.Comma seperated string. example: down,unconfirmed,unknown\r\n        limit: Number of results to get.\r\n\r\n\r\n## Lego Input\r\nThis Lego take five inputs handle, checkid, status ,down and limit. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_get_results/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_get_results/pingdom_get_results.json",
    "content": "{\n\"action_title\": \"Get Pingdom Results\",\n\"action_description\": \"Get Pingdom Results\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_get_results\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PINGDOM\"]\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_results/pingdom_get_results.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.thirdparty.pingdom import swagger_client as pingdom_client\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    checkid: int = Field(\n        title='Check ID',\n        description='Pingdom Check ID')\n    status: Optional[str] = Field(\n        'down',\n        title=\"Status\",\n        description=(\"Filter to only show specified results.Comma \"\n                     \"seperated string. example: down,unconfirmed,unknown\")\n                     )\n    limit: Optional[int] = Field(\n        10,\n        title=\"Limit\",\n        description=\"Number of results to get\")\n\n\ndef pingdom_get_results_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef pingdom_get_results(handle, checkid: int, status: str = 'down', limit: int = 10) -> Dict:\n    \"\"\"pingdom_get_result returns a lit of raw test results for a specified check\n        \n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type checkid: int\n        :param checkid: Pingdom Check ID.\n\n        :type status: str\n        :param status: Filter to only show specified results.Comma seperated string.\n\n        :type limit: int\n        :param limit: Number of returned checks.\n\n        :rtype: Returns the raw results for a given checkID.\n    \"\"\"\n    check = pingdom_client.ResultsApi(api_client=handle)\n    result = check.results_checkid_get_with_http_info(\n        _return_http_data_only=True,\n        checkid=checkid,\n        status=status,\n        limit=limit\n        )\n    return result\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_tmscheck/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Pingdom TMS Check</h1>\r\n\r\n## Description\r\nThis Lego returns results of Pingdom TMS Check.\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_get_tmscheck(handle, extended_tags: bool = False, limit: int = 100, offset: int = 0, tags: str = \"\",type: str = \"\")\r\n\r\n        handle: Object of type unSkript PINGDOM Connector\r\n        extended_tags: Include Extended Tags or Not.\r\n        limit: Number of returned checks.\r\n        offset: Offset of returned checks.\r\n        tags: List of tags seperated by comma.\r\n        type:Filter Type: Possible values: script, recording.\r\n\r\n\r\n## Lego Input\r\nThis Lego take six inputs handle, extended_tags, limit, offset, tags, and type. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_get_tmscheck/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_get_tmscheck/pingdom_get_tmscheck.json",
    "content": "{\n\"action_title\": \"Get Pingdom TMS Check\",\n\"action_description\": \"Get Pingdom TMS Check\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_get_tmscheck\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PINGDOM\"]\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_get_tmscheck/pingdom_get_tmscheck.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.thirdparty.pingdom import swagger_client as pingdom_client\n\n\npp = pprint.PrettyPrinter(indent=4)\n\nclass InputSchema(BaseModel):\n    extended_tags: Optional[bool] = Field(\n        False,\n        title='Include Extended Tags',\n        description='Include extended tags')\n    limit: Optional[str] = Field(\n        '100',\n        title=\"Number of Checks\",\n        description='Number of returned checks')\n    offset: Optional[str] = Field(\n        '0',\n        title=\"Offset\",\n        description=\"Offset of returned checks\")\n    tags: Optional[str] = Field(\n        title=\"Tags\",\n        description='List of tags seperated by comma eg: nginx')\n    type: Optional[str] = Field(\n        title=\"Type\",\n        description='Filter Type: Possible values: script, recording')\n\n\ndef pingdom_get_tmschecke_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef pingdom_get_tmscheck(\n        handle,\n        extended_tags: bool = False,\n        limit: int = 100,\n        offset: int = 0,\n        tags: str = \"\",\n        type: str = \"\"\n        ) -> Dict:\n    \"\"\"pingdom_get_tmscheck returns the results of Transaction check\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type extended_tags: bool\n        :param extended_tags: Include Extended Tags or Not.\n\n        :type limit: int\n        :param limit: Number of returned checks.\n\n        :type  offset int\n        :param offset: Offset of returned checks.\n\n        :type tags: List\n        :param tags:List of tags seperated by comma\n\n        :type type: str\n        :param type: Filter Type: Possible values: script, recording.\n\n        :rtype: Returns the list of result of all transaction checks\n  
  \"\"\"\n\n    check = pingdom_client.TMSChecksApi(api_client=handle)\n    result = check.get_all_checks_with_http_info(\n        _return_http_data_only=True,\n        extended_tags=extended_tags,\n        limit=limit,\n        offset=offset,\n        tags=tags,\n        type=type\n        )\n    return result\n"
  },
  {
    "path": "Pingdom/legos/pingdom_pause_or_unpause_checkids/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Pingdom lego to pause/unpause checkids</h1>\r\n\r\n## Description\r\nThis Lego pauses or unpauses Pingdom check IDs.\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_pause_or_unpause_checkids(handle, pause: bool, resolution: int, checkIds=None)\r\n\r\n        handle: Object of type unSkript PINGDOM Connector\r\n        pause: True to pause the check IDs and False to unpause them.\r\n        resolution: Interval time to test website (in minutes). eg: 1 5 15 30 60.\r\n        checkIds: List of check IDs to be modified.\r\n\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, pause, resolution and checkIds. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_pause_or_unpause_checkids/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_pause_or_unpause_checkids/pingdom_pause_or_unpause_checkids.json",
    "content": "{\n\"action_title\": \"Pingdom lego to pause/unpause checkids\",\n\"action_description\": \"Pingdom lego to pause/unpause checkids\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_pause_or_unpause_checkids\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PINGDOM\"]\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_pause_or_unpause_checkids/pingdom_pause_or_unpause_checkids.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List, Optional, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.thirdparty.pingdom import swagger_client as pingdom_client\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    checkIds: Optional[List[str]] = Field(\n        title='checkIds',\n        description='List of check Ids to be modified. eg: [\\\"1643815305\\\",\\\"1643815323\\\"].')\n    pause: bool = Field(\n        title=\"pause\",\n        description='True to pause the check Ids and False to unpause them.')\n    resolution: int = Field(\n        title=\"resolution\",\n        description='Interval time to test website (in minutes). eg: 1 5 15 30 60.')\n\n\ndef pingdom_pause_or_unpause_checkids_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef pingdom_pause_or_unpause_checkids(handle, pause: bool, resolution: int, checkIds=None) -> Dict:\n    \"\"\"pingdom_pause_or_unpause_checkids pauses or unpauses the given check Ids and returns the result.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type pause: bool\n        :param pause: True to pause the check Ids and False to unpause them.\n\n        :type resolution: int\n        :param resolution: Interval time to test website (in minutes). eg: 1 5 15 30 60.\n\n        :type checkIds: List\n        :param checkIds: List of check Ids to be modified.\n\n        :rtype: Result of the pause/unpause operation\n    \"\"\"\n    if checkIds is None:\n        checkIds = []\n    data = {\"paused\": pause, \"resolution\": resolution}\n    if checkIds:\n        data[\"checkids\"] = \",\".join(checkIds)\n    check = pingdom_client.ChecksApi(api_client=handle)\n    result = check.checks_put_with_http_info(body=data, _return_http_data_only=True)\n\n    return result\n"
  },
  {
    "path": "Pingdom/legos/pingdom_traceroute/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Perform Pingdom Traceroute</h1>\r\n\r\n## Description\r\nThis Lego performs a Pingdom traceroute to a target host.\r\n\r\n\r\n## Lego Details\r\n\r\n    pingdom_traceroute(handle, host: str, probeid = None)\r\n\r\n        handle: Object of type unSkript PINGDOM Connector.\r\n        host: Target Host eg: google.com.\r\n        probeid: Probe Identifier. \r\n\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, host and probeid. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Pingdom/legos/pingdom_traceroute/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Pingdom/legos/pingdom_traceroute/pingdom_traceroute.json",
    "content": "{\n\"action_title\": \"Perform Pingdom Traceroute\",\n\"action_description\": \"Perform Pingdom Traceroute\",\n\"action_type\": \"LEGO_TYPE_PINGDOM\",\n\"action_entry_function\": \"pingdom_traceroute\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PINGDOM\"]\n}\n"
  },
  {
    "path": "Pingdom/legos/pingdom_traceroute/pingdom_traceroute.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Dict\nfrom pydantic import BaseModel, Field\nfrom unskript.thirdparty.pingdom import swagger_client as pingdom_client\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    host: str = Field(\n        title='Host',\n        description='Target Host eg: google.com')\n    probeid: Optional[int] = Field(\n        title=\"Probe ID\",\n        description='Probe Identifier')\n\n\ndef pingdom_traceroute_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef pingdom_traceroute(handle, host: str, probeid=None) -> Dict:\n    \"\"\"pingdom_traceroute performs traceroute for a given host and returns the result.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type host: str\n        :param host: Target Host eg: google.com.\n\n        :type probeid: int\n        :param probeid: Probe Identifier.\n\n        :rtype: Result of the traceroute for the given host.\n    \"\"\"\n\n    traceroute = pingdom_client.TracerouteApi(api_client=handle)\n    result = traceroute.traceroute_get_with_http_info(\n        _return_http_data_only=True,\n        host=host,\n        probeid=probeid\n        )\n\n    return result\n"
  },
  {
    "path": "Postgresql/Postgresql_Display_Long_Running.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"133bee4c\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Runbook Overview\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Runbook Overview\"\n   },\n   \"source\": [\n    \"<center><img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"unSkript.com\\\" width=\\\"100\\\" height=\\\"100\\\">\\n\",\n    \"<h1 id=\\\"unSkript-Runbooks\\\">unSkript Runbooks<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#unSkript-Runbooks\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<div class=\\\"alert alert-block alert-success\\\">\\n\",\n    \"<h3 id=\\\"Objective\\\"><strong>Objective</strong><a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Objective\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<strong>To get PostgreSQL long-running queries using unSkript actions.</strong></div>\\n\",\n    \"</center><center>\\n\",\n    \"<h2 id=\\\"Display-PostgreSQL-Long-Running-Queries\\\">Display PostgreSQL Long Running Queries<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Display-PostgreSQL-Long-Running-Queries\\\" target=\\\"_self\\\">&para;</a></h2>\\n\",\n    \"</center>\\n\",\n    \"<h1 id=\\\"Steps-Overview\\\">Steps Overview<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Steps-Overview\\\" target=\\\"_self\\\">&para;</a></h1>\\n\",\n    \"<p>1. Long Running PostgreSQL Queries<br>2. 
Post Slack Message</p>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"086ace3b\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Long-Running-PostgreSQL-Queries\\\">Long Running PostgreSQL Queries<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Long-Running-PostgreSQL-Queries\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Long Running PostgreSQL Queries</strong> action. This action finds out all the long-running queries on the PostgreSQL database.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>interval</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>postgresql_queries</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"id\": \"c8565b85-30c3-43f7-9f4b-b8a3bd271861\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionCategories\": [],\n    \"actionIsCheck\": false,\n    \"actionNeedsCredential\": true,\n    \"actionNextHop\": [],\n    \"actionNextHopParameterMapping\": {},\n    \"actionOutputType\": \"\",\n    \"actionRequiredLinesInCode\": [],\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_modified\": false,\n    \"action_uuid\": \"ef9f0f3dd00ef0972895ea006375f1a4496dca1b7266bc60fdfbd8ab4feee6c3\",\n    \"collapsed\": true,\n    \"continueOnError\": false,\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Long Running PostgreSQL Queries\",\n    \"execution_data\": {\n     \"last_date_success_run_cell\": 
\"2023-02-15T18:50:41.391Z\"\n    },\n    \"id\": 332,\n    \"index\": 332,\n    \"inputData\": [\n     {\n      \"interval\": {\n       \"constant\": false,\n       \"value\": \"int(interval)\"\n      }\n     }\n    ],\n    \"inputschema\": [\n     {\n      \"properties\": {\n       \"interval\": {\n        \"default\": 5,\n        \"description\": \"Return queries running longer than interval\",\n        \"title\": \"Interval (in seconds)\",\n        \"type\": \"integer\"\n       }\n      },\n      \"title\": \"postgresql_long_running_queries\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_POSTGRESQL\",\n    \"name\": \"Long Running PostgreSQL Queries\",\n    \"nouns\": [],\n    \"orderProperties\": [\n     \"interval\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"postgresql_queries\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"probeEnabled\": false,\n    \"tags\": [\n     \"postgresql_long_running_queries\"\n    ],\n    \"title\": \"Long Running PostgreSQL Queries\",\n    \"trusted\": true,\n    \"verbs\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"import pprint \\n\",\n    \"\\n\",\n    \"from typing import List, Any, Optional, Tuple\\n\",\n    \"from tabulate import tabulate\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def postgresql_long_running_queries_printer(output):\\n\",\n    \"    if output is None:\\n\",\n    \"        return\\n\",\n    \"\\n\",\n    \"    pprint.pprint(output)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def postgresql_long_running_queries(handle, interval: int = 5) -> Tuple:\\n\",\n    \"    \\\"\\\"\\\"postgresql_long_running_queries Runs postgres 
query with the provided parameters.\\n\",\n    \"\\n\",\n    \"          :type handle: object\\n\",\n    \"          :param handle: Object returned from task.validate(...).\\n\",\n    \"\\n\",\n    \"          :type interval: int\\n\",\n    \"          :param interval: Interval (in seconds).\\n\",\n    \"\\n\",\n    \"          :rtype: All the results of the query.\\n\",\n    \"      \\\"\\\"\\\"\\n\",\n    \"    # Input param validation.\\n\",\n    \"\\n\",\n    \"    query = \\\"SELECT pid, user, pg_stat_activity.query_start, now() - \\\" \\\\\\n\",\n    \"        \\\"pg_stat_activity.query_start AS query_time, query, state \\\" \\\\\\n\",\n    \"        \\\" FROM pg_stat_activity WHERE state = 'active' AND (now() - \\\" \\\\\\n\",\n    \"        \\\"pg_stat_activity.query_start) > interval '%d seconds';\\\" % interval\\n\",\n    \"\\n\",\n    \"    cur = handle.cursor()\\n\",\n    \"    cur.execute(query)\\n\",\n    \"    output = []\\n\",\n    \"    res = cur.fetchall()\\n\",\n    \"    data = []\\n\",\n    \"    for records in res:\\n\",\n    \"        result = {\\n\",\n    \"            \\\"pid\\\": records[0],\\n\",\n    \"            \\\"user\\\": records[1],\\n\",\n    \"            \\\"query_start\\\": records[2],\\n\",\n    \"            \\\"query_time\\\": records[3],\\n\",\n    \"            \\\"query\\\": records[4],\\n\",\n    \"            \\\"state\\\": records[5]\\n\",\n    \"        }\\n\",\n    \"        output.append(result)\\n\",\n    \"        data.append([records[0], records[4], records[5], records[3]])\\n\",\n    \"\\n\",\n    \"    if len(res) > 0:\\n\",\n    \"        headers = [\\\"pid\\\", \\\"query\\\", \\\"state\\\", \\\"duration\\\"]\\n\",\n    \"        print(\\\"\\\\n\\\")\\n\",\n    \"        output = tabulate(data, headers=headers, tablefmt=\\\"grid\\\")\\n\",\n    \"\\n\",\n    \"    handle.commit()\\n\",\n    \"    cur.close()\\n\",\n    \"    handle.close()\\n\",\n    \"    if len(output) != 0:\\n\",\n    \"        return 
(False, output)\\n\",\n    \"    else:\\n\",\n    \"        return (True, None)\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"interval\\\": \\\"int(interval)\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"postgresql_queries\\\")\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.execute(postgresql_long_running_queries, lego_printer=postgresql_long_running_queries_printer, hdl=hdl, args=args)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"id\": \"5b8a6162-5475-422d-98c6-7d756956ed8f\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-1 Extension\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-1 Extension\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Modify-Output\\\">Modify Output</h3>\\n\",\n    \"<p>In this action, we modify the output from step 1 and return a list of dictionary items for all the long-running queries on the PostgreSQL database.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable: </strong>postgresql_queries</p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 12,\n   \"id\": \"e8b0d7b7-03a5-456c-971a-a638b2435eeb\",\n   \"metadata\": {\n    \"collapsed\": true,\n    \"customAction\": true,\n    \"execution_data\": {\n     \"last_date_success_run_cell\": \"2023-02-15T18:58:06.161Z\"\n    },\n    \"jupyter\": {\n     \"outputs_hidden\": true,\n     \"source_hidden\": true\n    },\n    \"name\": \"Modify Output\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Modify Output\",\n    \"trusted\": true,\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"sql_queries = []\\n\",\n    \"if 
postgresql_queries[0] == False:\\n\",\n    \"    for queries in postgresql_queries[1]:\\n\",\n    \"        sql_queries.append(queries)\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"1256bbdf\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Step-2\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Step-2\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Post-Slack-Message\\\">Post Slack Message<a class=\\\"jp-InternalAnchorLink\\\" href=\\\"#Post-Slack-Message\\\" target=\\\"_self\\\">&para;</a></h3>\\n\",\n    \"<p>Here we will use unSkript <strong>Post Slack Message</strong> action. This action posts the message to the slack channel about the long-running queries on the PostgreSQL database.</p>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Input parameters:</strong> <code>channel,&nbsp;message</code></p>\\n\",\n    \"</blockquote>\\n\",\n    \"<blockquote>\\n\",\n    \"<p><strong>Output variable:</strong> <code>message_status</code></p>\\n\",\n    \"</blockquote>\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"id\": \"84b2379b-c11c-42a8-8575-8b75efe52574\",\n   \"metadata\": {\n    \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n    \"actionBashCommand\": false,\n    \"actionNeedsCredential\": true,\n    \"actionSupportsIteration\": true,\n    \"actionSupportsPoll\": true,\n    \"action_uuid\": \"6a87f83ab0ecfeecb9c98d084e2b1066c26fa64be5b4928d5573a5d60299802d\",\n    \"createTime\": \"1970-01-01T00:00:00Z\",\n    \"credentialsJson\": {},\n    \"currentVersion\": \"0.1.0\",\n    \"description\": \"Post Slack Message\",\n    \"id\": 44,\n    \"index\": 44,\n    \"inputData\": [\n     {\n      \"channel\": {\n       \"constant\": false,\n       \"value\": \"channel\"\n      },\n      \"message\": {\n       \"constant\": false,\n       \"value\": \"f\\\"Long Running Queries : {sql_queries}\\\"\"\n      }\n     }\n    ],\n    
\"inputschema\": [\n     {\n      \"properties\": {\n       \"channel\": {\n        \"description\": \"Name of the slack channel where the message to be posted\",\n        \"title\": \"Channel\",\n        \"type\": \"string\"\n       },\n       \"message\": {\n        \"description\": \"Message to be sent\",\n        \"title\": \"Message\",\n        \"type\": \"string\"\n       }\n      },\n      \"required\": [\n       \"channel\",\n       \"message\"\n      ],\n      \"title\": \"slack_post_message\",\n      \"type\": \"object\"\n     }\n    ],\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"legotype\": \"LEGO_TYPE_SLACK\",\n    \"name\": \"Post Slack Message\",\n    \"nouns\": [\n     \"slack\",\n     \"message\"\n    ],\n    \"orderProperties\": [\n     \"channel\",\n     \"message\"\n    ],\n    \"output\": {\n     \"type\": \"\"\n    },\n    \"outputParams\": {\n     \"output_name\": \"message_status\",\n     \"output_name_enabled\": true\n    },\n    \"printOutput\": true,\n    \"tags\": [\n     \"slack_post_message\"\n    ],\n    \"title\": \"Post Slack Message\",\n    \"verbs\": [\n     \"post\"\n    ]\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##\\n\",\n    \"# Copyright (c) 2021 unSkript, Inc\\n\",\n    \"# All rights reserved.\\n\",\n    \"##\\n\",\n    \"\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from pydantic import BaseModel, Field\\n\",\n    \"from slack_sdk import WebClient\\n\",\n    \"from slack_sdk.errors import SlackApiError\\n\",\n    \"\\n\",\n    \"pp = pprint.PrettyPrinter(indent=2)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"from beartype import beartype\\n\",\n    \"def legoPrinter(func):\\n\",\n    \"    def Printer(*args, **kwargs):\\n\",\n    \"        output = func(*args, **kwargs)\\n\",\n    \"        if output:\\n\",\n    \"            channel = kwargs[\\\"channel\\\"]\\n\",\n    \"            pp.pprint(print(f\\\"Message sent to Slack channel {channel}\\\"))\\n\",\n    \"        return 
output\\n\",\n    \"    return Printer\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"@legoPrinter\\n\",\n    \"@beartype\\n\",\n    \"def slack_post_message(\\n\",\n    \"        handle: WebClient,\\n\",\n    \"        channel: str,\\n\",\n    \"        message: str) -> bool:\\n\",\n    \"\\n\",\n    \"    try:\\n\",\n    \"        response = handle.chat_postMessage(\\n\",\n    \"            channel=channel,\\n\",\n    \"            text=message)\\n\",\n    \"        return True\\n\",\n    \"    except SlackApiError as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\\\")\\n\",\n    \"        return False\\n\",\n    \"    except Exception as e:\\n\",\n    \"        print(\\\"\\\\n\\\\n\\\")\\n\",\n    \"        pp.pprint(\\n\",\n    \"            f\\\"Failed sending message to slack channel {channel}, Error: {e.__str__()}\\\")\\n\",\n    \"        return False\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"task = Task(Workflow())\\n\",\n    \"task.configure(printOutput=True)\\n\",\n    \"task.configure(inputParamsJson='''{\\n\",\n    \"    \\\"channel\\\": \\\"channel\\\",\\n\",\n    \"    \\\"message\\\": \\\"f\\\\\\\\\\\"Long Running Queries : {sql_queries}\\\\\\\\\\\"\\\"\\n\",\n    \"    }''')\\n\",\n    \"task.configure(outputName=\\\"message_status\\\")\\n\",\n    \"\\n\",\n    \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n    \"if err is None:\\n\",\n    \"    task.output = task.execute(slack_post_message, hdl=hdl, args=args)\\n\",\n    \"    if task.output_name != None:\\n\",\n    \"        globals().update({task.output_name: task.output[0]})\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f45b5e96\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Conclusion\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": 
\"Conclusion\"\n   },\n   \"source\": [\n    \"<h3 id=\\\"Conclusion\\\">Conclusion</h3>\\n\",\n    \"<p>In this Runbook, we demonstrated the use of unSkript's PostgreSQL legos to collect and display the long-running queries from a database and send a message to a Slack channel. To view the full platform capabilities of unSkript please visit <a href=\\\"https://us.app.unskript.io\\\">https://us.app.unskript.io</a></p>\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"execution_data\": {\n   \"runbook_name\": \"Display long running queries in a PostgreSQL database\",\n   \"parameters\": [\n    \"interval\",\n    \"channel\"\n   ]\n  },\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 891)\",\n   \"name\": \"python_kubernetes\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"pygments_lexer\": \"ipython3\"\n  },\n  \"parameterSchema\": {\n   \"properties\": {\n    \"interval\": {\n     \"default\": \"5\",\n     \"description\": \"Time interval (in seconds) to check for long queries\",\n     \"title\": \"interval\",\n     \"type\": \"number\"\n    },\n    \"channel\": {\n     \"description\": \"Slack channel to post to\",\n     \"title\": \"channel\",\n     \"type\": \"string\"\n    }\n   },\n   \"required\": [],\n   \"title\": \"Schema\",\n   \"type\": \"object\"\n  },\n  \"parameterValues\": null\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}"
  },
  {
    "path": "Postgresql/Postgresql_Display_Long_Running.json",
    "content": "{\n  \"name\": \"Display long running queries in a PostgreSQL database\",\n  \"description\": \"This runbook collects and displays the long running queries from a database and sends a message to the specified slack channel. Poorly optimized queries and excessive connections can cause problems in PostgreSQL, impacting upstream services.\", \n  \"uuid\": \"adcf88e8035c594e599fc9a33c28c9099187211f6daccb9d3ab4e5d17993086f\",\n  \"icon\": \"CONNECTOR_TYPE_POSTGRESQL\",\n  \"categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\" ],\n  \"connector_types\": [ \"CONNECTOR_TYPE_POSTGRESQL\" ],\n  \"version\": \"1.0.0\"\n}\n\n"
  },
  {
    "path": "Postgresql/README.md",
    "content": "# Postgresql RunBooks\n* [Display long running queries in a PostgreSQL database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Postgresql_Display_Long_Running.ipynb): This runbook collects and displays the long running queries from a database and sends a message to the specified slack channel. Poorly optimized queries and excessive connections can cause problems in PostgreSQL, impacting upstream services.\n\n# Postgresql Actions\n* [PostgreSQL Calculate Bloat](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgres_calculate_bloat/README.md): This Lego calculates bloat for tables in Postgres\n* [Calling a PostgreSQL function](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_call_function/README.md): Calling a PostgreSQL function\n* [PostgreSQL Check Unused Indexes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_check_unused_indexes/README.md): Find unused Indexes in a database in PostgreSQL\n* [Create Tables in PostgreSQL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_create_table/README.md): Create Tables in PostgreSQL\n* [Delete PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_delete_query/README.md): Delete PostgreSQL Query\n* [PostgreSQL Get Cache Hit Ratio](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_cache_hit_ratio/README.md): The result of the action will show the total number of blocks read from disk, the total number of blocks found in the buffer cache, and the cache hit ratio as a percentage. 
For example, if the cache hit ratio is 99%, it means that 99% of all data requests were served from the buffer cache, and only 1% required reading data from disk.\n* [Get PostgreSQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_handle/README.md): Get PostgreSQL Handle\n* [PostgreSQL Get Index Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_index_usage/README.md): The action result shows the data for table name, the percentage of times an index was used for that table, and the number of live rows in the table.\n* [PostgreSQL get service status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_server_status/README.md): This action checks the status of each database.\n* [Execute commands in a PostgreSQL transaction.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_handling_transaction/README.md): Given a set of PostgreSQL commands, this action runs them inside a transaction.\n* [Long Running PostgreSQL Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_long_running_queries/README.md): Long Running PostgreSQL Queries\n* [Read PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_read_query/README.md): Read PostgreSQL Query\n* [Show tables in PostgreSQL Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_show_tables/README.md): Show the tables existing in a PostgreSQL Database. 
We execute the following query to fetch this information SELECT * FROM pg_catalog.pg_tables WHERE schemaname != 'pg_catalog' AND schemaname != 'information_schema';\n* [Call PostgreSQL Stored Procedure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_stored_procedures/README.md): Call PostgreSQL Stored Procedure\n* [Write PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_write_query/README.md): Write PostgreSQL Query\n"
  },
  {
    "path": "Postgresql/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Postgresql/legos/postgres_calculate_bloat/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>PostgreSQL Calculate Bloat</h1>\r\n\r\n## Description\r\nThis Lego calculates bloat for tables in Postgres.\r\n\r\n## Lego Details\r\n\r\n    postgres_calculate_bloat(handle)\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgres_calculate_bloat/__init__.py",
    "content": ""
  },
  {
    "path": "Postgresql/legos/postgres_calculate_bloat/postgres_calculate_bloat.json",
    "content": "{\n    \"action_title\": \"PostgreSQL Calculate Bloat\",\n    \"action_description\": \"This Lego calculates bloat for tables in Postgres\",\n    \"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n    \"action_entry_function\": \"postgres_calculate_bloat\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_iteration\": true,\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_TROUBLESHOOTING\"]\n    }\n    "
  },
  {
    "path": "Postgresql/legos/postgres_calculate_bloat/postgres_calculate_bloat.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom tabulate import tabulate\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef postgres_calculate_bloat_printer(output):\n    if output is None:\n        return\n    data = []\n    output_rows = []\n    for records in output:\n        result = {\n            \"database_name\": records[0],\n            \"schema_name\": records[1],\n            \"table_name\": records[2],\n            \"can_estimate\": records[3],\n            \"live_rows_count\": records[4],\n            \"pct_bloat\": records[5],\n            \"mb_bloat\": records[6],\n            \"table_mb\": records[7]\n        }\n        output_rows.append(result)\n        data.append([records[2], records[5], records[6]])\n    if len(output) > 0:\n        headers = [\"Table\", \"% Bloat\", \"Size(MB)\"]\n        output_rows = tabulate(data, headers=headers, tablefmt=\"grid\")\n    pprint.pprint(output_rows)\n\n\ndef postgres_calculate_bloat(handle) -> List:\n    \"\"\"postgres_calculate_bloat returns percentage Bloat and Size Bloat of tables in a database\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :rtype: Percentage Bloat and Size Bloat of tables in a database\n      \"\"\"\n    query = \"WITH constants AS ( SELECT current_setting('block_size')::numeric AS bs, \"\\\n    \"23 AS hdr, 8 AS ma), no_stats AS ( SELECT table_schema, table_name, n_live_tup:\"\\\n    \":numeric as est_rows,pg_table_size(relid)::numeric as table_size FROM \"\\\n    \"information_schema.columns JOIN pg_stat_user_tables as psut \"\\\n    \"ON table_schema = psut.schemaname \"\\\n    \"AND table_name = psut.relname \"\\\n    \"LEFT OUTER JOIN pg_stats \"\\\n    \"ON table_schema = pg_stats.schemaname \"\\\n    \"AND table_name = pg_stats.tablename \"\\\n    \"AND column_name = attname \"\\\n    \"WHERE 
attname IS NULL \"\\\n    \"AND table_schema NOT IN ('pg_catalog', 'information_schema') \"\\\n    \"GROUP BY table_schema, table_name, relid, n_live_tup \"\\\n    \"), \"\\\n    \"null_headers AS ( \"\\\n    \"SELECT \"\\\n    \"hdr+1+(sum(case when null_frac <> 0 THEN 1 else 0 END)/8) as nullhdr, \"\\\n    \"SUM((1-null_frac)*avg_width) as datawidth, \"\\\n    \"MAX(null_frac) as maxfracsum, \"\\\n    \"schemaname, \"\\\n    \"tablename, \"\\\n    \"hdr, ma, bs \"\\\n    \"FROM pg_stats CROSS JOIN constants \"\\\n    \"LEFT OUTER JOIN no_stats \"\\\n    \"ON schemaname = no_stats.table_schema \"\\\n    \"AND tablename = no_stats.table_name \"\\\n    \"WHERE schemaname NOT IN ('pg_catalog', 'information_schema') \"\\\n    \"AND no_stats.table_name IS NULL \"\\\n    \"AND EXISTS ( SELECT 1 \"\\\n    \"FROM information_schema.columns \"\\\n    \"WHERE schemaname = columns.table_schema \"\\\n    \"AND tablename = columns.table_name ) \"\\\n    \"GROUP BY schemaname, tablename, hdr, ma, bs \"\\\n    \"), \"\\\n    \"data_headers AS ( \"\\\n    \"SELECT \"\\\n    \"ma, bs, hdr, schemaname, tablename, \"\\\n    \"(datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr, \"\\\n    \"(maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2 \"\\\n    \"FROM null_headers \"\\\n    \"), \"\\\n    \"table_estimates AS ( \"\\\n    \"SELECT schemaname, tablename, bs, \"\\\n    \"reltuples::numeric as est_rows, relpages * bs as table_bytes, \"\\\n    \"CEIL((reltuples* \"\\\n    \"(datahdr + nullhdr2 + 4 + ma - \"\\\n    \"(CASE WHEN datahdr%ma=0 \"\\\n    \"THEN ma ELSE datahdr%ma END) \"\\\n    \")/(bs-20))) * bs AS expected_bytes, \"\\\n    \"reltoastrelid \"\\\n    \"FROM data_headers \"\\\n    \"JOIN pg_class ON tablename = relname \"\\\n    \"JOIN pg_namespace ON relnamespace = pg_namespace.oid \"\\\n    \"AND schemaname = nspname \"\\\n    \"WHERE pg_class.relkind = 'r' \"\\\n    \"), \"\\\n    
\"estimates_with_toast AS ( \"\\\n    \"SELECT schemaname, tablename, \"\\\n    \"TRUE as can_estimate, \"\\\n    \"est_rows, \"\\\n    \"table_bytes + ( coalesce(toast.relpages, 0) * bs ) as table_bytes, \"\\\n    \"expected_bytes + ( ceil( coalesce(toast.reltuples, 0) / 4 ) * bs ) as expected_bytes \"\\\n    \"FROM table_estimates LEFT OUTER JOIN pg_class as toast \"\\\n    \"ON table_estimates.reltoastrelid = toast.oid \"\\\n    \"AND toast.relkind = 't' \"\\\n    \"), \"\\\n    \"table_estimates_plus AS ( \"\\\n    \"SELECT current_database() as databasename, \"\\\n    \"schemaname, tablename, can_estimate, \"\\\n    \"est_rows, \"\\\n    \"CASE WHEN table_bytes > 0 \"\\\n    \"THEN table_bytes::NUMERIC \"\\\n    \"ELSE NULL::NUMERIC END \"\\\n    \"AS table_bytes, \"\\\n    \"CASE WHEN expected_bytes > 0 \"\\\n    \"THEN expected_bytes::NUMERIC \"\\\n    \"ELSE NULL::NUMERIC END \"\\\n    \"AS expected_bytes, \"\\\n    \"CASE WHEN expected_bytes > 0 AND table_bytes > 0 \"\\\n    \"AND expected_bytes <= table_bytes \"\\\n    \"THEN (table_bytes - expected_bytes)::NUMERIC \"\\\n    \"ELSE 0::NUMERIC END AS bloat_bytes \"\\\n    \"FROM estimates_with_toast \"\\\n    \"UNION ALL \"\\\n    \"SELECT current_database() as databasename, \"\\\n    \"table_schema, table_name, FALSE, \"\\\n    \"est_rows, table_size, \"\\\n    \"NULL::NUMERIC, NULL::NUMERIC \"\\\n    \"FROM no_stats \"\\\n    \"), \"\\\n    \"bloat_data AS ( \"\\\n    \"select current_database() as databasename, \"\\\n    \"schemaname, tablename, can_estimate, \"\\\n    \"table_bytes, round(table_bytes/(1024^2)::NUMERIC,3) as table_mb, \"\\\n    \"expected_bytes, round(expected_bytes/(1024^2)::NUMERIC,3) as expected_mb, \"\\\n    \"round(bloat_bytes*100/table_bytes) as pct_bloat, \"\\\n    \"round(bloat_bytes/(1024::NUMERIC^2),2) as mb_bloat, \"\\\n    \"table_bytes, expected_bytes, est_rows \"\\\n    \"FROM table_estimates_plus \"\\\n    \") \"\\\n    \"SELECT databasename, schemaname, tablename, \"\\\n 
   \"can_estimate, \"\\\n    \"est_rows, \"\\\n    \"pct_bloat, mb_bloat, \"\\\n    \"table_mb \"\\\n    \"FROM bloat_data \"\\\n    \"ORDER BY pct_bloat DESC; \"\n\n    cur = handle.cursor()\n    cur.execute(query)\n    result = cur.fetchall()\n    handle.commit()\n    cur.close()\n    handle.close()\n    return result\n"
  },
  {
    "path": "Postgresql/legos/postgresql_call_function/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Calling a PostgreSQL function</h1>\r\n\r\n## Description\r\nThis Lego calls a PostgreSQL function.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_call_function(handle, function_name: str, params: List = None)\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n        function_name: Name of the function to call.\r\n        params: Parameters to the function in list format.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, function_name, and params.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_call_function/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_call_function/postgresql_call_function.json",
    "content": "{\n\"action_title\": \"Calling a PostgreSQL function\",\n\"action_description\": \"Calling a PostgreSQL function\",\n\"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n\"action_entry_function\": \"postgresql_call_function\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_POSTGRESQL_QUERY\"]\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_call_function/postgresql_call_function.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel, Field\nimport psycopg2\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    function_name: str = Field(\n        title='Function Name',\n        description='Calling a PostgreSQL function')\n    params: list = Field(\n        None,\n        title='Parameters',\n        description='Parameters to the function in list format. For eg: [1, 2]')\n\n\ndef postgresql_call_function_printer(output):\n    print(\"\\n\")\n    if len(output) > 0:\n        print(\"\\n\")\n        print(tabulate(output, tablefmt=\"grid\"))\n    return output\n\n\ndef postgresql_call_function(handle, function_name: str, params: List = None) -> List:\n    \"\"\"postgresql_call_function Runs a postgresql function with the provided parameters.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type function_name: str\n        :param function_name: Function Name.\n\n        :type params: List\n        :param params: Parameters to the Function in list format.\n\n        :rtype: List result of the function.\n    \"\"\"\n    data = []\n    try:\n        cur = handle.cursor()\n        cur.callproc(function_name, params or [])\n        # process the result set\n        res = cur.fetchall()\n\n        # collect each result row as a list of column values\n        for records in res:\n            data.append(list(records))\n        # Close communication with the PostgreSQL database\n        cur.close()\n\n    except (Exception, psycopg2.DatabaseError) as error:\n        print(f\"Error : {error}\")\n    finally:\n        if handle:\n            handle.close()\n    return data\n"
  },
  {
    "path": "Postgresql/legos/postgresql_check_active_connections/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>PostgreSQL check active connections</h1>\n\n## Description\nChecks if the percentage of active connections to the database exceeds the provided threshold.\n\n## Lego Details\n\tpostgresql_check_active_connections(handle, threshold_percentage: int = 85)\n\t\thandle: Object of type unSkript POSTGRESQL Connector.\n\t\tthreshold_percentage: Optional, percentage of connections to consider as the threshold.\n\n\n## Lego Input\nThis Lego takes two inputs: handle and threshold_percentage.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_check_active_connections/__init__.py",
    "content": ""
  },
  {
    "path": "Postgresql/legos/postgresql_check_active_connections/postgresql_check_active_connections.json",
    "content": "{\n  \"action_title\": \"PostgreSQL check active connections\",\n  \"action_description\": \"Checks if the percentage of active connections to the database exceeds the provided threshold.\",\n  \"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n  \"action_entry_function\": \"postgresql_check_active_connections\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Postgresql/legos/postgresql_check_active_connections/postgresql_check_active_connections.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    threshold_percentage: Optional[int] = Field(\n        85,\n        description='Percentage of max_connections to consider as the threshold.',\n        title='Threshold percentage of connections',\n    )\n\n\ndef postgresql_check_active_connections_printer(output):\n    status, data = output\n\n    if not status and data:\n        headers = [\"Active Connections\", \"Threshold (connections)\"]\n        table_data = [[record[\"active_connections\"], record[\"threshold\"]] for record in data]\n        print(tabulate(table_data, headers=headers, tablefmt=\"grid\"))\n    else:\n        print(\"Active connections are below the threshold.\")\n\n\ndef postgresql_check_active_connections(handle, threshold_percentage: int = 85) -> Tuple:\n    \"\"\"\n    postgresql_check_active_connections checks if the percentage of active connections to the database\n    exceeds the provided threshold.\n\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n\n    :type threshold_percentage: int\n    :param threshold_percentage: Optional, percentage of connections to consider as the threshold.\n\n    :rtype: Status, Result of active connections if any in tabular format\n    \"\"\"\n    # Query to fetch the count of active connections\n    query_active_connections = \"SELECT COUNT(*) FROM pg_stat_activity WHERE state = 'active';\"\n    # Query to fetch the total pool count\n    query_pool_count = \"SELECT setting::int FROM pg_settings WHERE name='max_connections';\"\n\n    result = []\n    try:\n        cur = handle.cursor()\n\n        # Fetch the total pool count\n        cur.execute(query_pool_count)\n        total_pool_count = cur.fetchone()[0]\n\n        # Calculate the threshold from the total pool count\n        threshold = int((total_pool_count * threshold_percentage)/100)\n\n        # Fetch the count of active connections\n        cur.execute(query_active_connections)\n        active_connections = cur.fetchone()[0]\n\n        handle.commit()\n        cur.close()\n        handle.close()\n\n        if active_connections > threshold:\n            data = {\n                \"active_connections\": active_connections,\n                \"threshold\": threshold,\n            }\n            result.append(data)\n\n    except Exception as e:\n        print(\"Error occurred:\", e)\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "Postgresql/legos/postgresql_check_locks/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png)\n<h1>PostgreSQL check for locks in database</h1>\n\n## Description\nChecks for any ungranted (waiting) locks in the Postgres database.\n\n## Lego Details\n\tpostgresql_check_locks(handle)\n\t\thandle: Object of type unSkript POSTGRESQL Connector.\n\n\n## Lego Input\nThis Lego takes one input, handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_check_locks/__init__.py",
    "content": ""
  },
  {
    "path": "Postgresql/legos/postgresql_check_locks/postgresql_check_locks.json",
    "content": "{\n  \"action_title\": \"PostgreSQL check for locks in database\",\n  \"action_description\": \"Checks for any locks in the postgres database.\",\n  \"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n  \"action_entry_function\": \"postgresql_check_locks\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Postgresql/legos/postgresql_check_locks/postgresql_check_locks.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Tuple\nfrom pydantic import BaseModel\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef postgresql_check_locks_printer(output):\n    status, data = output\n\n    if not status and data:\n        headers = [\"PID\", \"Relation\", \"Lock Mode\", \"Granted\"]\n        table_data = [[record[\"pid\"], record[\"relation\"], record[\"lock_mode\"], record[\"granted\"]] for record in data]\n        print(tabulate(table_data, headers=headers, tablefmt=\"grid\"))\n    else:\n        print(\"No ungranted locks found.\")\n\n\ndef postgresql_check_locks(handle) -> Tuple:\n    \"\"\"\n    postgresql_check_locks identifies and returns the ungranted (waiting) locks in the PostgreSQL database.\n\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n\n    :rtype: Status, Result of ungranted locks if any in tabular format\n    \"\"\"\n    # Query to fetch ungranted locks in the database\n    query = \"\"\"\n            SELECT\n                pid,\n                relation::regclass,\n                mode,\n                granted\n            FROM\n                pg_locks\n            WHERE\n                granted IS FALSE;\n            \"\"\"\n\n    result = []\n    try:\n        cur = handle.cursor()\n        cur.execute(query)\n        res = cur.fetchall()\n        handle.commit()\n        cur.close()\n        handle.close()\n\n        for record in res:\n            data = {\n                \"pid\": record[0],\n                \"relation\": record[1],\n                \"lock_mode\": record[2],\n                \"granted\": record[3]\n            }\n            result.append(data)\n    except Exception as e:\n        print(\"Error occurred:\", e)\n\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "Postgresql/legos/postgresql_check_unused_indexes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>PostgreSQL Check Unused Indexes</h1>\r\n\r\n## Description\r\nThis Lego finds unused indexes in a PostgreSQL database.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_check_unused_indexes(handle, index_scans:int, index_size:int) -> Tuple:\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n        index_scans: Optional, number of index scans below which an index is considered unused\r\n        index_size: Optional, minimum on-disk size of the table in bytes.\r\n\r\n## Lego Input\r\nThis Lego takes three inputs: handle, index_scans, and index_size.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_check_unused_indexes/__init__.py",
    "content": ""
  },
  {
    "path": "Postgresql/legos/postgresql_check_unused_indexes/postgresql_check_unused_indexes.json",
    "content": "{\n    \"action_title\": \"PostgreSQL Check Unused Indexes\",\n    \"action_description\": \"Find unused Indexes in a database in PostgreSQL\",\n    \"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n    \"action_entry_function\": \"postgresql_check_unused_indexes\",\n    \"action_needs_credential\": true,\n    \"action_is_check\": true,\n    \"action_supports_poll\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_iteration\": true,\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\", \"CATEGORY_TYPE_POSTGRESQL\"],\n    \"action_next_hop\": [],\n    \"action_next_hop_parameter_mapping\": {}\n    }\n    "
  },
  {
    "path": "Postgresql/legos/postgresql_check_unused_indexes/postgresql_check_unused_indexes.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    index_scans: Optional[int] = Field(\n        default=10,\n        title='Index Scans',\n        description='Number of index scans below which an index is considered unused')\n    index_size: Optional[int] = Field(\n        default=5242880, # 5 MB\n        title='Index Size',\n        description='Minimum on-disk size of the table in bytes.')\n\n\n\ndef postgresql_check_unused_indexes_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef postgresql_check_unused_indexes(handle, index_scans:int=10,index_size:int=5242880) -> Tuple:\n    \"\"\"postgresql_check_unused_indexes returns unused indexes in a database\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type index_scans: int\n          :param index_scans: Optional, number of index scans below which an index is considered unused\n\n          :type index_size: int\n          :param index_size: Optional, minimum on-disk size of the table in bytes.\n\n          :rtype: Status, Result of unused indexes if any in tabular format\n      \"\"\"\n    size = int(index_size)\n    scans = int(index_scans)\n    query = \"SELECT schemaname || '.' 
|| relname AS table,indexrelname AS index,\" \\\n        \"pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,idx_scan as index_scans \" \\\n        \" FROM pg_stat_user_indexes ui JOIN pg_index i ON ui.indexrelid = i.indexrelid \"\\\n        \" WHERE NOT indisunique AND idx_scan < \" + str(scans) + \" AND pg_relation_size(relid) > \"+ \\\n            str(size)+\\\n        \" ORDER BY pg_relation_size(i.indexrelid) / nullif(idx_scan, 0) DESC NULLS FIRST,\"\\\n        \"pg_relation_size(i.indexrelid) DESC \"\n\n    #In the above query:\n    #pg_relation_size accepts the OID or name of a table, index or toast table,\n    # and returns the on-disk size in bytes of one fork of that relation.\n    # (Note that for most purposes it is more convenient to use the higher-level\n    # functions pg_total_relation_size or pg_table_size, which sum the sizes of all forks.)\n    # With one argument, it returns the size of the main data fork of the relation.\n    # The second argument can be provided to specify which fork to examine:\n    # 1. 'main' returns the size of the main data fork of the relation.\n    # 2. 'fsm' returns the size of the Free Space Map\n    # 3. 'vm' returns the size of the Visibility Map\n    # 4. 'init' returns the size of the initialization fork, if any, associated with the relation.\n    # We are getting the main data fork size\n\n    result = []\n    cur = handle.cursor()\n    cur.execute(query)\n    res = cur.fetchall()\n    handle.commit()\n    cur.close()\n    handle.close()\n    data = []\n    for records in res:\n        data = {\n            \"table_name\": records[0],\n            \"index_name\": records[1],\n            \"index_size\": records[2],\n            \"index_scans\": records[3],\n        }\n        result.append(data)\n    if len(result) != 0:\n        return (False, result)\n    return (True, None)\n    "
  },
  {
    "path": "Postgresql/legos/postgresql_create_table/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Create Tables in PostgreSQL</h1>\r\n\r\n## Description\r\nThis Lego executes PostgreSQL CREATE TABLE commands.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_create_table(handle, commands: tuple)\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n        commands: CREATE TABLE commands to execute.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and commands.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_create_table/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_create_table/postgresql_create_table.json",
    "content": "{\n\"action_title\": \"Create Tables in PostgreSQL\",\n\"action_description\": \"Create Tables PostgreSQL\",\n\"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n\"action_entry_function\": \"postgresql_create_table\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_POSTGRESQL_TABLE\"]\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_create_table/postgresql_create_table.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nimport psycopg2\n\nclass InputSchema(BaseModel):\n    commands: list = Field(\n        title='Commands to create tables',\n        description='''\n            Postgres create table.\n            For eg. [\"CREATE TABLE test (_id SERIAL PRIMARY KEY, _name VARCHAR(255) NOT NULL)\",\n            \"CREATE TABLE foo (_id SERIAL PRIMARY KEY)\"]\n        ''')\n\n\ndef postgresql_create_table_printer(output):\n    print(\"\\n\")\n    print(output)\n    return output\n\n\ndef postgresql_create_table(handle, commands: tuple) -> Dict:\n    \"\"\"postgresql_create_table Creates tables with the provided commands.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type commands: tuple\n        :param commands: Commands to create tables.\n\n        :rtype: Dict with the result of the table creation.\n      \"\"\"\n    # Input param validation.\n\n    output = {}\n    try:\n        cur = handle.cursor()\n        # create table one by one\n        for command in tuple(commands):\n            cur.execute(command)\n        # close communication with the PostgreSQL database server\n        cur.close()\n        # commit the changes\n        handle.commit()\n        output['result'] = 'Tables Created Successfully'\n    except (Exception, psycopg2.DatabaseError) as error:\n        output[\"result\"] = error\n    finally:\n        if handle:\n            handle.close()\n\n    return output\n"
  },
  {
    "path": "Postgresql/legos/postgresql_delete_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Delete PostgreSQL Query</h1>\r\n\r\n## Description\r\nThis Lego executes a PostgreSQL DELETE query.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_delete_query(handle, query:str)\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n        query: PostgreSQL Delete Query to execute.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and query.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_delete_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_delete_query/postgresql_delete_query.json",
    "content": "{\n\"action_title\": \"Delete PostgreSQL Query\",\n\"action_description\": \"Delete PostgreSQL Query\",\n\"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n\"action_entry_function\": \"postgresql_delete_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_POSTGRESQL_QUERY\"]\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_delete_query/postgresql_delete_query.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nimport psycopg2\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=2)\nclass InputSchema(BaseModel):\n    query: str = Field(\n        title='Delete Query',\n        description='Postgres delete query.')\n\n\ndef postgresql_delete_query(handle, query:str):\n  \"\"\"postgresql_delete_query Runs a postgres delete query with the provided parameters.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type query: str\n        :param query: Postgresql Delete query.\n\n        :rtype: None. Prints the number of deleted rows.\n    \"\"\"\n  # Input param validation.\n\n  delete_statement = query\n  try:\n      cur = handle.cursor()\n      cur.execute(delete_statement)\n      # get the number of deleted rows\n      rows_deleted = cur.rowcount\n\n      # Commit the changes to the database\n      handle.commit()\n      # Close communication with the PostgreSQL database\n      cur.close()\n      print(\"\\n\")\n      pp.pprint(\"Deleted Record successfully\")\n      pp.pprint(f\"The number of deleted rows: {rows_deleted}\")\n\n  except (Exception, psycopg2.DatabaseError) as error:\n      pp.pprint(f\"Error : {error}\")\n  finally:\n      if handle:\n          handle.close()\n          pp.pprint(\"PostgreSQL connection is closed\")\n"
  },
  {
    "path": "Postgresql/legos/postgresql_get_cache_hit_ratio/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>PostgreSQL Get Cache Hit Ratio</h1>\r\n\r\n## Description\r\nThe result of the action will show the total number of blocks read from disk, the total number of blocks found in the buffer cache, and the cache hit ratio as a percentage. For example, if the cache hit ratio is 99%, it means that 99% of all data requests were served from the buffer cache, and only 1% required reading data from disk.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_get_cache_hit_ratio(handle)\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input, handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_get_cache_hit_ratio/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_get_cache_hit_ratio/postgresql_get_cache_hit_ratio.json",
    "content": "{\n    \"action_title\": \"PostgreSQL Get Cache Hit Ratio\",\n    \"action_description\": \"The result of the action will show the total number of blocks read from disk, the total number of blocks found in the buffer cache, and the cache hit ratio as a percentage. For example, if the cache hit ratio is 99%, it means that 99% of all data requests were served from the buffer cache, and only 1% required reading data from disk.\",\n    \"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n    \"action_entry_function\": \"postgresql_get_cache_hit_ratio\",\n    \"action_needs_credential\": true,\n    \"action_is_check\": false,\n    \"action_supports_poll\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_iteration\": true,\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_TROUBLESHOOTING\"],\n    \"action_next_hop\": [],\n    \"action_next_hop_parameter_mapping\": {}\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_get_cache_hit_ratio/postgresql_get_cache_hit_ratio.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint \nfrom typing import Tuple\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\ndef postgresql_get_cache_hit_ratio_printer(output):\n    if output is None or output[1] is None:\n        print(\"No cache hit ratio data available.\")\n        return\n\n    op = output[1]\n    if len(op) > 0:\n        cache_hit_ratio = op[0][2] * 100\n        print(f\"Cache hit ratio: {cache_hit_ratio:.2f}%\")\n    else:\n        print(\"No cache hit ratio data available.\")\n        \n    pprint.pprint(output)\n\n\n\ndef postgresql_get_cache_hit_ratio(handle) -> Tuple:\n    \"\"\"postgresql_get_cache_hit_ratio Runs postgresql query to get the Cache hit ratio.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :rtype: All the results of the query.\n      \"\"\"\n\n    # Query to get the Cache hit ratio.\n    query = \"\"\"SELECT sum(heap_blks_read) as heap_read, sum(heap_blks_hit)  as heap_hit,\n            sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio FROM \n            pg_statio_user_tables;\"\"\"\n\n    cur = handle.cursor()\n    cur.execute(query)\n    res = cur.fetchall()\n    handle.commit()\n    cur.close()\n    handle.close()\n\n    if res is not None and len(res) > 0 and res[0][2] is not None:\n        cache_hit_ratio = res[0][2] * 100\n        if cache_hit_ratio >= 99:\n            return (True, res)\n        return (False, res)\n    return (False, None)\n"
  },
  {
    "path": "Postgresql/legos/postgresql_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get PostgreSQL Handle</h1>\r\n\r\n## Description\r\nThis Lego returns the PostgreSQL handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_get_handle(handle)\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n\r\n## Lego Input\r\nThis Lego takes only one input, handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_get_handle/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_get_handle/postgresql_get_handle.json",
    "content": "{\n\"action_title\": \"Get PostgreSQL Handle\",\n\"action_description\": \"Get PostgreSQL Handle\",\n\"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n\"action_entry_function\": \"postgresql_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_supports_iteration\": false\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_get_handle/postgresql_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\ndef postgresql_get_handle(handle):\n  \"\"\"postgresql_get_handle returns the postgresql connection handle.\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n    \n    :rtype: postgresql Handle.\n  \"\"\"\n  return handle\n"
  },
  {
    "path": "Postgresql/legos/postgresql_get_index_usage/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>PostgreSQL Get Index Usage</h1>\r\n\r\n## Description\r\nThis Lego shows the data for table name, the percentage of times an index was used for that table, and the number of live rows in the table.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_get_index_usage(handle)\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n       \r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_get_index_usage/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_get_index_usage/postgresql_get_index_usage.json",
    "content": "{\n    \"action_title\": \"PostgreSQL Get Index Usage\",\n    \"action_description\": \"The action result shows the data for table name, the percentage of times an index was used for that table, and the number of live rows in the table.\",\n    \"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n    \"action_entry_function\": \"postgresql_get_index_usage\",\n    \"action_needs_credential\": true,\n    \"action_is_check\": false,\n    \"action_supports_poll\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_iteration\": true,\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_TROUBLESHOOTING\"]\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_get_index_usage/postgresql_get_index_usage.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import List\nfrom tabulate import tabulate\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\ndef postgresql_get_index_usage_printer(output):\n    data = []\n    for records in output:\n        data.append(record for record in records)\n    headers = ['Table Name', 'Index Usage Percentage', 'Number of Rows']\n    print(tabulate(data, headers=headers, tablefmt=\"grid\"))\n\n\ndef postgresql_get_index_usage(handle) -> List:\n    \"\"\"postgresql_get_index_usage Runs postgresql query to get index usage.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :rtype: All the results of the query.\n      \"\"\"\n    # Query to get the Index Usage.\n    query = \"\"\"\n                SELECT\n                  relname,\n                  100 * idx_scan / (seq_scan + idx_scan) percent_of_times_index_used,\n                  n_live_tup rows_in_table\n                FROM\n                  pg_stat_user_tables\n                WHERE\n                    seq_scan + idx_scan > 0\n                ORDER BY\n                  n_live_tup DESC;\n            \"\"\"\n\n    cur = handle.cursor()\n    cur.execute(query)\n    res = cur.fetchall()\n    handle.commit()\n    cur.close()\n    handle.close()\n    return res\n"
  },
  {
    "path": "Postgresql/legos/postgresql_get_server_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>PostgreSQL get service status</h1>\n\n## Description\nThis action checks the status of each database.\n\n## Lego Details\n\tpostgresql_get_service_status(handle, connection_threshold: int = 100, cache_hit_ratio_threshold: int = 90, blocked_query_threshold: int = 5)\n\t\thandle: Object of type unSkript POSTGRESQL Connector.\n\t\tconnection_threshold: Threshold for the number of connections considered abnormal\n\t\tcache_hit_ratio_threshold: Threshold for the cache hit ratio considered abnormal\n\t\tblocked_query_threshold: Threshold for the number of blocked queries considered abnormal\n\n\n## Lego Input\nThis Lego takes inputs handle, connection_threshold, cache_hit_ratio_threshold, blocked_query_threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_get_server_status/__init__.py",
    "content": ""
  },
  {
    "path": "Postgresql/legos/postgresql_get_server_status/postgresql_get_server_status.json",
    "content": "{\n  \"action_title\": \"PostgreSQL get service status\",\n  \"action_description\": \"This action checks the status of each database.\",\n  \"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n  \"action_entry_function\": \"postgresql_get_server_status\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\"],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Postgresql/legos/postgresql_get_server_status/postgresql_get_server_status.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Tuple\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef postgresql_get_server_status_printer(output):\n    if output[0]:\n        print(\"PostgreSQL Server Status: Reachable\")\n    else:\n        error_message = output[1]['message'] if output[1] else \"Unknown error\"\n        print(\"PostgreSQL Server Status: Unreachable\")\n        print(f\"Error: {error_message}\")\n\ndef postgresql_get_server_status(handle) -> Tuple:\n    \"\"\"\n    Returns a simple status indicating the reachability of the PostgreSQL server.\n\n    :type handle: object\n    :param handle: PostgreSQL connection object\n\n    :return: Tuple containing a boolean indicating success and optional error message\n    \"\"\"\n    try:\n        cur = handle.cursor()\n        cur.execute(\"SELECT 1;\")\n        cur.fetchone()\n        return (True, None)\n    except Exception as e:\n        return (False, {\"message\": str(e)})\n    finally:\n        handle.close()\n\n\n"
  },
  {
    "path": "Postgresql/legos/postgresql_handling_transaction/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Execute commands in a PostgreSQL transaction.</h1>\r\n\r\n## Description\r\nGiven a set of PostgreSQL commands, this actions run them inside a transaction.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_handling_transaction(handle, transaction:str)\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n        transaction: PostgreSQL commands to be run inside a transaction.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and transaction. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_handling_transaction/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_handling_transaction/postgresql_handling_transaction.json",
    "content": "{\n\"action_title\": \"Execute commands in a PostgreSQL transaction.\",\n\"action_description\": \"Given a set of PostgreSQL commands, this actions run them inside a transaction.\",\n\"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n\"action_entry_function\": \"postgresql_handling_transaction\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\"]\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_handling_transaction/postgresql_handling_transaction.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nimport psycopg2\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=2)\nclass InputSchema(BaseModel):\n    transaction: str = Field(\n        title='Commands',\n        description='''\n            PostgreSQL commands to be run inside a transaction. The commands should be ; separated. For eg:\n            UPDATE test SET name = 'test-update3' WHERE _id = 3;\n            UPDATE test SET name = 'test-update3' WHERE _id = 4;\n        ''')\n\n\ndef postgresql_handling_transaction(handle, transaction:str):\n  \"\"\"postgresql_handling_transactions Runs postgres query with the provided parameters.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type transaction: str\n        :param transaction: PostgreSQL commands to be run inside a transaction.\n\n        :rtype: Transaction Success message. Error if failed.\n    \"\"\"\n  # Input param validation.\n\n  command = \"BEGIN;\" + \"\\n\" + transaction + \"\\n\" + \"COMMIT;\"\n  try:\n      cur = handle.cursor()\n      cur.execute(command)\n      # close communication with the PostgreSQL database server\n      cur.close()\n      # commit the changes\n      handle.commit()\n      pp.pprint(\"Transaction completed successfully \")\n  except (Exception, psycopg2.DatabaseError) as error:\n      pp.pprint(f\"Error in transaction Reverting all other operations of a transactions {error}\")\n      handle.rollback()\n  finally:\n      if handle:\n          handle.close()\n          pp.pprint(\"PostgreSQL connection is closed\")\n"
  },
  {
    "path": "Postgresql/legos/postgresql_long_running_queries/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Long Running PostgreSQL Queries</h1>\r\n\r\n## Description\r\nThis Lego finds Long Running PostgreSQL Queries.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_long_running_queries(handle, interval: int = 5)\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n        interval: Optional-Interval(in minutes).\r\n       \r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and interval. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_long_running_queries/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_long_running_queries/postgresql_long_running_queries.json",
    "content": "{\n\"action_title\": \"Long Running PostgreSQL Queries\",\n\"action_description\": \"Long Running PostgreSQL Queries\",\n\"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n\"action_entry_function\": \"postgresql_long_running_queries\",\n\"action_needs_credential\": true,\n\"action_is_check\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_POSTGRESQL_QUERY\"],\n\"action_next_hop\": [\"adcf88e8035c594e599fc9a33c28c9099187211f6daccb9d3ab4e5d17993086f\"],\n\"action_next_hop_parameter_mapping\": {}\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_long_running_queries/postgresql_long_running_queries.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint \nfrom typing import Optional, Tuple\nfrom tabulate import tabulate\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    interval: Optional[int] = Field(\n        default=5,\n        title='Interval (in seconds)',\n        description='Return queries running longer than interval')\n\ndef postgresql_long_running_queries_printer(output):\n    if output is None:\n        return\n\n    pprint.pprint(output)\n\n\ndef postgresql_long_running_queries(handle, interval: int = 5) -> Tuple:\n    \"\"\"postgresql_long_running_queries Runs postgres query with the provided parameters.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type interval: int\n          :param interval: Interval (in seconds).\n\n          :rtype: All the results of the query.\n      \"\"\"\n    # Input param validation.\n\n    # Multi-line will create an issue when we package the Legos.\n    # Hence concatinating it into a single line.\n    query = \"SELECT pid, user, pg_stat_activity.query_start, now() - \" \\\n        \"pg_stat_activity.query_start AS query_time, query, state \" \\\n        \" FROM pg_stat_activity WHERE state = 'active' AND (now() - \" \\\n        f\"pg_stat_activity.query_start) > interval '{interval} seconds';\"\n\n    cur = handle.cursor()\n    cur.execute(query)\n    output = []\n    res = cur.fetchall()\n    data = []\n    for records in res:\n        result = {\n            \"pid\": records[0],\n            \"user\": records[1],\n            \"query_start\": records[2],\n            \"query_time\": records[3],\n            \"query\": records[4],\n            \"state\": records[5]\n        }\n        output.append(result)\n        data.append([records[0], records[4], records[5], records[3]])\n\n    if len(res) > 0:\n        headers = [\"pid\", \"query\", \"state\", \"duration\"]\n        
print(\"\\n\")\n        output = tabulate(data, headers=headers, tablefmt=\"grid\")\n\n    handle.commit()\n    cur.close()\n    handle.close()\n    if len(output) != 0:\n        return (False, output)\n    return (True, None)\n"
  },
  {
    "path": "Postgresql/legos/postgresql_read_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Read PostgreSQL Query\"</h1>\r\n\r\n## Description\r\nThis Lego executes Read PostgreSQL Query\".\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_read_query(handle, query: str, params: tuple = ())\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n        query: Query to execute.\r\n        params: Parameters to the query in tuple format.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, query and params. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_read_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_read_query/postgresql_read_query.json",
    "content": "{\n\"action_title\": \"Read PostgreSQL Query\",\n\"action_description\": \"Read PostgreSQL Query\",\n\"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n\"action_entry_function\": \"postgresql_read_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_POSTGRESQL_QUERY\"]\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_read_query/postgresql_read_query.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport random\nimport string\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    query: str = Field(\n        title='Read Query',\n        description='''\n            Read query in Postgresql PREPARE statement format. For eg.\n            SELECT foo FROM table WHERE bar=$1 AND customer=$2.\n            The values for $1 and $2 should be passed in the params field as a tuple.\n        ''')\n    params: List = Field(\n        None,\n        title='Parameters',\n        description='Parameters to the query in list format. For eg: [1, 2, \"abc\"]')\n\n\ndef postgresql_read_query_printer(output):\n    print(\"\\n\")\n    data = []\n    for records in output:\n        data.append(record for record in records)\n    print(tabulate(data, tablefmt=\"grid\"))\n    return output\n\n\ndef postgresql_read_query(handle, query: str, params: list = ()) -> List:\n    \"\"\"postgresql_read_query Runs postgresql query with the provided parameters.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type query: str\n          :param query: Postgresql read query.\n\n          :type params: tuples\n          :param params: Parameters to the query in tuple format.\n\n          :rtype: List of Result of the Query.\n      \"\"\"\n\n    cur = handle.cursor()\n    # cur.execute(query, params)\n\n    random_id = ''.join(\n        [random.choice(string.ascii_letters + string.digits) for n in range(32)])\n\n    query = f\"PREPARE psycop_{random_id} AS {query};\"\n    if not params:\n        prepared_query = f\"EXECUTE psycop_{random_id};\"\n    else:\n        parameters_tuple = tuple(params)\n        ## If there is only one tuple element, remove the trailing comma before format\n        if len(parameters_tuple) == 1:\n            tuple_string = str(parameters_tuple)\n  
          parameters_tuple = tuple_string[:-2] + tuple_string[-1]\n        prepared_query = f\"EXECUTE psycop_{random_id} {parameters_tuple};\"\n    cur.execute(query)\n    cur.execute(prepared_query)\n    res = cur.fetchall()\n    cur.close()\n    return res\n"
  },
  {
    "path": "Postgresql/legos/postgresql_show_tables/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Show tables in PostgreSQL Database\"</h1>\r\n\r\n## Description\r\nThis Lego Show the tables existing in a PostgreSQL Database.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_show_tables(handle)\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n       \r\n\r\n## Lego Input\r\nThis Lego take only one input handle. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_show_tables/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_show_tables/postgresql_show_tables.json",
    "content": "{\n  \"action_title\": \"Show tables in PostgreSQL Database\",\n  \"action_description\": \"Show the tables existing in a PostgreSQL Database. We execute the following query to fetch this information SELECT * FROM pg_catalog.pg_tables WHERE schemaname != 'pg_catalog' AND schemaname != 'information_schema';\",\n  \"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n  \"action_entry_function\": \"postgresql_show_tables\",\n  \"action_needs_credential\": true,\n  \"action_supports_poll\": true,\n  \"action_supports_iteration\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_POSTGRESQL_TABLE\"]\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_show_tables/postgresql_show_tables.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel\nfrom tabulate import tabulate\nfrom unskript.legos.postgresql.postgresql_read_query.postgresql_read_query import postgresql_read_query\n\n\nclass InputSchema(BaseModel):\n    pass\n\ndef postgresql_show_tables_printer(output):\n    print(\"\\n\")\n    data = []\n    for records in output:\n        data.append(record for record in records)\n    print(tabulate(data, tablefmt=\"grid\"))\n    return output\n\n\ndef postgresql_show_tables(handle) -> List:\n    \"\"\"ppostgresql_show_tables gives list of tables.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :rtype: List of tables.\n      \"\"\"\n\n    query = (\"SELECT * FROM pg_catalog.pg_tables WHERE schemaname != \"\n             \"'pg_catalog' AND schemaname != 'information_schema';\")\n    return postgresql_read_query(handle, query, ())\n"
  },
  {
    "path": "Postgresql/legos/postgresql_stored_procedures/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Call PostgreSQL Stored Procedure</h1>\r\n\r\n## Description\r\nThis Lego Calls PostgreSQL Stored Procedure.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_stored_procedures(handle, stored_procedure_name: str, params: List = List[Any])\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n        stored_procedure_name: Query to execute.\r\n        params: Parameters to the Stored Procedure in list format.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, stored_procedure_name and params. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_stored_procedures/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_stored_procedures/postgresql_stored_procedures.json",
    "content": "{\n\"action_title\": \"Call PostgreSQL Stored Procedure\",\n\"action_description\": \"Call PostgreSQL Stored Procedure\",\n\"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n\"action_entry_function\": \"postgresql_stored_procedures\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_POSTGRESQL_QUERY\"]\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_stored_procedures/postgresql_stored_procedures.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Any, List\nimport psycopg2\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    stored_procedure_name: str = Field(\n        title='Stored procedure name.',\n        description='PostgreSQL stored procedure name.')\n    params: list = Field(\n        None,\n        title='Parameters',\n        description='Parameters to the Stored Procedure in list format. For eg: [1, 2]')\n\n\ndef postgresql_stored_procedures(handle, stored_procedure_name: str, params: List = List[Any]):\n    \"\"\"postgresql_stored_procedures Runs postgres query with the provided parameters.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type stored_procedure_name: str\n          :param stored_procedure_name: PostgreSQL stored procedure name.\n\n          :type params: List\n          :param params: Parameters to the Stored Procedure  in list format.\n\n          :rtype: All the results of the Stored Procedure .\n      \"\"\"\n    # Input param validation.\n\n    try:\n        cur = handle.cursor()\n\n        if params:\n            query = f\"CALL {stored_procedure_name}\"\n            cur.execute(query, params)\n        else:\n            query = f\"CALL {stored_procedure_name}\"\n            cur.execute(query)\n\n        # commit the transaction\n        handle.commit()\n        # Close communication with the PostgreSQL database\n        cur.close()\n        print(\"Call PostgreSQL Stored Procedures successfully\")\n    except (Exception, psycopg2.DatabaseError) as error:\n        print(f\"Error : {error}\")\n    finally:\n        if handle:\n            handle.close()\n            print(\"PostgreSQL connection is closed\")\n"
  },
  {
    "path": "Postgresql/legos/postgresql_write_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Write PostgreSQL Query\"</h1>\r\n\r\n## Description\r\nThis Lego runs Write PostgreSQL Query.\r\n\r\n\r\n## Lego Details\r\n\r\n    postgresql_write_query(handle, query: str, params: List = List[Any])\r\n\r\n        handle: Object of type unSkript POSTGRESQL Connector\r\n        query: Query to execute.\r\n        params: Parameters to the query in list format.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, query and params. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n```\r\nNone if success. Exception on error.\r\n\r\n```\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Postgresql/legos/postgresql_write_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Postgresql/legos/postgresql_write_query/postgresql_write_query.json",
    "content": "{\n\"action_title\": \"Write PostgreSQL Query\",\n\"action_description\": \"Write PostgreSQL Query\",\n\"action_type\": \"LEGO_TYPE_POSTGRESQL\",\n\"action_entry_function\": \"postgresql_write_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_POSTGRESQL\",\"CATEGORY_TYPE_POSTGRESQL_QUERY\"]\n}\n"
  },
  {
    "path": "Postgresql/legos/postgresql_write_query/postgresql_write_query.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport random\nimport string\nfrom typing import Tuple, List, Any\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    query: str = Field(\n        title='Write Query',\n        description='''\n            INSERT/UPDATE query in Postgresql PREPARE statement format. For eg.\n            INSERT INTO my_table VALUES($1, $2).\n            The values for $1 and $2 should be passed in the params field as a list.\n        ''')\n    params: Tuple = Field(\n        default=None,\n        title='Parameters',\n        description='Parameters to the query in list format. Eg [ 42, \"abc\" ]')\n\n\ndef postgresql_write_query(handle, query: str, params: List = List[Any]):\n    \"\"\"postgresql_write_query Runs postgresql query with the provided parameters.\n\n          :type handle: object\n          :param handle: Object returned from task.validate(...).\n\n          :type query: str\n          :param query: Postgresql insert/update query.\n\n          :type params: List\n          :param params: Parameters to the query in list format.\n\n          :rtype: None if success. Exception on error.\n      \"\"\"\n\n    cur = handle.cursor()\n\n    random_id = ''.join(\n        [random.choice(string.ascii_letters + string.digits) for n in range(32)])\n\n    query = f\"PREPARE psycop_{random_id} AS {query};\"\n    if not params:\n        prepared_query = \"EXECUTE psycop_{random_id};\"\n    else:\n        prepared_query = \"EXECUTE psycop_{random_id} {params};\"\n\n    cur.execute(query)\n    cur.execute(prepared_query)\n\n    handle.commit()\n    cur.close()\n    handle.close()\n"
  },
  {
    "path": "Prometheus/README.md",
    "content": "\n# Prometheus Actions\n* [Get Prometheus rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_alerts_list/README.md): Get Prometheus rules\n* [Get All Prometheus Metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_all_metrics/README.md): Get All Prometheus Metrics\n* [Get Prometheus handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_handle/README.md): Get Prometheus handle\n* [Get Prometheus Metric Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_metric_statistics/README.md): Get Prometheus Metric Statistics\n"
  },
  {
    "path": "Prometheus/__init__.py",
    "content": ""
  },
  {
    "path": "Prometheus/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Prometheus/legos/prometheus_alerts_list/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Prometheus rules</h1>\r\n\r\n## Description\r\nThis Lego returns Prometheus rules list.\r\n\r\n\r\n## Lego Details\r\n\r\n    prometheus_get_all_metrics(handle)\r\n\r\n        handle: Object of type unSkript PROMETHEUS Connector\r\n\r\n\r\n## Lego Input\r\nThis Lego takes only one inputs handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Prometheus/legos/prometheus_alerts_list/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Prometheus/legos/prometheus_alerts_list/prometheus_alerts_list.json",
    "content": "{\n\"action_title\": \"Get Prometheus rules\",\n\"action_description\": \"Get Prometheus rules\",\n\"action_type\": \"LEGO_TYPE_PROMETHEUS\",\n\"action_entry_function\": \"prometheus_alerts_list\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PROMETHEUS\"],\n\"action_supports_iteration\": true\n}\n"
  },
  {
    "path": "Prometheus/legos/prometheus_alerts_list/prometheus_alerts_list.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\n\nfrom typing import List\nfrom tabulate import tabulate\nfrom pydantic import BaseModel\n\nlego_title=\"Get All Prometheus Alerts\"\nlego_description=\"Get All Prometheus Alerts\"\nlego_type=\"LEGO_TYPE_PROMETHEUS\"\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef prometheus_alerts_list_printer(output):\n    if output is None:\n        return\n    alerts = []\n    for alert in output:\n        for key, value in alert.items():\n            alerts.append([key, value])\n    print(\"\\n\")\n    print(tabulate(alerts))\n\n\n\ndef prometheus_alerts_list(handle) -> List[dict]:\n  \"\"\"prometheus_alerts_list Returns all alerts.\n\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n\n    :return: Alerts list.\n  \"\"\"\n  try:\n      params = {\n          \"type\": \"alert\"\n      }\n      response = handle.all_alerts(params)\n  except Exception as e:\n      print(f'Alerts failed,  {str(e)}')\n      return [{\"error\": str(e)}]\n\n  result = []\n\n  if len(response['groups']) != 0:\n    for rules in response['groups']:\n        for rule in rules['rules']:\n            res = {}\n            res['name'] = rule['name']\n            result.append(res)\n  return result\n"
  },
  {
    "path": "Prometheus/legos/prometheus_get_all_metrics/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get All Prometheus Metrics</h1>\r\n\r\n## Description\r\nThis Lego Get All Prometheus Metrics.\r\n\r\n\r\n## Lego Details\r\n\r\n    prometheus_get_all_metrics(handle)\r\n\r\n        handle: Object of type unSkript PROMETHEUS Connector\r\n\r\n\r\n## Lego Input\r\nThis Lego takes only one inputs handle. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Prometheus/legos/prometheus_get_all_metrics/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Prometheus/legos/prometheus_get_all_metrics/prometheus_get_all_metrics.json",
    "content": "{\n\"action_title\": \"Get All Prometheus Metrics\",\n\"action_description\": \"Get All Prometheus Metrics\",\n\"action_type\": \"LEGO_TYPE_PROMETHEUS\",\n\"action_entry_function\": \"prometheus_get_all_metrics\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PROMETHEUS\"]\n}\n"
  },
  {
    "path": "Prometheus/legos/prometheus_get_all_metrics/prometheus_get_all_metrics.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef prometheus_get_all_metrics_printer(output):\n    if output is None:\n        return\n    for metric in output:\n        print(metric)\n\n\ndef prometheus_get_all_metrics(handle) -> List:\n    \"\"\"prometheus_get_all_metrics Returns Prometheus Metrics.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :return: Metrics list.\n    \"\"\"\n    return handle.all_metrics()\n"
  },
  {
    "path": "Prometheus/legos/prometheus_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Prometheus handle</h1>\r\n\r\n## Description\r\nThis Lego Returns Prometheus handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    prometheus_get_handle(handle)\r\n\r\n        handle: Object of type unSkript PROMETHEUS Connector\r\n\r\n## Lego Input\r\nThis Lego takes only one inputs handle. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Prometheus/legos/prometheus_get_handle/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Prometheus/legos/prometheus_get_handle/prometheus_get_handle.json",
    "content": "{\n\"action_title\": \"Get Prometheus handle\",\n\"action_description\": \"Get Prometheus handle\",\n\"action_type\": \"LEGO_TYPE_PROMETHEUS\",\n\"action_entry_function\": \"prometheus_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_supports_iteration\": false\n}\n"
  },
  {
    "path": "Prometheus/legos/prometheus_get_handle/prometheus_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef prometheus_get_handle(handle):\n    \"\"\"prometheus_get_handle returns the prometheus api connection handle.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :rtype: prometheus Handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Prometheus/legos/prometheus_get_metric_statistics/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Prometheus Metric Statistics</h1>\r\n\r\n## Description\r\nThis Lego Gets Prometheus Metric Statistics.\r\n\r\n\r\n## Lego Details\r\n\r\n    prometheus_get_metric_range_data( handle, promql_query: str, timeSince: int, step: str, graph_size:list) \r\n\r\n        handle: Object of type unSkript PROMETHEUS Connector\r\n        promql_query: This is a PromQL query, a few examples can be found at https://prometheus.io/docs/prometheus/latest/querying/examples/.\r\n        timeSince: Starting from now, window (in seconds) for which you want to get the metric values for.\r\n        promql_query: Query resolution step width in duration format or float number of seconds.\r\n\r\n## Lego Input\r\nThis Lego takes five inputs handle, promql_query, timeSince, step, and graph_size. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Prometheus/legos/prometheus_get_metric_statistics/prometheus_get_metric_statistics.json",
    "content": "{\n\"action_title\": \"Get Prometheus Metric Statistics\",\n\"action_description\": \"Get Prometheus Metric Statistics\",\n\"action_type\": \"LEGO_TYPE_PROMETHEUS\",\n\"action_entry_function\": \"prometheus_get_metric_range_data\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_PROMETHEUS\"]\n}\n"
  },
  {
    "path": "Prometheus/legos/prometheus_get_metric_statistics/prometheus_get_metric_statistics.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\n\nimport pprint\nfrom datetime import datetime, timedelta\nimport matplotlib.pyplot as plt\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    promql_query: str = Field(\n        title=\"PromQL Query\",\n        description=\"This is a PromQL query, a few examples can be found at \\\n            https://prometheus.io/docs/prometheus/latest/querying/examples/\"\n    )\n    timeSince: int = Field(\n        title=\"Time Since\",\n        description=\"Starting from now, window (in seconds) \\\n            for which you want to get the datapoints for.\",\n    )\n    step: str = Field(\n        title=\"Step\",\n        description=\"Query resolution step width in duration format or float number of seconds.\",\n    )\n    graph_size: list = Field(\n        default=[16, 8],\n        title=\"Graph Size\",\n        description=\"Size of the graph in inches (width, height), specified as a list.\",\n    )\n\n\ndef prometheus_get_metric_range_data_printer(output):\n    if output is None:\n        return\n    plt.show()\n    pprint.pprint(output)\n\n\ndef prometheus_get_metric_range_data(\n    handle,\n    promql_query: str,\n    timeSince: int,\n    step: str,\n    graph_size: list = [16, 8]\n) -> str:\n    \"\"\"prometheus_get_metric_statistics shows plotted values of Prometheus metric statistics.\n\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n\n    :type promql_query: string\n    :param PromQL Query: This is a PromQL query, a few examples can be found at \n    https://prometheus.io/docs/prometheus/latest/querying/examples/\n\n    :type timeSince: int\n    :param timeSince: Starting from now, window (in seconds) for which you want\n    to get the metric values for.\n\n    :type step: string\n    :param Step: Query resolution step width in duration format or float number of seconds\n\n    :type 
graph_size: list\n    :param graph_size: Size of the graph in inches (width, height), specified as a list.\n\n    :rtype: Shows plotted statistics.\n    \"\"\"\n    result = handle.custom_query_range(\n        query=promql_query,\n        start_time=datetime.utcnow() - timedelta(seconds=timeSince),\n        end_time=datetime.utcnow(),\n        step=step)\n    data = []\n    table_data = []\n    plt.figure(figsize=graph_size)\n    for each_result in result:\n        metric_data = {}\n        for each_metric_value in each_result[\"values\"]:\n            metric_data[datetime.fromtimestamp(each_metric_value[0])] = each_metric_value[1]\n        data.append(metric_data)\n    for metric_values in data:\n        data_keys = metric_values.keys()\n        times_stamps = list(data_keys)\n        times_stamps.sort()\n        sorted_values = []\n        for time in times_stamps:\n            table_data.append([time, metric_values[time]])\n            sorted_values.append(metric_values[time])\n        plt.plot_date(times_stamps, sorted_values, \"-o\")\n    plt.autoscale(enable=True, axis='both', tight=None)  # Enable autoscaling\n    plt.xlabel(\"Time\")\n    plt.ylabel(\"Value\")\n    plt.title(promql_query)\n    plt.grid(True)\n    head = [\"Timestamp\", \"Value\"]\n    table = tabulate(table_data, headers=head, tablefmt=\"grid\")\n    return table\n"
  },
  {
    "path": "README.md",
    "content": "[![Contributors][contributors-shield]][contributors-url]\n[![Forks][forks-shield]][forks-url]\n[![Stargazers][stars-shield]][stars-url]\n[![Issues][issues-shield]][issues-url]\n[![Twitter][twitter-shield]][twitter-url]\n![Actions][actions-shield]\n![Runbooks][runbooks-shield]\n\n# Runbooks.sh\n### Empowering Cloud Automation, Together\n**[Explore our docs](https://docs.unskript.com)**   \n*[Visit our blog](https://unskript.com/blog)* . *[Report Bug](https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&template=bug_report.md&title=)* . *[Request Feature](https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&template=feature_request.md&title=)*\n\n# 🚀 Quick Start Guide\n\nWe recommend using our docker setup which comes with Jupyter runtime along with pre-built [actions](https://docs.unskript.com/unskript-product-documentation/actions/what-is-an-action) and [runbooks](https://docs.unskript.com/unskript-product-documentation/readme/what-is-a-runbook). Build your own actions and runbooks with ease!\n\n## Get Started\n\n1. Launch Docker\n```\ndocker run -it -p 8888:8888 --user root unskript/awesome-runbooks:latest\n```\n2. Point your browser to http://127.0.0.1:8888/awesome.\n\n\n\n## Advanced Usage\n\nIn this section, we'll explore advanced configurations that enable:\n\n1. Custom Action and Runbook creation\n2. Custom Action creation using OpenAI integration\n\n### Custom Action and Runbook Creation\n\n1. Clone this repository to your local machine.\n```bash\ngit clone https://github.com/unskript/Awesome-CloudOps-Automation\ncd Awesome-CloudOps-Automation\n```\n\n2. Launch Docker \n  - Use this command to create custom runbooks and actions. 
(update the first -v line if you used a different directory in step 1).\n\n```bash\ndocker run -it -p 8888:8888 \\\n -v $HOME/Awesome-CloudOps-Automation/custom:/unskript/data \\\n -v $HOME/.unskript:/unskript/credentials \\\n -e ACA_AWESOME_MODE=1 \\\n --user root \\\n docker.io/unskript/awesome-runbooks:latest\n```\n\n3. Point your browser to http://127.0.0.1:8888/awesome.\n\n### Custom Action Creation using OpenAI Integration\n\n1. Clone this repository to your local machine if you haven't already.\n```bash\ngit clone https://github.com/unskript/Awesome-CloudOps-Automation\ncd Awesome-CloudOps-Automation\n```\n\n2. Launch Docker with OpenAI parameters:\n\n  - Use this command to create custom GenAI actions (update the first -v line if you used a different directory in step 1).\n\n```bash\ndocker run -it -p 8888:8888 \\\n -v $HOME/Awesome-CloudOps-Automation/actions:/unskript/data/actions \\\n -v $HOME/Awesome-CloudOps-Automation/runbooks:/unskript/data/runbooks \\\n -v $HOME/.unskript:/unskript/credentials \\\n -e ACA_AWESOME_MODE=1 \\\n -e OPENAI_ORGANIZATION_ID=<your openAI org> \\\n -e OPENAI_API_KEY=<your API key> \\\n -e OPENAI_MODEL=GPT-4 \\\n --user root \\\n docker.io/unskript/awesome-runbooks:latest\n\n```\n\nThe OPENAI parameters are used to initialize Generative AI creation with ChatGPT. They can be omitted from the command, but the generativeAI features will not be available.  For a list of models, visit [openAI](https://platform.openai.com/docs/models/overview).\n\n3. Point your browser to http://127.0.0.1:8888/awesome.\n\n\nYou can find more information around how to use and play with our runbooks in the documentation here. You can find a list of all the runbooks along with links in the [repository page](/xrunbooks-directory.md) or simply use [unSkript CLI](unskript-ctl/README.md). 
\n\n## 📚 Documentation\n\nDive deeper into Runbooks.sh by visiting our comprehensive [documentation](https://docs.unskript.com/unskript-product-documentation/guides/getting-started). Here, you'll find everything you need to know about using the platform, creating custom runbooks, developing plugins, and much more.\n\n# About the Project\nRunbooks.sh is a powerful, community-driven, open-source runbook automation platform designed to simplify cloud infrastructure management and streamline operations across diverse environments. Few of the highlighting features:\n\n- **Extensive Library**: Access hundreds of pre-built actions and runbooks to kickstart your automation journey.\n- **Customization**: Create and modify actions and runbooks tailored to your unique requirements.\n- **Generative AI Action Creation** Fully integrated with ChatGPT to create custom Actions in minutes.\n- **Diverse Compatibility**: Seamlessly integrate with various cloud providers, platforms, and tools.\n- **User-friendly Interface**: A Jupyter-based environment that simplifies runbook creation and execution.\n- **Active Community**: Join a vibrant community of users and contributors committed to improving the project.\n\n## 🏆 Mission\nOur mission is to simplify CloudOps automation for DevOps and SRE teams by providing an extensive, community-driven repository of actions and runbooks that streamline day-to-day operations. \n\n## 👁️ Vision \nOur vision is to be the one-stop solution for all CloudOps automation needs, allowing DevOps and SRE teams to automate their workflows with ease, improve efficiency, and minimize toil.\n\n## 🤝 Contributing\nWe welcome contributions from developers of all skill levels! 
Check out our [Contribution Guidelines](.github/CONTRIBUTING.md) to learn how you can contribute.\n\n## 📖 License\nExcept as otherwise noted, this project is licensed under the *[Apache License, Version 2.0](/License)* .\n\n## 🌐 Join Our Community\nConnect with other users and contributors by joining our [Slack workspace](https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation). Share your experiences, ask questions, and collaborate on this exciting project!\n\n## 📣 Stay Informed\nKeep up-to-date with the latest news, updates, and announcements by following us on [Twitter](https://twitter.com/UnSkript) and [Linkedin](https://www.linkedin.com/company/unskript-inc/).\n\nTogether, let's make Runbooks.sh the go-to solution for runbook automation and cloud infrastructure management!\n\n[contributors-shield]: https://img.shields.io/github/contributors/unskript/awesome-cloudops-automation.svg?style=for-the-badge\n[contributors-url]: https://github.com/unskript/awesome-cloudops-automation/graphs/contributors\n[github-actions-shield]: https://img.shields.io/github/workflow/status/unskript/awesome-cloudops-automation/e2e%20test?color=orange&label=e2e-test&logo=github&logoColor=orange&style=for-the-badge\n[github-actions-url]: https://github.com/unskript/awesome-cloudops-automation/actions/workflows/docker-tests.yml\n[forks-shield]: https://img.shields.io/github/forks/unskript/awesome-cloudops-automation.svg?style=for-the-badge\n[forks-url]: https://github.com/unskript/awesome-cloudops-automation/network/members\n[stars-shield]: https://img.shields.io/github/stars/unskript/awesome-cloudops-automation.svg?style=for-the-badge\n[stars-url]: https://github.com/unskript/awesome-cloudops-automation/stargazers\n[issues-shield]: https://img.shields.io/github/issues/unskript/awesome-cloudops-automation.svg?style=for-the-badge\n[issues-url]: https://github.com/unskript/awesome-cloudops-automation/issues\n[twitter-shield]: 
https://img.shields.io/badge/-Twitter-black.svg?style=for-the-badge&logo=twitter&colorB=555\n[twitter-url]: https://twitter.com/unskript\n[awesome-shield]: https://img.shields.io/badge/awesome-cloudops-orange?style=for-the-badge&logo=bookstack \n[actions-shield]: https://img.shields.io/badge/ActionsCount-476-orange?style=for-the-badge \n[runbooks-shield]:https://img.shields.io/badge/xRunbooksCount-61-green?style=for-the-badge\n"
  },
  {
    "path": "README_extending_docker.md",
    "content": "<center>\n  <a href=\"https://github.com/unskript/Awesome-CloudOps-Automation\">\n    <img src=\"https://unskript.com/assets/favicon.png\" alt=\"Logo\" width=\"80\" height=\"80\">\n  </a>\n  <h1> Extending Awesome Docker </h1>\n</center>\n\n\n## Extending the docker\nYou can use our base docker to extend the functionality to fit your need. The steps below could be used to package your custom Actions/Runbooks and re-build your custom docker that you can upload and distribute to/from any docker registry.\n\n---\n**NOTE**\n\nunskript-ctl config is stored in unskript_ctl_config.yaml. Please look at the template at\n```\n/unskript-ctl/config/unskript_ctl_config.yaml\n```\n\nTo package your unskript-ctl config, do the following:\n\n* Make your version of unskript_ctl_config.yaml\n* Uncomment the following line in the Dockerfile\n```\n#COPY unskript_ctl_config.yaml /etc/unskript/unskript_ctl_config.yaml\n```\n---\n\n\n## Pre-requisites\n1. You are submoduling our Awesome-CloudOps-Automation to your existing\n   Git repo.\n   ```\n   cd $YOUR_REPO_DIRECTORY\n   git submodule add https://github.com/unskript/Awesome-CloudOps-Automation.git Awesome-CloudOps-Automation\n   ```\n2. In same directory, you will need to create two sub-folders.\n   ```\n   mkdir -p $YOUR_REPO_DIRECTORY/runbooks $YOUR_REPO_DIRECTORY/actions\n   ```\n\n3. The Directory structure resulting would be something like this\n   ```\n   YOUR_REPO_DIRECTORY/\n      actions/\n      runbooks/\n      Awesome-CloudOps-Automation/\n      your-repo-folders/\n      your-repo-files\n      ...\n   ```\n4. You have a working Python 3 environment installed on your build system\n5. You have `make` and other build tools installed on your build system\n6. You have Docker-ce installed and working on your build system\n\n\n## Building Custom Docker\n1. To build your custom docker. You need to set two environment variables\n   `CUSTOM_DOCKER_NAME` and `CUSTOM_DOCKER_VERSION`. 
If not set, by default the\n   Make rule will assume `my-custom-docker` and `0.1.0` as values for these\n   variables.\n\n   ```\n   export CUSTOM_DOCKER_NAME=my-awesome-docker\n   export CUSTOM_DOCKER_VERSION='0.1.0'\n   cd $YOUR_REPO_DIRECTORY\n   cp Awesome-CloudOps-Automation/build/templates/Makefile.extend-docker.template Makefile\n   make -f Makefile build\n   ```\n\n   It may take a few minutes to build the docker, once built, you can verify it using\n\n   ```\n   docker run -it -p 8888:8888 \\\n       $CUSTOM_DOCKER_NAME:$CUSTOM_DOCKER_VERSION\n   ```\n\n   This would run your `custom docker` and you can point your browser to `http://127.0.0.1:8888/awesome`!\n\n2. Push your `custom docker` to any docker registry for redistribution.\n<br/>\n\n## Action and arguments\n\nActions are small python functions that is designed to do a specific task. For example, aws_sts_get_caller_identity action\nis designed to display the  AWS sts caller identity for a given configuration. Actions may take one or more arguments, like\nany python function do. Some or all of these arguments may also assume a default value if none given at the time of calling.\nMany actions may have the same argument name used. For example `region` could be a common name of the argument used across\nmultiple AWS actions, likewise `namespace` could be a common argument for an K8S action.\n\n\nWe call an action a check (short for health check) when the return value of the action is in the form of a Tuple.\nFirst value being the result of the check (a boolean), whether it passed or not. True being check passed, False otherwise.\nAnd the second value being the list of errored objects, incase of failure, None otherwise.\n\nWe bundle a number of checks for some of the popular connectors like AWS, K8S, etc.. And you can write your own too!\n\n\n### How to create Custom Actions\n\nYou can create custom action on your workstation using your editor. 
Please follow the steps below to setup your workstation:\n\n1. We recommend to use [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html) to avoid any conflicts with preinstalled Python libraries.\n```\nconda create --name=unskript-dev python=3.9.6 -y\nconda activate unskript-dev\n```\n\n2. Install the following pip packages:\n\n```\npip install -U pytest\npip install jinja2\npip install unskript-core\npip install unskript-custom\n```\n\n3. To create a new check template files, do the following:\n```\ncd $YOUR_REPO_DIRECTORY\n./Awesome-CloudOps-Automation/bin/unskript-add-check.sh -t <Check type> -n <short name for the check, separated by _> -d <description of the check>\n```\n\nThe above command will create the template .py and pytest files. For eg:\n\n```\n(py396) amits-mbp-2:custom-checks amit$ ls -l actions/aws_list_public_sg/\ntotal 24\n-rw-r--r--  1 amit  staff     0 Sep 25 17:42 __init__.py\n-rw-r--r--  1 amit  staff   349 Sep 25 17:42 aws_list_public_sg.json\n-rw-r--r--  1 amit  staff  2557 Sep 25 17:44 aws_list_public_sg.py\n-rw-r--r--  1 amit  staff  1409 Sep 25 21:09 test_aws_list_public_sg.py\n```\n\n4. Edit the <short_name>.py (in the above eg, its aws_list_public_sg.py) and write the logic for the check. Please ensure that you define the InputSchema as well, if required.\n\n5. In order to test the check, you need to add a credential for the check. You can use the following utility to add credential\n```\n./Awesome-CloudOps-Automation/bin/add_creds.sh -c <Credential type> -h\n```\n\n6. Once the credential is programmed, you are ready to test out the check using pytest (Please ensure that pytest is installed on your workstation). 
You can test the check by running:\n\n```\n pytest -s actions/<short_name>/test_<short_name>.py\n```\n\nPlease ensure if your check requires any inputs, you fill the *InputParamsJson* accordingly.\n\n### Creating custom actions using jupyterlab\n\nYou can refer to [this link](https://docs.unskript.com/unskript-product-documentation/actions/create-custom-actions) on how to create custom Action using Jupyter Lab interface\n\n### How to Copy Custom Actions and Runbook\n\nIf you have deployed our Awesome runbook as a Kubernetes POD then follow the step below\n1. Copy the custom actions from the POD to your local machine so you can bundle into your custom Docker for re-distribution\n```\nkubectl cp <AWESOME_POD_NAME>:/unskript/data/runbooks -n <NAMESPACE> $YOUR_REPO_DIRECTORY/runbooks\nkubectl cp <AWESOME_POD_NAME>:/unskript/data/actions -n <NAMESPACE> $YOUR_REPO_DIRECTORY/actions\ncd $YOUR_REPO_DIRECTORY\ngit clone https://github.com/unskript/Awesome-CloudOps-Automation.git\n\nExample:\n\nkubectl cp awesome-runbooks-0:/unskript/data/actions -n awesome-ops $YOUR_REPO_DIRECTORY/actions\nkubectl cp awesome-runbooks-0:/unskript/data/runbooks -n awesome-ops $YOUR_REPO_DIRECTORY/runbooks\n```\n\nIf you have deployed our Awesome runbook as a Docker instance, then you can use\nthe following step.\n```\nexport CONTAINER_ID=`docker ps | grep awesome-runbooks | awk '{print $1}'`\ndocker cp $CONTAINER_ID:/unskript/data/actions $HOME/Workspace/acme/actions\ndocker cp $CONTAINER_ID:/unskript/data/runbooks $HOME/Workspace/acme/runbooks\n```\n\n### How to specify values for arguments used in checks\n\nYou can specify the values for the arguments that are used in the Checks in the **checks** section of the unskript_ctl_config.yaml. 
For eg:\n   ```\n   checks:\n     # Arguments common to all checks, like region, namespace, etc.\n     arguments:\n       global:\n         region: us-west-2\n         namespace: \"awesome-ops\"\n         threshold: \"string\"\n         services: [\"calendar\", \"audit\"]\n   ```\n\n> Here namespace is the argument used in the checks and \"awesome-ops\" is the value assigned to that argument.\n\n#### Multiple values support\nYou can specify multiple values for an argument, using the keyword **matrix**. For eg:\n```\nchecks:\n  # Arguments common to all checks, like region, namespace, etc.\n  arguments:\n    global:\n      matrix:\n       namespace: [n1, n2]\n```\n\nThe above config makes unskript-ctl run for 2 values of namespace.\n\n**NOTE**: We support exactly ONE argument of type **matrix**.\n\n### Creating a schedule for checks to run periodically\n\nTo schedule checks, you first need to define a **job**.\n\nA job can be a set of checks or connector types.\n\nIn future, we will support suites and custom scripts.\n\nA job **SHOULD** have a unique name.\n\nYou define a job in the **jobs** section of the unskript_ctl_config.yaml.\n\nOnce you have define a job, you can use that job name to configure a schedule.\n\nThe schedule can be configured in the **scheduler** section of the config file.\n\nFor the schedule, you need to define the following:\n* cadence - cron style of cadence.\n* job_name - name of the job for the schedule.\n\n### How to get checks run report via email/slack\n\nYou can configure the email/slack notification via the **notification** section of the config file.\n\nFor email, we support 3 providers:\n\n1. SMTP: Any smtp server\n2. SES: Amazon SES\n3. 
Sendgrid\n\nOnce configured, you are all set to receive the report whenever check is run with `--report` option\n```\nunskript-ctl.sh -r --check --type k8s, aws, postgresql --report\n```\n\nHere, the checks for all three connectors, k8s, aws and postgresql are run and the result is sent via slack  or email to the recipient.\n"
  },
  {
    "path": "Redis/README.md",
    "content": "\n# Redis Actions\n* [Delete All Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_all_keys/README.md): Delete All Redis keys\n* [Delete Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_keys/README.md): Delete Redis keys matching pattern\n* [Delete Redis Unused keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_stale_keys/README.md): Delete Redis Unused keys given a time threshold in seconds\n* [Get Redis cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_cluster_health/README.md): This action gets the Redis cluster health.\n* [Get Redis Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_handle/README.md): Get Redis Handle\n* [Get Redis keys count](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_keys_count/README.md): Get Redis keys count matching pattern (default: '*')\n* [Get Redis metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_metrics/README.md): This action fetched redis metrics like index size, memory utilization.\n* [ List Redis Large keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_list_large_keys/README.md): Find Redis Large keys given a size threshold in bytes\n"
  },
  {
    "path": "Redis/__init__.py",
    "content": ""
  },
  {
    "path": "Redis/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Redis/legos/redis_delete_all_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Delete All Redis keys</h1>\n\n## Description\nReturns list of all deleted Redis keys\n\n## Lego Details\n    redis_delete_all_keys(handle)\n        handle: Object of type unSkript Redis Connector\n\n## Lego Input\nThis Lego takes 1 input: handle\n\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n"
  },
  {
    "path": "Redis/legos/redis_delete_all_keys/__init__.py",
    "content": ""
  },
  {
    "path": "Redis/legos/redis_delete_all_keys/redis_delete_all_keys.json",
    "content": "{\n    \"action_title\": \"Delete All Redis Keys\",\n    \"action_description\": \"Delete All Redis keys\",\n    \"action_entry_function\": \"redis_delete_all_keys\",\n    \"action_type\": \"LEGO_TYPE_REDIS\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"delete\"],\n    \"action_nouns\": [\"all\",\"keys\",\"redis\"],\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_REDIS\"]\n}"
  },
  {
    "path": "Redis/legos/redis_delete_all_keys/redis_delete_all_keys.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef redis_delete_all_keys_printer(output):\n    if output is None:\n        return\n    pprint.pprint(\"Deleted Keys: \")\n    pprint.pprint(output)\n\n\ndef redis_delete_all_keys(handle) -> List:\n    \"\"\"redis_delete_all_keys deleted the pattern matched keys.\n\n       :rtype: List of all deleted keys.\n    \"\"\"\n    result = []\n    try:\n        for key in handle.scan_iter('*'):\n            result.append(key)\n            handle.delete(key)\n    except Exception as e:\n        print(e)\n    return result\n"
  },
  {
    "path": "Redis/legos/redis_delete_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Delete Redis keys</h1>\n\n## Description\nReturns list of deleted Redis keys matching pattern\n\n## Lego Details\n    redis_delete_keys(handle, pattern: str)\n        handle: Object of type unSkript Redis Connector\n        pattern: Pattern for the searched keys\n\n## Lego Input\nThis Lego takes 2 inputs: handle and pattern.\n\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n"
  },
  {
    "path": "Redis/legos/redis_delete_keys/__init__.py",
    "content": ""
  },
  {
    "path": "Redis/legos/redis_delete_keys/redis_delete_keys.json",
    "content": "{\n    \"action_title\": \"Delete Redis Keys\",\n    \"action_description\": \"Delete Redis keys matching pattern\",\n    \"action_entry_function\": \"redis_delete_keys\",\n    \"action_type\": \"LEGO_TYPE_REDIS\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"delete\"],\n    \"action_nouns\": [\"pattern\",\"keys\",\"redis\"],\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_REDIS\"]\n}"
  },
  {
    "path": "Redis/legos/redis_delete_keys/redis_delete_keys.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    pattern: str = Field(\n        title='Pattern',\n        description='Pattern for the searched keys')\n\n\ndef redis_delete_keys_printer(output):\n    if output is None:\n        return\n    pprint.pprint(\"Deleted Keys: \")\n    pprint.pprint(output)\n\n\ndef redis_delete_keys(handle, pattern: str) -> List:\n    \"\"\"redis_delete_keys deleted the pattern matched keys.\n\n       :type pattern: string\n       :param pattern: Pattern for the searched keys.\n\n       :rtype: List of deleted keys.\n    \"\"\"\n    result = []\n    try:\n        for key in handle.scan_iter(pattern):\n            result.append(key)\n            handle.delete(key)\n    except Exception as e:\n        print(e)\n    return result\n"
  },
  {
    "path": "Redis/legos/redis_delete_stale_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Delete Redis Stale keys count</h1>\n\n## Description\nReturns Deleted Redis Unused keys given a time threshold in seconds.\n\n## Lego Details\n    redis_delete_stale_keys(handle, time_in_sec: int)\n        handle: Object of type unSkript Redis Connector\n        time_in_sec: Threshold Idle Time in Seconds\n\n## Lego Input\nThis Lego takes 2 inputs: handle and time_in_sec\n\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n"
  },
  {
    "path": "Redis/legos/redis_delete_stale_keys/__init__.py",
    "content": ""
  },
  {
    "path": "Redis/legos/redis_delete_stale_keys/redis_delete_stale_keys.json",
    "content": "{\n    \"action_title\": \"Delete Redis Unused keys\",\n    \"action_description\": \"Delete Redis Unused keys given a time threshold in seconds\",\n    \"action_entry_function\": \"redis_delete_stale_keys\",\n    \"action_type\": \"LEGO_TYPE_REDIS\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"delete\"],\n    \"action_nouns\": [\"stale\",\"keys\",\"redis\"],\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_REDIS\"]\n}"
  },
  {
    "path": "Redis/legos/redis_delete_stale_keys/redis_delete_stale_keys.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    time_in_sec: int = Field(\n        title='Time in Seconds',\n        description='Threshold Idle Time in Seconds')\n\n\ndef redis_delete_stale_keys_printer(output):\n    if output is None:\n        return\n    print(\"Deleted Keys: \")\n    pprint.pprint(output)\n\n\ndef redis_delete_stale_keys(handle, time_in_sec: int) -> Dict :\n    \"\"\"redis_delete_stale_keys returns deleted stale keys greater than given a threshold time\n\n       :type time_in_sec: int\n       :param time_in_sec: Threshold Idle Time in Seconds\n\n       :rtype: Dict of Deleted Unused keys \n    \"\"\"\n    try:\n        result = {}\n        for key in handle.scan_iter(\"*\"):\n            idle_time = handle.object(\"idletime\", key)\n            if idle_time > time_in_sec:\n                result[key]= idle_time\n                handle.delete(key)\n    except Exception as e:\n        result[\"error\"] = e\n    return result\n"
  },
  {
    "path": "Redis/legos/redis_get_cluster_health/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get Redis cluster health</h1>\n\n## Description\nThis action gets the Redis cluster health.\n\n## Lego Details\n\tredis_get_cluster_health(handle, client_threshold: int = 100, memory_threshold: int = 80)\n\t\thandle: Object of type unSkript REDIS Connector.\n\t\tclient_threshold: Threshold for the number of connected clients considered abnormal\n\t\tmemory_threshold: Threshold for the percentage of memory usage considered abnormal\n\n\n## Lego Input\nThis Lego takes inputs handle, client_threshold, memory_threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Redis/legos/redis_get_cluster_health/__init__.py",
    "content": ""
  },
  {
    "path": "Redis/legos/redis_get_cluster_health/redis_get_cluster_health.json",
    "content": "{\n  \"action_title\": \"Get Redis cluster health\",\n  \"action_description\": \"This action gets the Redis cluster health.\",\n  \"action_type\": \"LEGO_TYPE_REDIS\",\n  \"action_entry_function\": \"redis_get_cluster_health\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_poll\": true,\n  \"action_supports_iteration\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_REDIS\"],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "Redis/legos/redis_get_cluster_health/redis_get_cluster_health.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    client_threshold: Optional[int] = Field(\n        10000,\n        title='Client threshold',\n        description='Threshold for the number of connected clients considered abnormal. Default- 100 clients')\n    memory_threshold: Optional[int] = Field(\n        80,\n        title='Memory threshold (in %)',\n        description='Threshold for the percentage of memory usage considered abnormal. Default- 80%')\n\n\n\ndef redis_get_cluster_health_printer(output):\n    if output is None or not output[1]:\n        print(\"No health information available.\")\n        return\n\n    status, analysis = output\n\n    print(\"\\nRedis Health Info:\")\n    if status:\n        print(\"Status: Healthy\")\n    else:\n        print(\"Status: Unhealthy\")\n\n    for key, value in analysis.items():\n        if key != 'abnormal_metrics':\n            print(f\"{key}: {value}\")\n\n    if 'abnormal_metrics' in analysis:\n        print(\"\\nAbnormal Metrics Detected:\")\n        for metric, message in analysis['abnormal_metrics']:\n            print(f\"{metric}: {message}\")\n\n\ndef redis_get_cluster_health(handle, client_threshold: int = 10000, memory_threshold: int = 80) -> Tuple:\n    \"\"\"Returns the health of the Redis instance.\n    \n    :type handle: object\n    :param handle: Redis connection object\n    \n    :type client_threshold: int\n    :param client_threshold: Threshold for the number of connected clients considered abnormal\n    \n    :type memory_threshold: int\n    :param memory_threshold: Threshold for the percentage of memory usage considered abnormal\n    \n    :rtype: Tuple containing a boolean indicating overall health and a dictionary with detailed information\n    \"\"\"\n    # Metrics that need to be checked\n    health_metrics = [\n        'uptime_in_seconds',\n        
'connected_clients',\n        'used_memory',\n        'maxmemory',\n        'rdb_last_bgsave_status',\n        'aof_last_bgrewrite_status',\n        'aof_last_write_status',\n    ]\n\n    health_info = {}\n    abnormal_metrics = []\n\n    try:\n        general_info = handle.info()\n        if not isinstance(general_info, dict):\n            raise Exception(\"Unexpected format for general info\")\n\n        # Iterate through the health metrics to check for soecific keys\n        for key in health_metrics:\n            value = general_info.get(key)\n            if value is None:\n                continue\n\n            health_info[key] = value\n\n            # Check if connected clients exceed the threshold\n            if key == 'connected_clients' and int(value) > client_threshold:\n                abnormal_metrics.append((key, f\"High number of connected clients: {value}\"))\n\n            # Check if memory usage exceeds the threshold\n            if key == 'used_memory' and general_info.get('maxmemory') and int(value) / int(general_info['maxmemory']) * 100 > memory_threshold:\n                abnormal_metrics.append((key, f\"Memory utilization is above {memory_threshold}%: {value}\"))\n\n            # Check for abnormal statuses\n            if key in ['rdb_last_bgsave_status', 'aof_last_bgrewrite_status', 'aof_last_write_status'] and value != 'ok':\n                abnormal_metrics.append((key, f\"Status not OK: {value}\"))\n\n\n        # Append abnormal metrics if any are found\n        if abnormal_metrics:\n            health_info['abnormal_metrics'] = abnormal_metrics\n            return (False, health_info)\n\n        return (True, health_info)\n\n    except Exception as e:\n        raise e\n"
  },
  {
    "path": "Redis/legos/redis_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Redis Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Get Redis Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    redis_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Redis Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Redis/legos/redis_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Redis/legos/redis_get_handle/redis_get_handle.json",
    "content": "{\n\"action_title\": \"Get Redis Handle\",\n\"action_description\": \"Get Redis Handle\",\n\"action_type\": \"LEGO_TYPE_REDIS\",\n\"action_entry_function\": \"redis_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"get\"\n],\n\"action_nouns\": [\n\"redis\",\n\"handle\"\n]\n}\n"
  },
  {
    "path": "Redis/legos/redis_get_handle/redis_get_handle.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef redis_get_handle(handle):\n    \"\"\"redis_get_handle returns the Redis handle.\n\n       :rtype: Redis Handle\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Redis/legos/redis_get_keys_count/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>Get Redis keys count</h1>\n\n## Description\nReturns Redis keys count matching pattern (default: '*')\n\n## Lego Details\n    redis_get_keys_count(handle, pattern: str)\n        handle: Object of type unSkript Redis Connector\n        pattern: Pattern for the searched keys\n\n## Lego Input\nThis Lego takes 2 inputs: handle and pattern.\n\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n"
  },
  {
    "path": "Redis/legos/redis_get_keys_count/__init__.py",
    "content": ""
  },
  {
    "path": "Redis/legos/redis_get_keys_count/redis_get_keys_count.json",
    "content": "{\n    \"action_title\": \"Get Redis keys count\",\n    \"action_description\": \"Get Redis keys count matching pattern (default: '*')\",\n    \"action_entry_function\": \"redis_get_keys_count\",\n    \"action_type\": \"LEGO_TYPE_REDIS\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_INT\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_verbs\": [\"get\"],\n    \"action_nouns\": [\"count\",\"keys\",\"redis\"],\n    \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" , \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_REDIS\"]\n}\n"
  },
  {
    "path": "Redis/legos/redis_get_keys_count/redis_get_keys_count.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nimport pprint\nfrom beartype import beartype\nfrom pydantic import BaseModel, Field\nfrom typing import Optional\n\nclass InputSchema(BaseModel):\n    pattern: Optional[str] = Field(\n        default='*',\n        title='Pattern',\n        description='Pattern for the searched keys')\n\n@beartype\ndef redis_get_keys_count_printer(output):\n    if output is None:\n        return\n    pprint.pprint({\"Matched keys count\": output})\n\n@beartype\ndef redis_get_keys_count(handle, pattern: str=\"*\"):\n    \"\"\"redis_get_keys_count returns the matched keys count.\n\n       :type pattern: string\n       :param pattern: Pattern for the searched keys.\n\n       :rtype: Matched keys count.\n    \"\"\"\n\n    output = 0\n    for key in handle.scan_iter(pattern):\n        output += 1\n\n    return output\n"
  },
  {
    "path": "Redis/legos/redis_get_metrics/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get Redis metrics</h1>\n\n## Description\nThis action fetched redis metrics like index size, memory utilization.\n\n## Lego Details\n\tredis_get_metrics(handle)\n\t\thandle: Object of type unSkript REDIS Connector.\n\n## Lego Input\nThis Lego takes inputs handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Redis/legos/redis_get_metrics/__init__.py",
    "content": ""
  },
  {
    "path": "Redis/legos/redis_get_metrics/redis_get_metrics.json",
    "content": "{\n  \"action_title\": \"Get Redis metrics\",\n  \"action_description\": \"This action fetched redis metrics like index size, memory utilization.\",\n  \"action_type\": \"LEGO_TYPE_REDIS\",\n  \"action_entry_function\": \"redis_get_metrics\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_poll\": true,\n  \"action_supports_iteration\": true,\n  \"action_categories\": [\"CATEGORY_TYPE_INFORMATION\" , \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_REDIS\"]\n}"
  },
  {
    "path": "Redis/legos/redis_get_metrics/redis_get_metrics.py",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nfrom typing import Dict\nfrom pydantic import BaseModel\nfrom tabulate import tabulate\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\n\ndef redis_get_metrics_printer(output):\n    if output is None:\n        return\n    print(\"\\nRedis Metrics: \")\n    headers = [\"Metric\", \"Value\"]\n    data = list(output.items())\n    print(tabulate(data, headers, tablefmt=\"pretty\"))\n\ndef bytes_to_human_readable(bytes, units=[' bytes', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB']):\n    # Return a human-readable string representation of bytes.\n    return str(bytes) + units[0] if bytes < 1024 else bytes_to_human_readable(bytes >> 10, units[1:])\n\ndef redis_get_metrics(handle) -> Dict:\n    \"\"\"\n    redis_get_metrics returns redis metrics like index size, memory utilization.\n\n    :type handle: object\n    :param handle: Object returned from task.validate(...).\n\n    :rtype: Dict containing index size and memory usage metrics\n    \"\"\"\n    metrics = {}\n    try:\n        # Getting the information about the Redis server\n        info = handle.info()\n\n        # Initialize keys counter\n        total_keys = 0\n\n        # Iterate over all dbs in the info output\n        for key in info:\n            if key.startswith('db'):\n                total_keys += info[key]['keys']\n        metrics['index_size'] = total_keys # Total number of keys.\n        metrics['memory_utilization'] = bytes_to_human_readable(int(info['used_memory']))\n        metrics['dataset_size'] = bytes_to_human_readable(int(info['used_memory_dataset']))\n\n    except Exception as e:\n        raise e\n    return metrics\n\n\n\n"
  },
  {
    "path": "Redis/legos/redis_list_large_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h1>List Redis Large keys</h1>\n\n## Description\nReturns Large Redis keys given a threshold length in bytes.\n\n## Lego Details\n    redis_list_large_keys(handle, time_in_sec: int)\n        handle: Object of type unSkript Redis Connector\n        size_in_bytes: Threshold Length of Key in Bytes\n\n## Lego Input\nThis Lego takes 2 inputs: handle and size_in_bytes\n\n\n## Lego Output\nHere is a sample output.\n\n<img src=\"./1.png\">\n"
  },
  {
    "path": "Redis/legos/redis_list_large_keys/__init__.py",
    "content": ""
  },
  {
    "path": "Redis/legos/redis_list_large_keys/redis_list_large_keys.json",
    "content": "{\n    \"action_title\": \" List Redis Large keys\",\n    \"action_description\": \"Find Redis Large keys given a size threshold in bytes\",\n    \"action_entry_function\": \"redis_list_large_keys\",\n    \"action_type\": \"LEGO_TYPE_REDIS\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n    \"action_supports_poll\": true,\n    \"action_supports_iteration\": true,\n    \"action_is_check\": true,\n    \"action_verbs\": [\"list\"],\n    \"action_nouns\": [\"large\",\"keys\",\"redis\"],\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_REDIS\"],\n    \"action_next_hop\": [\"\"],\n    \"action_next_hop_parameter_mapping\": {}\n}\n"
  },
  {
    "path": "Redis/legos/redis_list_large_keys/redis_list_large_keys.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\n\nfrom typing import Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\nclass InputSchema(BaseModel):\n    size_in_bytes: Optional[int] = Field(\n        5368709120, # 5GB\n        title='Size in Bytes',\n        description='Threshold Size of Key in Bytes')\n\n\ndef redis_list_large_keys_printer(output):\n    status, data = output\n\n    if status:\n        print(\"There are no large keys\")\n        return\n    else:\n        flattened_data = []\n        for item in data:\n            for key, value in item.items():\n                flattened_data.append([key.decode(), value])\n\n        headers = [\"Key Name\", \"Key Size (Bytes)\"]\n\n        print(\"Large keys:\")\n        print(tabulate(flattened_data, headers=headers, tablefmt=\"grid\"))\n\n\n\ndef redis_list_large_keys(handle, size_in_bytes: int = 5368709120) -> Tuple :\n    \"\"\"redis_list_large_keys returns deleted stale keys greater than given a threshold time\n\n       :type size_in_bytes: int\n       :param size_in_bytes: Threshold Size of Key in Bytes\n\n       :rtype: Dict of Large keys \n    \"\"\"\n    try:\n        result = []\n        keys = handle.keys('*')\n        for key in keys:\n            value = handle.memory_usage(key)\n            if value > int(size_in_bytes):\n                large_key = {\"large_key\": key.decode('utf-8'), \"value\": value}\n                result.append(large_key)\n    except Exception as e:\n        raise e\n    if result:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "Rest/README.md",
    "content": "\n# Rest Actions\n* [Get REST handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_get_handle/README.md): Get REST handle\n* [Call REST Methods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_methods/README.md): Call REST Methods.\n"
  },
  {
    "path": "Rest/__init__.py",
    "content": ""
  },
  {
    "path": "Rest/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Rest/legos/rest_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get REST Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get REST handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    rest_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Rest Connector\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Rest/legos/rest_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Rest/legos/rest_get_handle/rest_get_handle.json",
    "content": "{\n\"action_title\": \"Get REST handle\",\n\"action_description\": \"Get REST handle\",\n\"action_type\": \"LEGO_TYPE_REST\",\n\"action_entry_function\": \"rest_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"get\"\n],\n\"action_nouns\": [\n\"rest\",\n\"handle\"\n]\n}\n"
  },
  {
    "path": "Rest/legos/rest_get_handle/rest_get_handle.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef rest_get_handle(handle) -> None:\n    \"\"\"\n    rest_get_handle returns the REST handle.\n    :rtype: REST handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Rest/legos/rest_methods/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Call REST Methods</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego call REST Methods.\r\n\r\n\r\n## Lego Details\r\n\r\n    rest_methods(handle: object, relative_url_path: str, method: Method, params: dict,\r\n                headers: dict, body: dict)\r\n\r\n        handle: Object of type unSkript Rest Connector\r\n        relative_url_path: Relative URL path for the request. eg /users.\r\n        method: Rest Method Supported methods : GET, POST, PUT, PATCH and DELETE\r\n        params: Dictionary or bytes to be sent in the query eg {'foo': 'bar'}.\r\n        headers: Dictionary of HTTP Headers to send with the requests.\r\n        body: Json to send in the body of the request eg {'foo': 'bar'}.\r\n\r\n## Lego Input\r\nThis Lego take six input handle, relative_url_path, method, params, headers and body.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Rest/legos/rest_methods/__init__.py",
    "content": ""
  },
  {
    "path": "Rest/legos/rest_methods/rest_methods.json",
    "content": "{\n\"action_title\": \"Call REST Methods\",\n\"action_description\": \"Call REST Methods.\",\n\"action_type\": \"LEGO_TYPE_REST\",\n\"action_entry_function\": \"rest_methods\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_verbs\": [\n\"call\"\n],\n\"action_nouns\": [\n\"rest\",\n\"methods\",\n\"apis\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_REST\"]\n}\n"
  },
  {
    "path": "Rest/legos/rest_methods/rest_methods.py",
    "content": "# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n\nimport pprint\nfrom typing import Optional, Dict\nfrom enum import Enum\nfrom unskript.enums.rest_enums import Method\nimport html_to_json\nfrom pydantic import BaseModel, Field\nfrom werkzeug.exceptions import MethodNotAllowed\n\n\nclass InputSchema(BaseModel):\n    relative_url_path: str = Field(\n        title='Path',\n        description='Relative URL path for the request. eg /users.'\n    )\n    method: Method = Field(\n        'GET',\n        title='Method',\n        description='''\n                    Rest Method\n                    Supported methods : GET, POST, PUT, PATCH and DELETE\n                    '''\n    )\n    params: Optional[dict] = Field(\n        default=None,\n        title='URL Parameters',\n        description=\"Dictionary or bytes to be sent in the query eg {'foo': 'bar'}\"\n    )\n    headers: Optional[dict] = Field(\n        default=None,\n        title='Headers',\n        description='''\n                Dictionary of HTTP Headers to send with the requests.\n                Example: {“Accept”: “*/*”}\n            '''\n    )\n    body: Optional[dict] = Field(\n        default=None,\n        title='Body',\n        description=\"Json to send in the body of the request eg {'foo': 'bar'}\"\n    )\n\n\ndef rest_methods_printer(output):\n    if output is None:\n        return None\n    print('\\n')\n    pprint.pprint(output)\n    return output\n\n\ndef rest_methods(\n    handle,\n    relative_url_path: str,\n    method: Method,\n    params: dict = None,\n    headers: dict = None,\n    body: dict = None) -> Dict:\n\n    \"\"\"rest_methods executes the rest method\n\n        :type relative_url_path: string\n        :param relative_url_path: Relative URL path for the request. 
eg /users.\n\n        :type method: Method\n        :param method: Rest Method Supported methods : GET, POST, PUT, PATCH and DELETE\n\n        :type params: dict\n        :param params: Dictionary or bytes to be sent in the query eg {'foo': 'bar'}.\n\n        :type headers: dict\n        :param headers: Dictionary of HTTP Headers to send with the requests.\n\n        :type body: dict\n        :param body: Json to send in the body of the request eg {'foo': 'bar'}.\n\n        :rtype: Dict\n    \"\"\"\n    if method == Method.GET:\n        res = handle.get(relative_url_path,\n                         params=params or {},\n                         headers=headers or {}\n                         )\n    elif method == Method.POST:\n        res = handle.post(relative_url_path, json=body, params=params or {}, headers=headers or {})\n    elif method == Method.PUT:\n        res = handle.put(relative_url_path, json=body, params=params or {}, headers=headers or {})\n    elif method == Method.PATCH:\n        res = handle.patch(relative_url_path, json=body, params=params or {}, headers=headers or {})\n    elif method == Method.DELETE:\n        res = handle.delete(relative_url_path, params=params or {}, headers=headers or {})\n        if res.status_code == 200:\n            print(f\"Status: {res.status_code}\")\n        else:\n            try:\n                result = res.json()\n            except Exception:\n                result = html_to_json.convert(res.content)\n            print(f\"Status: {res.status_code}, Response:{result}\")\n        return {}\n    else:\n        raise MethodNotAllowed(f'Unsupported method {method}')\n\n    handle.close()\n    try:\n        res.raise_for_status()\n        result = res.json()\n    except Exception as e:\n        return {'Error while executing api': {str(e)}}\n\n    return result\n"
  },
  {
    "path": "SSH/README.md",
    "content": "\n# SSH Actions\n* [SSH Execute Remote Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_execute_remote_command/README.md): SSH Execute Remote Command\n* [SSH: Locate large files on host](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_find_large_files/README.md): This action scans the file system on a given host and returns a dict of large files. The command used to perform the scan is \"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\"\n* [Get SSH handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_get_handle/README.md): Get SSH handle\n* [SSH Restart Service Using sysctl](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_restart_service_using_sysctl/README.md): SSH Restart Service Using sysctl\n* [SCP: Remote file transfer over SSH](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_scp/README.md): Copy files from or to remote host. Files are copied over SCP. \n"
  },
  {
    "path": "SSH/__init__.py",
    "content": ""
  },
  {
    "path": "SSH/legos/__init__.py",
    "content": ""
  },
  {
    "path": "SSH/legos/ssh_execute_remote_command/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>SSH Execute Remote Command</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego executes the given command on the remote.\r\n\r\n\r\n## Lego Details\r\n\r\n    ssh_execute_remote_command(sshClient, hosts: List[str], command: str, sudo: bool)\r\n\r\n        sshClient: Object of type unSkript ssh Connector\r\n        hosts: List of hosts to connect to. For eg. [\"host1\", \"host2\"].\r\n        command: Command to be executed on the remote server.\r\n        sudo: Run the command with sudo.\r\n\r\n## Lego Input\r\nThis Lego take four inputs handle, hosts, command and sudo.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SSH/legos/ssh_execute_remote_command/__init__.py",
    "content": ""
  },
  {
    "path": "SSH/legos/ssh_execute_remote_command/ssh_execute_remote_command.json",
    "content": "{\n\"action_title\": \"SSH Execute Remote Command\",\n\"action_description\": \"SSH Execute Remote Command\",\n\"action_type\": \"LEGO_TYPE_SSH\",\n\"action_entry_function\": \"ssh_execute_remote_command\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_verbs\": [\n\"execute\"\n],\n\"action_nouns\": [\n\"ssh\",\n\"command\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SSH\"]\n}\n"
  },
  {
    "path": "SSH/legos/ssh_execute_remote_command/ssh_execute_remote_command.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List, Optional, Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    hosts: List[str] = Field(\n        title='Hosts',\n        description='List of hosts to connect to. For eg. [\"host1\", \"host2\"].'\n    )\n    command: str = Field(\n        title='Command',\n        description='Command to be executed on the remote server.'\n    )\n    sudo: Optional[bool] = Field(\n        default=False,\n        title='Run with sudo',\n        description='Run the command with sudo.'\n    )\n    proxy_host: Optional[str] = Field(\n        title='Proxy host',\n        description='Override the proxy host provided in the credentials. \\\n            It still uses the proxy_user and port from the credentials.'\n    )\n\n\ndef ssh_execute_remote_command_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(output)\n\n\ndef ssh_execute_remote_command(\n        sshClient,\n        hosts: List[str],\n        command: str,\n        sudo: bool = False,\n        proxy_host: str = None\n        ) -> Dict:\n    \"\"\"ssh_execute_remote_command executes the given command on the remote\n\n        :type hosts: List[str]\n        :param hosts: List of hosts to connect to. For eg. 
[\"host1\", \"host2\"].\n\n        :type command: str\n        :param command: Command to be executed on the remote server.\n\n        :type sudo: bool\n        :param sudo: Run the command with sudo.\n\n        :type proxy_host: str\n        :param proxy_host: Optional proxy host to use.\n\n        :rtype: dict of command output\n    \"\"\"\n\n    client = sshClient(hosts, proxy_host)\n\n    runCommandOutput = client.run_command(command=command, sudo=sudo)\n    client.join()\n    res = {}\n\n    for host_output in runCommandOutput:\n        hostname = host_output.host\n        output = []\n        for line in host_output.stdout:\n            output.append(line)\n\n        o = \"\\n\".join(output)\n        res[hostname] = o\n\n    return res\n"
  },
  {
    "path": "SSH/legos/ssh_find_large_files/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>SSH: Locate large files on host</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego scans the file system on a given host and returns a dict of large files. The command used to perform the scan is \\\"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\\\".\r\n\r\n\r\n## Lego Details\r\n\r\n    ssh_execute_remote_command(sshClient, host: str, inspect_folder: str, threshold: int,\r\n                                sudo: bool, count: int)\r\n\r\n        sshClient: Object of type unSkript ssh Connector\r\n        hosts: Host to connect to. Eg 10.10.10.10.\r\n        inspect_folder: Folder to inspect on the remote host.\r\n        sudo: Run the scan with sudo.\r\n        threshold: Threshold the files on given size. Specified in Mb. Default is 100Mb.\r\n        count: Number of files to report from the scan. Default is 10.\r\n\r\n## Lego Input\r\nThis Lego take six inputs sshClient, hosts, inspect_folder, threshold, count and sudo.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SSH/legos/ssh_find_large_files/__init__.py",
    "content": ""
  },
  {
    "path": "SSH/legos/ssh_find_large_files/ssh_find_large_files.json",
    "content": "{\n  \"action_title\": \"SSH: Locate large files on host\",\n\n  \"action_description\": \"This action scans the file system on a given host and returns a dict of large files. The command used to perform the scan is \\\"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\\\"\",\n\n  \"action_type\": \"LEGO_TYPE_SSH\",\n  \"action_entry_function\": \"ssh_find_large_files\",\n  \"action_needs_credential\": true,\n  \"action_supports_poll\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_supports_iteration\": true,\n  \"action_verbs\": [\n    \"find\",\n    \"locate\"\n  ],\n  \"action_nouns\": [\n    \"ssh\",\n    \"files\"\n  ],\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SSH\"]\n}\n"
  },
  {
    "path": "SSH/legos/ssh_find_large_files/ssh_find_large_files.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    host: str = Field(\n        title='Host',\n        description='Host to connect to. Eg 10.10.10.10'\n    )\n    proxy_host: Optional[str] = Field(\n        title='Proxy host',\n        description='Override the proxy host provided in the credentials. \\\n            It still uses the proxy_user and port from the credentials.'\n    )\n    inspect_folder: str = Field(\n        title='Inspect Folder',\n        description='''Folder to inspect on the remote host. Folders are scanned using \\\n            \"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\"'''\n    )\n    threshold: Optional[int] = Field(\n        default=100,\n        title=\"Size Threshold\",\n        description=\"Threshold the files on given size. Specified in Mb. Default is 100Mb\"\n    )\n    count: Optional[int] = Field(\n        default=10,\n        title=\"Count\",\n        description=\"Number of files to report from the scan. Default is 10\"\n    )\n    sudo: Optional[bool] = Field(\n        default=False,\n        title='Run with sudo',\n        description='Run the scan with sudo.'\n    )\n\ndef ssh_find_large_files_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(output)\n\n\ndef ssh_find_large_files(\n    sshClient,\n    host: str,\n    inspect_folder: str,\n    proxy_host: str = None,\n    threshold: int = 0,\n    sudo: bool = False,\n    count: int = 10) -> dict:\n\n    \"\"\"ssh_find_large_files scans the file system on a given host\n\n        :type host: str\n        :param host: Host to connect to. Eg 10.10.10.10.\n\n        :type inspect_folder: str\n        :param inspect_folder: Folder to inspect on the remote host.\n\n        :type proxy_host: str\n        :param proxy_host: Proxy Host to connect host via. 
Eg 10.10.10.10.\n\n        :type sudo: bool\n        :param sudo: Run the scan with sudo.\n\n        :type threshold: bool\n        :param threshold: Threshold the files on given size. Specified in Mb. Default is 100Mb.\n\n        :type count: bool\n        :param count: Number of files to report from the scan. Default is 10.\n\n        :rtype:\n    \"\"\"\n\n    client = sshClient([host], proxy_host)\n\n    # find size in Kb\n    command = \"find \" + inspect_folder + \\\n        \" -type f -exec du -sm '{}' + | sort -rh | head -n \" + str(count)\n    runCommandOutput = client.run_command(command=command, sudo=sudo)\n    client.join()\n    res = {}\n\n    for host_output in runCommandOutput:\n        for line in host_output.stdout:\n            # line is of the form {size} {fullfilename}\n            (size, filename) = line.split()\n            if int(size) > threshold:\n                res[filename] = int(size)\n\n    return res\n"
  },
  {
    "path": "SSH/legos/ssh_get_ec2_instances_with_low_available_disk_size/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get AWS EC2 with low available disk size</h1>\n\n## Description\nThis action retrieves the public IP's of AWS EC2 instances that have low available disk space.\n\n## Lego Details\n\tssh_get_ec2_instances_with_low_available_disk_size(handle, hosts: list, threshold: float = 5)\n\t\thandle: Object of type unSkript SSH Connector.\n\t\thosts: List of hosts to connect to.\n\t\tthreshold: The disk size threshold in GB.(Optional)\n\n\n## Lego Input\nThis Lego takes inputs handle, hosts, threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SSH/legos/ssh_get_ec2_instances_with_low_available_disk_size/__init__.py",
    "content": ""
  },
  {
    "path": "SSH/legos/ssh_get_ec2_instances_with_low_available_disk_size/ssh_get_ec2_instances_with_low_available_disk_size.json",
    "content": "{\n  \"action_title\": \"Get AWS EC2 with low available disk size\",\n  \"action_description\": \"This action retrieves the public IP's of AWS EC2 instances that have low available disk space.\",\n  \"action_type\": \"LEGO_TYPE_SSH\",\n  \"action_entry_function\": \"ssh_get_ec2_instances_with_low_available_disk_size\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\", \"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_EC2\", \"CATEGORY_TYPE_SSH\"],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "SSH/legos/ssh_get_ec2_instances_with_low_available_disk_size/ssh_get_ec2_instances_with_low_available_disk_size.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.ssh.ssh_execute_remote_command.ssh_execute_remote_command import ssh_execute_remote_command\n\n\nclass InputSchema(BaseModel):\n    hosts: list = Field(\n        ...,\n        description='List of hosts to connect to. For eg. [\"host1\", \"host2\"].',\n        title='Hosts',\n    )\n    threshold: Optional[float] = Field(\n        default = 5, description='The disk size threshold in GB. Default- 5GB', title='Threshold(in GB)'\n    )\n\n\ndef ssh_get_ec2_instances_with_low_available_disk_size_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef ssh_get_ec2_instances_with_low_available_disk_size(handle, hosts: list, threshold: float = 5) -> Tuple:\n    \"\"\"Checks the available root disk size and compares it with the threshold.\n\n    :type handle: SSH Client object\n    :param handle: The SSH client.\n\n    :type hosts: list\n    :param hosts: List of hosts to connect to.\n\n    :type threshold: float\n    :param threshold: The disk size threshold in GB.\n\n    :rtype: Status, list of dicts of hosts with available disk size less than the threshold\n    \"\"\"\n     # Command to determine the root disk\n    determine_disk_command = \"lsblk -o NAME,MOUNTPOINT | grep ' /$' | awk '{print $1}' | tr -d '└─-'\"\n    disks = ssh_execute_remote_command(handle, hosts, determine_disk_command)\n\n    # Check if all disks are the same for all hosts\n    unique_disks = set(disks.values())\n    if len(unique_disks) > 1:\n        disk_details = ', '.join([f\"{host}: {disk}\" for host, disk in disks.items()])\n        raise ValueError(f\"The provided hosts have different disk names. Details: {disk_details}. 
Please execute them one by one.\")\n    disk = unique_disks.pop()\n\n\n    # Create the command using the determined common disk\n    command = f\"df -h /dev/{disk.strip()} | tail -1\"\n    print(f\"Executing command: {command}\")\n    outputs = ssh_execute_remote_command(handle, hosts, command)\n\n    result = []\n    for host, host_output in outputs.items():\n        try:\n            # Extracting available space from the output\n            parts = host_output.split()\n            if len(parts) > 4:\n                available = parts[3]  # Assuming 'Available' column is the 4th one\n                available_size = float(available[:-1])  # excluding the 'G'\n\n                if available_size < threshold:\n                    result.append({host: available_size})\n            else:\n                print(f'For host {host}, the output is not in expected format.')\n                pass\n        except Exception as e:\n            raise e\n\n    if result:\n        return (False, result)\n    return (True, None)\n"
  },
  {
    "path": "SSH/legos/ssh_get_ec2_instances_with_low_memory_size/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get AWS EC2 with low free memory size</h1>\n\n## Description\nThis action uses SSH to identify AWS EC2 instances with low available memory.\n\n## Lego Details\n\tssh_get_ec2_instances_with_low_memory_size(handle, hosts: list, threshold: float = 400)\n\t\thandle: Object of type unSkript SSH Connector.\n\t\thosts: List of hosts to connect to.\n\t\tthreshold: The memory size threshold in MB.(Optional)\n\n## Lego Input\nThis Lego takes inputs handle,\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SSH/legos/ssh_get_ec2_instances_with_low_memory_size/__init__.py",
    "content": ""
  },
  {
    "path": "SSH/legos/ssh_get_ec2_instances_with_low_memory_size/ssh_get_ec2_instances_with_low_memory_size.json",
    "content": "{\n  \"action_title\": \"Get AWS EC2 with low free memory size\",\n  \"action_description\": \"This action uses SSH to identify AWS EC2 instances with low available memory.\",\n  \"action_type\": \"LEGO_TYPE_SSH\",\n  \"action_entry_function\": \"ssh_get_ec2_instances_with_low_memory_size\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true,\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\", \"CATEGORY_TYPE_AWS\", \"CATEGORY_TYPE_AWS_EC2\", \"CATEGORY_TYPE_SSH\"],\n  \"action_next_hop\": [\"\"],\n  \"action_next_hop_parameter_mapping\": {}\n}"
  },
  {
    "path": "SSH/legos/ssh_get_ec2_instances_with_low_memory_size/ssh_get_ec2_instances_with_low_memory_size.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List, Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.ssh.ssh_execute_remote_command.ssh_execute_remote_command import ssh_execute_remote_command\n\n\nclass InputSchema(BaseModel):\n    hosts: list = Field(\n        ...,\n        description='List of hosts to connect to. For eg. [\"host1\", \"host2\"].',\n        title='Hosts',\n    )\n    threshold: Optional[float] = Field(\n        default= 400,\n        description='Optional memory size threshold in MB. Default- 400 MB',\n        title='Threshold(in MB)',\n    )\n\n\ndef ssh_get_ec2_instances_with_low_memory_size_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef ssh_get_ec2_instances_with_low_memory_size(handle, hosts: list, threshold: float = 400) -> Tuple:\n    \"\"\"Get EC2 instances with free memory size less than a given threshold.\n\n    :type handle: SSH Client object\n    :param handle: The SSH client.\n\n    :type hosts: list\n    :param hosts: List of hosts to connect to.\n\n    :type threshold: float\n    :param threshold: Optional memory size threshold in MB.\n\n    :rtype: Status, list of dicts of hosts with available disk size less than the threshold along with the size in MB\n    \"\"\"\n    command = \"free -m| awk 'NR==2{printf \\\"%.2f\\\", $7}'\"\n    output = ssh_execute_remote_command(handle, hosts, command)\n    result = []\n    hosts_with_less_memory = {}\n    for host, host_output in output.items():\n        try:\n            available_memory = float(host_output)\n\n            # Compare the available memory size with the threshold\n            if available_memory < threshold:\n                hosts_with_less_memory[host] = available_memory\n                result.append(hosts_with_less_memory)\n        except Exception as e:\n            raise e\n\n    if len(result) != 0:\n        return (False, 
result)\n    return (True, None)\n\n\n"
  },
  {
    "path": "SSH/legos/ssh_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get SSH handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get SSH handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    ssh_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript ssh Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SSH/legos/ssh_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "SSH/legos/ssh_get_handle/ssh_get_handle.json",
    "content": "{\n\"action_title\": \"Get SSH handle\",\n\"action_description\": \"Get SSH handle\",\n\"action_type\": \"LEGO_TYPE_SSH\",\n\"action_entry_function\": \"ssh_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"get\"\n],\n\"action_nouns\": [\n\"ssh\",\n\"handle\"\n]\n}\n"
  },
  {
    "path": "SSH/legos/ssh_get_handle/ssh_get_handle.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef ssh_get_handle(handle):\n    \"\"\"\n    ssh_get_handle returns the SSH handle.\n       :rtype: SSH handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "SSH/legos/ssh_get_hosts_with_low_disk_latency/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get hosts with low disk latency </h1>\n\n## Description\nThis action checks the disk latency on the provided hosts by running a disk write command and measuring the time taken. If the time taken exceeds a given threshold, the host is flagged as having potential latency issues.\n\n## Lego Details\n\tssh_get_hosts_with_low_disk_latency(handle, hosts: list, threshold: int = 5)\n\t\thandle: Object of type unSkript SSH Connector.\n\t\thosts: List of hosts to connect to.\n\t\tthreshold: Time threshold in seconds to flag a host for potential latency issues.\n\n\n## Lego Input\nThis Lego takes inputs handle, hosts, threshold.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SSH/legos/ssh_get_hosts_with_low_disk_latency/__init__.py",
    "content": ""
  },
  {
    "path": "SSH/legos/ssh_get_hosts_with_low_disk_latency/ssh_get_hosts_with_low_disk_latency.json",
    "content": "{\n  \"action_title\": \"Get hosts with low disk latency \",\n  \"action_description\": \"This action checks the disk latency on the provided hosts by running a disk write command and measuring the time taken. If the time taken exceeds a given threshold, the host is flagged as having potential latency issues.\",\n  \"action_type\": \"LEGO_TYPE_SSH\",\n  \"action_entry_function\": \"ssh_get_hosts_with_low_disk_latency\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n  \"action_is_check\": true,\n  \"action_next_hop\": [\n    \"\"\n  ],\n  \"action_next_hop_parameter_mapping\": {},\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "SSH/legos/ssh_get_hosts_with_low_disk_latency/ssh_get_hosts_with_low_disk_latency.py",
    "content": "##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Tuple, Optional\nfrom pydantic import BaseModel, Field\nfrom unskript.legos.ssh.ssh_execute_remote_command.ssh_execute_remote_command import ssh_execute_remote_command\n\n\nclass InputSchema(BaseModel):\n    hosts: list = Field(\n        ...,\n        description='List of hosts to connect to. For eg. [\"host1\", \"host2\"].',\n        title='Lis of Hosts',\n    )\n    threshold: Optional[float] = Field(\n        10,\n        description='Time threshold in seconds to flag a host for potential latency issues.',\n        title='Threshold (in seconds)',\n    )\n\n\ndef ssh_get_hosts_with_low_disk_latency_printer(output):\n    if not output:\n        print(\"No issues found.\")\n        return\n\n    status, problematic_hosts = output\n    if not status:\n        print(\"Hosts with potential disk latency issues:\", ', '.join(problematic_hosts))\n    else:\n        print(\"No latency issues found on any hosts.\")\n\ndef ssh_get_hosts_with_low_disk_latency(handle, hosts: list, threshold: int = 5) -> Tuple:\n    \"\"\"\n    ssh_get_hosts_with_low_disk_latency Checks the disk latency on the provided hosts by running a disk write command and \n    measuring the time taken. 
If the time taken exceeds a given threshold, the host is \n    flagged as having potential latency issues.\n\n    :type handle: SSH Client object\n    :param handle: The SSH client.\n\n    :type hosts: list\n    :param hosts: List of hosts to connect to.\n\n    :type threshold: float\n    :param threshold: Time threshold in seconds to flag a host for potential latency issues.\n\n    :return: Status and the hosts with potential latency issues if any.\n    \"\"\"\n    print(\"Starting the disk latency check...\")\n\n    latency_command = \"/usr/bin/time -p dd if=/dev/zero of=~/test.png bs=8192 count=10240 oflag=direct 2>&1\"\n    outputs = ssh_execute_remote_command(handle, hosts, latency_command)\n\n    # Cleanup: Remove the created test file\n    print(\"Cleaning up resources...\")\n    cleanup_command = \"rm ~/test.png\"\n    ssh_execute_remote_command(handle, hosts, cleanup_command)\n\n    hosts_with_issues = []\n\n    for host, output in outputs.items():\n        if not output.strip():\n            print(f\"Command execution failed or returned empty output on host {host}.\")\n            continue\n\n        for line in output.splitlines():\n            if line.startswith(\"real\"):\n                time_line = line\n                break\n        else:\n            print(f\"Couldn't find 'real' time in output for host {host}.\")\n            continue\n\n        # Parse the time and check against the threshold\n        try:\n            total_seconds = float(time_line.split()[1])\n\n            if total_seconds > threshold:\n                hosts_with_issues.append(host)\n        except Exception as e:\n            raise e\n\n    if hosts_with_issues:\n        return (False, hosts_with_issues)\n    return (True, None)\n\n\n\n\n"
  },
  {
    "path": "SSH/legos/ssh_restart_service_using_sysctl/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>SSH Restart Service Using sysctl</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego SSH restart Service Using sysctl.\r\n\r\n\r\n## Lego Details\r\n\r\n    ssh_restart_service_using_sysctl(sshClient, hosts: List[str], service_name: str, sudo: bool)\r\n\r\n        sshClient: Object of type unSkript ssh Connector\r\n        hosts: List of hosts to connect to. For eg. [\"host1\", \"host2\"].\r\n        service_name: Service name to restart.\r\n        sudo: Restart service with sudo.\r\n\r\n## Lego Input\r\nThis Lego take four inputs handle, hosts, service_name and sudo.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SSH/legos/ssh_restart_service_using_sysctl/__init__.py",
    "content": ""
  },
  {
    "path": "SSH/legos/ssh_restart_service_using_sysctl/ssh_restart_service_using_sysctl.json",
    "content": "{\n\"action_title\": \"SSH Restart Service Using sysctl\",\n\"action_description\": \"SSH Restart Service Using sysctl\",\n\"action_type\": \"LEGO_TYPE_SSH\",\n\"action_entry_function\": \"ssh_restart_service_using_sysctl\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_verbs\": [\n\"restart\"\n],\n\"action_nouns\": [\n\"sysctl\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SSH\"]\n}\n"
  },
  {
    "path": "SSH/legos/ssh_restart_service_using_sysctl/ssh_restart_service_using_sysctl.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List, Optional, Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    hosts: List[str] = Field(\n        title='Hosts',\n        description='List of hosts to connect to. For eg. [\"host1\", \"host2\"].'\n    )\n    proxy_host: Optional[str] = Field(\n        title='Proxy host',\n        description='Override the proxy host provided in the credentials. \\\n            It still uses the proxy_user and port from the credentials.'\n    )\n    service_name: str = Field(\n        title='Service Name',\n        description='Service name to restart.'\n    )\n    sudo: Optional[bool] = Field(\n        default=False,\n        title='Restart with sudo',\n        description='Restart service with sudo.'\n    )\n\ndef ssh_restart_service_using_sysctl_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(output)\n\n\ndef ssh_restart_service_using_sysctl(\n        sshClient,\n        hosts: List[str],\n        service_name: str,\n        sudo: bool = False,\n        proxy_host: str = None\n        ) -> Dict:\n\n    \"\"\"ssh_restart_service_using_sysctl restart Service Using sysctl\n\n        :type hosts: List[str]\n        :param hosts: List of hosts to connect to. For eg. 
[\"host1\", \"host2\"].\n\n        :type service_name: str\n        :param service_name: Service name to restart.\n\n        :type sudo: bool\n        :param sudo: Restart service with sudo.\n\n        :type proxy_host: str\n        :param proxy_host: Optional proxy host to use.\n\n        :rtype:\n    \"\"\"\n    client = sshClient(hosts, proxy_host)\n    runCommandOutput = client.run_command(command=f\"systemctl restart {service_name}\", sudo=sudo)\n    client.join()\n    res = {}\n\n    for host_output in runCommandOutput:\n        hostname = host_output.host\n        output = list(host_output.stdout)\n        res[hostname] = output\n\n    return res\n"
  },
  {
    "path": "SSH/legos/ssh_scp/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>SCP: Remote file transfer over SSH</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Copy files from or to remote host. Files are copied over SCP.\r\n\r\n\r\n## Lego Details\r\n\r\n    ssh_scp(sshClient, host: str, remote_file: str, local_file: str, direction: bool)\r\n\r\n        sshClient: Object of type unSkript ssh Connector\r\n        host: Host to connect to. Eg 10.10.10.10.\r\n        remote_file: Filename on the remote server. Eg /home/ec2-user/my_remote_file\r\n        local_file: Filename on the unSkript proxy. Eg /tmp/my_local_file\r\n        direction: Direction of the copy operation. Default is receive-from-remote-server\r\n\r\n## Lego Input\r\nThis Lego take five inputs sshClient, host, remote_file, local_file and direction.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SSH/legos/ssh_scp/__init__.py",
    "content": ""
  },
  {
    "path": "SSH/legos/ssh_scp/ssh_scp.json",
    "content": "{\n\"action_title\": \"SCP: Remote file transfer over SSH\",\n\"action_description\": \"Copy files from or to remote host. Files are copied over SCP. \",\n\"action_type\": \"LEGO_TYPE_SSH\",\n\"action_entry_function\": \"ssh_scp\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_supports_iteration\": true,\n\"action_verbs\": [\n\"copy\",\n\"transfer\",\n\"scp\"\n],\n\"action_nouns\": [\n\"ssh\",\n\"file\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SSH\"]\n}\n"
  },
  {
    "path": "SSH/legos/ssh_scp/ssh_scp.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    host: str = Field(\n        title='Host',\n        description='Hosts to connect to. For eg. \"10.10.10.10\"'\n    )\n    proxy_host: Optional[str] = Field(\n        title='Proxy host',\n        description='Override the proxy host provided in the credentials. \\\n            It still uses the proxy_user and port from the credentials.'\n    )\n    remote_file: str = Field(\n        title='Remote File',\n        description='Filename on the remote server. Eg /home/ec2-user/my_remote_file'\n    )\n    local_file: str = Field(\n        title=\"Local File\",\n        description='Filename on the unSkript proxy. Eg /tmp/my_local_file'\n    )\n    direction: bool = Field(\n        default=True,\n        title=\"Receive\",\n        description=\"Direction of the copy operation. Default is receive-from-remote-server\"\n    )\n\ndef ssh_scp_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    pprint.pprint(output)\n\n\ndef ssh_scp(\n        sshClient,\n        host: str,\n        remote_file: str,\n        local_file: str,\n        proxy_host: str = None,\n        direction: bool = True):\n    \"\"\"ssh_scp Copy files from or to remote host.\n\n        :type host: str\n        :param host: Host to connect to. Eg 10.10.10.10.\n\n        :type remote_file: str\n        :param remote_file: Filename on the remote server. Eg /home/ec2-user/my_remote_file\n\n        :type local_file: str\n        :param local_file: Filename on the unSkript proxy. Eg /tmp/my_local_file\n\n        :type proxy_host: str\n        :param proxy_host: Proxy Host to connect host via. Eg 10.10.10.10.\n\n        :type direction: bool\n        :param direction: Direction of the copy operation. 
Default is receive-from-remote-server\n\n        :rtype:\n    \"\"\"\n    client = sshClient([host], proxy_host)\n    client.copy_file(local_file, remote_file, direction)\n    client.join()\n"
  },
  {
    "path": "SalesForce/README.md",
    "content": "\n# SalesForce Actions\n* [Assign Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_assign_case/README.md): Assign a Salesforce case\n* [Change Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_case_change_status/README.md): Change Salesforce Case Status\n* [Create Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_create_case/README.md): Create a Salesforce case\n* [Delete Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_delete_case/README.md): Delete a Salesforce case\n* [Get Salesforce Case Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case/README.md): Get a Salesforce case info\n* [Get Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case_status/README.md): Get a Salesforce case status\n* [Get Salesforce handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_handle/README.md): Get Salesforce handle\n* [Search Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_search_case/README.md): Search a Salesforce case\n* [Update Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_update_case/README.md): Update a Salesforce case\n"
  },
  {
    "path": "SalesForce/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/salesforce_assign_case/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Assign Salesforce Case</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Assign a Salesforce case.\r\n\r\n\r\n## Lego Details\r\n\r\n    salesforce_assign_case(handle: object, case_number: str, owner_id: str)\r\n\r\n        handle: Object of type unSkript Salesforce Connector\r\n        case_number: The Case number to get the details about the case\r\n        owner_id: User to assign the case to. Eg user@acme.com\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, case_number and owner_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n    unskript/legos/salesforce/salesforce_assign_case/test_salesforce_assign_case.py::test_salesforce_assign_case\r\n    ----SETTING UP TEST----\r\n    Case 00001097 assigned successfully\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SalesForce/legos/salesforce_assign_case/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/salesforce_assign_case/salesforce_assign_case.json",
    "content": "{\n\"action_title\": \"Assign Salesforce Case\",\n\"action_description\": \"Assign a Salesforce case\",\n\"action_type\": \"LEGO_TYPE_SALESFORCE\",\n\"action_entry_function\": \"salesforce_assign_case\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"assign\"\n],\n\"action_nouns\": [\n\"salesforce\",\n\"case\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SALESFORCE\"]\n}\n"
  },
  {
    "path": "SalesForce/legos/salesforce_assign_case/salesforce_assign_case.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    case_number: str = Field(\n        title='Case Number',\n        description='The Case number to get the details about the case')\n    owner_id: str = Field(\n        title='Owner ID',\n        description='Owner ID to assign the case\"')\n\n\ndef salesforce_assign_case_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef salesforce_assign_case(handle, case_number: str, owner_id: str) -> str:\n    \"\"\"salesforce_assign_case assigns a given case to a user\n\n        :type case_number: str\n        :param case_number: The Case number to get the details about the case\n\n        :type owner_id: str\n        :param owner_id: User to assign the case to. Eg user@acme.com\n        \n        :rtype: str\n    \"\"\"\n    record_id = handle.query(f\"SELECT Id FROM Case WHERE CaseNumber = '{case_number}'\")\n    if not record_id['records']:\n        return \"Invalid Case Number\"\n    record_id = record_id['records'][0]['Id']\n    data = {\n        \"OwnerId\": owner_id\n    }\n    resp = handle.Case.update(record_id, data)\n    if resp == 204:\n        return f\"Case {case_number} assigned successfully\"\n    return \"Error Occurred\"\n"
  },
  {
    "path": "SalesForce/legos/salesforce_case_change_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Change Salesforce Case Status</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego change Salesforce Case Status.\r\n\r\n\r\n## Lego Details\r\n\r\n    salesforce_case_change_status(handle: object, case_number: str, status: Status)\r\n\r\n        handle: Object of type unSkript Salesforce Connector\r\n        case_number: The Case number to get the details about the case\r\n        status: Salesforce Case Status. Possible values: New|Working|Escalated\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, case_number and status.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SalesForce/legos/salesforce_case_change_status/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/salesforce_case_change_status/salesforce_case_change_status.json",
    "content": "{\n\"action_title\": \"Change Salesforce Case Status\",\n\"action_description\": \"Change Salesforce Case Status\",\n\"action_type\": \"LEGO_TYPE_SALESFORCE\",\n\"action_entry_function\": \"salesforce_case_change_status\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"change\"\n],\n\"action_nouns\": [\n\"salesforce\",\n\"case\",\n\"status\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SALESFORCE\"]\n\n}\n"
  },
  {
    "path": "SalesForce/legos/salesforce_case_change_status/salesforce_case_change_status.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom unskript.enums.salesforce_enums import Status\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    case_number: str = Field(\n        title='Case Number',\n        description='The Case number to get the details about the case')\n    status: Status = Field(\n        title='Status',\n        description='The status of the case. Default is \"New\"')\n\n\ndef salesforce_case_change_status_printer(output):\n    if output is None:\n        return\n    print(output)\n\n\ndef salesforce_case_change_status(handle, case_number: str, status: Status) -> str:\n    \"\"\"salesforce_case_change_status change status for given case\n        :type case_number: str\n        :param case_number: The Case number to get the details about the case\n\n        :type status: Status\n        :param status: Salesforce Case Status. Possible values: New|Working|Escalated\n        \n        :rtype: str\n    \"\"\"\n    record_id = handle.query(f\"SELECT Id FROM Case WHERE CaseNumber = '{case_number}'\")\n    if not record_id['records']:\n        return \"Invalid Case Number\"\n    status = status.value if status else None\n    record_id = record_id['records'][0]['Id']\n    data = {\n    \"Status\": status\n    }\n    resp = handle.Case.update(record_id, data)\n    if resp == 204:\n        return f\"Status change successfully for case {case_number} \"\n    return \"Error Occurred\"\n"
  },
  {
    "path": "SalesForce/legos/salesforce_create_case/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Create Salesforce Case</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego create Salesforce Case.\r\n\r\n\r\n## Lego Details\r\n\r\n    salesforce_create_case(handle: object, status: Status, case_origin: CaseOrigin, priority: Priority,\r\n                           contact_name: str, account_name: str, type: CaseType, case_reason: CaseReason,\r\n                           subject: str, description: str, internal_comments: str, additional_information: dict,\r\n                           web_information: dict)\r\n\r\n        handle: Object of type unSkript Salesforce Connector\r\n        status: The status of the case. Default is \"New\"\r\n        case_origin: The origin of the case.\r\n        priority: The priority of the case.\r\n        contact_name: The name of the contact.\r\n        account_name: The name of the Account.\r\n        type: The type of the case.\r\n        case_reason: The Reason for the case.\r\n        subject: Title of the case.\r\n        escription: A short description about the case.\r\n        internal_comments: Comments about thw case.\r\n        additional_information:\r\n        web_information: \r\n\r\n## Lego Input\r\nThis Lego take thirteen inputs handle, status, case_origin, priority, contact_name, account_name, type, case_reason,\r\n                           subject, description, internal_comments, additional_information and\r\n                           web_information.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SalesForce/legos/salesforce_create_case/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/salesforce_create_case/salesforce_create_case.json",
    "content": "{\n\"action_title\": \"Create Salesforce Case\",\n\"action_description\": \"Create a Salesforce case\",\n\"action_type\": \"LEGO_TYPE_SALESFORCE\",\n\"action_entry_function\": \"salesforce_create_case\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"create\"\n],\n\"action_nouns\": [\n\"salesforce\",\n\"case\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SALESFORCE\"]\n\n}\n"
  },
  {
    "path": "SalesForce/legos/salesforce_create_case/salesforce_create_case.py",
    "content": "import pprint\nimport json\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nfrom unskript.enums.salesforce_enums import Status, CaseOrigin, CaseType, Priority, CaseReason, \\\n    PotentialLiability, SLAViolation\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass AdditionalInformation(BaseModel):\n    product: Optional[str] = Field(\n        title='Product',\n        description='Product associated with case')\n    engineering_req_number: Optional[str] = Field(\n        title='Engineering Req Number',\n        description='Engineering Req Number')\n    potential_liability: Optional[PotentialLiability] = Field(\n        title='Potential Liability',\n        description='Potential Liability')\n    sla_violation: Optional[SLAViolation] = Field(\n        title='SLA Violation',\n        description='SLA Violation')\n\n\nclass WebInformation(BaseModel):\n    web_email: Optional[str] = Field(\n        title='Web Email',\n        description='Web Email')\n    web_company: Optional[str] = Field(\n        title='Web Company',\n        description='Web Company')\n    web_name: Optional[str] = Field(\n        title='Web Name',\n        description='Web Name')\n    web_phone: Optional[str] = Field(\n        title='Web Phone',\n        description='Web Phone')\n\n\nclass InputSchema(BaseModel):\n    status: Status = Field(\n        title='Status',\n        description='The status of the case. 
Default is \"New\"')\n    priority: Optional[Priority] = Field(\n        title='Priority',\n        description='The priority of the case')\n    case_origin: CaseOrigin = Field(\n        title='Case Origin',\n        description='The origin of the case')\n    contact_name: Optional[str] = Field(\n        title='Contact Name',\n        description='The name of the contact')\n    account_name: Optional[str] = Field(\n        title='Account Name',\n        description='The name of the Account')\n    type: Optional[CaseType] = Field(\n        title='Type',\n        description='The type of the case')\n    case_reason: Optional[CaseReason] = Field(\n        title='Case Reason ',\n        description='The Reason for the case')\n    subject: Optional[str] = Field(\n        title='Subject',\n        description='Title of the case')\n    description: Optional[str] = Field(\n        title='Description',\n        description='A short description about the case')\n    internal_comments: Optional[str] = Field(\n        title='Internal Comments',\n        description='Comments about thw case')\n    additional_information: Optional[AdditionalInformation] = Field(...)\n    web_information: Optional[WebInformation] = Field(None, alias='Web Information')\n\n\ndef salesforce_create_case_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    print(json.dumps(output, indent=4))\n    case_number = output.get(\"CaseNumber\")\n    data = []\n    data.append(case_number)\n    print(\"\\n\")\n    od = tabulate([data], headers=['CaseNumber'], tablefmt=\"grid\")\n    print(od)\n\ndef salesforce_create_case(handle,\n                           status: Status,\n                           case_origin: CaseOrigin,\n                           priority: Priority = Priority.LOW,\n                           contact_name: str = \"\",\n                           account_name: str = \"\",\n                           type: CaseType = CaseType.ELECTRONIC,\n                        
   case_reason: CaseReason = CaseReason.OTHER,\n                           subject: str = \"\",\n                           description: str = \"\",\n                           internal_comments: str = \"\",\n                           additional_information: dict = None,\n                           web_information: dict = None,\n                           ) -> Dict:\n    \"\"\"salesforce_create_case create salesforce case\n\n        :type status: Status\n        :param status: The status of the case. Default is \"New\"\n\n        :type case_origin: CaseOrigin\n        :param case_origin: The origin of the case.\n\n        :type priority: Priority\n        :param priority: The priority of the case.\n\n        :type contact_name: str\n        :param contact_name: The name of the contact.\n\n        :type account_name: str\n        :param account_name: The name of the Account.\n\n        :type type: CaseType\n        :param type: The type of the case.\n\n        :type case_reason: CaseReason\n        :param case_reason: The Reason for the case.\n\n        :type subject: str\n        :param subject: Title of the case.\n        \n        :type description: str\n        :param description: A short description about the case.\n\n        :type internal_comments: str\n        :param internal_comments: Comments about thw case.\n\n        :rtype: \n    \"\"\"\n\n#salesforce_create_case creates a case in Salesforce.\n\n    contact_id = \"\"\n    account_id = \"\"\n    status = status.value if status else None\n    case_origin = case_origin.value if case_origin else None\n    type = type.value if type else None\n    priority = priority.value if priority else None\n    case_reason = case_reason.value if case_reason else None\n\n    if contact_name != \"\":\n        contact_id = handle.query(f\"SELECT Id FROM Contact WHERE Name = '{contact_name}'\")\n        if contact_id['records'] == []:\n            return {\"Error\": \"Invalid Contact name\"}\n        contact_id = 
contact_id['records'][0]['Id']\n    if account_name != \"\":\n        account_id = handle.query(f\"SELECT Id FROM Account WHERE Name = '{account_name}'\")\n        if account_id['records'] == []:\n            return {\"Error\": \"Invalid Account name\"}\n        account_id = account_id['records'][0]['Id']\n\n    data = {}\n    data['Status'] = status\n    data['Priority'] = priority\n    data['Origin'] = case_origin\n    data['ContactId'] = contact_id\n    data['AccountId'] = account_id\n    data['Type'] = type\n    data['Reason'] = case_reason\n    if web_information:\n        data['SuppliedEmail'] = web_information.get(\"web_email\", None)\n        data['SuppliedName'] = web_information.get(\"web_name\", None)\n        data['SuppliedCompany'] = web_information.get(\"web_company\", None)\n        data['SuppliedPhone'] = web_information.get(\"web_phone\", None)\n    if additional_information:\n        if additional_information.get(\"product\"):\n            data[\"Product__c\"] = additional_information.get(\"product\")\n        if additional_information.get(\"engineering_req_number\"):\n            data[\"EngineeringReqNumber__c\"] = additional_information.get(\"engineering_req_number\")\n        if additional_information.get(\"potential_liability\"):\n            data[\"PotentialLiability__c\"] = additional_information.get(\"potential_liability\")\n        if additional_information.get(\"sla_violation\"):\n            data[\"SLAViolation__c\"] = additional_information.get(\"sla_violation\")\n    data['Subject'] = subject\n    data['Description'] = description\n    data['Comments'] = internal_comments\n    case = handle.Case.create(data)\n    if case.get(\"success\"):\n        return handle.Case.get(case.get(\"id\"))\n    return case.get(\"errors\")\n"
  },
  {
    "path": "SalesForce/legos/salesforce_delete_case/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Delete Salesforce Case</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego delete Salesforce Case.\r\n\r\n\r\n## Lego Details\r\n\r\n    salesforce_delete_case(handle: object, case_number: str)\r\n\r\n        handle: Object of type unSkript Salesforce Connector\r\n        case_number: The Case number of the case to delete\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and case_number.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SalesForce/legos/salesforce_delete_case/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/salesforce_delete_case/salesforce_delete_case.json",
    "content": "{\n\"action_title\": \"Delete Salesforce Case\",\n\"action_description\": \"Delete a Salesforce case\",\n\"action_type\": \"LEGO_TYPE_SALESFORCE\",\n\"action_entry_function\": \"salesforce_delete_case\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"delete\"\n],\n\"action_nouns\": [\n\"salesforce\",\n\"case\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SALESFORCE\"]\n\n}\n"
  },
  {
    "path": "SalesForce/legos/salesforce_delete_case/salesforce_delete_case.py",
    "content": "import pprint\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    case_number: str = Field(\n        title='Case Number',\n        description='The Case number of the case to delete')\n\ndef salesforce_delete_case_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef salesforce_delete_case(handle, case_number: str) -> str:\n    \"\"\"salesforce_delete_case deletes a particular case.\n           :type case_number: str\n           :param case_number: The Case number of the case to delete\n       \"\"\"\n    record_id = handle.query(f\"SELECT Id FROM Case WHERE CaseNumber = '{case_number}'\")\n    if not record_id['records']:\n        return \"Invalid Case Number\"\n    resp = handle.Case.delete(record_id['records'][0]['Id'])\n    if resp == 204:\n        return f\"Case {case_number} deleted successfully\"\n    return \"Error Occurred\"\n"
  },
  {
    "path": "SalesForce/legos/salesforce_get_case/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Salesforce Case Info</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the details about a particular case.\r\n\r\n\r\n## Lego Details\r\n\r\n    salesforce_get_case(handle: object, case_number: str)\r\n\r\n        handle: Object of type unSkript Salesforce Connector\r\n        case_number: The Case number to get the details about the case\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and case_number.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SalesForce/legos/salesforce_get_case/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/salesforce_get_case/salesforce_get_case.json",
    "content": "{\n\"action_title\": \"Get Salesforce Case Info\",\n\"action_description\": \"Get a Salesforce case info\",\n\"action_type\": \"LEGO_TYPE_SALESFORCE\",\n\"action_entry_function\": \"salesforce_get_case\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"get\"\n],\n\"action_nouns\": [\n\"salesforce\",\n\"case\",\n\"info\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SALESFORCE\"]\n\n}\n"
  },
  {
    "path": "SalesForce/legos/salesforce_get_case/salesforce_get_case.py",
    "content": "import json\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    case_number: str = Field(\n        title='Case Number',\n        description='The Case number to get the details about the case')\n\ndef salesforce_get_case_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    print(json.dumps(output, indent=4))\n\ndef salesforce_get_case(handle, case_number: str) -> Dict:\n    \"\"\"salesforce_get_case gets the details about a particular case.\n           :type case_number: str\n           :param case_number: The Case number to get the details about the case\n       \"\"\"\n    record_id = handle.query(f\"SELECT Id FROM Case WHERE CaseNumber = '{case_number}'\")\n    if not record_id['records']:\n        return {\"Error\": \"Invalid Case Number\"}\n    return handle.Case.get(record_id['records'][0]['Id'])\n"
  },
  {
    "path": "SalesForce/legos/salesforce_get_case_status/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Salesforce Case Status</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the status about a particular case.\r\n\r\n\r\n## Lego Details\r\n\r\n    salesforce_get_case_status(handle: object, case_number: str)\r\n\r\n        handle: Object of type unSkript Salesforce Connector\r\n        case_number: The Case number to get the details about the case\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and case_number.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SalesForce/legos/salesforce_get_case_status/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/salesforce_get_case_status/salesforce_get_case_status.json",
    "content": "{\n\"action_title\": \"Get Salesforce Case Status\",\n\"action_description\": \"Get a Salesforce case status\",\n\"action_type\": \"LEGO_TYPE_SALESFORCE\",\n\"action_entry_function\": \"salesforce_get_case_status\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"get\"\n],\n\"action_nouns\": [\n\"salesforce\",\n\"case\",\n\"status\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SALESFORCE\"]\n\n}\n"
  },
  {
    "path": "SalesForce/legos/salesforce_get_case_status/salesforce_get_case_status.py",
    "content": "import pprint\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    case_number: str = Field(\n        title='Case Number',\n        description='The Case number to get the details about the case')\n\ndef salesforce_get_case_status_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    print(output)\n\ndef salesforce_get_case_status(handle, case_number: str) -> str:\n    \"\"\"salesforce_get_case_status gets the status about a particular case.\n           :type case_number: str\n           :param case_number: The Case number to get the details about the case\n       \"\"\"\n    records = handle.query(f\"SELECT Id FROM Case WHERE CaseNumber = '{case_number}'\")\n    if not records['records']:\n        return \"Invalid Case Number\"\n    case = handle.Case.get(records['records'][0]['Id'])\n    return case.get(\"Status\")\n"
  },
  {
    "path": "SalesForce/legos/salesforce_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Salesforce handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get Salesforce handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    salesforce_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Salesforce Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SalesForce/legos/salesforce_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/salesforce_get_handle/salesforce_get_handle.json",
    "content": "{\n\"action_title\": \"Get Salesforce handle\",\n\"action_description\": \"Get Salesforce handle\",\n\"action_type\": \"LEGO_TYPE_SALESFORCE\",\n\"action_entry_function\": \"salesforce_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"get\"\n],\n\"action_nouns\": [\n\"salesforce\",\n\"handle\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SALESFORCE\"]\n\n}\n"
  },
  {
    "path": "SalesForce/legos/salesforce_get_handle/salesforce_get_handle.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef salesforce_get_handle(handle) -> None:\n    \"\"\"\n    salesforce_get_handle returns the Salesforce handle.\n    :rtype: Salesforce handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "SalesForce/legos/salesforce_search_case/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Search Salesforce Case</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the details about a particular case.\r\n\r\n\r\n## Lego Details\r\n\r\n    salesforce_search_case(handle: object, search: str)\r\n\r\n        handle: Object of type unSkript Salesforce Connector\r\n        search: Search based on Status/Priority/Subject/CaseNumber/Reason\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and search.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SalesForce/legos/salesforce_search_case/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/salesforce_search_case/salesforce_search_case.json",
    "content": "{\n\"action_title\": \"Search Salesforce Case\",\n\"action_description\": \"Search a Salesforce case\",\n\"action_type\": \"LEGO_TYPE_SALESFORCE\",\n\"action_entry_function\": \"salesforce_search_case\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"search\"\n],\n\"action_nouns\": [\n\"salesforce\",\n\"case\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SALESFORCE\"]\n\n}\n"
  },
  {
    "path": "SalesForce/legos/salesforce_search_case/salesforce_search_case.py",
    "content": "import json\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    case_number: str = Field(\n        title='Case Number',\n        description='The Case number to get the details about the case')\n\n\ndef salesforce_search_case_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    tb_data = []\n    for record in output:\n        print(json.dumps(record, indent=4))\n        case_number = record.get(\"CaseNumber\")\n        data = [case_number]\n        tb_data.append(data)\n    print(\"\\n\")\n    od = tabulate(tb_data, headers=['CaseNumber'], tablefmt=\"grid\")\n    print(od)\n\n\ndef salesforce_search_case(handle, search: str) -> List:\n    \"\"\"salesforce_search_case gets the details about a particular case.\n           :type search: str\n           :param search: Search based on Status/Priority/Subject/CaseNumber/Reason\n       \"\"\"\n    search = \"%\" + search\n    query = f\"SELECT Id FROM Case WHERE Priority Like '{search}'\" \\\n            f\"Or Status Like '{search}' \" \\\n            f\"Or Subject Like '{search}' \" \\\n            f\"Or Reason Like '{search}' \" \\\n            f\"Or CaseNumber Like '{search}' \" \\\n\n    records = handle.query(query)['records']\n    if records:\n        cases = []\n        for record in records:\n            cases.append(handle.Case.get(record['Id']))\n        return cases\n    return records\n"
  },
  {
    "path": "SalesForce/legos/salesforce_update_case/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Update Salesforce Case</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego update salesforce case.\r\n\r\n\r\n## Lego Details\r\n\r\n    salesforce_update_case(handle: object, status: Status, case_number: str,case_origin: CaseOrigin, \r\n                        priority: Priority, contact_name: str, account_name: str, type: CaseType, case_reason: CaseReason, subject: str, description: str, internal_comments: str, \r\n                        additional_information: dict, web_information: dict)\r\n\r\n        handle: Object of type unSkript Salesforce Connector\r\n        case_number: The Case number to get the details about the case\r\n        status: The status of the case. Default is \"New\"\r\n        case_origin: The origin of the case.\r\n        priority: The priority of the case.\r\n        contact_name: The name of the contact.\r\n        account_name: The name of the Account.\r\n        type: The type of the case.\r\n        case_reason: The Reason for the case.\r\n        subject: Title of the case.\r\n        escription: A short description about the case.\r\n        internal_comments: Comments about thw case.\r\n        additional_information:\r\n        web_information: \r\n\r\n## Lego Input\r\nThis Lego take fourteen inputs handle, status, case_number, case_origin, priority, contact_name, account_name,\r\n                 type, case_reason,subject, description, internal_comments, additional_information andweb_information.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "SalesForce/legos/salesforce_update_case/__init__.py",
    "content": ""
  },
  {
    "path": "SalesForce/legos/salesforce_update_case/salesforce_update_case.json",
    "content": "{\n\"action_title\": \"Update Salesforce Case\",\n\"action_description\": \"Update a Salesforce case\",\n\"action_type\": \"LEGO_TYPE_SALESFORCE\",\n\"action_entry_function\": \"salesforce_update_case\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"update\"\n],\n\"action_nouns\": [\n\"salesforce\",\n\"case\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SALESFORCE\"]\n\n}\n"
  },
  {
    "path": "SalesForce/legos/salesforce_update_case/salesforce_update_case.py",
    "content": "import pprint\nimport json\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\nfrom unskript.enums.salesforce_enums import Status, CaseOrigin, CaseType, Priority, CaseReason, \\\n    PotentialLiability, SLAViolation\n\npp = pprint.PrettyPrinter(indent=4)\n\nclass AdditionalInformation(BaseModel):\n    product: Optional[str] = Field(\n        title='Product',\n        description='Product associated with case')\n    engineering_req_number: Optional[str] = Field(\n        title='Engineering Req Number',\n        description='Engineering Req Number')\n    potential_liability: Optional[PotentialLiability] = Field(\n        title='Potential Liability',\n        description='Potential Liability')\n    sla_violation: Optional[SLAViolation] = Field(\n        title='SLA Violation',\n        description='SLA Violation')\n\n\nclass WebInformation(BaseModel):\n    web_email: Optional[str] = Field(\n        title='Web Email',\n        description='Web Email')\n    web_company: Optional[str] = Field(\n        title='Web Company',\n        description='Web Company')\n    web_name: Optional[str] = Field(\n        title='Web Name',\n        description='Web Name')\n    web_phone: Optional[str] = Field(\n        title='Web Phone',\n        description='Web Phone')\n\n\nclass InputSchema(BaseModel):\n    case_number: str = Field(\n        title='Case Number',\n        description='The Case number to get the details about the case')\n    status: Status = Field(\n        title='Status',\n        description='The status of the case. 
Default is \"New\"')\n    priority: Optional[Priority] = Field(\n        title='Priority',\n        description='The priority of the case')\n    case_origin: CaseOrigin = Field(\n        title='Case Origin',\n        description='The origin of the case')\n    contact_name: Optional[str] = Field(\n        title='Contact Name',\n        description='The name of the contact')\n    account_name: Optional[str] = Field(\n        title='Account Name',\n        description='The name of the Account')\n    type: Optional[CaseType] = Field(\n        title='Type',\n        description='The type of the case')\n    case_reason: Optional[CaseReason] = Field(\n        title='Case Reason ',\n        description='The Reason for the case')\n    subject: Optional[str] = Field(\n        title='Subject',\n        description='Title of the case')\n    description: Optional[str] = Field(\n        title='Description',\n        description='A short description about the case')\n    internal_comments: Optional[str] = Field(\n        title='Internal Comments',\n        description='Comments about thw case')\n    additional_information: Optional[AdditionalInformation] = Field(...)\n    web_information: Optional[WebInformation] = Field(None, alias='Web Information')\n\n\ndef salesforce_update_case_printer(output):\n    if output is None:\n        return\n    print(\"\\n\")\n    print(json.dumps(output, indent=4))\n    case_number = output.get(\"CaseNumber\")\n    data = []\n    data.append(case_number)\n    print(\"\\n\")\n    od = tabulate([data], headers=['CaseNumber'], tablefmt=\"grid\")\n    print(od)\n\n\ndef salesforce_update_case(handle,\n                           case_number: str,\n                           status: Status,\n                           case_origin: CaseOrigin,\n                           priority: Priority = Priority.LOW,\n                           contact_name: str = \"\",\n                           account_name: str = \"\",\n                           type: CaseType 
= CaseType.ELECTRONIC,\n                           case_reason: CaseReason = CaseReason.OTHER,\n                           subject: str = \"\",\n                           description: str = \"\",\n                           internal_comments: str = \"\",\n                           additional_information: dict = None,\n                           web_information: dict = None,\n                           ) -> Dict:\n\n    \"\"\"salesforce_update_case updates a Salesforce case.\n\n        :type status: Status\n        :param status: The status of the case. Default is \"New\"\n\n        :type case_number: str\n        :param case_number: The Case number to get the details about the case\n\n        :type case_origin: CaseOrigin\n        :param case_origin: The origin of the case.\n\n        :type priority: Priority\n        :param priority: The priority of the case.\n\n        :type contact_name: str\n        :param contact_name: The name of the contact.\n\n        :type account_name: str\n        :param account_name: The name of the Account.\n\n        :type type: CaseType\n        :param type: The type of the case.\n\n        :type case_reason: CaseReason\n        :param case_reason: The Reason for the case.\n\n        :type subject: str\n        :param subject: Title of the case.\n\n        :type description: str\n        :param description: A short description about the case.\n\n        :type internal_comments: str\n        :param internal_comments: Comments about the case.\n\n        :rtype: Dict with the updated case details.\n    \"\"\"\n\n    records = handle.query(f\"SELECT Id FROM Case WHERE CaseNumber = '{case_number}'\")\n    if not records['records']:\n        return {\"Error\": \"Invalid Case Number\"}\n\n    record_id = records['records'][0]['Id']\n    case = handle.Case.get(record_id)\n\n    data = {}\n\n    contact_id = \"\"\n    account_id = \"\"\n    status = status.value if 
status else case.get(\"Status\")\n    case_origin = case_origin.value if case_origin else case.get(\"Origin\")\n    type = type.value if type else case.get(\"Type\")\n    priority = priority.value if priority else case.get(\"Priority\")\n    case_reason = case_reason.value if case_reason else case.get(\"Reason\")\n\n    if contact_name != \"\":\n        contact_id = handle.query(f\"SELECT Id FROM Contact WHERE Name = '{contact_name}'\")\n        if contact_id['records'] == []:\n            return {\"Error\": \"Invalid Contact name\"}\n        contact_id = contact_id['records'][0]['Id']\n    if account_name != \"\":\n        account_id = handle.query(f\"SELECT Id FROM Account WHERE Name = '{account_name}'\")\n        if account_id['records'] == []:\n            return {\"Error\": \"Invalid Account name\"}\n        else:\n            account_id = account_id['records'][0]['Id']\n\n    # data = {}\n    data['Status'] = status\n    data['Priority'] = priority\n    data['Origin'] = case_origin\n    data['ContactId'] = contact_id\n    data['AccountId'] = account_id\n    data['Type'] = type\n    data['Reason'] = case_reason\n    if web_information:\n        if web_information.get(\"web_email\", None):\n            data['SuppliedEmail'] = web_information.get(\"web_email\", None)\n        if web_information.get(\"web_name\", None):\n            data['SuppliedName'] = web_information.get(\"web_name\", None)\n        if web_information.get(\"web_company\", None):\n            data['SuppliedCompany'] = web_information.get(\"web_company\", None)\n        if web_information.get(\"web_phone\", None):\n            data['SuppliedPhone'] = web_information.get(\"web_phone\", None)\n    if additional_information:\n        if additional_information.get(\"product\"):\n            data[\"Product__c\"] = additional_information.get(\"product\")\n        if additional_information.get(\"engineering_req_number\"):\n            data[\"EngineeringReqNumber__c\"] = 
additional_information.get(\"engineering_req_number\")\n        if additional_information.get(\"potential_liability\"):\n            data[\"PotentialLiability__c\"] = additional_information.get(\"potential_liability\")\n        if additional_information.get(\"sla_violation\"):\n            data[\"SLAViolation__c\"] = additional_information.get(\"sla_violation\")\n    data['Subject'] = subject if subject else case.get(\"Subject\")\n    data['Description'] = description if description else case.get(\"Description\")\n    data['Comments'] = internal_comments if internal_comments else case.get(\"Comments\")\n    resp = handle.Case.update(record_id, data)\n    if resp == 204:\n        return handle.Case.get(record_id)\n    return resp\n"
  },
  {
    "path": "Slack/README.md",
    "content": "\n# Slack Actions\n* [Create Slack Channel and Invite Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_create_channel_invite_users/README.md): Create a Slack Channel with given name, and invite a list of userIds to the channel.\n* [Get Slack SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_get_handle/README.md): Get Slack SDK Handle\n* [Slack Lookup User by Email](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_lookup_user_by_email/README.md): Given an email address, find the slack user in the workspace.\n You can the extract their Profile picture, or retrieve their userid (which you can use to send messages) from the output.\n* [Post Slack Image](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_image/README.md): Post Slack Image\n* [Post Slack Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_message/README.md): Post Slack Message\n* [Slack Send DM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_send_DM/README.md): Given a list of Slack IDs, this Action will create a DM (one user) or group chat (multiple users), and send a message to the chat\n"
  },
  {
    "path": "Slack/__init__.py",
    "content": ""
  },
  {
    "path": "Slack/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Slack/legos/slack_create_channel_invite_users/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Slack Create Channel and Invite Users</h1>\n\n## Description\nCreate a Slack channel and invite user IDs to the channel. Useful for triaging issues.\n\n## Action Details\ndef slack_create_channel_invite_users(\n        handle: WebClient,\n        channel: str,\n        user_list: list) -> str:\n\n*Channel: name of channel to add\n*user_list: List of userIDs to invite to the new channel.\n\n## Action Output\nHere is a sample output.\n<img src=\"./1.jpg\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Slack/legos/slack_create_channel_invite_users/__init__.py",
    "content": ""
  },
  {
    "path": "Slack/legos/slack_create_channel_invite_users/slack_create_channel_invite_users.json",
    "content": "{\n  \"action_title\": \"Create Slack Channel and Invite Users\",\n  \"action_description\": \"Create a Slack Channel with given name, and invite a list of userIds to the channel.\",\n  \"action_type\": \"LEGO_TYPE_SLACK\",\n  \"action_entry_function\": \"slack_create_channel_invite_users\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Slack/legos/slack_create_channel_invite_users/slack_create_channel_invite_users.py",
    "content": "from __future__ import annotations\n\n##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom slack_sdk import WebClient\n## note: Your Slack App will need the files:write scope.\n# Your Bot will also need to be a member of the channel\n\n\n\n\nclass InputSchema(BaseModel):\n    channel: str = Field(..., description='Name of slack channel.', title='Channel')\n    user_list: List = Field(\n        ...,\n        description='List of users to invite to the new channel.',\n        title='user_list',\n        #list is slack user IDs, for example ['U046UH5F2HZ']\n    )\n\n\n\npp = pprint.PrettyPrinter(indent=2)\n\n\ndef slack_create_channel_invite_users_printer(output):\n    if output is not None:\n        pprint.pprint(output)\n\n\ndef slack_create_channel_invite_users(\n        handle: WebClient,\n        channel: str,\n        user_list: list) -> str:\n\n    try:\n        response = handle.conversations_create(\n            name = channel,\n            is_private=False\n    )\n        # Extract the ID of the created channel\n        channel_id = response[\"channel\"][\"id\"]\n        for username in user_list:\n            # Call the conversations.invite method to invite each user to the channel\n            user_response=handle.conversations_invite(\n                channel=channel_id,\n                users=username\n            )\n            print(user_response)\n            print(f\"Invited user '{username}' to the channel.\")\n\n        return f\"Successfully created Channel: #{channel}\"\n\n    except Exception as e:\n        print(\"\\n\\n\")\n        pp.pprint(\n            f\"Failed sending message to slack channel {channel}, Error: {str(e)}\")\n        return f\"Unable to send message on {channel}\"\n\n\n"
  },
  {
    "path": "Slack/legos/slack_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Slack SDK Handle </h1>\r\n\r\n## Description\r\nThis Lego get Slack SDK Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    slack_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Slack Connector\r\n\r\n## Lego Input\r\nThis Lego take one inputs handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "Slack/legos/slack_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Slack/legos/slack_get_handle/slack_get_handle.json",
    "content": "{\n\"action_title\": \"Get Slack SDK Handle\",\n\"action_description\": \"Get Slack SDK Handle\",\n\"action_type\": \"LEGO_TYPE_SLACK\",\n\"action_entry_function\": \"slack_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\n\"action_supports_iteration\": false,\n\"action_verbs\": [\n\"get\"\n],\n\"action_nouns\": [\n\"slack\",\n\"handle\"\n]\n}\n\n"
  },
  {
    "path": "Slack/legos/slack_get_handle/slack_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef slack_get_handle(handle) -> None:\n    \"\"\"slack_get_handle returns the slack handle.\n\n       :rtype: slack Handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Slack/legos/slack_lookup_user_by_email/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Slack Lookup User by Email</h1>\n\n## Description\nGiven an email address, find the slack user in the workspace.\n\nYou can the extract their Profile picture, or retrieve their userid (which you can use to send messages) from the output\n\n## Action Details\ndef slack_lookup_user_by_email(\n        handle: WebClient,\n        email: str) -> Dict:\n\t\t\n\t\temail: Teh email address of the user you wish to lookup.\n## Action Output\nHere is a sample output.\n<img src=\"./1.jpg\">\n\n## Try it Out\n\nYou Try this Action in the unSkript [Free Trial](https://us.app.unskript.io/), or using the [open source Docker image](http://runbooks.sh)."
  },
  {
    "path": "Slack/legos/slack_lookup_user_by_email/__init__.py",
    "content": ""
  },
  {
    "path": "Slack/legos/slack_lookup_user_by_email/slack_lookup_user_by_email.json",
    "content": "{\n  \"action_title\": \"Slack Lookup User by Email\",\n  \"action_description\": \"Given an email address, find the slack user in the workspace.\\n You can the extract their Profile picture, or retrieve their userid (which you can use to send messages) from the output.\",\n  \"action_type\": \"LEGO_TYPE_SLACK\",\n  \"action_entry_function\": \"slack_lookup_user_by_email\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Slack/legos/slack_lookup_user_by_email/slack_lookup_user_by_email.py",
    "content": "from __future__ import annotations\n\n##\n# Copyright (c) 2023 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\nfrom slack_sdk import WebClient\n## note: Your Slack App will need the users:read.email scope\n# Otherwise you cannot access user's emails!\n\n\n\nclass InputSchema(BaseModel):\n    email: str = Field(..., description='Email Address of user', title='email')\n\n\n\npp = pprint.PrettyPrinter(indent=2)\n\n\n\ndef slack_lookup_user_by_email_printer(output):\n    if output is not None:\n        pprint.pprint(output)\n\n\ndef slack_lookup_user_by_email(\n        handle: WebClient,\n        email: str) -> Dict:\n\n\n    try:\n        response = handle.users_lookupByEmail(email=email)\n        #print(response)\n        return response['user']\n\n    except Exception as e:\n        print(\"\\n\\n\")\n        pp.pprint(\n            f\"Failed to find user, Error: {str(e)}\")\n        return f\"Unable to send find user with email {email}\"\n\n\n"
  },
  {
    "path": "Slack/legos/slack_post_image/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Post Slack Message with an Image</h1>\r\n\r\n## Description\r\nThis Lego Post Slack Message with an Image and gives a message sent status.\r\n\r\n\r\n## Lego Details\r\n\r\n    slack_post_image(handle: object, channel: str, message: str, image: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        channel: Name of slack channel.\r\n        message: Message sent to channel.\r\n        image: File Name of the image to be sent in the message.\r\n    \r\n    Note: Your Slack App will need the ```files:write``` scope.  Your Bot will also need to be a member of the channel that you wish to send the message to.\r\n\r\n\r\n## Lego Input\r\nThis Lego take four inputs handle, channel, message and image.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.jpg\">\r\n<img src=\"./2.jpg\">\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "Slack/legos/slack_post_image/__init__.py",
    "content": ""
  },
  {
    "path": "Slack/legos/slack_post_image/slack_post_image.json",
    "content": "{ \"action_title\": \"Post Slack Image\", \r\n  \"action_description\": \"Post Slack Image\", \r\n  \"action_type\": \"LEGO_TYPE_SLACK\", \r\n  \"action_entry_function\": \"slack_post_image\", \r\n  \"action_needs_credential\": true, \r\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\", \r\n  \"action_supports_poll\": true, \r\n  \"action_supports_iteration\": true,\r\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SLACK\"]\r\n}\r\n  "
  },
  {
    "path": "Slack/legos/slack_post_image/slack_post_image.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\n\r\nimport pprint\r\nfrom pydantic import BaseModel, Field\r\nfrom slack_sdk import WebClient\r\nfrom beartype import beartype\r\n\r\npp = pprint.PrettyPrinter(indent=2)\r\n\r\n## note: Your Slack App will need the files:write scope.\r\n# Your Bot will also need to be a member of the channel\r\n\r\nclass InputSchema(BaseModel):\r\n    channel: str = Field(\r\n        title='Channel',\r\n        description='Name of slack channel.')\r\n    message: str = Field(\r\n        title='message',\r\n        description='Message for slack channel.')\r\n    image: str = Field(\r\n        title='image',\r\n        description='Path to image to be sent.')\r\n\r\n@beartype\r\ndef slack_post_image_printer(output):\r\n    if output is not None:\r\n        pprint.pprint(output)\r\n\r\n\r\n@beartype\r\ndef slack_post_image(\r\n        handle: WebClient,\r\n        channel: str,\r\n        message:str,\r\n        image: str) -> str:\r\n\r\n    try:\r\n        handle.files_upload(\r\n            channels = channel,\r\n            initial_comment=message,\r\n            file=image\r\n    )\r\n        return f\"Successfully Sent Message on Channel: #{channel}\"\r\n\r\n    except Exception as e:\r\n        print(\"\\n\\n\")\r\n        pp.pprint(\r\n            f\"Failed sending message to slack channel {channel}, Error: {str(e)}\")\r\n        return f\"Unable to send message on {channel}\"\r\n"
  },
  {
    "path": "Slack/legos/slack_post_message/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Post Slack Message </h1>\r\n\r\n## Description\r\nThis Lego Post Slack Message and gives a message sent status.\r\n\r\n\r\n## Lego Details\r\n\r\n    slack_post_message(handle: object, channel: str, message: str)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        channel: Name of slack channel.\r\n        message: Message sent to channel.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, channel and message.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\nYou can see this Lego in action following this link [unSkript Live](https://unskript.com)"
  },
  {
    "path": "Slack/legos/slack_post_message/__init__.py",
    "content": ""
  },
  {
    "path": "Slack/legos/slack_post_message/slack_post_message.json",
    "content": "{ \"action_title\": \"Post Slack Message\", \r\n  \"action_description\": \"Post Slack Message\", \r\n  \"action_type\": \"LEGO_TYPE_SLACK\", \r\n  \"action_entry_function\": \"slack_post_message\", \r\n  \"action_needs_credential\": true, \r\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\", \r\n  \"action_supports_poll\": true, \r\n  \"action_supports_iteration\": true,\r\n  \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SLACK\"]\r\n}\r\n  "
  },
  {
    "path": "Slack/legos/slack_post_message/slack_post_message.py",
    "content": "##\r\n# Copyright (c) 2021 unSkript, Inc\r\n# All rights reserved.\r\n##\r\n\r\nimport pprint\r\nfrom pydantic import BaseModel, Field\r\nfrom beartype import beartype\r\nfrom slack_sdk import WebClient\r\nfrom slack_sdk.errors import SlackApiError\r\n\r\npp = pprint.PrettyPrinter(indent=2)\r\n\r\nclass InputSchema(BaseModel):\r\n    channel: str = Field(\r\n        title='Channel',\r\n        description='Name of slack channel.')\r\n    message: str = Field(\r\n        title='Message',\r\n        description='Message for slack channel.')\r\n\r\n@beartype\r\ndef slack_post_message_printer(output):\r\n    if output is not None:\r\n        pprint.pprint(output)\r\n\r\n\r\n@beartype\r\ndef slack_post_message(\r\n        handle: WebClient,\r\n        channel: str,\r\n        message: str) -> str:\r\n\r\n    try:\r\n        handle.chat_postMessage(\r\n            channel=channel,\r\n            text=message)\r\n        return f\"Successfully Sent Message on Channel: #{channel}\"\r\n    except SlackApiError as e:\r\n        pp.pprint(\r\n            f\"Failed sending message to slack channel {channel}, Error: {e.response['error']}\")\r\n        if e.response['error'] == 'channel_not_found':\r\n            raise Exception('Channel Not Found') from e\r\n        if e.response['error'] == 'duplicate_channel_not_found':\r\n            raise Exception('Channel associated with the message_id not valid') from e\r\n        if e.response['error'] == 'not_in_channel':\r\n            raise Exception('Cannot post message to channel user is not in') from e\r\n        if e.response['error'] == 'is_archived':\r\n            raise Exception('Channel has been archived') from e\r\n        if e.response['error'] == 'msg_too_long':\r\n            raise Exception('Message text is too long') from e\r\n        if e.response['error'] == 'no_text':\r\n            raise Exception('Message text was not provided') from e\r\n        if e.response['error'] == 'restricted_action':\r\n   
          raise Exception('Workspace preference prevents user from posting') from e\r\n        if e.response['error'] == 'restricted_action_read_only_channel':\r\n            raise Exception('Cannot Post message, read-only channel') from e\r\n        if e.response['error'] == 'team_access_not_granted':\r\n            raise Exception('The token used is not granted access to the workspace') from e\r\n        if e.response['error'] == 'not_authed':\r\n            raise Exception('No authentication token provided') from e\r\n        if e.response['error'] == 'invalid_auth':\r\n            raise Exception('Some aspect of Authentication cannot be validated. Request denied') from e\r\n        if e.response['error'] == 'access_denied':\r\n            raise Exception('Access to a resource specified in the request denied') from e\r\n        if e.response['error'] == 'account_inactive':\r\n            raise Exception('Authentication token is for a deleted user') from e\r\n        if e.response['error'] == 'token_revoked':\r\n            raise Exception('Authentication token for a deleted user has been revoked') from e\r\n        if e.response['error'] == 'no_permission':\r\n            raise Exception('The workspace token used does not have the necessary permission to send message') from e\r\n        if e.response['error'] == 'ratelimited':\r\n            raise Exception('The request has been ratelimited. Retry sending message later') from e\r\n        if e.response['error'] == 'service_unavailable':\r\n            raise Exception('The service is temporarily unavailable') from e\r\n        if e.response['error'] == 'fatal_error':\r\n            raise Exception('The server encountered a catastrophic error while sending message') from e\r\n        if e.response['error'] == 'internal_error':\r\n            raise Exception('The server could not complete the operation, likely due to a transient issue') from e\r\n        if e.response['error'] == 'request_timeout':\r\n            raise Exception('Sending message via POST failed: the message was missing or truncated') from e\r\n        else:\r\n            raise Exception(f'Failed Sending Message to slack channel {channel} Error: {e.response[\"error\"]}') from e\r\n\r\n    except Exception as e:\r\n        print(\"\\n\\n\")\r\n        pp.pprint(\r\n            f\"Failed sending message to slack channel {channel}, Error: {str(e)}\")\r\n        return f\"Unable to send message on {channel}\"\r\n    "
  },
  {
    "path": "Slack/legos/slack_send_DM/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Slack Send DM</h1>\n\n## Description\nGiven a list of Slack IDs, this Action will create a DM (one user) or group chat (multiple users), and send a message to the chat\n\n## Action Details\n\tdef slack_send_DM(\n\t        handle: WebClient,\n\t        users: list,\n\t        message:str) -> Dict:\n\t\thandle: Object of type unSkript Slack Connector.\n\n\t\t* users: The list of users to be added to the chat.  One user creates a DM, multiple users, a group chat.\n\t\t* message: The intro message to the chat from the bot.\n\n \n## Action Output\nHere is a sample output.\n<img src=\"./1.jpg\">\n<img src=\"./2.jpg\">\n## Try it Out\n\nYou Try this Action in the unSkript [Free Trial](https://us.app.unskript.io/), or using the [open source Docker image](http://runbooks.sh)."
  },
  {
    "path": "Slack/legos/slack_send_DM/__init__.py",
    "content": ""
  },
  {
    "path": "Slack/legos/slack_send_DM/slack_send_DM.json",
    "content": "{\n  \"action_title\": \"Slack Send DM\",\n  \"action_description\": \"Given a list of Slack IDs, this Action will create a DM (one user) or group chat (multiple users), and send a message to the chat\",\n  \"action_type\": \"LEGO_TYPE_SLACK\",\n  \"action_entry_function\": \"slack_send_DM\",\n  \"action_needs_credential\": true,\n  \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n  \"action_is_check\": false,\n  \"action_supports_iteration\": true,\n  \"action_supports_poll\": true\n}"
  },
  {
    "path": "Slack/legos/slack_send_DM/slack_send_DM.py",
    "content": "from __future__ import annotations\n\n##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom slack_sdk import WebClient\nfrom typing import Dict, List\n## note: Your Slack App will need the:\n#im:write  (for DM)\n#mpim:write scope (for group IM messages).\n# Your Bot will also need to be a member of the channel\n\n\nclass InputSchema(BaseModel):\n    users: List = Field(..., description='List of users to DM', title='users')\n    message: str = Field(..., description='Message to DM to users.', title='message')\n\n\n\npp = pprint.PrettyPrinter(indent=2)\n\n\ndef slack_send_DM_printer(output):\n    if output is not None:\n        pprint.pprint(output)\n\n\ndef slack_send_DM(\n        handle: WebClient,\n        users: list,\n        message:str) -> Dict:\n\n    #slack takes in multiple users as a comma separated string\n    comma_separated_users = ', '.join(str(user) for user in users)\n    try:\n        #open the DM\n        createDM = handle.conversations_open(users=comma_separated_users)\n        #get the ID\n        channel_id = createDM['channel']['id']\n\n        #send a message\n        # Send message\n        message_response = handle.chat_postMessage(channel=channel_id, text=message)\n        return message_response['message']\n\n    except Exception as e:\n        print(\"\\n\\n\")\n        pp.pprint(\n            f\"Failed sending message to slack channel, Error: {str(e)}\")\n        return f\"Unable to send message \"\n\n\n"
  },
  {
    "path": "Snowflake/README.md",
    "content": "\n# Snowflake Actions\n* [Snowflake Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_read_query/README.md): Snowflake Read Query\n* [Snowflake Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_write_query/README.md): Snowflake Write Query\n"
  },
  {
    "path": "Snowflake/__init__.py",
    "content": ""
  },
  {
    "path": "Snowflake/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Snowflake/legos/snowflake_read_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Snowflake Read Query</h1>\r\n\r\n## Description\r\nThis Lego executes a Snowflake read query.\r\n\r\n\r\n## Lego Details\r\n\r\n    snowflake_read_query(handle, query: str, db_name: str, schema_name: str)\r\n\r\n        handle: Object of type unSkript SNOWFLAKE Connector.\r\n        query: Query to read data.\r\n        db_name: Name of the database to use.\r\n        schema_name: Name of the Schema to use.\r\n\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, query, db_name and schema_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Snowflake/legos/snowflake_read_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Snowflake/legos/snowflake_read_query/snowflake_read_query.json",
    "content": "{\n\"action_title\": \"Snowflake Read Query\",\n\"action_description\": \"Snowflake Read Query\",\n\"action_type\": \"LEGO_TYPE_SNOWFLAKE\",\n\"action_entry_function\": \"snowflake_read_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SNOWFLAKE\"]\n}\n"
  },
  {
    "path": "Snowflake/legos/snowflake_read_query/snowflake_read_query.py",
    "content": "import pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    query: str = Field(\n        title='Query',\n        description='Query to read data')\n    db_name: str = Field(\n        title='Database name',\n        description='Name of the database to use')\n    schema_name: str = Field(\n        title='Schema name',\n        description='Name of the Schema to use')\n\n\ndef snowflake_read_query_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef snowflake_read_query(handle, query: str, db_name: str, schema_name: str) -> List:\n    \"\"\"snowflake_read_query Runs query with the provided parameters.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type query: str\n        :param query: Query to read data.\n\n        :type db_name: str\n        :param db_name: Name of the database to use.\n\n        :type schema_name: str\n        :param schema_name: Name of the Schema to use\n\n        :rtype: List if success. Exception on error.\n      \"\"\"\n    # Input param validation.\n\n    cur = handle.cursor()\n    cur.execute(\"USE DATABASE \" + db_name)\n    cur.execute(\"USE SCHEMA \" + schema_name)\n    cur.execute(query)\n    res = cur.fetchall()\n    cur.close()\n    handle.close()\n    return res\n"
  },
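`snowflake_read_query` sets the session context with `USE DATABASE` and `USE SCHEMA` before running the caller's query, then closes both the cursor and the connection. The statement order can be sketched with hypothetical `FakeConnection`/`FakeCursor` stand-ins for a `snowflake.connector` connection (no Snowflake account required):

```python
# Illustrates the statement order snowflake_read_query produces:
# USE DATABASE, USE SCHEMA, then the caller's query.
# FakeConnection/FakeCursor are hypothetical stand-ins.

class FakeCursor:
    def __init__(self, log):
        self.log = log          # shared list recording executed SQL

    def execute(self, sql):
        self.log.append(sql)

    def fetchall(self):
        return [("row1",), ("row2",)]

    def close(self):
        pass


class FakeConnection:
    def __init__(self):
        self.executed = []

    def cursor(self):
        return FakeCursor(self.executed)

    def close(self):
        pass


def read_query(handle, query, db_name, schema_name):
    # Mirrors snowflake_read_query's control flow.
    cur = handle.cursor()
    cur.execute("USE DATABASE " + db_name)
    cur.execute("USE SCHEMA " + schema_name)
    cur.execute(query)
    res = cur.fetchall()
    cur.close()
    handle.close()
    return res


conn = FakeConnection()
rows = read_query(conn, "SELECT * FROM t", "MY_DB", "PUBLIC")
print(conn.executed)
```

Note that the Action closes the connection handle at the end, so the same handle apparently cannot be reused by a later query without reconnecting.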
  {
    "path": "Snowflake/legos/snowflake_write_query/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Snowflake Write Query</h1>\r\n\r\n## Description\r\nThis Lego executes a Snowflake write query.\r\n\r\n\r\n## Lego Details\r\n\r\n    snowflake_write_query(handle, query: str, db_name: str, schema_name: str)\r\n\r\n        handle: Object of type unSkript SNOWFLAKE Connector.\r\n        query: Query to write data.\r\n        db_name: Name of the database to use.\r\n        schema_name: Name of the Schema to use.\r\n\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, query, db_name and schema_name.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Snowflake/legos/snowflake_write_query/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Snowflake/legos/snowflake_write_query/snowflake_write_query.json",
    "content": "{\n\"action_title\": \"Snowflake Write Query\",\n\"action_description\": \"Snowflake Write Query\",\n\"action_type\": \"LEGO_TYPE_SNOWFLAKE\",\n\"action_entry_function\": \"snowflake_write_query\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SNOWFLAKE\"]\n}\n"
  },
  {
    "path": "Snowflake/legos/snowflake_write_query/snowflake_write_query.py",
    "content": "import pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=4)\n\nclass InputSchema(BaseModel):\n    query: str = Field(\n        title='Query',\n        description='Query to write data')\n    db_name: str = Field(\n        title='Database name',\n        description='Name of the database to use')\n    schema_name: str = Field(\n        title='Schema name',\n        description='Name of the Schema to use')\n\ndef snowflake_write_query_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef snowflake_write_query(handle, query: str, db_name: str, schema_name: str) -> Dict:\n    \"\"\"snowflake_write_query Runs query with the provided parameters.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type query: str\n        :param query: Query to write data.\n\n        :type db_name: str\n        :param db_name: Name of the database to use.\n\n        :type schema_name: str\n        :param schema_name: Name of the Schema to use.\n\n        :rtype: Dict if success. Exception on error.\n      \"\"\"\n    # Input param validation.\n    result = {}\n    cur = handle.cursor()\n    cur.execute(\"USE DATABASE \" + db_name)\n    cur.execute(\"USE SCHEMA \" + schema_name)\n    cur.execute(query)\n    result[\"Result\"] = \"The query executed successfully!\"\n    cur.close()\n    handle.close()\n    return result\n"
  },
  {
    "path": "Splunk/README.md",
    "content": "\n# Splunk Actions\n* [Get Splunk SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Splunk/legos/splunk_get_handle/README.md): Get Splunk SDK Handle\n"
  },
  {
    "path": "Splunk/__init__.py",
    "content": ""
  },
  {
    "path": "Splunk/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Splunk/legos/splunk_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Splunk SDK Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the Splunk SDK handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    splunk_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Splunk Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)\r\n"
  },
  {
    "path": "Splunk/legos/splunk_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Splunk/legos/splunk_get_handle/splunk_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Splunk SDK Handle\",\r\n    \"action_description\": \"Get Splunk SDK Handle\",\r\n    \"action_type\": \"LEGO_TYPE_SPLUNK\",\r\n    \"action_entry_function\": \"splunk_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\r\n    \"action_supports_iteration\": false,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_SPLUNK\"]\r\n}\r\n    "
  },
  {
    "path": "Splunk/legos/splunk_get_handle/splunk_get_handle.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef splunk_get_handle(handle):\n    \"\"\"splunk_get_handle returns the splunk handle.\n\n       :rtype: splunk handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Stripe/README.md",
    "content": "\n# Stripe Actions\n* [Capture a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_capture_charge/README.md): Capture the payment of an existing, uncaptured charge\n* [Close Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_close_dispute/README.md): Close Dispute\n* [Create a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_charge/README.md): Create a Charge\n* [Create a Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_refund/README.md): Create a Refund\n* [Get list of charges previously created](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_charges/README.md): Get list of charges previously created\n* [Get list of disputes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_disputes/README.md): Get list of disputes\n* [Get list of refunds](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_refunds/README.md): Get list of refunds for the given threshold.\n* [Get Stripe Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_handle/README.md): Get Stripe Handle\n* [Retrieve a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_charge/README.md): Retrieve a Charge\n* [Retrieve details of a dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_dispute/README.md): Retrieve details of a dispute\n* [Retrieve a refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_refund/README.md): Retrieve a refund\n* [Update a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_charge/README.md): Update a Charge\n* [Update Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_dispute/README.md): Update Dispute\n* [Update Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_refund/README.md): Updates the specified refund by setting the values of the parameters passed.\n"
  },
  {
    "path": "Stripe/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_capture_charge/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Capture a Charge</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego captures the payment of an existing, uncaptured charge.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_capture_charge(handle: object, charge_id:str)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        charge_id: The identifier of the charge to capture.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and charge_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_capture_charge/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_capture_charge/stripe_capture_charge.json",
    "content": "{\r\n    \"action_title\": \" Capture a Charge\",\r\n    \"action_description\": \" Capture the payment of an existing, uncaptured, charge\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_capture_charge\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_CHARGE\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_capture_charge/stripe_capture_charge.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    charge_id: str = Field(\n        title='Charge Id',\n        description='The identifier of the charge to capture'\n    )\n\n\ndef stripe_capture_charge_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef stripe_capture_charge(handle, charge_id:str) -> List:\n    \"\"\"stripe_capture_charge Captures the payment of an existing, uncaptured charge.\n\n        :type charge_id: string\n        :param charge_id: The identifier of the charge to capture.\n\n        :rtype: List with the capture response.\n    \"\"\"\n    # Input param validation\n    result = []\n    try:\n        charge = handle.Charge.capture(charge_id)\n        result.append(charge)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
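The Stripe Legos share a common error contract: the API response is wrapped in a one-element list on success, and `None` is returned after printing the exception on failure. A sketch of `stripe_capture_charge`'s flow against a hypothetical `FakeStripe` stand-in for the `stripe` module the unSkript handle wraps:

```python
# Sketch of the capture flow and its error contract. FakeStripe and
# FakeCharge are hypothetical stand-ins for the stripe module; the real
# handle exposes the same Charge.capture(charge_id) call.

class FakeCharge:
    @classmethod
    def capture(cls, charge_id):
        if charge_id.startswith("ch_"):
            return {"id": charge_id, "captured": True}
        raise ValueError("no such charge: " + charge_id)


class FakeStripe:
    Charge = FakeCharge


def capture_charge(handle, charge_id):
    # Mirrors stripe_capture_charge: wrap the response in a list on
    # success, swallow the exception and return None on failure.
    try:
        return [handle.Charge.capture(charge_id)]
    except Exception:
        return None


print(capture_charge(FakeStripe(), "ch_123"))   # one-element list
print(capture_charge(FakeStripe(), "bogus"))    # None
```

Callers in a Runbook should therefore check for `None` before indexing into the result list.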
  {
    "path": "Stripe/legos/stripe_close_dispute/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Close Dispute</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego is used to close a dispute.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_close_dispute(handle: object, dispute_id:str)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        dispute_id: Dispute ID\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and dispute_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_close_dispute/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_close_dispute/stripe_close_dispute.json",
    "content": "{\r\n    \"action_title\": \"Close Dispute\",\r\n    \"action_description\": \"Close Dispute\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_close_dispute\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_DISPUTE\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_close_dispute/stripe_close_dispute.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    dispute_id: str = Field(\n        title='Dispute ID',\n        description='Dispute ID'\n    )\n\n\ndef stripe_close_dispute_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef stripe_close_dispute(handle, dispute_id:str) -> List:\n    \"\"\"stripe_close_dispute Closes a Dispute\n\n        :type dispute_id: string\n        :param dispute_id: Dispute ID\n\n        :rtype: List with the close-dispute response.\n    \"\"\"\n    # Input param validation\n    result = []\n    try:\n        resp = handle.Dispute.close(dispute_id)\n        result.append(resp)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
  {
    "path": "Stripe/legos/stripe_create_charge/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Create a Charge</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego charges a credit card or other payment source for the given amount in the given currency.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_create_charge(handle: object, amount: int, source: str, description: str, currency: str)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        amount: Amount intended to be collected by this payment.\r\n        source: A payment source to be charged.\r\n        description: Reason for the charge; a short description.\r\n        currency: Three letter ISO currency code, eg: usd, cad, eur\r\n\r\n## Lego Input\r\nThis Lego takes five inputs: handle, amount, source, description and currency.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_create_charge/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_create_charge/stripe_create_charge.json",
    "content": "{\r\n    \"action_title\": \"Create a Charge\",\r\n    \"action_description\": \"Create a Charge\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_create_charge\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_CHARGE\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_create_charge/stripe_create_charge.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom typing import Optional, List\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\nclass InputSchema(BaseModel):\n    amount: int = Field(\n        title='Amount',\n        description='Amount intended to be collected by this payment')\n    currency: Optional[str] = Field(\n        'usd',\n        title='Currency',\n        description='Three letter ISO currency code, eg: usd, cad, eur')\n    source: Optional[str] = Field(\n        title='Payment Source',\n        description='A payment source to be charged. eg. credit card ID, bank account, token')\n    description: Optional[str] = Field(\n        title='Description',\n        description='Reason for the charge; a short description.')\n\n\ndef stripe_create_charge_printer(output):\n    if output is None:\n        return\n    od = tabulate(output, headers=['Amount', 'ID', 'Description'])\n    print(od)\n\n\n\ndef stripe_create_charge(\n        handle,\n        amount: int,\n        source: str = \"\",\n        description: str = \"\",\n        currency: str = \"usd\"\n        ) -> List:\n    \"\"\"stripe_create_charge Charges a credit card or other payment source for the given amount\n        in the given currency.\n\n        :type amount: int\n        :param amount: Amount intended to be collected by this payment.\n\n        :type source: str\n        :param source: A payment source to be charged.\n\n        :type description: str\n        :param description: Reason for the charge; a short description.\n\n        :type currency: str\n        :param currency: Three letter ISO currency code, eg: usd, cad, eur\n\n        :rtype: List with the created charge details.\n    \"\"\"\n    # Input param validation.\n    result = []\n    try:\n        data = handle.Charge.create(\n            amount=amount,\n            currency=currency,\n            source=source,\n            description=description)\n        result.append([str(data['amount']), data['id'], data['description']])\n    except Exception:\n        data = 'Error occurred when Creating a charge'\n        print(data)\n\n    return result\n"
  },
  {
    "path": "Stripe/legos/stripe_create_customer/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Create a customer</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego is used to create a customer.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_create_customer(handle: object, params:dict)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        params: Params in key=value form.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and params.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_create_customer/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_create_customer/stripe_create_customer.json",
    "content": "{\r\n    \"action_title\": \"Create a customer\",\r\n    \"action_description\": \"Create a customer\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_create_customer\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_CUSTOMER\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_create_customer/stripe_create_customer.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    params: dict = Field(\n        title='Customer Data',\n        description='Params in key=value form.'\n    )\n\n\ndef stripe_create_customer_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef stripe_create_customer(handle, params:dict) -> List:\n    \"\"\"stripe_create_customer Create a customer\n\n        :type params: dict\n        :param params: Params in key=value form.\n\n        :rtype: List containing the created customer.\n    \"\"\"\n    # Input param validation\n    result = []\n    try:\n        customer = handle.Customer.create(**params)\n        result.append(customer)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
  {
    "path": "Stripe/legos/stripe_create_refund/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Create a Refund</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego is used to create a refund.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_create_refund(handle: object, charge_id:str)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        charge_id: The identifier of the charge to refund.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and charge_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_create_refund/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_create_refund/stripe_create_refund.json",
    "content": "{\r\n    \"action_title\": \"Create a Refund\",\r\n    \"action_description\": \"Create a Refund\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_create_refund\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_REFUND\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_create_refund/stripe_create_refund.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    charge_id: str = Field(\n        title='Charge Id',\n        description='The identifier of the charge to refund.'\n    )\n\n\ndef stripe_create_refund_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef stripe_create_refund(handle, charge_id:str) -> List:\n    \"\"\"stripe_create_refund Create a Refund\n\n        :type charge_id: string\n        :param charge_id: The identifier of the charge to refund.\n\n        :rtype: List containing the created refund.\n    \"\"\"\n    # Input param validation\n    result = []\n    try:\n        refund_obj = handle.Refund.create(charge=charge_id)\n        result.append(refund_obj)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
  {
    "path": "Stripe/legos/stripe_delete_customer/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Delete customer</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego is used to delete a customer.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_delete_customer(handle: object, customer_id:str)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        customer_id: Customer Id.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and customer_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_delete_customer/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_delete_customer/stripe_delete_customer.json",
    "content": "{\r\n    \"action_title\": \"Delete customer\",\r\n    \"action_description\": \"Delete customer\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_delete_customer\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_CUSTOMER\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_delete_customer/stripe_delete_customer.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    customer_id: str = Field(\n        title='Customer Id',\n        description='Customer Id'\n    )\n\n\ndef stripe_delete_customer_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\n\ndef stripe_delete_customer(handle, customer_id:str) -> List:\n    \"\"\"stripe_delete_customer Delete Customer\n\n        :type customer_id: string\n        :param customer_id: Customer Id.\n\n        :rtype: List containing the deletion response.\n    \"\"\"\n    # Input param validation\n    result = []\n    try:\n        resp = handle.Customer.delete(customer_id)\n        result.append(resp)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
  {
    "path": "Stripe/legos/stripe_get_all_charges/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get list of charges previously created</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego lists charges that were previously created. The charges are returned in sorted order, with the most recent charges appearing first.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_get_all_charges(handle: object, max_results: int)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        max_results: Maximum number of charges to return; 0 fetches all.\r\n\r\n## Lego Input\r\nThis Lego takes two inputs: handle and max_results.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_get_all_charges/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_get_all_charges/stripe_get_all_charges.json",
    "content": "{\r\n    \"action_title\": \"Get list of charges previously created\",\r\n    \"action_description\": \"Get list of charges previously created\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_get_all_charges\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_CHARGE\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_get_all_charges/stripe_get_all_charges.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom tabulate import tabulate\n\nclass InputSchema(BaseModel):\n    max_results: int = Field(\n        title='Maximum Results',\n        description='Maximum number of charges to return; 0 fetches all.'\n    )\n\n\ndef stripe_get_all_charges_printer(output):\n    if output is None:\n        return\n    od = tabulate(output, headers=['Amount', 'ID', 'Description'])\n    print(od)\n\n\n\ndef stripe_get_all_charges(handle, max_results: int = 25) -> List:\n    \"\"\"stripe_get_all_charges Returns a list of charges that were previously created. The\n        charges are returned in sorted order, with the most recent charges appearing first.\n\n        :type max_results: int\n        :param max_results: Maximum number of charges to return; 0 fetches all.\n\n        :rtype: List of all recent charges.\n    \"\"\"\n    result = []\n    try:\n        if max_results == 0:\n            data = handle.Charge.list()\n            for charge in data.auto_paging_iter():\n                result.append([charge['amount'], charge['id'], charge['description']])\n        else:\n            data = handle.Charge.list(limit=max_results)\n            for charge in data:\n                result.append([charge['amount'], charge['id'], charge['description']])\n    except Exception as e:\n        print(e)\n\n    return result\n"
  },
  {
    "path": "Stripe/legos/stripe_get_all_customers/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get list of customers</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego used to list of customers that was perviously created. The\r\n        charges are returned in sorted order, with the most recent charges appearing first.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_get_all_customers(handle: object)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_get_all_customers/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_get_all_customers/stripe_get_all_customers.json",
    "content": "{\r\n    \"action_title\": \"Get list of customers\",\r\n    \"action_description\": \"Get list of customers\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_get_all_customers\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_CUSTOMER\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_get_all_customers/stripe_get_all_customers.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    max_results: int = Field(\n        title='Maximum Results',\n        description='Threshold to get maximum result.'\n    )\n\n\ndef stripe_get_all_customers_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_get_all_customers(handle, max_results: int = 25) -> List:\n    \"\"\"stripe_get_all_customers Returns a list of customers that was perviously created.\n\n        :type max_results: int\n        :param max_results: Threshold to get maximum result.\n\n        :rtype: Returns the results of all customers.\n    \"\"\"\n    # Input param validation.\n    result = []\n    try:\n        if max_results == 0:\n            output = handle.Customer.list(limit=100)\n            for customer in output.auto_paging_iter():\n                result.append(customer)\n        else:\n            output = handle.Customer.list(limit=max_results)\n            result = output[\"data\"]\n    except Exception as e:\n        print(e)\n\n    return result\n"
  },
  {
    "path": "Stripe/legos/stripe_get_all_disputes/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get list of disputes</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego used to list of disputes that was perviously created. The\r\n        charges are returned in sorted order, with the most recent charges appearing first.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_get_all_disputes(handle: object)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_get_all_disputes/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_get_all_disputes/stripe_get_all_disputes.json",
    "content": "{\r\n    \"action_title\": \"Get list of disputes\",\r\n    \"action_description\": \"Get list of disputes\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_get_all_disputes\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_DISPUTE\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_get_all_disputes/stripe_get_all_disputes.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n     max_results: int = Field(\n        title='Maximum Results',\n        description='Threshold to get maximum result.'\n    )\n\n\ndef stripe_get_all_disputes_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_get_all_disputes(handle, max_results: int = 25) -> List:\n    \"\"\"stripe_get_all_disputes Returns a list of disputes that was perviously created.\n\n        :type max_results: int\n        :param max_results: Threshold to get maximum result.\n\n        rtype: Returns a list of disputes that was perviously created.\n    \"\"\"\n    result = []\n    try:\n        if max_results == 0:\n            output = handle.Dispute.list()\n            for dispute in output.auto_paging_iter():\n                result.append(dispute)\n        else:\n            output = handle.Dispute.list(limit=max_results)\n            result = output[\"data\"]\n    except Exception as e:\n        print(e)\n\n    return result\n"
  },
  {
    "path": "Stripe/legos/stripe_get_all_refunds/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get list of refunds</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego used to list of refunds that was perviously created. The\r\n        charges are returned in sorted order, with the most recent charges appearing first.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_get_all_refunds(handle: object)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_get_all_refunds/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_get_all_refunds/stripe_get_all_refunds.json",
    "content": "{\r\n    \"action_title\": \"Get list of refunds\",\r\n    \"action_description\": \" Get list of refunds for the given threshold.\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_get_all_refunds\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_REFUND\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_get_all_refunds/stripe_get_all_refunds.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    max_results: int = Field(\n        title='Maximum Results',\n        description='Threshold to get maximum result.'\n    )\n\n\ndef stripe_get_all_refunds_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_get_all_refunds(handle, max_results: int = 25) -> List:\n    \"\"\"stripe_get_all_refunds Returns a list of refunds that was previously created. The\n        charges are returned in sorted order, with the most recent charges appearing first.\n\n        :type max_results: int\n        :param max_results: Threshold to get maximum result.\n\n        :rtype: Returns the results of all recent charges.\n    \"\"\"\n    result = []\n    if max_results == 0:\n        output = handle.Refund.list()\n        for refunds in output.auto_paging_iter():\n            result.append(refunds)\n    else:\n        output = handle.Refund.list(limit=max_results)\n        for refunds in output:\n            result.append(refunds)\n\n    return result\n"
  },
  {
    "path": "Stripe/legos/stripe_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Stripe Handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get Stripe Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_get_handle/stripe_get_handle.json",
    "content": "{\r\n    \"action_title\": \"Get Stripe Handle\",\r\n    \"action_description\": \"Get Stripe Handle\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_get_handle\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_get_handle/stripe_get_handle.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef stripe_get_handle(handle):\n    \"\"\"stripe_get_handle returns the Stripe handle.\n\n       :rtype: Stripe Handle\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Stripe/legos/stripe_retrieve_charge/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Retrieve a Charge</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Retrieve the Charge for given ID.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_close_dispute(handle: object, charge_id:str)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        charge_id: Charge ID.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and charge_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_retrieve_charge/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_retrieve_charge/stripe_retrieve_charge.json",
    "content": "{\r\n    \"action_title\": \"Retrieve a Charge\",\r\n    \"action_description\": \" Retrieve a Charge\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_retrieve_charge\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_CHARGE\"]\r\n}\r\n"
  },
  {
    "path": "Stripe/legos/stripe_retrieve_charge/stripe_retrieve_charge.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    charge_id: str = Field(\n        title='Charge Id',\n        description='Charge ID'\n    )\n\n\ndef stripe_retrieve_charge_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_retrieve_charge(handle, charge_id:str) -> Dict:\n    \"\"\"stripe_retrive_charge Retrive the Charge for given ID\n\n        :type charge_id: string\n        :param charge_id: Charge ID.\n\n        :rtype: Dict with response from the describe API.\n    \"\"\"\n    # Input param validation\n    charge = handle.Charge.retrieve(charge_id)\n    return charge\n"
  },
  {
    "path": "Stripe/legos/stripe_retrieve_customer/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Retrive details of a customer</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego get customer data.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_retrieve_customer(handle: object, customer_id:str)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        customer_id: Retrive details of a customer.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and customer_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_retrieve_customer/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_retrieve_customer/stripe_retrieve_customer.json",
    "content": "{\r\n    \"action_title\": \"Retrive details of a customer \",\r\n    \"action_description\": \"Retrive details of a customer \",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_retrieve_customer\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_CUSTOMER\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_retrieve_customer/stripe_retrieve_customer.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    customer_id: str = Field(\n        title='Customer Id',\n        description='Retrive details of a customer'\n    )\n\n\ndef stripe_retrieve_customer_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_retrieve_customer(handle, customer_id:str) -> List:\n    \"\"\"stripe_retrieve_customer Get customer data\n\n        :type customer_id: string\n        :param customer_id: Retrive details of a customer.\n\n        :rtype: String with response from the describe command.\n    \"\"\"\n    # Input param validation\n    result = []\n    try:\n        customer = handle.Customer.retrieve(customer_id)\n        result.append(customer)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
  {
    "path": "Stripe/legos/stripe_retrieve_dispute/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Retrieve details of a dispute</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego retrieve details of a dispute.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_retrieve_dispute(handle: object, dispute_id:str)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        dispute_id: Retrieve details of a dispute.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and dispute_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_retrieve_dispute/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_retrieve_dispute/stripe_retrieve_dispute.json",
    "content": "{\r\n    \"action_title\": \"Retrieve details of a dispute\",\r\n    \"action_description\": \"Retrieve details of a dispute\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_retrieve_dispute\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_DISPUTE\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_retrieve_dispute/stripe_retrieve_dispute.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    dispute_id: str = Field(\n        title='Dispute Id',\n        description='Retrieve details of a dispute'\n    )\n\n\ndef stripe_retrieve_dispute_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_retrieve_dispute(handle, dispute_id:str) -> List:\n    \"\"\"stripe_retrieve_dispute Get Dispute data\n\n        :type dispute_id: string\n        :param dispute_id: Retrieve details of a dispute.\n\n        :rtype: List with response from the describe API.\n    \"\"\"\n    result = []\n    try:\n        resp = handle.Dispute.retrieve(dispute_id)\n        result.append(resp)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
  {
    "path": "Stripe/legos/stripe_retrieve_refund/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Retrieve a refund</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego retrieve a refund.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_retrieve_refund(handle: object, refund_id:str)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        refund_id: The identifier of the refund.\r\n\r\n## Lego Input\r\nThis Lego take two input handle and refund_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_retrieve_refund/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_retrieve_refund/stripe_retrieve_refund.json",
    "content": "{\r\n    \"action_title\": \"Retrieve a refund\",\r\n    \"action_description\": \"Retrieve a refund\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_retrieve_refund\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_REFUND\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_retrieve_refund/stripe_retrieve_refund.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    refund_id: str = Field(\n        title='Refund Id',\n        description='The identifier of the refund.'\n    )\n\n\ndef stripe_retrieve_refund_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_retrieve_refund(handle, refund_id:str) -> List:\n    \"\"\"stripe_retrieve_refund Retrieve a refund\n\n        :type refund_id: string\n        :param refund_id: The identifier of the refund.\n\n        :rtype: List with response from the describe API.\n    \"\"\"\n    result = []\n    try:\n        refund_obj = handle.Refund.retrieve(refund_id)\n        result.append(refund_obj)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
  {
    "path": "Stripe/legos/stripe_update_charge/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Update a Charge</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Updates the specified charge by setting the values of the parameters passed.\r\n        Any parameters not provided will be left unchanged.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_close_dispute(handle: object, charge_id: str, customer: str, description: str,\r\n        receipt_email: str, metadata: dict, shipping: dict, fraud_details: dict,\r\n        transfer_group: str)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        charge_id: Charge ID.\r\n        customer: Customer ID.\r\n        description: Description\r\n        receipt_email: This is the email address that the receipt for this charge will be sent to\r\n        metadata: This can be useful for storing additional information about the object in a structured format.\r\n        shipping: Shipping information for the charge. Helps prevent fraud on charges for physical goods.\r\n        raud_details: A set of key-value pairs you can attach to a charge giving information about its riskiness\r\n        transfer_group: A string that identifies this transaction as part of a group.\r\n\r\n\r\n## Lego Input\r\nThis Lego take nine input handle, charge_id, customer, description, receipt_email, metadata, shipping, raud_details and transfer_group.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_update_charge/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_update_charge/stripe_update_charge.json",
    "content": "{\r\n    \"action_title\": \"Update a Charge\",\r\n    \"action_description\": \"Update a Charge\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_update_charge\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_CHARGE\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_update_charge/stripe_update_charge.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    charge_id: str = Field(\n        title='Charge Id',\n        description='Charge ID'\n    )\n    customer: str = Field(\n        title='Customer Id',\n        description='Customer ID'\n    )\n    description: str = Field(\n        title='Description',\n        description='Description'\n    )\n    receipt_email: str = Field(\n        title='Email address ',\n        description='This is the email address that the receipt for this charge will be sent to'\n    )\n    metadata: dict = Field(\n        None,\n        title='Metadata',\n        description='This can be useful for storing additional information about \\\n            the object in a structured format. For Eg. {\"order_id\": \"6735\"}'\n    )\n    shipping: dict = Field(\n        None,\n        title='Shipping Details',\n        description='Shipping information for the charge. 
Helps prevent fraud on \\\n            charges for physical goods.'\n    )\n    fraud_details: dict = Field(\n        None,\n        title='Fraud Details',\n        description='A set of key-value pairs you can attach to a charge giving \\\n            information about its riskiness'\n    )\n    transfer_group: str = Field(\n        None,\n        title='Transfer Group',\n        description='A string that identifies this transaction as part of a group.'\n    )\n\n\ndef stripe_update_charge_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_update_charge(\n        handle,\n        charge_id: str,\n        customer: str,\n        description: str,\n        receipt_email: str,\n        metadata: dict = None,\n        shipping: dict = None,\n        fraud_details: dict = None,\n        transfer_group: str = \"\") -> List:\n\n    \"\"\"stripe_update_charge Updates the specified charge by setting the values of \n    the parameters passed. Any parameters not provided will be left unchanged.\n\n        :type charge_id: string\n        :param charge_id: Charge ID.\n\n        :type customer: string\n        :param customer: Customer ID.\n\n        :type description: string\n        :param description: Description\n\n        :type receipt_email: string\n        :param receipt_email: This is the email address that the\n        receipt for this charge will be sent to\n\n        :type metadata: dict\n        :param metadata: This can be useful for storing additional\n        information about the object in a structured format.\n\n        :type shipping: dict\n        :param shipping: Shipping information for the charge. 
Helps\n        prevent fraud on charges for physical goods.\n\n        :type fraud_details: dict\n        :param fraud_details: A set of key-value pairs you can attach\n        to a charge giving information about its riskiness\n\n        :type transfer_group: string\n        :param transfer_group: A string that identifies this transaction as part of a group.\n\n        :rtype: String with response from the describe command.\n    \"\"\"\n    # Input param validation\n    result = []\n    try:\n        charge = handle.Charge.modify(\n            charge_id,\n            customer=customer if customer else None,\n            description=description if description else None,\n            metadata=metadata if metadata else {},\n            receipt_email=receipt_email if receipt_email else None,\n            shipping=shipping if shipping else None,\n            fraud_details=fraud_details if fraud_details else None,\n            transfer_group=transfer_group if transfer_group else None,\n        )\n        result.append(charge)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
  {
    "path": "Stripe/legos/stripe_update_customer/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Update Customers</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Update Customers.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_close_dispute(handle: object, customer_id: str, name: str, phone: str, description: str,\r\n        email: str, balance: int, metadata: dict, shipping: dict, address: dict)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        customer_id: Customer ID\r\n        name: The customer’s full name or business name.\r\n        phone: The customer’s phone number.\r\n        description: Description\r\n        email: Customer’s email address\r\n        balance: Current Balance\r\n        metadata: This can be useful for storing additional information about the object in a structured format.\r\n        shipping: Shipping information for the customer.\r\n        address: The customer’s address.\r\n\r\n\r\n## Lego Input\r\nThis Lego take ten input handle, customer_id, name, phone, description, email, balance, metadata, shipping and address.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_update_customer/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_update_customer/stripe_update_customer.json",
    "content": "{\r\n    \"action_title\": \"Update Customers\",\r\n    \"action_description\": \"Update Customers\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_update_customer\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_CUSTOMER\"]\r\n}\r\n"
  },
  {
    "path": "Stripe/legos/stripe_update_customer/stripe_update_customer.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    customer_id: str = Field(\n        title='Customer Id',\n        description='Customer ID'\n    )\n    name: str = Field(\n        title='The customer’s full name or business name.',\n        description='The customer’s full name or business name.'\n    )\n    phone: str = Field(\n        title='The customer’s phone number.',\n        description='The customer’s phone number.'\n    )\n    description: str = Field(\n        title='Description',\n        description='Description'\n    )\n    email: str = Field(\n        title='Email address ',\n        description='Customer’s email address'\n    )\n    balance: int = Field(\n        title='Current Balance',\n        description='Current Balance'\n    )\n    metadata: dict = Field(\n        None,\n        title='Metadata',\n        description='This can be useful for storing additional information about the object \\\n            in a structured format. For Eg. 
{\"order_id\": \"6735\"}'\n    )\n    shipping: dict = Field(\n        None,\n        title='Shipping Details',\n        description='Shipping information for the customer.'\n    )\n    address: dict = Field(\n        None,\n        title='The customer’s address.',\n        description='The customer’s address.'\n    )\n\n\ndef stripe_update_customer_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_update_customer(\n        handle,\n        customer_id: str,\n        name: str,\n        phone: str,\n        description: str,\n        email: str,\n        balance: int,\n        metadata: dict,\n        shipping: dict,\n        address: dict) -> List:\n    \"\"\"stripe_update_customer Update a customer\n\n        :type customer_id: string\n        :param customer_id: Customer ID\n\n        :type name: string\n        :param name: The customer’s full name or business name.\n\n        :type phone: string\n        :param phone: The customer’s phone number.\n\n        :type description: string\n        :param description: Description\n\n        :type email: string\n        :param email: Customer’s email address\n\n        :type balance: int\n        :param balance: Current Balance\n\n        :type metadata: dict\n        :param metadata: This can be useful for storing additional information\n        about the object in a structured format.\n\n        :type shipping: dict\n        :param shipping: Shipping information for the customer.\n\n        :type address: dict\n        :param address: The customer’s address.\n\n        :rtype: List with response from the describe API.\n    \"\"\"\n    # Input param validation\n    result = []\n    try:\n        customer = handle.Customer.modify(\n            customer_id,\n            name=name if name else None,\n            phone=phone if phone else None,\n            
description=description if description else None,\n            balance=balance if balance else None,\n            email=email if email else None,\n            metadata=metadata if metadata else {},\n            address=address if address else {},\n            shipping=shipping if shipping else None,\n        )\n        result.append(customer)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
  {
    "path": "Stripe/legos/stripe_update_dispute/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Update Dispute</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Update Dispute.\r\n\r\n\r\n## Lego Details\r\n\r\n    stripe_update_dispute(handle: object, dispute_id:str, submit:bool,metadata=None, evidence)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        dispute_id: Dispute Id\r\n        submit: Whether to immediately submit evidence to the bank.\r\n        metadata: This can be useful for storing additional information about the object in a structured format.\r\n        evidence: Evidence to upload, to respond to a dispute.\r\n\r\n\r\n## Lego Input\r\nThis Lego take nine input handle, dispute_id, submit, metadata and evidence.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_update_dispute/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_update_dispute/stripe_update_dispute.json",
    "content": "{\r\n    \"action_title\": \"Update Dispute\",\r\n    \"action_description\": \"Update Dispute\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_update_dispute\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_DISPUTE\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_update_dispute/stripe_update_dispute.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    dispute_id: str = Field(\n        title='Dispute Id',\n        description='Dispute Id'\n    )\n    submit: bool = Field(\n        False,\n        title='Submit',\n        description='Whether to immediately submit evidence to the bank.'\n    )\n    metadata: dict = Field(\n        None,\n        title='Metadata',\n        description='This can be useful for storing additional information about the \\\n            object in a structured format. For Eg. {\"order_id\": \"6735\"}'\n    )\n    evidence: dict = Field(\n        None,\n        title='Evidence',\n        description='Evidence to upload, to respond to a dispute.'\n    )\n\n\ndef stripe_update_dispute_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_update_dispute(handle,\n                          dispute_id: str,\n                          submit: bool = False,\n                          metadata=None,\n                          evidence=None) -> List:\n    \"\"\"stripe_update_dispute Update a Dispute.\n\n        :type dispute_id: string\n        :param dispute_id: Dispute Id\n\n        :type submit: bool\n        :param submit: Whether to immediately submit evidence to the bank.\n\n        :type metadata: dict\n        :param metadata: This can be useful for storing additional information\n        about the object in a structured format.\n\n        :type evidence: dict\n        :param evidence: Evidence to upload, to respond to a dispute.\n\n        :rtype: List with response from the describe API.\n    \"\"\"\n    # Input param validation\n    result = []\n    if evidence is None:\n        evidence = {}\n    if metadata is None:\n        
metadata = {}\n    try:\n        dispute = handle.Dispute.modify(\n             dispute_id,\n             submit=submit if submit else None,\n             metadata=metadata if metadata else {},\n             evidence=evidence if evidence else {},\n        )\n        result.append(dispute)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
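The Stripe update Actions above forward only the parameters the user actually set, using the `x if x else None` pattern. A minimal, self-contained sketch of that filtering logic (the `build_modify_kwargs` helper is hypothetical, not part of this repo):

```python
def build_modify_kwargs(**params):
    """Keep only parameters with truthy values, mirroring the
    `x if x else None` pattern used by the Stripe update Actions."""
    return {key: value for key, value in params.items() if value}

# Empty metadata is dropped; only the fields the user set are forwarded.
kwargs = build_modify_kwargs(
    submit=True,
    metadata={},
    evidence={"product_description": "monthly subscription"},
)
print(kwargs)
```

Note that, like the original pattern, this also drops legitimate falsy values such as `submit=False`; if that distinction matters, filter on `value is not None` instead.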
  {
    "path": "Stripe/legos/stripe_update_refund/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Update Refund</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego Updates the specified refund by setting the values of the parameters passed.\r\n\r\n## Lego Details\r\n\r\n    stripe_update_refund(handle: object, refund_id:str, metadata:dict)\r\n\r\n        handle: Object of type unSkript stripe Connector\r\n        metadata: Updates the specified refund by setting the values of the parameters passed.\r\n        refund_id: Refund Id\r\n\r\n## Lego Input\r\nThis Lego take three input handle, metadata and refund_id.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Stripe/legos/stripe_update_refund/__init__.py",
    "content": ""
  },
  {
    "path": "Stripe/legos/stripe_update_refund/stripe_update_refund.json",
    "content": "{\r\n    \"action_title\": \"Update Refund\",\r\n    \"action_description\": \"Updates the specified refund by setting the values of the parameters passed.\",\r\n    \"action_type\": \"LEGO_TYPE_STRIPE\",\r\n    \"action_entry_function\": \"stripe_update_refund\",\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\r\n    \"action_needs_credential\": true,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_STRIPE\",\"CATEGORY_TYPE_STRIPE_REFUND\"]\r\n}\r\n    "
  },
  {
    "path": "Stripe/legos/stripe_update_refund/stripe_update_refund.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import List\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    refund_id:str = Field(\n        title='Refund Id',\n        description='Refund Id'\n    )\n    metadata: dict = Field(\n        title='Metadata',\n        description='''\n                    Updates the specified refund by setting the values of the parameters passed.\n                    For Eg. {\"order_id\": \"6735\"}'\n                    '''\n    )\n\ndef stripe_update_refund_printer(output):\n    if isinstance(output, (list, tuple)):\n        pprint.pprint(output)\n    elif isinstance(output, dict):\n        pprint.pprint(output)\n    else:\n        pprint.pprint(output)\n\n\ndef stripe_update_refund(handle, refund_id:str, metadata:dict) -> List:\n    \"\"\"stripe_update_refund Updates the specified refund by setting the values \n    of the parameters passed.\n\n        :type metadata: dict\n        :param metadata: Updates the specified refund by setting the values of\n        the parameters passed.\n\n        :type refund_id: string\n        :param refund_id: Refund Id\n        \n        :rtype: List with response from the describe API.\n    \"\"\"\n    # Input param validation\n    result = []\n    try:\n        refund = handle.Refund.modify(\n            refund_id,\n            metadata=metadata,\n        )\n        result.append(refund)\n        return result\n    except Exception as e:\n        pprint.pprint(e)\n\n    return None\n"
  },
  {
    "path": "Terraform/README.md",
    "content": "\n# Terraform Actions\n* [Execute Terraform Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_exec_command/README.md): Execute Terraform Command\n* [Get terraform handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_get_handle/README.md): Get terraform handle\n"
  },
  {
    "path": "Terraform/__init__.py",
    "content": ""
  },
  {
    "path": "Terraform/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Terraform/legos/terraform_exec_command/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Execute Terraform Command</h1>\r\n\r\n## Description\r\nThis Lego Executes Terraform Command.\r\n\r\n\r\n## Lego Details\r\n\r\n    terraform_exec_command(handle, repo, branch, dir_path, command)\r\n\r\n        handle: Object of type unSkript TERRAFORM Connector\r\n        repo: Repository that has Terraform Scripts.\r\n        branch: Branch name of repository that has Terraform Scripts.\r\n        dir_path: Directory within Repository to run the terraform command.\r\n        command : Terraform Command to Execute.\r\n\r\n\r\n## Lego Input\r\nThis Lego take four inputs handle, repo, dir_path  and command. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Terraform/legos/terraform_exec_command/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Terraform/legos/terraform_exec_command/terraform_exec_command.json",
    "content": "{\n\"action_title\": \"Execute Terraform Command\",\n\"action_description\": \"Execute Terraform Command\",\n\"action_type\": \"LEGO_TYPE_TERRAFORM\",\n\"action_version\": \"2.0.0\",\n\"action_entry_function\": \"terraform_exec_command\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_TERRAFORM\"]\n}\n"
  },
  {
    "path": "Terraform/legos/terraform_exec_command/terraform_exec_command.py",
    "content": "#\n# Copyright (c) 2022 unSkript.com\n# All rights reserved.\n#\nimport json\nfrom typing import Optional\nfrom pydantic import BaseModel, Field\n\nclass InputSchema(BaseModel):\n    repo: str = Field(\n        title='Git Repository',\n        description='Repository that has Terraform Scripts eg: https://github.com/acme/acme.git'\n    )\n    branch: str = Field(\n        title='Git Repository Branch',\n        description='Branch name of repository that has Terraform Scripts \\\n            eg: master, dev, feature/multiuser'\n    )\n    dir_path: Optional[str] = Field(\n        title='Directory Path',\n        description='Directory within Repository to run the terraform command \\\n            eg: acme, ./, acme/terrform/main'\n    )\n    command: str = Field(\n        title='Terraform Command',\n        description='Terraform Command to Execute eg: terraform init, terraform \\\n            apply -var=\"instance_type=t3.micro\"'\n    )\n\n\ndef terraform_exec_command(handle, repo, branch, command, dir_path:str=None) -> str:\n    \"\"\"terraform_exec_command Executes the terraform command\n       with any arguments.\n\n       :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :type repo: str\n        :param repo: Repository that has Terraform Scripts.\n\n        :type dir_path: str\n        :param dir_path: Directory within Repository to run the terraform command.\n\n        :type command: str\n        :param command: Terraform Command to Execute.\n\n        :rtype: Str Output of the command .\n    \"\"\"\n    assert command.startswith(\"terraform\")\n    print('WARNING: Please note terraform apply and terraform destroy will be run with \\\n          -auto-approve for non-interactive run')\n\n    # Reason we are doing this instead of setting the default value in InputSchema is\n    # \"\" dont get inserted for the default value.\n    # causing an issue when we drag and drop in jupyter.\n    if dir_path 
is None:\n        dir_path = \"./\"\n\n    output = ''\n    # sanitize inputs that have come from validate\n\n    try:\n        result = handle.sidecar_command(\n            repo,\n            branch,\n            handle.credential_id,\n            dir_path,\n            command,\n            str(\"\")\n            )\n        output = result.data.decode('utf-8')\n        output = json.loads(output)['output']\n    except Exception as e:\n        output = f\"Execution was not successful {e}\"\n\n    return output\n"
  },
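terraform_exec_command decodes the sidecar response and extracts the `output` field from its JSON payload. That post-processing step can be sketched in isolation (the `extract_terraform_output` helper is illustrative, not part of this repo):

```python
import json

def extract_terraform_output(raw: bytes) -> str:
    """Decode the sidecar's JSON payload and pull out the command output,
    falling back to an error string as the Action does."""
    try:
        payload = json.loads(raw.decode("utf-8"))
        return payload["output"]
    except (ValueError, KeyError) as e:
        return f"Execution was not successful {e}"

print(extract_terraform_output(b'{"output": "Terraform has been successfully initialized!"}'))
```

Catching `ValueError` covers `json.JSONDecodeError` (its subclass), so malformed payloads degrade to an error message instead of raising.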
  {
    "path": "Terraform/legos/terraform_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get terraform handle</h1>\r\n\r\n## Description\r\nThis Lego returns terraform handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    terraform_get_handle(handle)\r\n\r\n        handle: Object of type unSkript TERRAFORM Connector\r\n        \r\n\r\n\r\n## Lego Input\r\nThis Lego take only one input handle. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Terraform/legos/terraform_get_handle/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Terraform/legos/terraform_get_handle/terraform_get_handle.json",
    "content": "{\n\"action_title\": \"Get terraform handle\",\n\"action_description\": \"Get terraform handle\",\n\"action_type\": \"LEGO_TYPE_TERRAFORM\",\n\"action_entry_function\": \"terraform_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_supports_iteration\": false\n}\n"
  },
  {
    "path": "Terraform/legos/terraform_get_handle/terraform_get_handle.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef terraform_get_handle(handle):\n    \"\"\"\n    terraform_get_handle returns the terraform handle.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n        :rtype: terraform handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Vault/__init__.py",
    "content": ""
  },
  {
    "path": "Vault/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Vault/legos/vault_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \n<h2>Get Vault Handle</h2>\n\n<br>\n\n## Description\nThis Lego Get Vault Handle.\n\n\n## Lego Details\n\n    vault_get_handle(handle: object)\n\n        handle: Object of type unSkript Vault Connector\n\n## Lego Input\nThis Lego take one input handle.\n\n## Lego Output\nHere is a sample output.\n\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Vault/legos/vault_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "Vault/legos/vault_get_handle/vault_get_handle.json",
    "content": "{\n    \"action_title\": \"Vault Get Handle\",\n    \"action_description\": \"Get Vault Handle\",\n    \"action_type\": \"LEGO_TYPE_VAULT\",\n    \"action_entry_function\": \"vault_get_handle\",\n    \"action_needs_credential\": true,\n    \"action_supports_poll\": false,\n    \"action_supports_iteration\": false,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\"\n}\n    "
  },
  {
    "path": "Vault/legos/vault_get_handle/vault_get_handle.py",
    "content": "##\n##  Copyright (c) 2023 unSkript, Inc\n##  All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef vault_get_handle(handle):\n    \"\"\"vault_get_handle returns the Vault handle.\n\n          :rtype: Vault handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "Vault/legos/vault_get_service_health/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">]\n(https://unskript.com/assets/favicon.png)\n<h1>Get Vault service health</h1>\n\n## Description\nFetches the health of the Vault service by using hvac's sys/health call.\n\n## Lego Details\n\tvault_get_service_health fetches the health of the Vault service by using hvac's sys/health call.\n    handle: Handle containing the Vault instance.\n\n\n## Lego Input\nThis Lego takes inputs handle.\n\n## Lego Output\nHere is a sample output.\n<img src=\"./1.png\">\n\n## See it in Action\n\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Vault/legos/vault_get_service_health/__init__.py",
    "content": ""
  },
  {
    "path": "Vault/legos/vault_get_service_health/vault_get_service_health.json",
    "content": "{\n    \"action_title\": \"Get Vault service health\",\n    \"action_description\": \"Fetches the health of the Vault service by using hvac's sys/health call.\",\n    \"action_type\": \"LEGO_TYPE_VAULT\",\n    \"action_entry_function\": \"vault_get_service_health\",\n    \"action_needs_credential\": true,\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_LIST\",\n    \"action_is_check\": true,\n    \"action_next_hop\": [\n      \"\"\n    ],\n    \"action_next_hop_parameter_mapping\": {},\n    \"action_supports_iteration\": true,\n    \"action_supports_poll\": true,\n    \"action_categories\":[\"CATEGORY_TYPE_SECOPS\",\"LEGO_TYPE_VAULT\"]\n  }"
  },
  {
    "path": "Vault/legos/vault_get_service_health/vault_get_service_health.py",
    "content": "from typing import Tuple\nimport hvac\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\ndef vault_get_service_health_printer(output):\n    is_healthy, errors = output\n\n    if is_healthy:\n        print(\"Vault Service is Healthy.\")\n    else:\n        print(\"Vault Service is Unhealthy.\")\n        if errors:\n            print(\"\\nErrors:\")\n            for error in errors:\n                print(f\"  - {error}\")\n\ndef vault_get_service_health(handle) -> Tuple:\n    \"\"\"\n    vault_get_service_health fetches the health of the Vault service by using hvac's sys/health call.\n\n    :type handle: object\n    :param handle: Handle containing the Vault instance.\n\n    :rtype: Tuple indicating if the service is healthy and an error message (or None if healthy).\n    \"\"\"\n    try:\n        health_data = handle.sys.read_health_status(method='GET')\n\n        # Health check is successful if Vault is initialized, not in standby, and unsealed\n        if health_data[\"initialized\"] and not health_data[\"standby\"] and not health_data[\"sealed\"]:\n            return (True, None)\n        else:\n            error_msg = []\n            if not health_data[\"initialized\"]:\n                error_msg.append({\"message\":\"Vault is not initialized.\"})\n            if health_data[\"standby\"]:\n                error_msg.append({\"message\":\"Vault is in standby mode.\"})\n            if health_data[\"sealed\"]:\n                error_msg.append({\"message\": \"Vault is sealed.\"})\n            return (False, error_msg)\n\n    except Exception as e:\n        raise e\n\n"
  },
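The health rule in vault_get_service_health (healthy only when Vault is initialized, unsealed, and not a standby node) can be exercised without a live Vault server by feeding it a plain dict, as in this sketch (the `evaluate_vault_health` helper is illustrative, not part of this repo):

```python
def evaluate_vault_health(health_data: dict):
    """Return (is_healthy, errors) using the same rule as the Action:
    healthy iff initialized, not standby, and not sealed."""
    errors = []
    if not health_data.get("initialized"):
        errors.append({"message": "Vault is not initialized."})
    if health_data.get("standby"):
        errors.append({"message": "Vault is in standby mode."})
    if health_data.get("sealed"):
        errors.append({"message": "Vault is sealed."})
    return (True, None) if not errors else (False, errors)

print(evaluate_vault_health({"initialized": True, "standby": False, "sealed": True}))
```

Separating the rule from the hvac call makes the check unit-testable; the Action itself gets `health_data` from `handle.sys.read_health_status(method='GET')`.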
  {
    "path": "Zabbix/README.md",
    "content": "\n# Zabbix Actions\n* [Get Zabbix Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Zabbix/legos/zabbix_get_handle/README.md): Get Zabbix Handle\n"
  },
  {
    "path": "Zabbix/__init__.py",
    "content": ""
  },
  {
    "path": "Zabbix/legos/__init__.py",
    "content": ""
  },
  {
    "path": "Zabbix/legos/zabbix_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Get Zabbix Handle</h1>\r\n\r\n## Description\r\nThis Lego Returns Zabbix Handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    zabbix_get_handle(handle):\r\n\r\n        handle: Object of type unSkript ZABBIX Connector\r\n        \r\n\r\n\r\n## Lego Input\r\nThis Lego take only One input handle. \r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "Zabbix/legos/zabbix_get_handle/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\n#\n#"
  },
  {
    "path": "Zabbix/legos/zabbix_get_handle/zabbix_get_handle.json",
    "content": "{\n\"action_title\": \"Get Zabbix Handle\",\n\"action_description\": \"Get Zabbix Handle\",\n\"action_type\": \"LEGO_TYPE_ZABBIX\",\n\"action_entry_function\": \"zabbix_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": false,\n\"action_supports_iteration\": false,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_ZABBIX\"]\n}\n"
  },
  {
    "path": "Zabbix/legos/zabbix_get_handle/zabbix_get_handle.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef zabbix_get_handle(handle):\n    \"\"\"zabbix_get_handle returns the Zabbix handle.\n\n        :type handle: object\n        :param handle: Object returned from task.validate(...).\n\n\n       :rtype: Zabbix Handle\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "_config.yml",
    "content": "remote_theme: pages-themes/minimal@v0.2.0\ntitle: Runbooks.sh\nlogo: https://i.ibb.co/s6RD5zS/logo-runbooks-4.png\ndescription: Empowering Cloud Automation, Together.\nshow_downloads: true\nplugins:\n  - jekyll-relative-links\n  - jekyll-remote-theme\nrelative_links:\n  enabled: true\n  collections: true\ninclude:\n  - CONTRIBUTING.md\n  - README.md\n  - LICENSE.md\n  - COPYING.md\n  - CODE_OF_CONDUCT.md\n  - CONTRIBUTING.md\n  - ISSUE_TEMPLATE.md\n  - PULL_REQUEST_TEMPLATE.md\n\n# Google Analytics\ngoogle_analytics: UA-237883650-1"
  },
  {
    "path": "all_modules_test.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n#\nimport glob\nimport os\nimport importlib\n\ndef get_py_files() -> list:\n    \"\"\" get_py_files finds out all the python files under each `CONNECTOR/legos`\n        directory and returns this a s list. We use glob.glob() to search through\n        all Connectors using the wildcard `**`. The file list is then filtered\n        to exclude __init__.py and return the actual python files.\n\n        :rtype: list, the list of python files\n    \"\"\"\n    f = glob.glob('./**/legos/*/*.py')\n    f = [x for x in f if os.path.basename(x) != '__init__.py']\n    return f\n\ndef test_if_importable(files: list) -> bool:\n    \"\"\" test_if_importable is a function that just does what it says. For the given\n        python file, it does a equivalent of `from <CONNECTOR>.legos.<LEGO_DIR>.<LEGO_NAME> import *`\n        if there was any mistake in the code, the import will catch it and let us know\n        any syntactical issues. This code does not check any business logic. 
Only does\n        make sure that syntactical errors are not introduced.\n\n        :type files: list\n        :param files: List of python files that need to be checked for imports\n\n        :rtype: bool, True if importable, False if files is empty or not a list.\n                Exception in case not able to import the file\n    \"\"\"\n    if not files or not isinstance(files, list):\n        return False \n    \n    print(f\"Total number of file: {len(files)}\")\n    for f in files:\n        print(f\"Processing {f} ...\")\n        # Remove Leading `./`\n        f = f.replace('./', '')\n        # Replace `/` with `.`\n        f = f.replace('/', '.')\n        # Remove trailing `.py`\n        f = f.replace('.py', '')\n        try:\n            module = importlib.import_module(f)\n            globals().update(vars(module))\n        except ValueError as e:\n            print(f\"ERROR IMPORTING: {f}\")\n            raise e\n    return True\n\nif __name__ == '__main__':\n    files = get_py_files()\n    result = test_if_importable(files)\n    if result:\n        print(f\"Success: All python files import cleanly\")\n    else:\n        print(f\"ERROR. Issue with importing some libraries, check console output\")\n"
  },
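test_if_importable turns each file path into a dotted module path before handing it to `importlib.import_module`. That conversion can be sketched as a small helper (hypothetical; the script above inlines the same string surgery):

```python
def path_to_module(path: str) -> str:
    """Convert './CONNECTOR/legos/<dir>/<name>.py' into the dotted
    module path that importlib.import_module expects."""
    module = path
    if module.startswith("./"):
        module = module[2:]          # drop the leading './'
    if module.endswith(".py"):
        module = module[:-3]         # drop the trailing '.py'
    return module.replace("/", ".")  # directory separators become dots

print(path_to_module("./Terraform/legos/terraform_get_handle/terraform_get_handle.py"))
```

Unlike a blanket `str.replace('.py', '')`, trimming only the suffix avoids mangling a path that happens to contain `.py` elsewhere in a directory or file name.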
  {
    "path": "bin/add_creds.sh",
    "content": "#!/bin/bash\n\n# Add creds script\n#     This script can be used to add credentials.\n\nif test -f \"/usr/local/bin/add_creds.py\"; then\n\tcd /usr/local/bin\n\t/usr/bin/env python /usr/local/bin/add_creds.py \"$@\"\nelse\n\tTOPDIR=$(git rev-parse --show-toplevel)\n    \tAWESOME_DIRECTORY=Awesome-CloudOps-Automation\n\t/usr/bin/env python $TOPDIR/$AWESOME_DIRECTORY/unskript-ctl/add_creds.py \"$@\"\nfi\n"
  },
  {
    "path": "bin/unskript-add-check.sh",
    "content": "#!/bin/bash\n\n# Add check script\n#     This script can be used to create new checks.\n\nTOPDIR=$(git rev-parse --show-toplevel)\nAWESOME_DIRECTORY=Awesome-CloudOps-Automation\n/usr/bin/env python $TOPDIR/$AWESOME_DIRECTORY/unskript-ctl/unskript-add-check.py \"$@\"\n"
  },
  {
    "path": "build/templates/Dockerfile.template",
    "content": "FROM unskript/awesome-runbooks:latest as base\nCOPY custom/actions/. /tmp/custom/actions/\nCOPY custom/runbooks/. /tmp/custom/runbooks/\n\n# Copy the unskript_ctl_config.yaml file.\n# Uncomment the below line to copy it to the docker.\n\n#COPY unskript_ctl_config.yaml /etc/unskript/unskript_ctl_config.yaml\n\nCMD [\"./start.sh\"]\n"
  },
  {
    "path": "build/templates/GetStarted.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"attachments\": {},\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Welcome\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Welcome\"\n   },\n   \"source\": [\n    \"\\n\",\n    \"<br />\\n\",\n    \"<p align=\\\"center\\\">\\n\",\n    \"  <a href=\\\"https://github.com/unskript/Awesome-CloudOps-Automation\\\">\\n\",\n    \"    <img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"Logo\\\" width=\\\"80\\\" height=\\\"80\\\">\\n\",\n    \"  </a>\\n\",\n    \"<p align=\\\"center\\\">\\n\",\n    \"  <h3 align=\\\"center\\\">Awesome CloudOps Automation</h3>\\n\",\n    \"  <p align=\\\"center\\\">\\n\",\n    \"    CloudOps automation made simpler!\\n\",\n    \"    <br />\\n\",\n    \"    <a href=\\\"http://docs.unskript.com/\\\"><strong>Explore the docs »</strong></a>\\n\",\n    \"    <br />\\n\",\n    \"    <br />\\n\",\n    \"    <a href=\\\"https://unskript.com/blog/\\\">Visit our blog</a>\\n\",\n    \"    ·\\n\",\n    \"    <a href=\\\"https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&template=bug_report.md&title=\\\">Report Bug</a>\\n\",\n    \"    ·\\n\",\n    \"    <a href=\\\"https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&template=feature_request.md&title=\\\">Request Feature</a>\\n\",\n    \"  </p>\\n\",\n    \"</p>\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# Welcome\\n\",\n    \"## Getting Started\\n\",\n    \"\\n\",\n    \"Use this Runbook as a starting point to build your own custom runbook. 
Follow the Links below for Documentation.\\n\",\n    \"\\n\",\n    \"## Documentation\\n\",\n    \"\\n\",\n    \"Documentation can be found [Here](https://unskript.gitbook.io/unskript-product-documentation/open-source/cloudops-automation-with-unskript).\\n\",\n    \"\\n\",\n    \"## Community\\n\",\n    \"[Join the CloudOps Community Workspace](https://join.slack.com/t/cloud-ops-community/shared_invite/zt-1fvuobp10-~r_KyK9BxPhGiebOvl3h_w) on Slack to connect with other users, contributors and awesome people behind awesome CloudOps automation project. \\n\",\n    \"\\n\",\n    \"## Roadmap\\n\",\n    \"\\n\",\n    \"See the [open issues](https://github.com/unskript/awesome-cloudops-automation/issues) for a list of proposed features (and known issues).\\n\",\n    \"\\n\",\n    \"## Contributing\\n\",\n    \"\\n\",\n    \"Contributions are what make the open community such an amazing place to be learn, inspire, and create. Any contributions you make are **greatly appreciated**. Check out our [Contribution Guidelines](https://github.com/unskript/Awesome-CloudOps-Automation/blob/master/.github/CONTRIBUTING.md) for more details. \\n\",\n    \"\\n\",\n    \"Here is the Link for the [Developer Guide](https://github.com/unskript/Awesome-CloudOps-Automation/blob/master/.github/DEVELOPERGUIDE.md)\\n\",\n    \"\\n\",\n    \"## Star us\\n\",\n    \"\\n\",\n    \"If you like this project, Please consider giving us a **star** at [Awesome CloudOps Automation](https://github.com/unskript/Awesome-CloudOps-Automation)\\n\",\n    \"\\n\",\n    \"## License\\n\",\n    \"Except as otherwise noted this project is licensed under the `Apache License, Version 2.0` .\\n\",\n    \"\\n\",\n    \"Licensed under the Apache License, Version 2.0 (the \\\"License\\\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 .\\n\",\n    \"\\n\",\n    \"Unless required by applicable law or agreed to in writing, project distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"[contributors-shield]: https://img.shields.io/github/contributors/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[contributors-url]: https://github.com/unskript/awesome-cloudops-automation/graphs/contributors\\n\",\n    \"[github-actions-shield]: https://img.shields.io/github/workflow/status/unskript/awesome-cloudops-automation/e2e%20test?color=orange&label=e2e-test&logo=github&logoColor=orange&style=for-the-badge\\n\",\n    \"[github-actions-url]: https://github.com/unskript/awesome-cloudops-automation/actions/workflows/docker-tests.yml\\n\",\n    \"[forks-shield]: https://img.shields.io/github/forks/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[forks-url]: https://github.com/unskript/awesome-cloudops-automation/network/members\\n\",\n    \"[stars-shield]: https://img.shields.io/github/stars/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[stars-url]: https://github.com/unskript/awesome-cloudops-automation/stargazers\\n\",\n    \"[issues-shield]: https://img.shields.io/github/issues/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[issues-url]: https://github.com/unskript/awesome-cloudops-automation/issues\\n\",\n    \"[twitter-shield]: https://img.shields.io/badge/-Twitter-black.svg?style=for-the-badge&logo=twitter&colorB=555\\n\",\n    \"[twitter-url]: https://twitter.com/unskript\\n\",\n    \"[awesome-shield]: https://img.shields.io/badge/awesome-cloudops-orange?style=for-the-badge&logo=bookstack 
\\n\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.8.2 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"name\": \"python\",\n   \"version\": \"3.8.2\"\n  },\n  \"orig_nbformat\": 4,\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 2\n}\n"
  },
  {
    "path": "build/templates/Makefile.extend-docker.template",
    "content": "#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\n.PHONY: build\n\n# ACTIONS DIRECTORY\nACTION_DIRECTORY = actions\n\n# RUNBOOKS DIRECTORY\nRUNBOOK_DIRECTORY = runbooks\n\n# AWESOME DIRECTORY\nAWESOME_DIRECTORY = Awesome-CloudOps-Automation\n\n# CUSTOM DIRECTORY\nCUSTOM_DIRECTORY = custom\n\n# Set default values\nCUSTOM_DOCKER_NAME ?= my-awesome-docker\nCUSTOM_DOCKER_VERSION ?= 0.1.0\n\ncopy:\n\t@echo \"Copying Docker file\"\n\t@cp $(AWESOME_DIRECTORY)/build/templates/Dockerfile.template Dockerfile\n\npre-build:\n\t@echo \"Preparing to create custom Docker build\"\n\tif [ ! -d \"$(ACTION_DIRECTORY)\" ]; then \\\n\t  echo \"Actions directory does not exist; it is needed to build the custom Docker image\"; \\\n\t  exit 1; \\\n\tfi\n\tif [ ! -d \"$(RUNBOOK_DIRECTORY)\" ]; then \\\n\t  echo \"Runbooks directory does not exist; it is needed to build the custom Docker image\"; \\\n\t  exit 1; \\\n\tfi\n\tif [ ! -d \"$(AWESOME_DIRECTORY)\" ]; then \\\n\t  echo \"Awesome-CloudOps-Automation directory does not exist; it is needed to build the custom Docker image\"; \\\n\t  exit 1; \\\n\tfi\n\t@echo \"Verified all pre-requisites are met, proceeding to build custom Docker image\"\n\t@mkdir -p $(CUSTOM_DIRECTORY)\n\t@cp -Rf $(ACTION_DIRECTORY) $(CUSTOM_DIRECTORY)\n\t@cp -Rf $(RUNBOOK_DIRECTORY) $(CUSTOM_DIRECTORY)\n\n\nbuild:\tpre-build\n\t@echo \"Using \\n Custom Docker Name: $(CUSTOM_DOCKER_NAME) Custom Docker Version: $(CUSTOM_DOCKER_VERSION)\"\n\t@docker build -t $(CUSTOM_DOCKER_NAME):$(CUSTOM_DOCKER_VERSION) -f Dockerfile .\n\nclean:\n\t@echo \"Cleaning up the directories\"\n\t@rm -rf $(CUSTOM_DIRECTORY)\n"
  },
  {
    "path": "build/templates/Welcome.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"metadata\": {\n    \"deletable\": false,\n    \"editable\": false,\n    \"jupyter\": {\n     \"source_hidden\": true\n    },\n    \"orderProperties\": [],\n    \"tags\": [\n     \"unSkript:nbParam\"\n    ],\n    \"title\": \"unSkript Internal\",\n    \"credentialsJson\": {}\n   },\n   \"outputs\": {},\n   \"source\": [\n    \"import json\\n\",\n    \"from unskript import nbparams\\n\",\n    \"from unskript.fwk.workflow import Task, Workflow\\n\",\n    \"from unskript.secrets import ENV_MODE, ENV_MODE_LOCAL\\n\",\n    \"\\n\",\n    \"env = {\\\"ENV_MODE\\\": \\\"ENV_MODE_LOCAL\\\"}\\n\",\n    \"secret_store_cfg = {\\\"SECRET_STORE_TYPE\\\": \\\"SECRET_STORE_TYPE_LOCAL\\\"}\\n\",\n    \"\\n\",\n    \"paramDict = {}\\n\",\n    \"unSkriptOutputParamDict = {}\\n\",\n    \"paramDict.update(env)\\n\",\n    \"paramDict.update(secret_store_cfg)\\n\",\n    \"paramsJson = json.dumps(paramDict)\\n\",\n    \"nbParamsObj = nbparams.NBParams(paramsJson)\\n\",\n    \"w = Workflow(env, secret_store_cfg, None, global_vars=globals())\"\n   ],\n   \"output\": {}\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Welcome\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Welcome\"\n   },\n   \"source\": [\n    \"\\n\",\n    \"<br />\\n\",\n    \"<p align=\\\"center\\\">\\n\",\n    \"  <a href=\\\"https://github.com/unskript/Awesome-CloudOps-Automation\\\">\\n\",\n    \"    <img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"Logo\\\" width=\\\"80\\\" height=\\\"80\\\">\\n\",\n    \"  </a>\\n\",\n    \"<p align=\\\"center\\\">\\n\",\n    \"  <h3 align=\\\"center\\\">Awesome CloudOps Automation</h3>\\n\",\n    \"  <p align=\\\"center\\\">\\n\",\n    \"    CloudOps automation made simpler!\\n\",\n    \"    <br />\\n\",\n    \"    <a 
href=\\\"http://docs.unskript.com/\\\"><strong>Explore the docs \\u00bb</strong></a>\\n\",\n    \"    <br />\\n\",\n    \"    <br />\\n\",\n    \"    <a href=\\\"https://unskript.com/blog/\\\">Visit our blog</a>\\n\",\n    \"    \\u00b7\\n\",\n    \"    <a href=\\\"https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&template=bug_report.md&title=\\\">Report Bug</a>\\n\",\n    \"    \\u00b7\\n\",\n    \"    <a href=\\\"https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&template=feature_request.md&title=\\\">Request Feature</a>\\n\",\n    \"  </p>\\n\",\n    \"</p>\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# About the project\\n\",\n    \"\\n\",\n    \"## Mission\\n\",\n    \"To make CloudOps automation simpler for developers and DevOps engineers. \\n\",\n    \"\\n\",\n    \"## Vision \\n\",\n    \"A single repository to satisfy all your day-to-day CloudOps automation needs. Are you looking for a script to automate your Kubernetes management? \\n\",\n    \"Or do you need a script to restart the pod that is OOMkilled? We will cover that for you. 
\\n\",\n    \"\\n\",\n    \"## Runbooks\\n\",\n    \"\\n\",\n    \"| Runbook Title                                              | Runbook URL                                                                                                  |\\n\",\n    \"| ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |\\n\",\n    \"| Welcome                                                    | http://127.0.0.1:8888/lab/tree/Welcome.ipynb                                                                 |\\n\",\n    \"| AWS Access Key Rotation                                    | http://127.0.0.1:8888/lab/tree/unskript/aws/AWS_Access_Key_Rotation.ipynb                                    |\\n\",\n    \"| AWS Add Mandatory tags to EC2                              | http://127.0.0.1:8888/lab/tree/unskript/aws/AWS_Add_Mandatory_tags_to_EC2.ipynb                              |\\n\",\n    \"| Add new IAM user                                           | http://127.0.0.1:8888/lab/tree/unskript/aws/Add_new_IAM_user.ipynb                                           |\\n\",\n    \"| Change AWS EBS Volume To GP3 Type                          | http://127.0.0.1:8888/lab/tree/unskript/aws/Change_AWS_EBS_Volume_To_GP3_Type.ipynb                          |\\n\",\n    \"| Change AWS Route53 TTL                                     | http://127.0.0.1:8888/lab/tree/unskript/aws/Change_AWS_Route53_TTL.ipynb                                     |\\n\",\n    \"| Configure url endpoint on a cloudwatch alarm               | http://127.0.0.1:8888/lab/tree/unskript/aws/Configure_url_endpoint_on_a_cloudwatch_alarm.ipynb               |\\n\",\n    \"| Copy ami to all given AWS regions                          | http://127.0.0.1:8888/lab/tree/unskript/aws/Copy_ami_to_all_given_AWS_regions.ipynb                          |\\n\",\n    \"| Create IAM User with policy                                | 
http://127.0.0.1:8888/lab/tree/unskript/aws/Create_IAM_User_with_policy.ipynb                                |\\n\",\n    \"| Delete EBS Volumes With Low Usage                          | http://127.0.0.1:8888/lab/tree/unskript/aws/Delete_EBS_Volumes_With_Low_Usage.ipynb                          |\\n\",\n    \"| Delete Old EBS Snapshots                                   | http://127.0.0.1:8888/lab/tree/unskript/aws/Delete_Old_EBS_Snapshots.ipynb                                   |\\n\",\n    \"| Delete Unattached EBS Volume                               | http://127.0.0.1:8888/lab/tree/unskript/aws/Delete_Unattached_EBS_Volume.ipynb                               |\\n\",\n    \"| Delete Unused AWS Log Streams                              | http://127.0.0.1:8888/lab/tree/unskript/aws/Delete_Unused_AWS_Log_Streams.ipynb                              |\\n\",\n    \"| Delete Unused AWS NAT Gateways                             | http://127.0.0.1:8888/lab/tree/unskript/aws/Delete_Unused_AWS_NAT_Gateways.ipynb                             |\\n\",\n    \"| Delete Unused AWS Secrets                                  | http://127.0.0.1:8888/lab/tree/unskript/aws/Delete_Unused_AWS_Secrets.ipynb                                  |\\n\",\n    \"| Detach Instance from ASG                                   | http://127.0.0.1:8888/lab/tree/unskript/aws/Detach_Instance_from_ASG.ipynb                                   |\\n\",\n    \"| Detach ec2 Instance from ASG                               | http://127.0.0.1:8888/lab/tree/unskript/aws/Detach_ec2_Instance_from_ASG.ipynb                               |\\n\",\n    \"| Detect ECS failed deployment                               | http://127.0.0.1:8888/lab/tree/unskript/aws/Detect_ECS_failed_deployment.ipynb                               |\\n\",\n    \"| EC2 Disk Cleanup                                           | http://127.0.0.1:8888/lab/tree/unskript/aws/EC2_Disk_Cleanup.ipynb                                           |\\n\",\n    \"| 
Enforce HTTP Redirection across AWS ALB                    | http://127.0.0.1:8888/lab/tree/unskript/aws/Enforce_HTTP_Redirection_across_AWS_ALB.ipynb                    |\\n\",\n    \"| Enforce Mandatory Tags Across All AWS Resources            | http://127.0.0.1:8888/lab/tree/unskript/aws/Enforce_Mandatory_Tags_Across_All_AWS_Resources.ipynb            |\\n\",\n    \"| Find EC2 Instances Scheduled to retire                     | http://127.0.0.1:8888/lab/tree/unskript/aws/Find_EC2_Instances_Scheduled_to_retire.ipynb                     |\\n\",\n    \"| Get Aws Elb Unhealthy Instances                            | http://127.0.0.1:8888/lab/tree/unskript/aws/Get_Aws_Elb_Unhealthy_Instances.ipynb                            |\\n\",\n    \"| IAM security least privilege                               | http://127.0.0.1:8888/lab/tree/unskript/aws/IAM_security_least_privilege.ipynb                               |\\n\",\n    \"| Lowering AWS CloudTrail Costs by Removing Redundant Trails | http://127.0.0.1:8888/lab/tree/unskript/aws/Lowering_AWS_CloudTrail_Costs_by_Removing_Redundant_Trails.ipynb |\\n\",\n    \"| Monitor AWS DynamoDB provision capacity                    | http://127.0.0.1:8888/lab/tree/unskript/aws/Monitor_AWS_DynamoDB_provision_capacity.ipynb                    |\\n\",\n    \"| Notify about unused keypairs                               | http://127.0.0.1:8888/lab/tree/unskript/aws/Notify_about_unused_keypairs.ipynb                               |\\n\",\n    \"| Publicly Accessible Amazon RDS Instances                   | http://127.0.0.1:8888/lab/tree/unskript/aws/Publicly_Accessible_Amazon_RDS_Instances.ipynb                   |\\n\",\n    \"| Release Unattached AWS Elastic IPs                         | http://127.0.0.1:8888/lab/tree/unskript/aws/Release_Unattached_AWS_Elastic_IPs.ipynb                         |\\n\",\n    \"| Remediate unencrypted S3 buckets                           | 
http://127.0.0.1:8888/lab/tree/unskript/aws/Remediate_unencrypted_S3_buckets.ipynb                           |\\n\",\n    \"| Renew SSL Certificate                                      | http://127.0.0.1:8888/lab/tree/unskript/aws/Renew_SSL_Certificate.ipynb                                      |\\n\",\n    \"| Resize EBS Volume                                          | http://127.0.0.1:8888/lab/tree/unskript/aws/Resize_EBS_Volume.ipynb                                          |\\n\",\n    \"| Resize List Of Pvcs                                        | http://127.0.0.1:8888/lab/tree/unskript/aws/Resize_List_Of_Pvcs.ipynb                                        |\\n\",\n    \"| Resize PVC                                                 | http://127.0.0.1:8888/lab/tree/unskript/aws/Resize_PVC.ipynb                                                 |\\n\",\n    \"| Restart AWS EC2 Instances By Tag                           | http://127.0.0.1:8888/lab/tree/unskript/aws/Restart_AWS_EC2_Instances_By_Tag.ipynb                           |\\n\",\n    \"| Restart Aws Instance given Tag                             | http://127.0.0.1:8888/lab/tree/unskript/aws/Restart_Aws_Instance_given_Tag.ipynb                             |\\n\",\n    \"| Restart Unhealthy Services Target Group                    | http://127.0.0.1:8888/lab/tree/unskript/aws/Restart_Unhealthy_Services_Target_Group.ipynb                    |\\n\",\n    \"| Restrict S3 Buckets with READ WRITE Permissions            | http://127.0.0.1:8888/lab/tree/unskript/aws/Restrict_S3_Buckets_with_READ_WRITE_Permissions.ipynb            |\\n\",\n    \"| Run EC2 from AMI                                           | http://127.0.0.1:8888/lab/tree/unskript/aws/Run_EC2_from_AMI.ipynb                                           |\\n\",\n    \"| Secure Publicly accessible Amazon RDS Snapshot             | http://127.0.0.1:8888/lab/tree/unskript/aws/Secure_Publicly_accessible_Amazon_RDS_Snapshot.ipynb             |\\n\",\n    \"| 
Stop Untagged EC2 Instances                                | http://127.0.0.1:8888/lab/tree/unskript/aws/Stop_Untagged_EC2_Instances.ipynb                                |\\n\",\n    \"| Terminate EC2 Instances Without Valid Lifetime Tag         | http://127.0.0.1:8888/lab/tree/unskript/aws/Terminate_EC2_Instances_Without_Valid_Lifetime_Tag.ipynb         |\\n\",\n    \"| Troubleshooting Your EC2 Configuration in Private Subnet   | http://127.0.0.1:8888/lab/tree/unskript/aws/Troubleshooting_Your_EC2_Configuration_in_Private_Subnet.ipynb   |\\n\",\n    \"| Update and Manage AWS User Permission                      | http://127.0.0.1:8888/lab/tree/unskript/aws/Update_and_Manage_AWS_User_Permission.ipynb                      |\\n\",\n    \"| aws redshift get daily product costs                       | http://127.0.0.1:8888/lab/tree/unskript/aws/aws_redshift_get_daily_product_costs.ipynb                       |\\n\",\n    \"| aws redshift get ec2 daily costs                           | http://127.0.0.1:8888/lab/tree/unskript/aws/aws_redshift_get_ec2_daily_costs.ipynb                           |\\n\",\n    \"| aws redshift update database                               | http://127.0.0.1:8888/lab/tree/unskript/aws/aws_redshift_update_database.ipynb                               |\\n\",\n    \"| delete iam user                                            | http://127.0.0.1:8888/lab/tree/unskript/aws/delete_iam_user.ipynb                                            |\\n\",\n    \"| Elasticsearch Rolling Restart                              | http://127.0.0.1:8888/lab/tree/unskript/elasticsearch/Elasticsearch_Rolling_Restart.ipynb                    |\\n\",\n    \"| Fetch Jenkins Build Logs                                   | http://127.0.0.1:8888/lab/tree/unskript/jenkins/Fetch_Jenkins_Build_Logs.ipynb                               |\\n\",\n    \"| jira visualize time to resolution                          | 
http://127.0.0.1:8888/lab/tree/unskript/jira/jira_visualize_time_to_resolution.ipynb                         |\\n\",\n    \"| Delete Evicted Pods From Namespaces                        | http://127.0.0.1:8888/lab/tree/unskript/kubernetes/Delete_Evicted_Pods_From_Namespaces.ipynb                 |\\n\",\n    \"| Get Kube System Config Map                                 | http://127.0.0.1:8888/lab/tree/unskript/kubernetes/Get_Kube_System_Config_Map.ipynb                          |\\n\",\n    \"| K8S Get Candidate Nodes Given Config                       | http://127.0.0.1:8888/lab/tree/unskript/kubernetes/K8S_Get_Candidate_Nodes_Given_Config.ipynb                |\\n\",\n    \"| K8S Log Healthcheck                                        | http://127.0.0.1:8888/lab/tree/unskript/kubernetes/K8S_Log_Healthcheck.ipynb                                 |\\n\",\n    \"| K8S Pod Stuck In CrashLoopBack State                       | http://127.0.0.1:8888/lab/tree/unskript/kubernetes/K8S_Pod_Stuck_In_CrashLoopBack_State.ipynb                |\\n\",\n    \"| K8S Pod Stuck In ImagePullBackOff State                    | http://127.0.0.1:8888/lab/tree/unskript/kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State.ipynb             |\\n\",\n    \"| K8S Pod Stuck In Terminating State                         | http://127.0.0.1:8888/lab/tree/unskript/kubernetes/K8S_Pod_Stuck_In_Terminating_State.ipynb                  |\\n\",\n    \"| Resize List of PVCs                                        | http://127.0.0.1:8888/lab/tree/unskript/kubernetes/Resize_List_of_PVCs.ipynb                                 |\\n\",\n    \"| Resize PVC                                                 | http://127.0.0.1:8888/lab/tree/unskript/kubernetes/Resize_PVC.ipynb                                          |\\n\",\n    \"| Rollback k8s Deployment and Update Jira                    | http://127.0.0.1:8888/lab/tree/unskript/kubernetes/Rollback_k8s_Deployment_and_Update_Jira.ipynb             |\\n\",\n    \"| 
Display Postgresql Long Running                            | http://127.0.0.1:8888/lab/tree/unskript/postgresql/Display_Postgresql_Long_Running.ipynb                     |\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"## Persistence\\n\",\n    \"\\n\",\n    \"> Inside the Docker container, the `/data` folder is where the `credentials` and `runbooks` are stored. If you would like to retain your `connectors` and `runbooks`, use Docker's `-v` option to persist the changes made inside the container.\\n\",\n    \"\\n\",\n    \"> Note: New files created inside the container will persist unless the `--rm` option is used.\\n\",\n    \"> \\n\",\n    \"\\n\",\n    \"## Creating your own runbook\\n\",\n    \"\\n\",\n    \"If you would like to create your own `.ipynb` file and start using it, do the following.\\n\",\n    \"\\n\",\n    \"`On your Host machine`:\\n\",\n    \" 1. git clone https://github.com/unskript/Awesome-CloudOps-Automation\\n\",\n    \" 2. cd Awesome-CloudOps-Automation\\n\",\n    \" 3. CONTAINER=`docker ps -l | grep awesome-runbooks | awk '{print $1}'`\\n\",\n    \" 4. docker cp templates/runbooks/StartHere.ipynb $CONTAINER:/home/jovyan/runbooks/<YOUR_RUNBOOK_NAME.ipynb>\\n\",\n    \"\\n\",\n    \"Now simply point your browser to `http://127.0.0.1:8888/lab/tree/<YOUR_RUNBOOK_NAME.ipynb>`. \\n\",\n    \"Voila! You are all set. Start searching for your favorite Actions, drag and drop them into the \\n\",\n    \"cells, attach or add new credentials, and run them.\\n\",\n    \"\\n\",\n    \"## Documentation\\n\",\n    \"\\n\",\n    \"Documentation can be found [here](https://unskript.gitbook.io/unskript-product-documentation/open-source/cloudops-automation-with-unskript).\\n\",\n    \"\\n\",\n    \"## Community\\n\",\n    \"[Join the CloudOps Community Workspace](https://join.slack.com/t/cloud-ops-community/shared_invite/zt-1fvuobp10-~r_KyK9BxPhGiebOvl3h_w) on Slack to connect with other users, contributors, and the awesome people behind the Awesome CloudOps Automation project. 
\\n\",\n    \"\\n\",\n    \"## Roadmap\\n\",\n    \"\\n\",\n    \"See the [open issues](https://github.com/unskript/awesome-cloudops-automation/issues) for a list of proposed features (and known issues).\\n\",\n    \"\\n\",\n    \"## Contributing\\n\",\n    \"\\n\",\n    \"Contributions are what make the open community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. Check out our [Contribution Guidelines](https://github.com/unskript/Awesome-CloudOps-Automation/blob/master/.github/CONTRIBUTING.md) for more details. \\n\",\n    \"\\n\",\n    \"Here is the link to the [Developer Guide](https://github.com/unskript/Awesome-CloudOps-Automation/blob/master/.github/DEVELOPERGUIDE.md).\\n\",\n    \"\\n\",\n    \"## Star us\\n\",\n    \"\\n\",\n    \"If you like this project, please consider giving us a **star** at [Awesome CloudOps Automation](https://github.com/unskript/Awesome-CloudOps-Automation).\\n\",\n    \"\\n\",\n    \"## License\\n\",\n    \"Except as otherwise noted, this project is licensed under the `Apache License, Version 2.0`.\\n\",\n    \"\\n\",\n    \"Licensed under the Apache License, Version 2.0 (the \\\"License\\\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 .\\n\",\n    \"\\n\",\n    \"Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License.\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"[contributors-shield]: https://img.shields.io/github/contributors/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[contributors-url]: https://github.com/unskript/awesome-cloudops-automation/graphs/contributors\\n\",\n    \"[github-actions-shield]: https://img.shields.io/github/workflow/status/unskript/awesome-cloudops-automation/e2e%20test?color=orange&label=e2e-test&logo=github&logoColor=orange&style=for-the-badge\\n\",\n    \"[github-actions-url]: https://github.com/unskript/awesome-cloudops-automation/actions/workflows/docker-tests.yml\\n\",\n    \"[forks-shield]: https://img.shields.io/github/forks/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[forks-url]: https://github.com/unskript/awesome-cloudops-automation/network/members\\n\",\n    \"[stars-shield]: https://img.shields.io/github/stars/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[stars-url]: https://github.com/unskript/awesome-cloudops-automation/stargazers\\n\",\n    \"[issues-shield]: https://img.shields.io/github/issues/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[issues-url]: https://github.com/unskript/awesome-cloudops-automation/issues\\n\",\n    \"[twitter-shield]: https://img.shields.io/badge/-Twitter-black.svg?style=for-the-badge&logo=twitter&colorB=555\\n\",\n    \"[twitter-url]: https://twitter.com/unskript\\n\",\n    \"[awesome-shield]: https://img.shields.io/badge/awesome-cloudops-orange?style=for-the-badge&logo=bookstack \\n\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"unSkript (Build: 1108)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   
\"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 4\n}"
  },
  {
    "path": "build/templates/Welcome_template.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": false\n    },\n    \"name\": \"Welcome\",\n    \"orderProperties\": [],\n    \"tags\": [],\n    \"title\": \"Welcome\"\n   },\n   \"source\": [\n    \"\\n\",\n    \"<br />\\n\",\n    \"<p align=\\\"center\\\">\\n\",\n    \"  <a href=\\\"https://github.com/unskript/Awesome-CloudOps-Automation\\\">\\n\",\n    \"    <img src=\\\"https://storage.googleapis.com/unskript-website/assets/favicon.png\\\" alt=\\\"Logo\\\" width=\\\"80\\\" height=\\\"80\\\">\\n\",\n    \"  </a>\\n\",\n    \"<p align=\\\"center\\\">\\n\",\n    \"  <h3 align=\\\"center\\\">Awesome CloudOps Automation</h3>\\n\",\n    \"  <p align=\\\"center\\\">\\n\",\n    \"    CloudOps automation made simpler!\\n\",\n    \"    <br />\\n\",\n    \"    <a href=\\\"http://docs.unskript.com/\\\"><strong>Explore the docs »</strong></a>\\n\",\n    \"    <br />\\n\",\n    \"    <br />\\n\",\n    \"    <a href=\\\"https://unskript.com/blog/\\\">Visit our blog</a>\\n\",\n    \"    ·\\n\",\n    \"    <a href=\\\"https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&template=bug_report.md&title=\\\">Report Bug</a>\\n\",\n    \"    ·\\n\",\n    \"    <a href=\\\"https://github.com/unskript/Awesome-CloudOps-Automation/issues/new?assignees=&labels=&template=feature_request.md&title=\\\">Request Feature</a>\\n\",\n    \"  </p>\\n\",\n    \"</p>\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# About the project\\n\",\n    \"\\n\",\n    \"## Mission\\n\",\n    \"To make CloudOps automation simpler for developers and DevOps engineers. \\n\",\n    \"\\n\",\n    \"## Vision \\n\",\n    \"A single repository to satisfy all your day-to-day CloudOps automation needs. Are you looking for a script to automate your Kubernetes management? \\n\",\n    \"Or do you need a script to restart the pod that is OOMkilled? We will cover that for you. 
\\n\",\n    \"\\n\",\n    \"## Runbooks\\n\",\n    \"\\n\",\n    \"DYNAMIC_CONTENT\\n\",\n    \"\\n\",\n    \"## Persistence\\n\",\n    \"\\n\",\n    \"> Inside the Docker container, the `/data` folder is where the `credentials` and `runbooks` are stored. If you would like to retain your `connectors` and `runbooks`, use Docker's `-v` option to persist the changes made inside the container.\\n\",\n    \"\\n\",\n    \"> Note: New files created inside the container will persist unless the `--rm` option is used.\\n\",\n    \"> \\n\",\n    \"\\n\",\n    \"## Creating your own runbook\\n\",\n    \"\\n\",\n    \"If you would like to create your own `.ipynb` file and start using it, do the following.\\n\",\n    \"\\n\",\n    \"`On your Host machine`:\\n\",\n    \" 1. git clone https://github.com/unskript/Awesome-CloudOps-Automation\\n\",\n    \" 2. cd Awesome-CloudOps-Automation\\n\",\n    \" 3. CONTAINER=`docker ps -l | grep awesome-runbooks | awk '{print $1}'`\\n\",\n    \" 4. docker cp templates/runbooks/StartHere.ipynb $CONTAINER:/home/jovyan/runbooks/<YOUR_RUNBOOK_NAME.ipynb>\\n\",\n    \"\\n\",\n    \"Now simply point your browser to `http://127.0.0.1:8888/lab/tree/<YOUR_RUNBOOK_NAME.ipynb>`. \\n\",\n    \"Voila! You are all set. Start searching for your favorite Actions, drag and drop them into the \\n\",\n    \"cells, attach or add new credentials, and run them.\\n\",\n    \"\\n\",\n    \"## Documentation\\n\",\n    \"\\n\",\n    \"Documentation can be found [here](https://unskript.gitbook.io/unskript-product-documentation/open-source/cloudops-automation-with-unskript).\\n\",\n    \"\\n\",\n    \"## Community\\n\",\n    \"[Join the CloudOps Community Workspace](https://join.slack.com/t/cloud-ops-community/shared_invite/zt-1fvuobp10-~r_KyK9BxPhGiebOvl3h_w) on Slack to connect with other users, contributors, and the awesome people behind the Awesome CloudOps Automation project. 
\\n\",\n    \"\\n\",\n    \"## Roadmap\\n\",\n    \"\\n\",\n    \"See the [open issues](https://github.com/unskript/awesome-cloudops-automation/issues) for a list of proposed features (and known issues).\\n\",\n    \"\\n\",\n    \"## Contributing\\n\",\n    \"\\n\",\n    \"Contributions are what make the open community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. Check out our [Contribution Guidelines](https://github.com/unskript/Awesome-CloudOps-Automation/blob/master/.github/CONTRIBUTING.md) for more details. \\n\",\n    \"\\n\",\n    \"Here is the link to the [Developer Guide](https://github.com/unskript/Awesome-CloudOps-Automation/blob/master/.github/DEVELOPERGUIDE.md).\\n\",\n    \"\\n\",\n    \"## Star us\\n\",\n    \"\\n\",\n    \"If you like this project, please consider giving us a **star** at [Awesome CloudOps Automation](https://github.com/unskript/Awesome-CloudOps-Automation).\\n\",\n    \"\\n\",\n    \"## License\\n\",\n    \"Except as otherwise noted, this project is licensed under the `Apache License, Version 2.0`.\\n\",\n    \"\\n\",\n    \"Licensed under the Apache License, Version 2.0 (the \\\"License\\\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 .\\n\",\n    \"\\n\",\n    \"Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License.\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"[contributors-shield]: https://img.shields.io/github/contributors/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[contributors-url]: https://github.com/unskript/awesome-cloudops-automation/graphs/contributors\\n\",\n    \"[github-actions-shield]: https://img.shields.io/github/workflow/status/unskript/awesome-cloudops-automation/e2e%20test?color=orange&label=e2e-test&logo=github&logoColor=orange&style=for-the-badge\\n\",\n    \"[github-actions-url]: https://github.com/unskript/awesome-cloudops-automation/actions/workflows/docker-tests.yml\\n\",\n    \"[forks-shield]: https://img.shields.io/github/forks/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[forks-url]: https://github.com/unskript/awesome-cloudops-automation/network/members\\n\",\n    \"[stars-shield]: https://img.shields.io/github/stars/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[stars-url]: https://github.com/unskript/awesome-cloudops-automation/stargazers\\n\",\n    \"[issues-shield]: https://img.shields.io/github/issues/unskript/awesome-cloudops-automation.svg?style=for-the-badge\\n\",\n    \"[issues-url]: https://github.com/unskript/awesome-cloudops-automation/issues\\n\",\n    \"[twitter-shield]: https://img.shields.io/badge/-Twitter-black.svg?style=for-the-badge&logo=twitter&colorB=555\\n\",\n    \"[twitter-url]: https://twitter.com/unskript\\n\",\n    \"[awesome-shield]: https://img.shields.io/badge/awesome-cloudops-orange?style=for-the-badge&logo=bookstack \\n\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Python 3.8.2 64-bit\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"name\": \"python\",\n   \"version\": \"3.8.2\"\n  },\n  \"orig_nbformat\": 4,\n  \"vscode\": {\n   \"interpreter\": {\n    
\"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 2\n}\n"
  },
  {
    "path": "generate_readme.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"latter-teddy\",\n   \"metadata\": {},\n   \"source\": [\n    \"\\n\",\n    \"# Generate Readme with up to date list of xRunBooks\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"421e4c6e-d5ef-4d53-9d36-f352426c4d87\",\n   \"metadata\": {\n    \"tags\": []\n   },\n   \"source\": [\n    \"## Input\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e84f7e80-dda2-4569-96dd-5abaaed2c73a\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Import libraries\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"id\": \"sitting-directory\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"tags\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import os\\n\",\n    \"import json\\n\",\n    \"import requests\\n\",\n    \"import urllib.parse\\n\",\n    \"import copy\\n\",\n    \"from pathlib import Path\\n\",\n    \"import markdown\\n\",\n    \"import nbformat\\n\",\n    \"from nbconvert import MarkdownExporter\\n\",\n    \"from papermill.iorw import (\\n\",\n    \"    load_notebook_node,\\n\",\n    \"    write_ipynb,\\n\",\n    \")\\n\",\n    \"try:\\n\",\n    \"    from git import Repo\\n\",\n    \"except:\\n\",\n    \"    !pip install GitPython\\n\",\n    \"    from git import Repo\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"agricultural-contest\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Variables\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"id\": \"guided-edgar\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"tags\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# README variables\\n\",\n    \"readme_template = \\\"README_template.md\\\"\\n\",\n    \"readme = \\\"README.md\\\"\\n\",\n    \"replace_var = \\\"[[DYNAMIC_LIST]]\\\"\\n\",\n    \"badge_var = \\\"[[BADGE]]\\\"\\n\",\n    \"\\n\",\n    \"# welcome variables\\n\",\n  
  \"#this is a TODO\\n\",\n    \"#welcome_template = \\\"Welcome_template.ipynb\\\"\\n\",\n    \"#welcome = \\\"Welcome.ipynb\\\"\\n\",\n    \"#replace_var_quote = f'\\\"[[DYNAMIC_LIST]]\\\",\\\\n'\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# Others\\n\",\n    \"current_file = '.'\\n\",\n    \"notebook_ext = '.ipynb'\\n\",\n    \"github_url_base = 'https://github.com/unskript/Awesome-CloudOps-Automation/tree/master'\\n\",\n    \"local_OSS_url = 'http://127.0.0.1:8888/lab/tree'\\n\",\n    \"#fix these!\\n\",\n    \"github_download_url = 'https://raw.githubusercontent.com/unskript/Awesome-CloudOps-Automation/master/'\\n\",\n    \"unSkript_logo ='https://storage.googleapis.com/unskript-website/assets/favicon.png'\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"distinguished-declaration\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Get files list\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 21,\n   \"id\": \"36c9011e-5f51-4779-8062-a627503100e1\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"tags\": []\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"AWS Get Resources Missing Tag is missing categories\\n\",\n      \"AWS Get Resources With Expiration Tag is missing categories\\n\",\n      \"AWS Get Resources With Tag is missing categories\\n\",\n      \"Get Datadog Handle is missing categories\\n\",\n      \"Get large Elasticsearch Index size is missing categories\\n\",\n      \"Check Elasticsearch cluster disk size is missing categories\\n\",\n      \"Get GCP Handle is missing categories\\n\",\n      \"Get Github Handle is missing categories\\n\",\n      \"Get Grafana Handle is missing categories\\n\",\n      \"Get Jenkins Handle is missing categories\\n\",\n      \"Get Jira SDK Handle is missing categories\\n\",\n      \"Get Kafka Producer Handle is missing categories\\n\",\n      \"Check K8s services endpoint health is missing 
categories\\n\",\n      \"Get K8s pods status and resource utilization info is missing categories\\n\",\n      \"Get Kubernetes Handle is missing categories\\n\",\n      \"Get memory utilization for K8s services is missing categories\\n\",\n      \"Get MongoDB large databases is missing categories\\n\",\n      \"Get MongoDB Handle is missing categories\\n\",\n      \"Get MS-SQL Handle is missing categories\\n\",\n      \"Get MySQL Handle is missing categories\\n\",\n      \"Netbox Get Handle is missing categories\\n\",\n      \"Nomad Get Handle is missing categories\\n\",\n      \"Get Opsgenie Handle is missing categories\\n\",\n      \"Get Pingdom Handle is missing categories\\n\",\n      \"Get PostgreSQL Handle is missing categories\\n\",\n      \"Get Prometheus handle is missing categories\\n\",\n      \"Get Redis Handle is missing categories\\n\",\n      \"Get REST handle is missing categories\\n\",\n      \"Get SSH handle is missing categories\\n\",\n      \"Create Slack Channel and Invite Users is missing categories\\n\",\n      \"Get Slack SDK Handle is missing categories\\n\",\n      \"Slack Lookup User by Email is missing categories\\n\",\n      \"Slack Send DM is missing categories\\n\",\n      \"Get Stripe Handle is missing categories\\n\",\n      \"Get terraform handle is missing categories\\n\",\n      \"539\\n\",\n      \"\\n\",\n      \"# AWS\\n\",\n      \"* [AWS Start IAM Policy Generation ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/AWS_Start_IAM_Policy_Generation/README.md): Given a region, a CloudTrail ARN (where the logs are being recorded), a reference IAM ARN (whose usage we will parse), and a Service role, this will begin the generation of a IAM policy.  
The output is a String of the generation Id.\\n\",\n      \"* [Add Lifecycle Configuration to AWS S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_add_lifecycle_configuration_to_s3_bucket/README.md): Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration.\\n\",\n      \"* [Apply AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_apply_default_encryption_for_s3_buckets/README.md): Apply AWS Default Encryption for S3 Bucket\\n\",\n      \"* [Attach an EBS volume to an AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_ebs_to_instances/README.md): Attach an EBS volume to an AWS EC2 Instance\\n\",\n      \"* [AWS Attach New Policy to User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_iam_policy/README.md): AWS Attach New Policy to User\\n\",\n      \"* [AWS Attach Tags to Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_tags_to_resources/README.md): AWS Attach Tags to Resources\\n\",\n      \"* [AWS Change ACL Permission of public S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_change_acl_permissions_of_buckets/README.md): AWS Change ACL Permission public S3 Bucket\\n\",\n      \"* [AWS Check if RDS instances are not M5 or T3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_rds_non_m5_t3_instances/README.md): AWS Check if RDS instances are not M5 or T3\\n\",\n      \"* [Check SSL Certificate Expiry](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_ssl_certificate_expiry/README.md): Check ACM SSL Certificate expiry date\\n\",\n      \"* [Attach a webhook endpoint to AWS Cloudwatch 
alarm](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_cloudwatch_attach_webhook_notification_to_alarm/README.md): Attach a webhook endpoint to one of the SNS attached to the AWS Cloudwatch alarm.\\n\",\n      \"* [AWS Create IAM Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_IAMpolicy/README.md): Given an AWS policy (as a string), and the name for the policy, this will create an IAM policy.\\n\",\n      \"* [AWS Create Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_access_key/README.md): Create a new Access Key for the User\\n\",\n      \"* [Create AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_bucket/README.md): Create a new AWS S3 Bucket\\n\",\n      \"* [Create New IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_iam_user/README.md): Create New IAM User\\n\",\n      \"* [AWS Redshift Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_redshift_query/README.md): Make a SQL Query to the given AWS Redshift database\\n\",\n      \"* [Create Login profile for IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_user_login_profile/README.md): Create Login profile for IAM User\\n\",\n      \"* [AWS Create Snapshot For Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_volumes_snapshot/README.md): Create a snapshot for EBS volume of the EC2 Instance for backing up the data stored in EBS\\n\",\n      \"* [AWS Delete Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_access_key/README.md): Delete an Access Key for a User\\n\",\n      \"* [Delete AWS 
Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_bucket/README.md): Delete an AWS S3 Bucket\\n\",\n      \"* [AWS Delete Classic Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_classic_load_balancer/README.md): Delete Classic Elastic Load Balancers\\n\",\n      \"* [AWS Delete EBS Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ebs_snapshot/README.md): Delete EBS Snapshot for an EC2 instance\\n\",\n      \"* [AWS Delete ECS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ecs_cluster/README.md): Delete AWS ECS Cluster\\n\",\n      \"* [AWS Delete Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_load_balancer/README.md): AWS Delete Load Balancer\\n\",\n      \"* [AWS Delete Log Stream](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_log_stream/README.md): AWS Delete Log Stream\\n\",\n      \"* [AWS Delete NAT Gateway](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_nat_gateway/README.md): AWS Delete NAT Gateway\\n\",\n      \"* [AWS Delete RDS Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_rds_instance/README.md): Delete AWS RDS Instance\\n\",\n      \"* [AWS Delete Redshift Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_redshift_cluster/README.md): Delete AWS Redshift Cluster\\n\",\n      \"* [AWS Delete Route 53 HealthCheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_route53_health_check/README.md): AWS Delete Route 53 HealthCheck\\n\",\n      \"* [Delete AWS Default Encryption for S3 
Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_s3_bucket_encryption/README.md): Delete AWS Default Encryption for S3 Bucket\\n\",\n      \"* [AWS Delete Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_secret/README.md): AWS Delete Secret\\n\",\n      \"* [Delete AWS EBS Volume by Volume ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_volume_by_id/README.md): Delete AWS Volume by Volume ID\\n\",\n      \"* [ Deregisters AWS Instances from a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_deregister_instances/README.md):  Deregisters AWS Instances from a Load Balancer\\n\",\n      \"* [AWS Describe Cloudtrails ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_describe_cloudtrail/README.md): Given an AWS Region, this Action returns a Dict with all of the Cloudtrail logs being recorded\\n\",\n      \"* [ Detach as AWS Instance with a Elastic Block Store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_ebs_to_instances/README.md):  Detach as AWS Instance with a Elastic Block Store.\\n\",\n      \"* [AWS Detach Instances From AutoScaling Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_instances_from_autoscaling_group/README.md): Use This Action to AWS Detach Instances From AutoScaling Group\\n\",\n      \"* [EBS Modify Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ebs_modify_volume/README.md): Modify/Resize volume for Elastic Block Storage (EBS).\\n\",\n      \"* [AWS ECS Describe Task Definition.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_describe_task_definition/README.md): Describe AWS ECS Task Definition.\\n\",\n      \"* [ECS detect failed deployment 
](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_detect_failed_deployment/README.md): List of stopped tasks, associated with a deployment, along with their stopped reason\\n\",\n      \"* [Restart AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_service_restart/README.md): Restart an AWS ECS Service\\n\",\n      \"* [Update AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_update_service/README.md): Update AWS ECS Service\\n\",\n      \"* [ Copy EKS Pod logs to bucket.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_copy_pod_logs_to_bucket/README.md):  Copy given EKS pod logs to given S3 Bucket.\\n\",\n      \"* [ Delete EKS POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_delete_pod/README.md):  Delete a EKS POD in a given Namespace\\n\",\n      \"* [List of EKS dead pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_dead_pods/README.md): Get list of all dead pods in a given EKS cluster\\n\",\n      \"* [List of EKS Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_namespaces/README.md): Get list of all Namespaces in a given EKS cluster\\n\",\n      \"* [List of EKS pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_pods/README.md): Get list of all pods in a given EKS cluster\\n\",\n      \"* [ List of EKS deployment for given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_deployments_name/README.md):  Get list of EKS deployment names for given Namespace\\n\",\n      \"* [Get CPU and memory utilization of 
node.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_node_cpu_memory/README.md):  Get CPU and memory utilization of given node.\\n\",\n      \"* [ Get EKS Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_nodes/README.md):  Get EKS Nodes\\n\",\n      \"* [ List of EKS pods not in RUNNING State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_not_running_pods/README.md):  Get list of all pods in a given EKS cluster that are not running.\\n\",\n      \"* [Get pod CPU and Memory usage from given namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_cpu_memory/README.md): Get all pod CPU and Memory usage from given namespace\\n\",\n      \"* [ EKS Get pod status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_status/README.md):  Get a Status of given POD in a given Namespace and EKS cluster name\\n\",\n      \"* [ EKS Get Running Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_running_pods/README.md):  Get a list of running pods from given namespace and EKS cluster name\\n\",\n      \"* [ Run Kubectl commands on EKS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_run_kubectl_cmd/README.md): This action runs a kubectl command on an AWS EKS Cluster\\n\",\n      \"* [Get AWS EMR Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_emr_get_instances/README.md): Get a list of EC2 Instances for an EMR cluster. 
Filtered by node type (MASTER|CORE|TASK)\\n\",\n      \"* [Run Command via AWS CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_cli_command/README.md): Execute command using AWS CLI\\n\",\n      \"* [ Run Command via SSM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_command_ssm/README.md):  Execute command on EC2 instance(s) using SSM\\n\",\n      \"* [AWS Filter All Manual Database Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_all_manual_database_snapshots/README.md): Use This Action to AWS Filter All Manual Database Snapshots\\n\",\n      \"* [Filter AWS Unattached EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_unattached_volumes/README.md): Filter AWS Unattached EBS Volume\\n\",\n      \"* [Filter AWS EBS Volume with Low IOPS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_volumes_with_low_iops/README.md): IOPS (Input/Output Operations Per Second) is a metric used to measure the amount of input/output operations that an EBS volume can perform per second.\\n\",\n      \"* [Filter AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_tags/README.md): Filter AWS EC2 Instance\\n\",\n      \"* [Filter AWS EC2 instance by VPC Ids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_vpc/README.md): Use this Action to Filter AWS EC2 Instance by VPC Ids\\n\",\n      \"* [Filter All AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_instances/README.md): Filter All AWS EC2 Instance\\n\",\n      \"* [Filter AWS EC2 Instances Without Lifetime 
Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_without_lifetime_tag/README.md): Filter AWS EC2 Instances Without Lifetime Tag\\n\",\n      \"* [Filter AWS EC2 Instances Without Termination and Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/README.md): Filter AWS EC2 Instances Without Termination and Lifetime Tag and Check of they are valid\\n\",\n      \"* [AWS Filter Large EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_large_ec2_instances/README.md): This Action to filter all instances whose instanceType contains Large or xLarge, and that DO NOT have the largetag key/value.\\n\",\n      \"* [AWS Find Long Running EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_long_running_instances/README.md): This action list a all instances that are older than the threshold\\n\",\n      \"* [AWS Filter Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_old_ebs_snapshots/README.md): This action list a all snapshots details that are older than the threshold\\n\",\n      \"* [Get AWS public S3 Buckets using ACL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_public_s3_buckets_by_acl/README.md): Get AWS public S3 Buckets using ACL\\n\",\n      \"* [Filter AWS Target groups by tag name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_target_groups_by_tags/README.md): Filter AWS Target groups which have the provided tag attached to it. 
It also returns the value of that tag for each target group\\n\",\n      \"* [Filter AWS Unencrypted S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unencrypted_s3_buckets/README.md): Filter AWS Unencrypted S3 Buckets\\n\",\n      \"* [Get Unhealthy instances from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unhealthy_instances_from_asg/README.md): Get Unhealthy instances from Auto Scaling Group\\n\",\n      \"* [Filter AWS Untagged EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_untagged_ec2_instances/README.md): Filter AWS Untagged EC2 Instances\\n\",\n      \"* [Filter AWS Unused Keypairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_keypairs/README.md): Filter AWS Unused Keypairs\\n\",\n      \"* [AWS Filter Unused Log Stream](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_log_streams/README.md): This action lists all log streams that are unused for all the log groups by the given threshold.\\n\",\n      \"* [AWS Find Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_nat_gateway/README.md): This action to get all of the Nat gateways that have zero traffic over those\\n\",\n      \"* [Find AWS ELBs with no targets or instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_elbs_with_no_targets_or_instances/README.md): Find AWS ELBs with no targets or instances attached to them.\\n\",\n      \"* [AWS Find Idle Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_idle_instances/README.md): Find Idle EC2 instances\\n\",\n      \"* [AWS Filter Lambdas with Long 
Runtime](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_long_running_lambdas/README.md): This action retrieves a list of all Lambda functions and searches for log events for each function for given runtime(duration).\\n\",\n      \"* [AWS Find Low Connections RDS instances Per Day](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_low_connection_rds_instances/README.md): This action will find RDS DB instances with a number of connections below the specified minimum in the specified region.\\n\",\n      \"* [AWS Find EMR Clusters of Old Generation Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_old_gen_emr_clusters/README.md): This action list of EMR clusters of old generation instances.\\n\",\n      \"* [AWS Find RDS Instances with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_rds_instances_with_low_cpu_utilization/README.md): This lego finds RDS instances are not utilizing their CPU resources to their full potential.\\n\",\n      \"* [AWS Find Redshift Cluster without Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/README.md): Use This Action to AWS find redshift cluster for which paused resume are not Enabled\\n\",\n      \"* [AWS Find Redshift Clusters with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_clusters_with_low_cpu_utilization/README.md): Find underutilized Redshift clusters in terms of CPU utilization.\\n\",\n      \"* [AWS Find S3 Buckets without Lifecycle Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_s3_buckets_without_lifecycle_policies/README.md): S3 lifecycle policies enable you to automatically transition objects to different storage 
classes or delete them when they are no longer needed. This action finds all S3 buckets without lifecycle policies. \\n\",\n      \"* [Finding Redundant Trails in AWS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_finding_redundant_trails/README.md): This action will find a redundant cloud trail if the attribute IncludeGlobalServiceEvents is true, and then we need to find multiple duplications.\\n\",\n      \"* [AWS Get AWS Account Number](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_acount_number/README.md): Some AWS functions require the AWS Account number. This programmatically retrieves it.\\n\",\n      \"* [Get AWS CloudWatch Alarms List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alarms_list/README.md): Get AWS CloudWatch Alarms List\\n\",\n      \"* [Get AWS ALB Listeners Without HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alb_listeners_without_http_redirect/README.md): Get AWS ALB Listeners Without HTTP Redirection\\n\",\n      \"* [Get AWS EC2 Instances All ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_ec2_instances/README.md): Use This Action to Get All AWS EC2 Instances\\n\",\n      \"* [AWS Get All Load Balancers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_load_balancers/README.md): AWS Get All Load Balancers\\n\",\n      \"* [AWS Get All Service Names v3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_service_names/README.md): Get a list of all service names in a region\\n\",\n      \"* [AWS Get Untagged Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_untagged_resources/README.md): AWS Get Untagged Resources\\n\",\n      \"* [Get AWS AutoScaling Group 
Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_auto_scaling_instances/README.md): Use This Action to Get AWS AutoScaling Group Instances\\n\",\n      \"* [Get AWS Bucket Size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_bucket_size/README.md): Get an AWS Bucket Size\\n\",\n      \"* [Get AWS EBS Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ebs/README.md): Get AWS CloudWatch Statistics for EBS volumes\\n\",\n      \"* [Get AWS EC2 Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2/README.md): Get AWS CloudWatch Metrics for EC2 instances. These could be CPU, Network, Disk based measurements\\n\",\n      \"* [Get AWS EC2 CPU Utilization Statistics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2_cpuutil/README.md): Get AWS CloudWatch Statistics for cpu utilization for EC2 instances\\n\",\n      \"* [Get AWS CloudWatch Metrics for AWS/ApplicationELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_applicationelb/README.md): Get AWS CloudWatch Metrics for AWS/ApplicationELB\\n\",\n      \"* [Get AWS CloudWatch Metrics for AWS/ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_classic_elb/README.md): Get AWS CloudWatch Metrics for Classic Loadbalancer\\n\",\n      \"* [Get AWS CloudWatch Metrics for AWS/DynamoDB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_dynamodb/README.md): Get AWS CloudWatch Metrics for AWS DynamoDB\\n\",\n      \"* [Get AWS CloudWatch Metrics for 
AWS/AutoScaling](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/README.md): Get AWS CloudWatch Metrics for AWS EC2 AutoScaling groups\\n\",\n      \"* [Get AWS CloudWatch Metrics for AWS/GatewayELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/README.md): Get AWS CloudWatch Metrics for AWS/GatewayELB\\n\",\n      \"* [Get AWS CloudWatch Metrics for AWS/Lambda](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_lambda/README.md): Get AWS CloudWatch Metrics for AWS/Lambda\\n\",\n      \"* [Get AWS CloudWatch Metrics for AWS/NetworkELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_network_elb/README.md): Get AWS CloudWatch Metrics for Network Loadbalancer\\n\",\n      \"* [Get AWS CloudWatch Metrics for AWS/RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_rds/README.md): Get AWS CloudWatch Metrics for AWS/RDS\\n\",\n      \"* [Get AWS CloudWatch Metrics for AWS/Redshift](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_redshift/README.md): Get AWS CloudWatch Metrics for AWS/Redshift\\n\",\n      \"* [Get AWS CloudWatch Metrics for AWS/SQS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_sqs/README.md): Get AWS CloudWatch Metrics for AWS/SQS\\n\",\n      \"* [Get AWS CloudWatch Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_statistics/README.md): Get AWS CloudWatch Statistics\\n\",\n      \"* [AWS Get Costs For All Services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_all_services/README.md): Get Costs for all AWS services in a given 
time period.\\n\",\n      \"* [AWS Get Costs For Data Transfer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_data_transfer/README.md): Get daily cost for Data Transfer in AWS\\n\",\n      \"* [AWS Get Daily Total Spend](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_daily_total_spend/README.md): AWS get daily total spend from Cost Explorer\\n\",\n      \"* [AWS Get EBS Volumes for Low Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volume_for_low_usage/README.md): This action list low use volumes from AWS which used <10% capacity from the given threshold days.\\n\",\n      \"* [Get EBS Volumes By Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_by_type/README.md): Get EBS Volumes By Type\\n\",\n      \"* [Get AWS EBS Volume Without GP3 Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_without_gp3_type/README.md): AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\\n\",\n      \"* [Get EC2 CPU Consumption For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_cpu_consumption/README.md): Get EC2 CPU Consumption For All Instances\\n\",\n      \"* [Get EC2 Data Traffic In and Out For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_data_traffic/README.md): Get EC2 Data Traffic In and Out For All Instances\\n\",\n      \"* [Get Age of all EC2 Instances in Days](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_instance_age/README.md): Get Age of all EC2 Instances in Days\\n\",\n      \"* [AWS ECS Instances without AutoScaling 
policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_instances_without_autoscaling/README.md): AWS ECS Instances without AutoScaling policy.\\n\",\n      \"* [Get AWS ECS Service Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_services_status/README.md): Get the Status of an AWS ECS Service\\n\",\n      \"* [AWS ECS Services without AutoScaling policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_services_without_autoscaling/README.md): AWS ECS Services without AutoScaling policy.\\n\",\n      \"* [AWS Get Generated Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_generated_policy/README.md): Given a Region and the ID of a policy generation job, this Action will return the policy (once it has been completed).\\n\",\n      \"* [Get AWS boto3 handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_handle/README.md): Get AWS boto3 handle\\n\",\n      \"* [AWS List IAM users without password policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_iam_users_without_password_policies/README.md): Get a list of all IAM users that have no password policy attached to them.\\n\",\n      \"* [AWS Get Idle EMR Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_idle_emr_clusters/README.md): This action list of EMR clusters that have been idle for more than the specified time.\\n\",\n      \"* [Get AWS Instance Details with Matching Private DNS Name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_detail_with_private_dns_name/README.md): Use this action to get details of an AWS EC2 Instance that matches a Private DNS Name\\n\",\n      \"* [Get AWS Instances 
Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_details/README.md): Get AWS Instances Details\\n\",\n      \"* [List All AWS EC2 Instances Under the ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instances/README.md): Get a list of all AWS EC2 Instances from a given ELB\\n\",\n      \"* [AWS Get Internet Gateway by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_internet_gateway_by_vpc/README.md): AWS Get Internet Gateway by VPC ID\\n\",\n      \"* [Find AWS Lambdas Not Using ARM64 Graviton2 Processor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_not_using_arm_graviton2_processor/README.md): Find all AWS Lambda functions that are not using the Arm-based AWS Graviton2 processor for their runtime architecture\\n\",\n      \"* [Get AWS Lambdas With High Error Rate](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_with_high_error_rate/README.md): Get AWS Lambda Functions that exceed a given threshold error rate.\\n\",\n      \"* [AWS Get Long Running ElastiCache clusters Without Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/README.md): This action gets information about long running ElastiCache clusters and their status, and checks if they have any reserved nodes associated with them.\\n\",\n      \"* [AWS Get Long Running RDS Instances Without Reserved Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_rds_instances_without_reserved_instances/README.md): This action gets information about long running instances and their status, and checks if they have any reserved nodes associated with them.\\n\",\n      \"* [AWS Get Long Running Redshift Clusters Without 
Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_redshift_clusters_without_reserved_nodes/README.md): This action gets information about running clusters and their status, and checks if they have any reserved nodes associated with them.\\n\",\n      \"* [AWS Get NAT Gateway Info by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nat_gateway_by_vpc/README.md): This action gets details about NAT gateways configured for a VPC.\\n\",\n      \"* [Get all Targets for Network Load Balancer (NLB)](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlb_targets/README.md): Use this action to get all targets for Network Load Balancer (NLB)\\n\",\n      \"* [AWS Get Network Load Balancer (NLB) without Targets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlbs_without_targets/README.md): Use this action to get AWS Network Load Balancer (NLB) without Targets\\n\",\n      \"* [AWS Get Older Generation RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_older_generation_rds_instances/README.md): AWS Get Older Generation RDS Instances action retrieves information about RDS instances using older generation instance types.\\n\",\n      \"* [AWS Get Private Address from NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_private_address_from_nat_gateways/README.md): This action gets the private addresses from NAT gateways.\\n\",\n      \"* [Get AWS EC2 Instances with a public IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_public_ec2_instances/README.md): Lists all EC2 instances with a public IP\\n\",\n      \"* [AWS Get Publicly Accessible RDS 
Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_instances/README.md): AWS Get Publicly Accessible RDS Instances\\n\",\n      \"* [AWS Get Publicly Accessible DB Snapshots in RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_snapshots/README.md): AWS Get Publicly Accessible DB Snapshots in RDS\\n\",\n      \"* [Get AWS RDS automated db snapshots above retention period](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_rds_automated_snapshots_above_retention_period/README.md): This Action gets the snapshots above a certain retention period.\\n\",\n      \"* [AWS Get Redshift Query Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_query_details/README.md): Given a QueryId, this Action will give you the status of the Query, along with other data like the number of lines/\\n\",\n      \"* [AWS Get Redshift Result](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_result/README.md): Given a QueryId, Get the Query Result, and format into a List\\n\",\n      \"* [AWS Get EC2 Instances About To Be Retired](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_reserved_instances_about_to_retired/README.md): AWS Get EC2 Instances About To Be Retired\\n\",\n      \"* [AWS Get Resources Missing Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_missing_tag/README.md): Gets a list of all AWS resources that are missing the tag in the input parameters.\\n\",\n      \"* [AWS Get Resources With Expiration Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_expiration_tag/README.md): AWS Get all Resources with an expiration tag\\n\",\n      \"* [AWS Get Resources With 
Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_tag/README.md): For a given tag and region, get every AWS resource with that tag.\\n\",\n      \"* [Get AWS S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_s3_buckets/README.md): Get AWS S3 Buckets\\n\",\n      \"* [Get Schedule To Retire AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_schedule_to_retire_instances/README.md): Get Schedule To Retire AWS EC2 Instance\\n\",\n      \"* [Get secrets from secretsmanager](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secret_from_secretmanager/README.md): Get secrets from AWS secretsmanager\\n\",\n      \"* [AWS Get Secrets Manager Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secret/README.md): Get string (of JSON) containing Secret details\\n\",\n      \"* [AWS Get Secrets Manager SecretARN](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secretARN/README.md): Given a Secret Name - this Action returns the Secret ARN\\n\",\n      \"* [Get AWS Security Group Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_security_group_details/README.md): Get details about a security group, given its ID.\\n\",\n      \"* [AWS Get Service Quota for a Specific ServiceName](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quota_details/README.md): Given an AWS Region, Service Code and Quota Code, this Action will output the quota information for the specified service.\\n\",\n      \"* [AWS Get Quotas for a Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quotas/README.md): Given inputs of the AWS Region, and the Service_Code for a 
service, this Action will output all of the Service Quotas and limits.\\n\",\n      \"* [Get Stopped Instance Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_stopped_instance_volumes/README.md): This action helps to list the volumes that are attached to stopped instances.\\n\",\n      \"* [Get STS Caller Identity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_sts_caller_identity/README.md): Get STS Caller Identity\\n\",\n      \"* [AWS Get Tags of All Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_tags_of_all_resources/README.md): AWS Get Tags of All Resources\\n\",\n      \"* [Get Timed Out AWS Lambdas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_timed_out_lambdas/README.md): Get AWS Lambda functions that have exceeded the maximum amount of time in seconds that a Lambda function can run.\\n\",\n      \"* [AWS Get TTL For Route53 Records](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_for_route53_records/README.md): Get TTL for Route53 records for a hosted zone.\\n\",\n      \"* [AWS: Check for short Route 53 TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_under_given_hours/README.md): AWS: Check for short Route 53 TTL\\n\",\n      \"* [Get UnHealthy EC2 Instances for Classic ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances/README.md): Get UnHealthy EC2 Instances for Classic ELB\\n\",\n      \"* [Get Unhealthy instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances_from_elb/README.md): Get Unhealthy instances from Elastic Load Balancer\\n\",\n      \"* [AWS get Unused Route53 Health 
Checks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unused_route53_health_checks/README.md): AWS get Unused Route53 Health Checks\\n\",\n      \"* [AWS Get IAM Users with Old Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_users_with_old_access_keys/README.md): This Lego collects the access keys that have never been used or the access keys that have been used but are older than the threshold.\\n\",\n      \"* [Launch AWS EC2 Instance From an AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_launch_instance_from_ami/README.md): Use this action to launch an AWS EC2 instance from an AMI\\n\",\n      \"* [AWS List Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_access_keys/README.md): List all Access Keys for the User\\n\",\n      \"* [AWS List All IAM Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_iam_users/README.md): List all AWS IAM Users\\n\",\n      \"* [AWS List All Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_regions/README.md): List all available AWS Regions\\n\",\n      \"* [AWS List Application LoadBalancers ARNs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_application_loadbalancers/README.md): AWS List Application LoadBalancers ARNs\\n\",\n      \"* [AWS List Attached User Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_attached_user_policies/README.md): AWS List Attached User Policies\\n\",\n      \"* [AWS List ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_clusters_with_low_utilization/README.md): This action searches for clusters that have low CPU utilization.\\n\",\n      \"* [AWS List Expiring Access 
Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_access_keys/README.md): List Expiring IAM User Access Keys\\n\",\n      \"* [List Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_acm_certificates/README.md): List All Expiring ACM Certificates\\n\",\n      \"* [AWS List Hosted Zones](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_hosted_zones/README.md): List all AWS Hosted zones\\n\",\n      \"* [AWS List Unattached Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unattached_elastic_ips/README.md): This action lists Elastic IP addresses and checks whether each is associated with an instance or network interface.\\n\",\n      \"* [AWS List Unhealthy Instances in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unhealthy_instances_in_target_group/README.md): List Unhealthy Instances in a target group\\n\",\n      \"* [AWS List Unused Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unused_secrets/README.md): This action lists all the unused secrets from AWS by comparing the last used date with the given threshold.\\n\",\n      \"* [AWS List IAM Users With Old Passwords](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_users_with_old_passwords/README.md): This Lego gets each IAM user's login profile and, where a profile exists, lists users whose last password change is older than the given threshold.\\n\",\n      \"* [AWS List Instances behind a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_loadbalancer_list_instances/README.md): List AWS Instances behind a Load Balancer\\n\",\n      \"* [Make AWS Bucket 
Public](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_make_bucket_public/README.md): Make an AWS Bucket Public!\\n\",\n      \"* [AWS Modify EBS Volume to GP3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_ebs_volume_to_gp3/README.md): AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\\n\",\n      \"* [AWS Modify ALB Listeners HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_listener_for_http_redirection/README.md): AWS Modify ALB Listeners HTTP Redirection\\n\",\n      \"* [AWS Modify Publicly Accessible RDS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_public_db_snapshots/README.md): AWS Modify Publicly Accessible RDS Snapshots\\n\",\n      \"* [Get AWS Postgresql Max Configured Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_get_configured_max_connections/README.md): Get AWS Postgresql Max Configured Connections\\n\",\n      \"* [Plot AWS PostgreSQL Active Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_plot_active_connections/README.md): Plot AWS PostgreSQL Active Connections\\n\",\n      \"* [AWS Purchase ElastiCache Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_elasticcache_reserved_node/README.md): This action purchases a reserved cache node offering.\\n\",\n      \"* [AWS Purchase RDS Reserved Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_rds_reserved_instance/README.md): This action purchases a reserved DB instance offering.\\n\",\n      \"* [AWS Purchase Redshift Reserved 
Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_redshift_reserved_node/README.md): This action purchases reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. You can purchase one or more of the offerings.\\n\",\n      \"* [Apply CORS Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_cors/README.md): Apply CORS Policy for S3 Bucket\\n\",\n      \"* [Apply AWS New Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_policy/README.md): Apply a New AWS Policy for S3 Bucket\\n\",\n      \"* [Read AWS S3 Object](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_read_object/README.md): Read an AWS S3 Object\\n\",\n      \"* [Register AWS Instances with a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_register_instances/README.md): Register AWS Instances with a Load Balancer\\n\",\n      \"* [AWS Release Elastic IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_release_elastic_ip/README.md): AWS Release Elastic IP for both VPC and Standard\\n\",\n      \"* [Renew Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_renew_expiring_acm_certificates/README.md): Renew Expiring ACM Certificates\\n\",\n      \"* [AWS Request Service Quota Increase](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_request_service_quota_increase/README.md): Given an AWS Region, Service Code, Quota Code and a new value for the quota, this Action sends a request to AWS for a new value. 
Your Connector must have servicequotas:RequestServiceQuotaIncrease enabled for this to work.\\n\",\n      \"* [Restart AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_restart_ec2_instances/README.md): Restart AWS EC2 Instances\\n\",\n      \"* [AWS Revoke Policy from IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_revoke_policy_from_iam_user/README.md): AWS Revoke Policy from IAM User\\n\",\n      \"* [Start AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_run_instances/README.md): Start AWS EC2 Instances\\n\",\n      \"* [AWS Schedule Redshift Cluster Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_schedule_pause_resume_enabled/README.md): AWS Schedule Redshift Cluster Pause Resume Enabled\\n\",\n      \"* [AWS Service Quota Limits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits/README.md): Input a List of Service Quotas, and get back which of your instances are above the warning percentage of the quota\\n\",\n      \"* [AWS VPC service quota limit](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits_vpc/README.md): This Action queries all VPC Storage quotas, and returns all usage over warning_percentage.\\n\",\n      \"* [Stop AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_stop_instances/README.md): Stop an AWS Instance\\n\",\n      \"* [Tag AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_tag_ec2_instances/README.md): Tag AWS Instances\\n\",\n      \"* [AWS List Instances in an ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_instances/README.md): List AWS Instances in an ELBv2 Target 
Group\\n\",\n      \"* [ AWS List Unhealthy Instances in a ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_unhealthy_instances/README.md):  List AWS Unhealthy Instance in a ELBv2 Target Group\\n\",\n      \"* [AWS Register/Unregister Instances from a Target Group.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_register_unregister_instances/README.md): Register/Unregister AWS Instance from a Target Group\\n\",\n      \"* [Terminate AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_terminate_ec2_instances/README.md): This Action will Terminate AWS EC2 Instances\\n\",\n      \"* [AWS Update Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_access_key/README.md): Update status of the Access Key\\n\",\n      \"* [AWS Update TTL for Route53 Record](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_ttl_for_route53_records/README.md): Update TTL for an existing record in a hosted zone.\\n\",\n      \"* [Upload file to S3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_upload_file_to_s3/README.md): Upload a local file to S3\\n\",\n      \"* [AWS_VPC_service_quota_warning](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_vpc_service_quota_warning/README.md): Given an AWS Region and a warning percentage, this Action queries all VPC quota limits, and returns any of Quotas that are over the alert value.\\n\",\n      \"\\n\",\n      \"# Airflow\\n\",\n      \"* [Get Status for given DAG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_check_dag_status/README.md): Get Status for given DAG\\n\",\n      \"* [Get Airflow 
handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_get_handle/README.md): Get Airflow handle\\n\",\n      \"* [List DAG runs for given DagID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_list_DAG_runs/README.md): List DAG runs for given DagID\\n\",\n      \"* [Airflow trigger DAG run](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_trigger_dag_run/README.md): Airflow trigger DAG run\\n\",\n      \"\\n\",\n      \"# Azure\\n\",\n      \"* [Get Azure Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Azure/legos/azure_get_handle/README.md): Get Azure Handle\\n\",\n      \"\\n\",\n      \"# Datadog\\n\",\n      \"* [Datadog delete incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_delete_incident/README.md): Delete an incident given its id\\n\",\n      \"* [Datadog get event](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_event/README.md): Get an event given its id\\n\",\n      \"* [Get Datadog Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_handle/README.md): Get Datadog Handle\\n\",\n      \"* [Datadog get incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_incident/README.md): Get an incident given its id\\n\",\n      \"* [Datadog get metric metadata](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_metric_metadata/README.md): Get the metadata of a metric.\\n\",\n      \"* [Datadog get monitor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitor/README.md): Get details about a monitor\\n\",\n      \"* [Datadog get monitorID given the 
name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitorid/README.md): Get monitorID given the name\\n\",\n      \"* [Datadog list active metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_active_metrics/README.md): Get the list of actively reporting metrics from a given time until now.\\n\",\n      \"* [Datadog list all monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_all_monitors/README.md): List all monitors\\n\",\n      \"* [Datadog list metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_metrics/README.md): Lists metrics from the last 24 hours in Datadog.\\n\",\n      \"* [Datadog mute/unmute monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_mute_or_unmute_alerts/README.md): Mute/unmute monitors\\n\",\n      \"* [Datadog query metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_query_metrics/README.md): Query timeseries points for a metric.\\n\",\n      \"* [Schedule downtime](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_schedule_downtime/README.md): Schedule downtime\\n\",\n      \"* [Datadog search monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_search_monitors/README.md): Search monitors in datadog based on filters\\n\",\n      \"\\n\",\n      \"# ElasticSearch\\n\",\n      \"* [Elasticsearch Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_check_health_status/README.md): Elasticsearch Check Health Status\\n\",\n      \"* [Get large Elasticsearch Index 
size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_check_large_index_size/README.md): This action checks the sizes of all indices in the Elasticsearch cluster and compares them to a given threshold.\\n\",\n      \"* [Check Elasticsearch cluster disk size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_compare_cluster_disk_size_to_threshold/README.md): This action compares the disk usage percentage of the Elasticsearch cluster to a given threshold.\\n\",\n      \"* [Elasticsearch Delete Unassigned Shards](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_delete_unassigned_shards/README.md): Elasticsearch Delete Corrupted/Lost Shards\\n\",\n      \"* [Elasticsearch Disable Shard Allocation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_disable_shard_allocation/README.md): Elasticsearch Disable Shard Allocation for any indices\\n\",\n      \"* [Elasticsearch Enable Shard Allocation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_enable_shard_allocation/README.md): Elasticsearch Enable Shard Allocation for any shards for any indices\\n\",\n      \"* [Elasticsearch Cluster Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_cluster_statistics/README.md): Elasticsearch Cluster Statistics fetches total index size, disk size, and memory utilization and information about the current nodes and shards that form the cluster\\n\",\n      \"* [Get Elasticsearch Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_handle/README.md): Get Elasticsearch Handle\\n\",\n      \"* [Get Elasticsearch index level 
health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_index_health/README.md): This action checks the health of a given Elasticsearch index or all indices if no specific index is provided.\\n\",\n      \"* [Elasticsearch List Allocations](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_list_allocations/README.md): Elasticsearch List Allocations in a Cluster\\n\",\n      \"* [Elasticsearch List Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_list_nodes/README.md): Elasticsearch List Nodes in a Cluster\\n\",\n      \"* [Elasticsearch search](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_search_query/README.md): Elasticsearch Search\\n\",\n      \"\\n\",\n      \"# GCP\\n\",\n      \"* [Add lifecycle policy to GCP storage bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_lifecycle_policy_to_bucket/README.md): The action adds a lifecycle policy to a Google Cloud Platform (GCP) storage bucket.\\n\",\n      \"* [GCP Add Member to IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_member_to_iam_role/README.md): Adds a member to an existing IAM role\\n\",\n      \"* [GCP Add Role to Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_role_to_service_account/README.md): Adds a role and member to the service account\\n\",\n      \"* [Create GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_bucket/README.md): Create a new GCP bucket in the given location\\n\",\n      \"* [Create a GCP disk snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_disk_snapshot/README.md): Create a GCP disk 
snapshot.\\n\",\n      \"* [Create GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_filestore_instance/README.md): Create a new GCP Filestore Instance in the given location\\n\",\n      \"* [Create GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_gke_cluster/README.md): Create GKE Cluster\\n\",\n      \"* [GCP Create Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_service_account/README.md): GCP Create Service Account\\n\",\n      \"* [Delete GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_bucket/README.md): Delete a GCP bucket\\n\",\n      \"* [Delete GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_filestore_instance/README.md): Delete a GCP Filestore Instance in the given location\\n\",\n      \"* [Delete an Object from GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_object_from_bucket/README.md): Delete an Object/Blob from a GCP Bucket\\n\",\n      \"* [GCP Delete Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_service_account/README.md): GCP Delete Service Account\\n\",\n      \"* [GCP Describe a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_describe_gke_cluster/README.md): GCP Describe a GKE cluster\\n\",\n      \"* [Fetch Objects from GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_fetch_objects_from_bucket/README.md): List all Objects in a GCP bucket\\n\",\n      \"* [Get GCP storage buckets without lifecycle policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_buckets_without_lifecycle_policies/README.md): The action 
retrieves a list of Google Cloud Platform (GCP) storage buckets that do not have any lifecycle policies applied.\\n\",\n      \"* [Get details of GCP forwarding rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_forwarding_rules_details/README.md): Get details of forwarding rules associated with a backend service.\\n\",\n      \"* [Get GCP Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_handle/README.md): Get GCP Handle\\n\",\n      \"* [Get List of GCP compute instances without label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_instances_without_label/README.md): Get List of GCP compute instances without label\\n\",\n      \"* [Get unused GCP backend services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_unused_backend_services/README.md): Get unused backend service for an application load balancer that has no instances in its target group.\\n\",\n      \"* [List all GCP Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_buckets/README.md): List all GCP buckets\\n\",\n      \"* [Get GCP compute instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances/README.md): Get GCP compute instances\\n\",\n      \"* [Get List of GCP compute instances by label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_label/README.md): Get List of GCP compute instances by label\\n\",\n      \"* [Get list of compute instances by VPC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_vpc/README.md): Get list of compute instances by VPC\\n\",\n      \"* [GCP List GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_gke_cluster/README.md): GCP List GKE 
Cluster\\n\",\n      \"* [GCP List Nodes in GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_nodes_in_gke_cluster/README.md): GCP List Nodes of GKE Cluster\\n\",\n      \"* [List all Public GCP Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_public_buckets/README.md): List all publicly available GCP buckets\\n\",\n      \"* [List GCP Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_secrets/README.md): List of your GCP Secrets\\n\",\n      \"* [GCP List Service Accounts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_service_accounts/README.md): GCP List Service Accounts\\n\",\n      \"* [List all GCP VMs and if Publicly Accessible](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_vms_access/README.md): Lists all GCP buckets, and identifies those tha are public.\\n\",\n      \"* [GCP Remove Member from IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_member_from_iam_role/README.md): Remove member from the chosen IAM role.\\n\",\n      \"* [GCP Remove Role from Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_role_from_service_account/README.md): Remove role and member from the service account\\n\",\n      \"* [Remove role from user](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_user_role/README.md): GCP lego for removing a role from a user (default: 'viewer')\\n\",\n      \"* [GCP Resize a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_resize_gke_cluster/README.md): GCP Resize a GKE cluster by modifying nodes\\n\",\n      \"* [GCP Restart compute 
instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restart_compute_instances/README.md): GCP Restart compute instance\\n\",\n      \"* [Restore GCP disk from a snapshot ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restore_disk_from_snapshot/README.md): Restore a GCP disk from a compute instance snapshot.\\n\",\n      \"* [Save CSV to Google Sheets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_save_csv_to_google_sheets_v1/README.md): Saves your CSV (see notes) into a prepared Google Sheet.\\n\",\n      \"* [GCP Stop compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_stop_compute_instances/README.md): GCP Stop compute instance\\n\",\n      \"* [Upload an Object to GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_upload_file_to_bucket/README.md): Upload an Object/Blob in a GCP bucket\\n\",\n      \"\\n\",\n      \"# Github\\n\",\n      \"* [Github Assign Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_assign_issue/README.md): Assign a github issue to a user\\n\",\n      \"* [Github Check if Pull Request is merged](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_check_if_pull_request_is_merged/README.md): Check if a Github Pull Request is merged\\n\",\n      \"* [Github Close Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_close_pull_request/README.md): Close pull request based on pull request number\\n\",\n      \"* [Github Count Stars](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_count_stars/README.md): Get count of stars for a repository\\n\",\n      \"* [Github Create 
Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_issue/README.md): Create a new Github Issue for a repository\\n\",\n      \"* [Github Create Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_team/README.md): Create a new Github Team\\n\",\n      \"* [Github Delete Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_delete_branch/README.md): Delete a github branch\\n\",\n      \"* [Github Get Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_branch/README.md): Get Github branch for a user in a repository\\n\",\n      \"* [Get Github Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_handle/README.md): Get Github Handle\\n\",\n      \"* [Github Get Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_issue/README.md): Get Github Issue from a repository\\n\",\n      \"* [Github Get Open Branches](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_open_branches/README.md): Get first 100 open branches for a given user in a given repo.\\n\",\n      \"* [Github Get Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_pull_request/README.md): Get Github Pull Request for a user in a repository\\n\",\n      \"* [Github Get Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_team/README.md): Github Get Team\\n\",\n      \"* [Github Get User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_user/README.md): Get Github User details\\n\",\n      \"* [Github Invite User to Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_invite_user_to_org/README.md): Invite a 
Github User to an Organization\\n\",\n      \"* [Github Comment on an Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_issue_comment/README.md): Add a comment to the selected GitHub Issue\\n\",\n      \"* [Github List Open Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_open_issues/README.md): List open Issues in a Github Repository\\n\",\n      \"* [Github List Organization Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_org_members/README.md): List Github Organization Members\\n\",\n      \"* [Github List PR Commits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_commits/README.md): Github List all Pull Request Commits\\n\",\n      \"* [Github List Pull Request Reviewers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_reviewers/README.md): List PR reviewers for a PR\\n\",\n      \"* [Github List Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_requests/README.md): List pull requests for a user in a repository\\n\",\n      \"* [Github List Stale Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_issues/README.md): List Stale Github Issues that have crossed a certain age limit.\\n\",\n      \"* [Github List Stale Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_pull_requests/README.md): Check for any Pull requests over a certain age. 
\\n\",\n      \"* [Github List Stargazers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stargazers/README.md): List of Github users that have starred (essentially bookmarked) a repository\\n\",\n      \"* [Github List Team Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_members/README.md): List Github Team Members for a given Team\\n\",\n      \"* [Github List Team Repositories](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_repos/README.md): Github List Team Repositories\\n\",\n      \"* [Github List Teams in Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_teams_in_org/README.md): List teams in a organization in GitHub\\n\",\n      \"* [Github List Webhooks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_webhooks/README.md): List webhooks for a repository\\n\",\n      \"* [Github Merge Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_merge_pull_request/README.md): Github Merge Pull Request\\n\",\n      \"* [Github Remove Member from Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_remove_member_from_org/README.md): Remove a member from a Github Organization\\n\",\n      \"\\n\",\n      \"# Grafana\\n\",\n      \"* [Get Grafana Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_get_handle/README.md): Get Grafana Handle\\n\",\n      \"* [Grafana List Alerts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_list_alerts/README.md): List of Grafana alerts. 
Specifying the dashboard ID will show alerts in that dashboard\\n\",\n      \"\\n\",\n      \"# Hadoop\\n\",\n      \"* [Get Hadoop cluster apps](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_apps/README.md): Get Hadoop cluster apps\\n\",\n      \"* [Get Hadoop cluster appstatistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_appstatistics/README.md): Get Hadoop cluster appstatistics\\n\",\n      \"* [Get Hadoop cluster metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_metrics/README.md): Get Hadoop EMR cluster metrics\\n\",\n      \"* [Get Hadoop cluster nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_nodes/README.md): Get Hadoop cluster nodes\\n\",\n      \"* [Get Hadoop handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_handle/README.md): Get Hadoop handle\\n\",\n      \"\\n\",\n      \"# Jenkins\\n\",\n      \"* [Get Jenkins Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_handle/README.md): Get Jenkins Handle\\n\",\n      \"* [Get Jenkins Logs from a job](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_logs/README.md): Get Jenkins Logs from a Job\\n\",\n      \"* [Get Jenkins Plugin List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_plugins/README.md): Get Jenkins Plugin List\\n\",\n      \"\\n\",\n      \"# Jira\\n\",\n      \"* [Jira Add Comment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_add_comment/README.md): Add a Jira Comment\\n\",\n      \"* [Assign Jira Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_assign_issue/README.md): Assign a Jira Issue to a 
user\\n\",\n      \"* [Create a Jira Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_create_issue/README.md): Create a Jira Issue\\n\",\n      \"* [Get Jira SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_handle/README.md): Get Jira SDK Handle\\n\",\n      \"* [Get Jira Issue Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_issue/README.md): Get Issue Info from Jira API: description, labels, attachments\\n\",\n      \"* [Get Jira Issue Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_issue_status/README.md): Get Issue Status from Jira API\\n\",\n      \"* [Change JIRA Issue Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_issue_change_status/README.md): Change JIRA Issue Status to given status\\n\",\n      \"* [Search for Jira issues matching JQL queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_search_issue/README.md): Use JQL to search all matching issues in Jira. 
Returns a List of the matching issues IDs/keys\\n\",\n      \"\\n\",\n      \"# Kafka\\n\",\n      \"* [Kafka Check In-Sync Replicas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_in_sync_replicas/README.md): Checks number of actual min-isr for each topic-partition with configuration for that topic.\\n\",\n      \"* [Kafka Check Offline Partitions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_offline_partitions/README.md): Checks the number of offline partitions.\\n\",\n      \"* [Kafka Check Replicas Available](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_replicas_available/README.md): Checks if the number of replicas not available for communication is equal to zero.\\n\",\n      \"* [Kafka get cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_cluster_health/README.md): Fetches the health of the Kafka cluster including brokers, topics, and partitions.\\n\",\n      \"* [Kafka get count of committed messages](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_committed_messages_count/README.md): Fetches the count of committed messages (consumer offsets) for a specific consumer group and its topics.\\n\",\n      \"* [Get Kafka Producer Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_handle/README.md): Get Kafka Producer Handle\\n\",\n      \"* [Kafka get topic health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_topic_health/README.md): This action fetches the health and total number of messages for the specified topics.\\n\",\n      \"* [Kafka get topics with lag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_topics_with_lag/README.md): This action fetches the topics with lag in the Kafka 
cluster.\\n\",\n      \"* [Kafka Publish Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_publish_message/README.md): Publish Kafka Message\\n\",\n      \"* [Run a Kafka command using kafka CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_run_command/README.md): Run a Kafka command using kafka CLI. Eg kafka-topics.sh --list --exclude-internal\\n\",\n      \"\\n\",\n      \"# Kubernetes\\n\",\n      \"* [Add Node in a Kubernetes Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_add_node_to_cluster/README.md): Add Node in a Kubernetes Cluster\\n\",\n      \"* [Change size of Kubernetes PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_change_pvc_size/README.md): Change size of Kubernetes PVC\\n\",\n      \"* [Check K8s services endpoint health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_service_status/README.md): This action checks the health status of the provided Kubernetes services.\\n\",\n      \"* [Check K8s worker CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_worker_cpu_utilization/README.md): Retrieves the CPU utilization for all worker nodes in the cluster and compares it to a given threshold.\\n\",\n      \"* [Delete a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_delete_pod/README.md): Delete a Kubernetes POD in a given Namespace\\n\",\n      \"* [Describe Kubernetes Node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_node/README.md): Describe a Kubernetes Node\\n\",\n      \"* [Describe a Kubernetes POD in a given 
Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_pod/README.md): Describe a Kubernetes POD in a given Namespace\\n\",\n      \"* [Execute a command on a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pod/README.md): Execute a command on a Kubernetes POD in a given Namespace\\n\",\n      \"* [Kubernetes Execute a command on a POD in a given namespace and filter](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pods_and_filter/README.md): Execute a command on a Kubernetes POD in a given namespace and filter the output\\n\",\n      \"* [Execute local script on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_execute_local_script_on_a_pod/README.md): Execute local script on a pod in a namespace\\n\",\n      \"* [Gather Data for POD Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/README.md): Gather Data for POD Troubleshoot\\n\",\n      \"* [Gather Data for K8S Service Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_service_troubleshoot/README.md): Gather Data for K8S Service Troubleshoot\\n\",\n      \"* [Get All Evicted PODS From Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/README.md): This action gets all evicted PODS from the given namespace. 
If no namespace is given, it will get all the pods from all namespaces.\\n\",\n      \"* [Get All Kubernetes PODS with state in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_pods/README.md): Get All Kubernetes PODS with state in a given Namespace\\n\",\n      \"* [Get K8s pods status and resource utilization info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_resources_utilization_info/README.md): This action gets the pod status and resource utilization of various Kubernetes resources like jobs, services, persistent volumes.\\n\",\n      \"* [Get candidate k8s nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_candidate_nodes_for_pods/README.md): Get candidate k8s nodes for given configuration\\n\",\n      \"* [Get K8S Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_cluster_health/README.md): Get K8S Cluster Health\\n\",\n      \"* [Get k8s kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_config_map_kube_system/README.md): Get k8s kube system config map\\n\",\n      \"* [Get Kubernetes Deployment For a Pod in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment/README.md): Get Kubernetes Deployment for a POD in a Namespace\\n\",\n      \"* [Get Deployment Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment_status/README.md): This action searches for failed deployments and returns them as a list.\\n\",\n      \"* [Get Kubernetes Error PODs from All Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_error_pods_from_all_jobs/README.md): Get Kubernetes Error PODs from All Jobs\\n\",\n      
\"* [Get expiring K8s certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_expiring_certificates/README.md): Get the expiring certificates for a K8s cluster.\\n\",\n      \"* [Get Kubernetes Failed Deployments](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_failed_deployments/README.md): Get Kubernetes Failed Deployments\\n\",\n      \"* [Get frequently restarting K8s pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_frequently_restarting_pods/README.md): Get Kubernetes pods from all namespaces that are restarting too often.\\n\",\n      \"* [Get Kubernetes Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_handle/README.md): Get Kubernetes Handle\\n\",\n      \"* [Get All Kubernetes Healthy PODS in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_healthy_pods/README.md): Get All Kubernetes Healthy PODS in a given Namespace\\n\",\n      \"* [Get memory utilization for K8s services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_memory_utilization_of_services/README.md): This action executes the given kubectl commands to find the memory utilization of the specified services in a particular namespace and compares it with a given threshold.\\n\",\n      \"* [Get K8s node status and CPU utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_node_status_and_resource_utilization/README.md): This action gathers Kubernetes node status and resource utilization information.\\n\",\n      \"* [Get Kubernetes Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes/README.md): Get Kubernetes Nodes\\n\",\n      \"* [Get K8s nodes disk and memory 
pressure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_pressure/README.md): This action fetches the memory and disk pressure status of each node in the cluster\\n\",\n      \"* [Get Kubernetes Nodes that have insufficient resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_with_insufficient_resources/README.md): Get Kubernetes Nodes that have insufficient resources\\n\",\n      \"* [Get K8s offline nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_offline_nodes/README.md): This action checks if any node in the Kubernetes cluster is offline.\\n\",\n      \"* [Get K8S OOMKilled Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_oomkilled_pods/README.md): Get K8S Pods which are OOMKilled from the container last states.\\n\",\n      \"* [Get K8s get pending pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pending_pods/README.md): This action checks if any pod in the Kubernetes cluster is in 'Pending' status.\\n\",\n      \"* [Get Kubernetes POD Configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_config/README.md): Get Kubernetes POD Configuration\\n\",\n      \"* [Get Kubernetes Logs for a given POD in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs/README.md): Get Kubernetes Logs for a given POD in a Namespace\\n\",\n      \"* [Get Kubernetes Logs for a list of PODs & Filter in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs_and_filter/README.md): Get Kubernetes Logs for a list of PODs and Filter in a Namespace\\n\",\n      \"* [Get Kubernetes Status for a POD in a given 
Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_status/README.md): Get Kubernetes Status for a POD in a given Namespace\\n\",\n      \"* [Get pods attached to Kubernetes PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_attached_to_pvc/README.md): Get pods attached to Kubernetes PVC\\n\",\n      \"* [Get all K8s Pods in CrashLoopBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/README.md): Get all K8s pods in CrashLoopBackOff State\\n\",\n      \"* [Get all K8s Pods in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/README.md): Get all K8s pods in ImagePullBackOff State\\n\",\n      \"* [Get Kubernetes PODs in not Running State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_not_running_state/README.md): Get Kubernetes PODs in not Running State\\n\",\n      \"* [Get all K8s Pods in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_terminating_state/README.md): Get all K8s pods in Terminating State\\n\",\n      \"* [Get Kubernetes PODS with high restart](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_with_high_restart/README.md): Get Kubernetes PODS with high restart\\n\",\n      \"* [Get K8S Service with no associated endpoints](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_service_with_no_associated_endpoints/README.md): Get K8S Service with no associated endpoints\\n\",\n      \"* [Get Kubernetes Services for a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_services/README.md): Get 
Kubernetes Services for a given Namespace\\n\",\n      \"* [Get Kubernetes Unbound PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_unbound_pvcs/README.md): Get Kubernetes Unbound PVCs\\n\",\n      \"* [Kubectl command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_command/README.md): Execute kubectl command.\\n\",\n      \"* [Kubectl set context entry in kubeconfig](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_set_context/README.md): Kubectl set context entry in kubeconfig\\n\",\n      \"* [Kubectl display merged kubeconfig settings](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_view/README.md): Kubectl display merged kubeconfig settings\\n\",\n      \"* [Kubectl delete a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_delete_pod/README.md): Kubectl delete a pod\\n\",\n      \"* [Kubectl describe a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_node/README.md): Kubectl describe a node\\n\",\n      \"* [Kubectl describe a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_pod/README.md): Kubectl describe a pod\\n\",\n      \"* [Kubectl drain a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_drain_node/README.md): Kubectl drain a node\\n\",\n      \"* [Execute command on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_exec_command/README.md): Execute command on a pod\\n\",\n      \"* [Kubectl get api resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_api_resources/README.md): Kubectl get api 
resources\\n\",\n      \"* [Kubectl get logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_logs/README.md): Kubectl get logs for a given pod\\n\",\n      \"* [Kubectl get services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_service_namespace/README.md): Kubectl get services in a given namespace\\n\",\n      \"* [Kubectl list pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_list_pods/README.md): Kubectl list pods in given namespace\\n\",\n      \"* [Kubectl update field](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_patch_pod/README.md): Kubectl update field of a resource using strategic merge patch\\n\",\n      \"* [Kubectl rollout deployment history](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_rollout_deployment/README.md): Kubectl rollout deployment history\\n\",\n      \"* [Kubectl scale deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_scale_deployment/README.md): Kubectl scale a given deployment\\n\",\n      \"* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_node/README.md): Kubectl show metrics for a given node\\n\",\n      \"* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_pod/README.md): Kubectl show metrics for a given pod\\n\",\n      \"* [List matching name pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_all_matching_pods/README.md): List all pods matching a particular name string. 
The matching string can be a regular expression too\\n\",\n      \"* [List pvcs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_pvcs/README.md): List pvcs by namespace. By default, it will list all pvcs in all namespaces.\\n\",\n      \"* [Remove POD from Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_remove_pod_from_deployment/README.md): Remove POD from Deployment\\n\",\n      \"* [Update Commands in a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_update_command_in_pod_spec/README.md): Update Commands in a Kubernetes POD in a given Namespace\\n\",\n      \"\\n\",\n      \"# Mantishub\\n\",\n      \"* [Get Mantishub handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mantishub/legos/mantishub_get_handle/README.md): Get Mantishub handle\\n\",\n      \"\\n\",\n      \"# Mongo\\n\",\n      \"* [MongoDB add new field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_add_new_field_in_collections/README.md): MongoDB add new field in all collections\\n\",\n      \"* [MongoDB Aggregate Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_aggregate_command/README.md): MongoDB Aggregate Command\\n\",\n      \"* [MongoDB Atlas cluster cloud backup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_atlas_cluster_backup/README.md): Trigger on-demand Atlas cloud backup\\n\",\n      \"* [Get large MongoDB indices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_check_large_index_size/README.md): This action compares the size of each index with a given threshold and returns any indexes that exceed the threshold.\\n\",\n      \"* [Get MongoDB large 
databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_compare_disk_size_to_threshold/README.md): This action compares the total disk size used by MongoDB to a given threshold.\\n\",\n      \"* [MongoDB Count Documents](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_count_documents/README.md): MongoDB Count Documents\\n\",\n      \"* [MongoDB Create Collection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_collection/README.md): MongoDB Create Collection\\n\",\n      \"* [MongoDB Create Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_database/README.md): MongoDB Create Database\\n\",\n      \"* [Delete collection from MongoDB database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_collection/README.md): Delete collection from MongoDB database\\n\",\n      \"* [MongoDB Delete Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_database/README.md): MongoDB Delete Database\\n\",\n      \"* [MongoDB Delete Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_document/README.md): MongoDB Delete Document\\n\",\n      \"* [MongoDB Distinct Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_distinct_command/README.md): MongoDB Distinct Command\\n\",\n      \"* [MongoDB Find Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_document/README.md): MongoDB Find Document\\n\",\n      \"* [MongoDB Find One](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_one/README.md): MongoDB Find One returns a single entry that matches the query.\\n\",\n      \"* [Get MongoDB 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_handle/README.md): Get MongoDB Handle\\n\",\n      \"* [MongoDB get metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_metrics/README.md): This action retrieves various metrics such as index size, disk size per collection for all databases and collections.\\n\",\n      \"* [Get Mongo Server Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_server_status/README.md): Get Mongo Server Status and check for any abnormalities.\\n\",\n      \"* [MongoDB Insert Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_insert_document/README.md): MongoDB Insert Document\\n\",\n      \"* [MongoDB kill queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_kill_queries/README.md): MongoDB kill queries\\n\",\n      \"* [Get list of collections in MongoDB Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_collections/README.md): Get list of collections in MongoDB Database\\n\",\n      \"* [Get list of MongoDB Databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_databases/README.md): Get list of MongoDB Databases\\n\",\n      \"* [MongoDB list queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_queries/README.md): MongoDB list queries\\n\",\n      \"* [MongoDB Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_read_query/README.md): MongoDB Read Query\\n\",\n      \"* [MongoDB remove a field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_remove_field_in_collections/README.md): MongoDB remove a field in all collections\\n\",\n      \"* [MongoDB 
Rename Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_rename_database/README.md): MongoDB Rename Database\n",\n      "* [MongoDB Update Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_update_document/README.md): MongoDB Update Document\n",\n      "* [MongoDB Upsert Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_write_query/README.md): MongoDB Upsert Query\n",\n      "\n",\n      "# MsSQL\n",\n      "* [Get MS-SQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_get_handle/README.md): Get MS-SQL Handle\n",\n      "* [MS-SQL Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_read_query/README.md): MS-SQL Read Query\n",\n      "* [MS-SQL Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_write_query/README.md): MS-SQL Write Query\n",\n      "\n",\n      "# MySQL\n",\n      "* [Get MySQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_handle/README.md): Get MySQL Handle\n",\n      "* [MySQL Get Long Running Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_long_run_queries/README.md): MySQL Get Long Running Queries\n",\n      "* [MySQL Kill Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_kill_query/README.md): MySQL Kill Query\n",\n      "* [Run MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_read_query/README.md): Run MySQL Query\n",\n      "* [Create a MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_write_query/README.md): Create a MySQL Query\n",\n      "\n",\n      "# Netbox\n",\n      "* 
[Netbox Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_get_handle/README.md): Get Netbox Handle\\n\",\n      \"* [Netbox List Devices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_list_devices/README.md): List all Netbox devices\\n\",\n      \"\\n\",\n      \"# Nomad\\n\",\n      \"* [Nomad Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_get_handle/README.md): Get Nomad Handle\\n\",\n      \"* [Nomad List Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_list_jobs/README.md): List all Nomad jobs\\n\",\n      \"\\n\",\n      \"# Opsgenie\\n\",\n      \"* [Get Opsgenie Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Opsgenie/legos/opsgenie_get_handle/README.md): Get Opsgenie Handle\\n\",\n      \"\\n\",\n      \"# Pingdom\\n\",\n      \"* [Create new maintenance window.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_create_new_maintenance_window/README.md): Create new maintenance window.\\n\",\n      \"* [Perform Pingdom single check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_do_single_check/README.md): Perform Pingdom Single Check\\n\",\n      \"* [Get Pingdom Analysis Results for a specified Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_analysis/README.md): Get Pingdom Analysis Results for a specified Check\\n\",\n      \"* [Get list of checkIDs given a hostname](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids/README.md): Get list of checkIDs given a hostname. 
If no hostname provided, it lists all checkIDs.\\n\",\n      \"* [Get list of checkIDs given a name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids_by_name/README.md): Get list of checkIDS given a name. If name is not given, it gives all checkIDs. If transaction is set to true, it returns transaction checkIDs\\n\",\n      \"* [Get Pingdom Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_handle/README.md): Get Pingdom Handle\\n\",\n      \"* [Pingdom Get Maintenance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_maintenance/README.md): Pingdom Get Maintenance\\n\",\n      \"* [Get Pingdom Results](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_results/README.md): Get Pingdom Results\\n\",\n      \"* [Get Pingdom TMS Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_tmscheck/README.md): Get Pingdom TMS Check\\n\",\n      \"* [Pingdom lego to pause/unpause checkids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_pause_or_unpause_checkids/README.md): Pingdom lego to pause/unpause checkids\\n\",\n      \"* [Perform Pingdom Traceroute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_traceroute/README.md): Perform Pingdom Traceroute\\n\",\n      \"\\n\",\n      \"# Postgresql\\n\",\n      \"* [PostgreSQL Calculate Bloat](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgres_calculate_bloat/README.md): This Lego calculates bloat for tables in Postgres\\n\",\n      \"* [Calling a PostgreSQL function](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_call_function/README.md): Calling a PostgreSQL function\\n\",\n      \"* [PostgreSQL Check Unused 
Indexes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_check_unused_indexes/README.md): Find unused Indexes in a database in PostgreSQL\\n\",\n      \"* [Create Tables in PostgreSQL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_create_table/README.md): Create Tables PostgreSQL\\n\",\n      \"* [Delete PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_delete_query/README.md): Delete PostgreSQL Query\\n\",\n      \"* [PostgreSQL Get Cache Hit Ratio](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_cache_hit_ratio/README.md): The result of the action will show the total number of blocks read from disk, the total number of blocks found in the buffer cache, and the cache hit ratio as a percentage. For example, if the cache hit ratio is 99%, it means that 99% of all data requests were served from the buffer cache, and only 1% required reading data from disk.\\n\",\n      \"* [Get PostgreSQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_handle/README.md): Get PostgreSQL Handle\\n\",\n      \"* [PostgreSQL Get Index Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_index_usage/README.md): The action result shows the data for table name, the percentage of times an index was used for that table, and the number of live rows in the table.\\n\",\n      \"* [PostgreSQL get service status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_server_status/README.md): This action checks the status of each database.\\n\",\n      \"* [Execute commands in a PostgreSQL transaction.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_handling_transaction/README.md): 
Given a set of PostgreSQL commands, this action runs them inside a transaction.\n",\n      "* [Long Running PostgreSQL Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_long_running_queries/README.md): Long Running PostgreSQL Queries\n",\n      "* [Read PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_read_query/README.md): Read PostgreSQL Query\n",\n      "* [Show tables in PostgreSQL Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_show_tables/README.md): Show the tables existing in a PostgreSQL Database. We execute the following query to fetch this information: SELECT * FROM pg_catalog.pg_tables WHERE schemaname != 'pg_catalog' AND schemaname != 'information_schema';\n",\n      "* [Call PostgreSQL Stored Procedure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_stored_procedures/README.md): Call PostgreSQL Stored Procedure\n",\n      "* [Write PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_write_query/README.md): Write PostgreSQL Query\n",\n      "\n",\n      "# Prometheus\n",\n      "* [Get Prometheus rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_alerts_list/README.md): Get Prometheus rules\n",\n      "* [Get All Prometheus Metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_all_metrics/README.md): Get All Prometheus Metrics\n",\n      "* [Get Prometheus handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_handle/README.md): Get Prometheus handle\n",\n      "* [Get Prometheus Metric 
Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_metric_statistics/README.md): Get Prometheus Metric Statistics\n",\n      "\n",\n      "# Redis\n",\n      "* [Delete All Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_all_keys/README.md): Delete All Redis keys\n",\n      "* [Delete Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_keys/README.md): Delete Redis keys matching pattern\n",\n      "* [Delete Redis Unused keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_stale_keys/README.md): Delete Redis Unused keys given a time threshold in seconds\n",\n      "* [Get Redis cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_cluster_health/README.md): This action gets the Redis cluster health.\n",\n      "* [Get Redis Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_handle/README.md): Get Redis Handle\n",\n      "* [Get Redis keys count](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_keys_count/README.md): Get Redis keys count matching pattern (default: '*')\n",\n      "* [Get Redis metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_metrics/README.md): This action fetches Redis metrics like index size and memory utilization.\n",\n      "* [List Redis Large keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_list_large_keys/README.md): Find Redis Large keys given a size threshold in bytes\n",\n      "\n",\n      "# Rest\n",\n      "* [Get REST handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_get_handle/README.md): Get REST handle\n",\n     
 \"* [Call REST Methods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_methods/README.md): Call REST Methods.\\n\",\n      \"\\n\",\n      \"# SSH\\n\",\n      \"* [SSH Execute Remote Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_execute_remote_command/README.md): SSH Execute Remote Command\\n\",\n      \"* [SSH: Locate large files on host](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_find_large_files/README.md): This action scans the file system on a given host and returns a dict of large files. The command used to perform the scan is \\\"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\\\"\\n\",\n      \"* [Get SSH handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_get_handle/README.md): Get SSH handle\\n\",\n      \"* [SSH Restart Service Using sysctl](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_restart_service_using_sysctl/README.md): SSH Restart Service Using sysctl\\n\",\n      \"* [SCP: Remote file transfer over SSH](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_scp/README.md): Copy files from or to remote host. Files are copied over SCP. 
\\n\",\n      \"\\n\",\n      \"# SalesForce\\n\",\n      \"* [Assign Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_assign_case/README.md): Assign a Salesforce case\\n\",\n      \"* [Change Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_case_change_status/README.md): Change Salesforce Case Status\\n\",\n      \"* [Create Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_create_case/README.md): Create a Salesforce case\\n\",\n      \"* [Delete Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_delete_case/README.md): Delete a Salesforce case\\n\",\n      \"* [Get Salesforce Case Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case/README.md): Get a Salesforce case info\\n\",\n      \"* [Get Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case_status/README.md): Get a Salesforce case status\\n\",\n      \"* [Get Salesforce handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_handle/README.md): Get Salesforce handle\\n\",\n      \"* [Search Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_search_case/README.md): Search a Salesforce case\\n\",\n      \"* [Update Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_update_case/README.md): Update a Salesforce case\\n\",\n      \"\\n\",\n      \"# Slack\\n\",\n      \"* [Create Slack Channel and Invite Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_create_channel_invite_users/README.md): Create a Slack Channel 
with given name, and invite a list of userIds to the channel.\n",\n      "* [Get Slack SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_get_handle/README.md): Get Slack SDK Handle\n",\n      "* [Slack Lookup User by Email](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_lookup_user_by_email/README.md): Given an email address, find the Slack user in the workspace.\n",\n      " You can then extract their profile picture, or retrieve their user ID (which you can use to send messages) from the output.\n",\n      "* [Post Slack Image](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_image/README.md): Post Slack Image\n",\n      "* [Post Slack Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_message/README.md): Post Slack Message\n",\n      "* [Slack Send DM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_send_DM/README.md): Given a list of Slack IDs, this Action will create a DM (one user) or group chat (multiple users), and send a message to the chat\n",\n      "\n",\n      "# Snowflake\n",\n      "* [Snowflake Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_read_query/README.md): Snowflake Read Query\n",\n      "* [Snowflake Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_write_query/README.md): Snowflake Write Query\n",\n      "\n",\n      "# Splunk\n",\n      "* [Get Splunk SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Splunk/legos/splunk_get_handle/README.md): Get Splunk SDK Handle\n",\n      "\n",\n      "# Stripe\n",\n      "* [Capture a 
Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_capture_charge/README.md):  Capture the payment of an existing, uncaptured, charge\\n\",\n      \"* [Close Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_close_dispute/README.md): Close Dispute\\n\",\n      \"* [Create a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_charge/README.md): Create a Charge\\n\",\n      \"* [Create a Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_refund/README.md): Create a Refund\\n\",\n      \"* [Get list of charges previously created](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_charges/README.md): Get list of charges previously created\\n\",\n      \"* [Get list of disputes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_disputes/README.md): Get list of disputes\\n\",\n      \"* [Get list of refunds](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_refunds/README.md):  Get list of refunds for the given threshold.\\n\",\n      \"* [Get Stripe Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_handle/README.md): Get Stripe Handle\\n\",\n      \"* [Retrieve a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_charge/README.md):  Retrieve a Charge\\n\",\n      \"* [Retrieve details of a dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_dispute/README.md): Retrieve details of a dispute\\n\",\n      \"* [Retrieve a refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_refund/README.md): Retrieve a refund\\n\",\n      \"* [Update a 
Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_charge/README.md): Update a Charge\\n\",\n      \"* [Update Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_dispute/README.md): Update Dispute\\n\",\n      \"* [Update Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_refund/README.md): Updates the specified refund by setting the values of the parameters passed.\\n\",\n      \"\\n\",\n      \"# Terraform\\n\",\n      \"* [Execute Terraform Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_exec_command/README.md): Execute Terraform Command\\n\",\n      \"* [Get terraform handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_get_handle/README.md): Get terraform handle\\n\",\n      \"\\n\",\n      \"# Zabbix\\n\",\n      \"* [Get Zabbix Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Zabbix/legos/zabbix_get_handle/README.md): Get Zabbix Handle\\n\",\n      \"\\n\",\n      \"# infra\\n\",\n      \"* [Infra: Execute runbook](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/infra_execute_runbook/README.md): Infra: use this action to execute particular runbook with given input parameters.\\n\",\n      \"* [Infra: Finish runbook execution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/infra_workflow_done/README.md): Infra: use this action to finish the execution of a runbook. 
Once this is set, no more tasks will be executed\\n\",\n      \"* [Infra: Append values for a key in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_append_keys/README.md): Infra: use this action to append values for a key in a state store provided by the workflow.\\n\",\n      \"* [Infra: Store keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_create_keys/README.md): Infra: use this action to persist keys in a state store provided by the workflow.\\n\",\n      \"* [Infra: Delete keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_delete_keys/README.md): Infra: use this action to delete keys from a state store provided by the workflow.\\n\",\n      \"* [Infra: Fetch keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_get_keys/README.md): Infra: use this action to retrieve keys in a state store provided by the workflow.\\n\",\n      \"* [Infra: Rename keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_rename_keys/README.md): Infra: use this action to rename keys in a state store provided by the workflow.\\n\",\n      \"* [Infra: Update keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_update_keys/README.md): Infra: use this action to update keys in a state store provided by the workflow.\\n\",\n      \"\\n\",\n      \"# opensearch\\n\",\n      \"* [Opensearch Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/opensearch/legos/opensearch_get_handle/README.md): Opensearch Get Handle\\n\",\n      \"* [Opensearch 
search](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/opensearch/legos/opensearch_search/README.md): Opensearch Search\n",\n      "\n"\n     ]\n    }\n   ],\n   "source": [\n    "repo = Repo('.')\n",\n    "branch =  'master' #repo.active_branch\n",\n    "list_of_dir = f\"https://api.github.com/repos/unskript/Awesome-CloudOps-Automation/git/trees/{branch}?recursive=1\"\n",\n    "r_gh = requests.get(list_of_dir).json().get(\"tree\")\n",\n    "#print(branch)\n",\n    "notebooks = []\n",\n    "actions=[]\n",\n    "actionCount = 0\n",\n    "runBookCount = 0\n",\n    "\n",\n    "main_readme_chart = \"\"\n",\n    "runbook_list=\"\"\n",\n    "runbook_connector = \"\"\n",\n    "runbook_connector_list = []\n",\n    "\n",\n    "action_list = \"\"\n",\n    "action_connector = \"\"\n",\n    "action_connector_list = []\n",\n    "\n",\n    "connector_readme_connector =''\n",\n    "connector_readme = ''\n",\n    "hasRunbook = False\n",\n    "hasAction = False\n",\n    "\n",\n    "if r_gh is not None:\n",\n    "    for file in r_gh:\n",\n    "\n",\n    "        #only look at files we care about - ignore some of the directories that we don't need to scan\n",\n    "        if \".github\" not in file.get(\"path\") and \".gitignore\" not in file.get(\"path\") and \"templates\" not in file.get(\"path\") and \"/\" in file.get(\"path\") and \"__init__\" not in file.get(\"path\") and \"custom\" not in file.get(\"path\") and \"unskript-ctl\" not in file.get(\"path\"):\n",\n    "            #runbooks are .ipynb, and actions are .py\n",\n    "            if file.get(\"path\").endswith(\".ipynb\") or file.get(\"path\").endswith(\".py\"):\n",\n    "                temp = file.get(\"path\").split(\"/\")\n",\n    "                # a path with no '/' means the file sits at the repo root\n",\n    "                if len(temp) == 1:\n",
\n    "                    data = {\n",\n    "                        \"root\": None,\n",\n    "                        \"filename\": file.get(\"path\")\n",\n    "                    }\n",\n    "                    notebooks.append(data)\n",\n    "                    \n",\n    "                else:\n",\n    "                    isAction = False\n",\n    "                    last_folder = \"\"\n",\n    "                    file_name = temp[-1]\n",\n    "                    filename_string = file_name[0:file_name.find(\".\")]\n",\n    "\n",\n    "                    temp.pop()\n",\n    "                    path = \"\"\n",\n    "                    for folder in temp:\n",\n    "                        path = path + folder +\"/\"\n",\n    "                        last_folder += \"/\" + folder\n",\n    "                    \n",\n    "                    # to be an action, the file's 2nd directory must be legos, and there must be 3 layers of directory\n",\n    "                    if len(temp) ==3 and temp[1] ==\"legos\":\n",\n    "                        isAction = True\n",\n    "                    #testing\n",\n    "                    \n",\n    "                    \n",\n    "                    \n",\n    "                    #JSON data\n",\n    "                    filename_json = Path(path +\"/\"+filename_string+ \".json\")\n",\n    "                    jsonData = json.loads(filename_json.read_text())\n",\n    "                    \n",\n    "                    \n",\n    "                    ##we now have a path.. 
but only really need the root folder\n",\n    "                    ## different ways to generate for action vs runbook\n",\n    "                    if isAction:\n",\n    "                        actionCount += 1\n",\n    "                        #this is an action folder\n",\n    "                        #find first slash\n",\n    "                        firstslash = last_folder.find(\"/\",1)\n",\n    "                        root = last_folder[1:firstslash]\n",\n    "                        name = jsonData['action_title']\n",\n    "                        description = jsonData['action_description']\n",\n    "                        if 'action_categories' in jsonData:\n",\n    "                            categories = jsonData['action_categories']\n",\n    "                        else:\n",\n    "                            print(f\"{name} is missing categories\")\n",\n    "                            categories = []  # avoid carrying over the previous file's categories\n",\n    "                        polling = jsonData['action_supports_poll']\n",\n    "                        iteration = jsonData['action_supports_iteration']\n",\n    "                        #not the python file - but the readme\n",\n    "                        github_url = f\"{github_url_base}{last_folder}/README.md\"   \n",\n    "                        \n",\n    "                        \n",\n    "                    else:\n",\n    "                        runBookCount+=1\n",\n    "                        #root folder for notebooks\n",\n    "                        root = last_folder[1:]\n",\n    "                        name = jsonData['name']\n",\n    "                        description = jsonData['description']\n",\n    "                        categories = jsonData['categories']\n",\n    "                        github_url = github_url_base+\"/\"+file.get(\"path\")\n",\n    "                    \n",\n    "                    data = {\n",\n    "                        \"root\": 
root,\\n\",\n    \"                        \\\"filename\\\": file_name,\\n\",\n    \"                        \\\"name\\\": name,\\n\",\n    \"                        \\\"description\\\": description,\\n\",\n    \"                        \\\"categories\\\":categories,\\n\",\n    \"                        \\\"github_url\\\": github_url\\n\",\n    \"                    }\\n\",\n    \"                    \\n\",\n    \"                    if isAction:\\n\",\n    \"                        data['type'] = \\\"Action\\\"\\n\",\n    \"                        data['polling'] = polling\\n\",\n    \"                        data['iteration']=iteration\\n\",\n    \"                        actions.append(data)\\n\",\n    \"                    else:\\n\",\n    \"                        data['type'] = \\\"RunBook\\\"\\n\",\n    \"                        local_url = local_OSS_url+\\\"/\\\"+file.get(\\\"path\\\")\\n\",\n    \"                        data['local_url'] = local_url  \\n\",\n    \"                        notebooks.append(data)\\n\",\n    \"                        \\n\",\n    \"                        \\n\",\n    \"                    #generate the list of runbooks for tha main readme\\n\",\n    \"                    if not isAction:\\n\",\n    \"                        main_readme_chart += f\\\"|{root} |[{name}]({github_url}) | [Open in Browser]({local_url}) | \\\\n\\\"\\n\",\n    \"    \\n\",\n    \"                    #generate the runbook list page\\n\",\n    \"                    if not isAction:\\n\",\n    \"                        #have we created a category yet?\\n\",\n    \"                        if runbook_connector == \\\"\\\":\\n\",\n    \"                            runbook_connector = root\\n\",\n    \"                            runbook_connector_list.append(runbook_connector)\\n\",\n    \"                            runbook_list += f\\\"\\\\n# {runbook_connector}\\\\n\\\"\\n\",\n    \"                        #same category, or new one\\n\",\n    \"         
               if runbook_connector != root:\\n\",\n    \"                            # new category\\n\",\n    \"                            runbook_connector = root\\n\",\n    \"                            runbook_connector_list.append(runbook_connector)\\n\",\n    \"                            runbook_list += f\\\"\\\\n# {runbook_connector}\\\\n\\\"\\n\",\n    \"                            \\n\",\n    \"                            \\n\",\n    \"                        #now add in each runbook\\n\",\n    \"                        runbook_list += f\\\"* [{name}]({github_url}): {description}\\\\n\\\"\\n\",\n    \"            \\n\",\n    \"                    #generate the action list page\\n\",\n    \"                    if isAction:\\n\",\n    \"                        #have we created a category yet?\\n\",\n    \"                        if action_connector == \\\"\\\":\\n\",\n    \"                            action_connector = root\\n\",\n    \"                            action_connector_list.append(action_connector)\\n\",\n    \"                            action_list += f\\\"\\\\n# {action_connector}\\\\n\\\"\\n\",\n    \"                        #same category, or new one\\n\",\n    \"                        if action_connector != root:\\n\",\n    \"                            # new category\\n\",\n    \"                            action_connector = root\\n\",\n    \"                            action_connector_list.append(action_connector)\\n\",\n    \"                            action_list += f\\\"\\\\n# {action_connector}\\\\n\\\"\\n\",\n    \"                            \\n\",\n    \"                        #now add in each Action\\n\",\n    \"                        action_list += f\\\"* [{name}]({github_url}): {description}\\\\n\\\"\\n\",\n    \"                    \\n\",\n    \"                    \\n\",\n    \"\\n\",\n    \"                    #generate the readme for each connector\\n\",\n    \"                    #have we created a category 
yet?\\n\",\n    \"                    if connector_readme_connector == \\\"\\\":\\n\",\n    \"                        connector_readme_connector = root\\n\",\n    \"                        connector_readme = ''\\n\",\n    \"                        hasRunbook = False\\n\",\n    \"                        hasAction = False\\n\",\n    \"                    #same category, or new one\\n\",\n    \"                    if connector_readme_connector != root:\\n\",\n    \"                        # starting a new readme\\n\",\n    \"                        #first let's save the old one:\\n\",\n    \"                        #print(connector_readme)\\n\",\n    \"                        readme_file = f\\\"{connector_readme_connector}/README.md\\\"\\n\",\n    \"                        f  = open(readme_file, \\\"w+\\\")\\n\",\n    \"                        f.write(connector_readme)\\n\",\n    \"                        f.close()\\n\",\n    \"                        #now start building the new readme\\n\",\n    \"                        connector_readme_connector = root\\n\",\n    \"                        connector_readme = ''\\n\",\n    \"                        hasRunbook = False\\n\",\n    \"                        hasAction = False\\n\",\n    \"\\n\",\n    \"                    if data['type'] ==\\\"RunBook\\\":\\n\",\n    \"                        if not hasRunbook:\\n\",\n    \"                            connector_readme += f'# {root} RunBooks\\\\n'\\n\",\n    \"                            hasRunbook = True\\n\",\n    \"                        connector_readme += f\\\"* [{name}]({github_url}): {description}\\\\n\\\"\\n\",\n    \"                    if data['type'] ==\\\"Action\\\":\\n\",\n    \"                        if not hasAction:\\n\",\n    \"                            connector_readme += f'\\\\n# {root} Actions\\\\n'\\n\",\n    \"                            hasAction = True\\n\",\n    \"                        connector_readme += f\\\"* [{name}]({github_url}): 
{description}\\\\n\\\"\\n\",\n    \"    \\n\",\n    \"print(actionCount)\\n\",\n    \"print(action_list)\\n\",\n    \"#print(action_connector_list)\\n\",\n    \"#print(runbook_connector_list)\\n\",\n    \"#print(runBookCount, actionCount)\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 22,\n   \"id\": \"e492845d\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"## generate category list and category url list for runbooks\\n\",\n    \"\\n\",\n    \"notebook_categories = {}\\n\",\n    \"notebook_category_urls = {}\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"for notebook in notebooks:\\n\",\n    \"    #print(notebook)\\n\",\n    \"    #print(notebook['root'], notebook['name'])\\n\",\n    \"    if len(notebook['categories'])>0:\\n\",\n    \"        #print(notebook['categories'])\\n\",\n    \"        for category in notebook['categories']:\\n\",\n    \"            if not category in notebook_categories:\\n\",\n    \"                notebook_categories[category] = []\\n\",\n    \"            notebook_categories[category].append(notebook)\\n\",\n    \"            category_name = category[14:]\\n\",\n    \"            notebook_category_urls[category_name] = Path(f\\\"runbook_{category_name}.md\\\")\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 23,\n   \"id\": \"68d91e36\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##headers for al of the runbook pages \\n\",\n    \"category_header1 =\\\"|\\\"\\n\",\n    \"category_header2 =\\\"|\\\"\\n\",\n    \"category_table =\\\"|\\\"\\n\",\n    \"\\n\",\n    \"connector_header1 =\\\"|\\\"\\n\",\n    \"connector_header2 =\\\"|\\\"\\n\",\n    \"connector_table =\\\"|\\\"\\n\",\n    \"counter =0\\n\",\n    \"for connector in runbook_connector_list:\\n\",\n    \"    counter+= 1\\n\",\n    \"    if counter <3:\\n\",\n    \"        connector_header1 += f\\\" |\\\"\\n\",\n    
\"        connector_header2 += f\\\" ---|\\\"\\n\",\n    \"    connector_table += f\\\" [{connector}](xRunBook_List.md#{connector}) |\\\"      \\n\",\n    \"    if counter%3 ==0:\\n\",\n    \"        #start a new row of category\\n\",\n    \"        connector_table += f\\\"\\\\n |\\\"\\n\",\n    \"        \\n\",\n    \"connector_markdown_table = f\\\"{connector_header1} \\\\n {connector_header2} \\\\n {connector_table} \\\\n\\\\n\\\"\\n\",\n    \"\\n\",\n    \"#print(notebook_category_urls)\\n\",\n    \"counter =0\\n\",\n    \"for categoryname in notebook_category_urls:\\n\",\n    \"    counter+= 1\\n\",\n    \"    if counter <3:\\n\",\n    \"        category_header1 += f\\\" |\\\"\\n\",\n    \"        category_header2 += f\\\" ---|\\\"\\n\",\n    \"    category_table += f\\\" [{categoryname}]({notebook_category_urls[categoryname]}) |\\\"\\n\",\n    \"    if counter%3 ==0:\\n\",\n    \"        #start a new row of category\\n\",\n    \"        category_table += f\\\"\\\\n |\\\"    \\n\",\n    \"category_markdown_table = f\\\"{category_header1} \\\\n {category_header2} \\\\n {category_table} \\\\n\\\"\\n\",\n    \"#this builds the Runbook list page\\n\",\n    \"#lets try it without the categories.  There is \\\"too much\\\" category... 
not enough runbooks on the pages\\n\",\n    \"#runbook_list = f\\\"# RunBook Connectors:\\\\n {connector_markdown_table} \\\\n# RunBook Categories:\\\\n {category_markdown_table} \\\\n\\\\n {runbook_list}\\\"\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 24,\n   \"id\": \"6bfed614\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"## generate category lists from actions\\n\",\n    \"\\n\",\n    \"action_categories = {}\\n\",\n    \"action_category_urls = {}\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"for action in actions:\\n\",\n    \"    #print(action)\\n\",\n    \"    if len(action['categories'])>0:\\n\",\n    \"        #print(notebook['categories'])\\n\",\n    \"        for category in action['categories']:\\n\",\n    \"            if not category in action_categories:\\n\",\n    \"                action_categories[category] = []\\n\",\n    \"            action_categories[category].append(action)\\n\",\n    \"            category_name = category[14:]\\n\",\n    \"            action_category_urls[category_name] = Path(f\\\"action_{category_name}.md\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 25,\n   \"id\": \"d25256b7\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##headers for all of the Action pages \\n\",\n    \"category_header1 =\\\"|\\\"\\n\",\n    \"category_header2 =\\\"|\\\"\\n\",\n    \"category_table =\\\"|\\\"\\n\",\n    \"\\n\",\n    \"action_header1 =\\\"|\\\"\\n\",\n    \"action_header2 =\\\"|\\\"\\n\",\n    \"action_table =\\\"|\\\"\\n\",\n    \"gitbooklist = \\\"\\\"\\n\",\n    \"counter =0\\n\",\n    \"for action in action_connector_list:\\n\",\n    \"    counter+= 1\\n\",\n    \"    if counter <3:\\n\",\n    \"        action_header1 += f\\\" |\\\"\\n\",\n    \"        action_header2 += f\\\" ---|\\\"\\n\",\n    \"    action_url = 
f\\\"action_{action.upper()}.md\\\"\\n\",\n    \"    action_url= action_url.replace(\\\"KUBERNETES\\\", \\\"K8S\\\")\\n\",\n    \"    action_url = action_url.replace(\\\"_MONGO.\\\", \\\"_MONGODB.\\\")\\n\",\n    \"    action_url= action_url.replace(\\\"action_ELASTICSEARCH\\\", \\\"action_ES\\\")\\n\",\n    \"    action_table += f\\\" [{action}]({action_url}) |\\\"\\n\",\n    \"    #print(action_url)\\n\",\n    \"    gitbooklist += f\\\"      * [{action}](action_url) \\\\n\\\"\\n\",\n    \"    if counter%3 ==0:\\n\",\n    \"        #start a new row of category\\n\",\n    \"        action_table += f\\\"\\\\n |\\\"\\n\",\n    \"\\n\",\n    \"action_connector_markdown_table = f\\\"{action_header1} \\\\n {action_header2} \\\\n {action_table} \\\\n\\\\n\\\"\\n\",\n    \"#print(action_connector_markdown_table)\\n\",\n    \"counter = 0\\n\",\n    \"for categoryname in action_category_urls:\\n\",\n    \"    category_printed = categoryname\\n\",\n    \"    counter+= 1\\n\",\n    \"    if counter <3:\\n\",\n    \"        category_header1 += f\\\" |\\\"\\n\",\n    \"        category_header2 += f\\\" ---|\\\"\\n\",\n    \"    category_url = str(action_category_urls[categoryname])\\n\",\n    \"    category_url = category_url.replace(\\\"KUBERNETES\\\", \\\"K8S\\\")\\n\",\n    \"    action_url = action_url.replace(\\\"_MONGO.\\\", \\\"_MONGODB.\\\")\\n\",\n    \"    #category_url = category_url.replace(\\\"POSTGRES\\\", \\\"POSTGRESQL\\\")\\n\",\n    \"    category_table += f\\\" [{category_printed}]({category_url}) |\\\"\\n\",\n    \"    gitbooklist += f\\\"      * [{category_printed}](lists/{action_category_urls[categoryname]})\\\\n\\\"\\n\",\n    \"    if counter%3 ==0:\\n\",\n    \"        #start a new row of category\\n\",\n    \"        category_table += f\\\"\\\\n |\\\"\\n\",\n    \"\\n\",\n    \"action_category_markdown_table = f\\\"{category_header1} \\\\n {category_header2} \\\\n {category_table} \\\\n\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    
\"#print(action_connector_markdown_table)\\n\",\n    \"action_list =f\\\"# Actions By Connector:\\\\n{action_connector_markdown_table} \\\\n # Actions By Category: \\\\n{action_category_markdown_table}\\\\n\\\\n\\\\n\\\\n \\\"   \\n\",\n    \"#print(action_list)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 26,\n   \"id\": \"4f238d53\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"\\n\",\n    \"#generate pages for each Runbook category page\\n\",\n    \"category_name =\\\"\\\"\\n\",\n    \"category_listing = \\\"\\\"\\n\",\n    \"categoryList_filename = \\\"\\\"\\n\",\n    \"for category in notebook_categories:\\n\",\n    \"    #category change - we need a new header\\n\",\n    \"    if category_name ==\\\"\\\":\\n\",\n    \"        #new category\\n\",\n    \"        category_name = category[14:]\\n\",\n    \"        category_listing = f\\\"# RunBook Connectors:\\\\n {connector_markdown_table} \\\\n# RunBook Categories:\\\\n {category_markdown_table} \\\\n# Runbooks in {category_name.replace('_', ' ')}\\\\n\\\"\\n\",\n    \"        #all of the categories was too much. 
Pulling this for now\\n\",\n    \"        category_listing = \\\"\\\"\\n\",\n    \"    elif category_name != category:\\n\",\n    \"        # we have finished off a category\\n\",\n    \"        #save the oldcategory\\n\",\n    \"        #print(category_name, category_listing)\\n\",\n    \"        categoryList_filename = f\\\"lists/{notebook_category_urls[category_name]}\\\"\\n\",\n    \"        f  = open(categoryList_filename, \\\"w+\\\")\\n\",\n    \"        f.write(category_listing)\\n\",\n    \"        f.close()\\n\",\n    \"        category_name = category[14:]\\n\",\n    \"        category_listing = f\\\"# RunBook Connectors:\\\\n {connector_markdown_table} \\\\n# RunBook Categories:\\\\n {category_markdown_table}\\\\n # Runbooks in {category_name.replace('_', ' ')}\\\\n\\\"\\n\",\n    \"        #all of the categories was too much. Pulling this for now\\n\",\n    \"        category_listing = \\\"\\\"\\n\",\n    \"    #print(notebook_categories[category])\\n\",\n    \"    for runbook in notebook_categories[category]:\\n\",\n    \"        category_listing += f\\\"* {runbook['root']} [{runbook['name']}]({runbook['github_url']}): {runbook['description']}\\\\n\\\"\\n\",\n    \"#finished loop -wrote last category\\n\",\n    \"#print(category_listing)\\n\",\n    \"f  = open(categoryList_filename, \\\"w+\\\")\\n\",\n    \"f.write(category_listing)\\n\",\n    \"f.close()\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 27,\n   \"id\": \"81ea848a\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# generate all of the action category pages \\n\",\n    \"category_name =\\\"\\\"\\n\",\n    \"category_listing = \\\"\\\"\\n\",\n    \"for category in action_categories:\\n\",\n    \"    if category_name ==\\\"\\\":\\n\",\n    \"        #new category\\n\",\n    \"        \\n\",\n    \"        category_name = category[14:]\\n\",\n    \"        if category_name ==\\\"ELASTICSEARCH\\\":\\n\",\n    
\"            category_name ==\\\"ES\\\"\\n\",\n    \"        elif category_name ==\\\"KUBERNETES\\\":\\n\",\n    \"            category_name ==\\\"K8S\\\"\\n\",\n    \"        \\n\",\n    \"        elif category_name ==\\\"MONGO\\\":\\n\",\n    \"            print(\\\"MONGO\\\")\\n\",\n    \"            category_name ==\\\"MONGODB\\\"\\n\",\n    \"        category_listing = f\\\"# Actions in the {category_name.replace('_', ' ')} category\\\\n\\\"\\n\",\n    \"        #all of the categories was too much. Pulling this for now\\n\",\n    \"        category_listing = \\\"\\\"        \\n\",\n    \"    elif category_name != category:\\n\",\n    \"        # we have finished off a category\\n\",\n    \"        #save the oldcategory\\n\",\n    \"        #print(category_name, category_listing)\\n\",\n    \"        #print(category)\\n\",\n    \"        #print(category_listing)\\n\",\n    \"        #place the links at the bottom\\n\",\n    \"        #category_listing += f\\\"\\\\n# Actions By Connector:\\\\n{action_connector_markdown_table} \\\\n # Actions By Category: \\\\n{action_category_markdown_table} \\\\n\\\\n\\\"\\n\",\n    \"        categoryList_filename = f\\\"lists/{action_category_urls[category_name]}\\\"\\n\",\n    \"        f  = open(categoryList_filename, \\\"w+\\\")\\n\",\n    \"        f.write(category_listing)\\n\",\n    \"        f.close()\\n\",\n    \"        category_name = category[14:]\\n\",\n    \"        if category_name ==\\\"ELASTICSEARCH\\\":\\n\",\n    \"            category_name ==\\\"ES\\\"\\n\",\n    \"        elif category_name ==\\\"KUBERNETES\\\":\\n\",\n    \"            category_name ==\\\"K8S\\\"\\n\",\n    \"        elif category_name ==\\\"MONGO\\\":\\n\",\n    \"            print(\\\"mongo\\\")\\n\",\n    \"            category_name ==\\\"MONGODB\\\"\\n\",\n    \"        category_listing = \\\"\\\" #f\\\"# Actions in the {category_name.replace('_', ' ')} category\\\\n\\\"\\n\",\n    \"    \\n\",\n    \"    
#print(category_listing)\\n\",\n    \"    for action in action_categories[category]:\\n\",\n    \"        # removing **{action['root']}**: from each listing\\n\",\n    \"        category_listing += f\\\"* [{action['name']}]({action['github_url']}): {action['description']}\\\\n\\\\n\\\"\\n\",\n    \"# last category is compelted when loop ends\\n\",\n    \"\\n\",\n    \"#category_listing += f\\\"\\\\n# Actions By Connector:\\\\n{action_connector_markdown_table} \\\\n # Actions By Category: \\\\n{action_category_markdown_table} \\\\n\\\\n\\\"\\n\",\n    \"categoryList_filename = f\\\"lists/{action_category_urls[category_name]}\\\"\\n\",\n    \"f  = open(categoryList_filename, \\\"w+\\\")\\n\",\n    \"f.write(category_listing)\\n\",\n    \"f.close()\\n\",\n    \"        \\n\",\n    \"        \\n\",\n    \"#print(action_categories)\\n\",\n    \"#print(action_category_urls)\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"aboriginal-responsibility\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Preview the generated list\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"88e075a7-8341-45af-a250-c1594c004579\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Generate readme for github repository\"\n   ]\n  },\n  {\n   \"cell_type\": \"raw\",\n   \"id\": \"4c1ed44c\",\n   \"metadata\": {},\n   \"source\": []\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 28,\n   \"id\": \"2a95cba3-027c-4a57-8bfa-2ee1e9053bb7\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"tags\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"\\n\",\n    \"#generate a page of all the actions that are available\\n\",\n    \"ac_list= readme = \\\"lists/Action_list.md\\\"\\n\",\n    \"# Save README\\n\",\n    \"f  = open(ac_list, \\\"w+\\\")\\n\",\n    \"f.write(action_list)\\n\",\n    \"f.close()\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 29,\n   \"id\": 
\"70db7c84-57ad-4a92-a60b-9f20b60b5329\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#generate a page of all the Runbooks that are available\\n\",\n    \"rb_list = readme = \\\"lists/xRunBook_list.md\\\"\\n\",\n    \"# Save README\\n\",\n    \"f  = open(rb_list, \\\"w+\\\")\\n\",\n    \"f.write(runbook_list)\\n\",\n    \"f.close()\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 30,\n   \"id\": \"ee479c83\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#create JSON files that can be used t build badges for the website showing our Runbook and action counts\\n\",\n    \"\\n\",\n    \"import json\\n\",\n    \"#store Runbookcount json (to create shield on webpage)\\n\",\n    \"rb_list = readme = \\\".github/images/runbookShield.json\\\"\\n\",\n    \"runBookCount = str(runBookCount)\\n\",\n    \"json1 = {\\\"schemaVersion\\\": 1,\\\"label\\\": \\\"RunBook Count\\\",\\\"message\\\": runBookCount,\\\"color\\\": \\\"orange\\\"}\\n\",\n    \"\\n\",\n    \"# Save README\\n\",\n    \"\\n\",\n    \"with open(rb_list, \\\"w\\\") as outfile:\\n\",\n    \"    json.dump(json1, outfile)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"#store action count json (to create shield on webpage)\\n\",\n    \"rb_list = readme = \\\".github/images/actionShield.json\\\"\\n\",\n    \"actionCount = str(actionCount)\\n\",\n    \"json2 = {\\\"schemaVersion\\\": 1,\\\"label\\\": \\\"Action Count\\\",\\\"message\\\": actionCount,\\\"color\\\": \\\"green\\\"}\\n\",\n    \"\\n\",\n    \"# Save README\\n\",\n    \"with open(rb_list, \\\"w\\\") as outfile:\\n\",\n    \"    json.dump(json2, outfile)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"076fbbd0\",\n   \"metadata\": {},\n   \"source\": []\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"0041b5a5\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   
\"outputs\": [],\n   \"source\": []\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"af012c24\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": []\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"27801143\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": []\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Python 3 (ipykernel)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.6\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  },\n  \"widgets\": {\n   \"application/vnd.jupyter.widget-state+json\": {\n    \"state\": {},\n    \"version_major\": 2,\n    \"version_minor\": 0\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "generate_readme.nbconvert.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"latter-teddy\",\n   \"metadata\": {},\n   \"source\": [\n    \"\\n\",\n    \"# Generate Readme with up to date list of xRunBooks\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"421e4c6e-d5ef-4d53-9d36-f352426c4d87\",\n   \"metadata\": {\n    \"tags\": []\n   },\n   \"source\": [\n    \"## Input\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e84f7e80-dda2-4569-96dd-5abaaed2c73a\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Import libraries\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"id\": \"sitting-directory\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:20.966275Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:20.965745Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:23.825994Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:23.824909Z\"\n    },\n    \"tags\": []\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Defaulting to user installation because normal site-packages is not writeable\\r\\n\",\n      \"Collecting GitPython\\r\\n\",\n      \"  Downloading GitPython-3.1.31-py3-none-any.whl (184 kB)\\r\\n\",\n      \"\\u001b[2K     \\u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\\u001b[0m \\u001b[32m184.3/184.3 KB\\u001b[0m \\u001b[31m4.1 MB/s\\u001b[0m eta \\u001b[36m0:00:00\\u001b[0m\\r\\n\",\n      \"\\u001b[?25hCollecting gitdb<5,>=4.0.1\\r\\n\",\n      \"  Downloading gitdb-4.0.10-py3-none-any.whl (62 kB)\\r\\n\",\n      \"\\u001b[2K     \\u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\\u001b[0m \\u001b[32m62.7/62.7 KB\\u001b[0m \\u001b[31m17.0 MB/s\\u001b[0m eta \\u001b[36m0:00:00\\u001b[0m\\r\\n\",\n      \"\\u001b[?25hCollecting smmap<6,>=3.0.1\\r\\n\",\n      \"  Downloading smmap-5.0.0-py3-none-any.whl (24 kB)\\r\\n\",\n   
   \"Installing collected packages: smmap, gitdb, GitPython\\r\\n\",\n      \"Successfully installed GitPython-3.1.31 gitdb-4.0.10 smmap-5.0.0\\r\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"import os\\n\",\n    \"import json\\n\",\n    \"import requests\\n\",\n    \"import urllib.parse\\n\",\n    \"import copy\\n\",\n    \"from pathlib import Path\\n\",\n    \"import markdown\\n\",\n    \"import nbformat\\n\",\n    \"from nbconvert import MarkdownExporter\\n\",\n    \"from papermill.iorw import (\\n\",\n    \"    load_notebook_node,\\n\",\n    \"    write_ipynb,\\n\",\n    \")\\n\",\n    \"try:\\n\",\n    \"    from git import Repo\\n\",\n    \"except:\\n\",\n    \"    !pip install GitPython\\n\",\n    \"    from git import Repo\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"agricultural-contest\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Variables\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"id\": \"guided-edgar\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:23.830598Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:23.829762Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:23.836565Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:23.835758Z\"\n    },\n    \"tags\": []\n   },\n   \"output\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"# README variables\\n\",\n    \"readme_template = \\\"README_template.md\\\"\\n\",\n    \"readme = \\\"README.md\\\"\\n\",\n    \"replace_var = \\\"[[DYNAMIC_LIST]]\\\"\\n\",\n    \"badge_var = \\\"[[BADGE]]\\\"\\n\",\n    \"\\n\",\n    \"# welcome variables\\n\",\n    \"#this is a TODO\\n\",\n    \"#welcome_template = \\\"Welcome_template.ipynb\\\"\\n\",\n    \"#welcome = \\\"Welcome.ipynb\\\"\\n\",\n    \"#replace_var_quote = f'\\\"[[DYNAMIC_LIST]]\\\",\\\\n'\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# Others\\n\",\n    \"current_file = '.'\\n\",\n    \"notebook_ext 
= '.ipynb'\\n\",\n    \"github_url_base = 'https://github.com/unskript/Awesome-CloudOps-Automation/tree/master'\\n\",\n    \"local_OSS_url = 'http://127.0.0.1:8888/lab/tree'\\n\",\n    \"#fix these!\\n\",\n    \"github_download_url = 'https://raw.githubusercontent.com/unskript/Awesome-CloudOps-Automation/master/'\\n\",\n    \"unSkript_logo ='https://storage.googleapis.com/unskript-website/assets/favicon.png'\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"distinguished-declaration\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Get files list\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"id\": \"36c9011e-5f51-4779-8062-a627503100e1\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:23.840663Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:23.840285Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.399232Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:24.398348Z\"\n    },\n    \"tags\": []\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"ChatGPT Get Handle is missing categories\\n\",\n      \"Get Datadog Handle is missing categories\\n\",\n      \"Get GCP Handle is missing categories\\n\",\n      \"Get Github Handle is missing categories\\n\",\n      \"Get Grafana Handle is missing categories\\n\",\n      \"Get Jenkins Handle is missing categories\\n\",\n      \"Get Jira SDK Handle is missing categories\\n\",\n      \"Get Kafka Producer Handle is missing categories\\n\",\n      \"Get Kubernetes Handle is missing categories\\n\",\n      \"Get MongoDB Handle is missing categories\\n\",\n      \"Get MS-SQL Handle is missing categories\\n\",\n      \"Get MySQL Handle is missing categories\\n\",\n      \"Netbox Get Handle is missing categories\\n\",\n      \"Nomad Get Handle is missing categories\\n\",\n      \"Get Pingdom Handle is missing 
categories\\n\",\n      \"Get PostgreSQL Handle is missing categories\\n\",\n      \"Get Prometheus handle is missing categories\\n\",\n      \"Get Redis Handle is missing categories\\n\",\n      \"Get REST handle is missing categories\\n\",\n      \"Get SSH handle is missing categories\\n\",\n      \"Get Slack SDK Handle is missing categories\\n\",\n      \"Get Stripe Handle is missing categories\\n\",\n      \"Get terraform handle is missing categories\\n\",\n      \"487\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"repo = Repo('.')\\n\",\n    \"branch =  repo.active_branch\\n\",\n    \"list_of_dir = f\\\"https://api.github.com/repos/unskript/Awesome-CloudOps-Automation/git/trees/{branch}?recursive=1\\\"\\n\",\n    \"r_gh = requests.get(list_of_dir).json().get(\\\"tree\\\")\\n\",\n    \"#print(branch)\\n\",\n    \"notebooks = []\\n\",\n    \"actions=[]\\n\",\n    \"actionCount = 0\\n\",\n    \"runBookCount = 0\\n\",\n    \"\\n\",\n    \"main_readme_chart = \\\"\\\"\\n\",\n    \"runbook_list=\\\"\\\"\\n\",\n    \"runbook_connector = \\\"\\\"\\n\",\n    \"runbook_connector_list = []\\n\",\n    \"\\n\",\n    \"action_list = \\\"\\\"\\n\",\n    \"action_connector = \\\"\\\"\\n\",\n    \"action_connector_list = []\\n\",\n    \"\\n\",\n    \"connector_readme_connector =''\\n\",\n    \"connector_readme = ''\\n\",\n    \"hasRunbook = False\\n\",\n    \"hasAction = False\\n\",\n    \"\\n\",\n    \"if r_gh is not None:\\n\",\n    \"    for file in r_gh:\\n\",\n    \"\\n\",\n    \"        #only look at files we care about - ignore some of the directories that we don't need to scan\\n\",\n    \"        if \\\".github\\\" not in file.get(\\\"path\\\") and \\\".gitignore\\\" not in file.get(\\\"path\\\") and \\\"templates\\\" not in file.get(\\\"path\\\") and \\\"/\\\" in file.get(\\\"path\\\")and \\\"__init__\\\" not in file.get(\\\"path\\\") and \\\"custom\\\" not in file.get(\\\"path\\\")and \\\"unskript-ctl\\\" not in file.get(\\\"path\\\"):\\n\",\n    \"            
#runbooks are .ipynb, and actions are .py\\n\",\n    \"            if file.get(\\\"path\\\").endswith(\\\".ipynb\\\") or file.get(\\\"path\\\").endswith(\\\".py\\\"):\\n\",\n    \"                temp = file.get(\\\"path\\\").split(\\\"/\\\")\\n\",\n    \"                if temp == -1:\\n\",\n    \"                    data = {\\n\",\n    \"                        \\\"root\\\": None,\\n\",\n    \"                        \\\"filename\\\": file.get(\\\"path\\\")\\n\",\n    \"                    }\\n\",\n    \"                    notebooks.append(data)\\n\",\n    \"                    \\n\",\n    \"                else:\\n\",\n    \"                    isAction = False\\n\",\n    \"                    last_folder = \\\"\\\"\\n\",\n    \"                    file_name = temp[-1]\\n\",\n    \"                    filename_string = file_name[0:file_name.find(\\\".\\\")]\\n\",\n    \"\\n\",\n    \"                    temp.pop()\\n\",\n    \"                    path = \\\"\\\"\\n\",\n    \"                    for folder in temp:\\n\",\n    \"                        path = path + folder +\\\"/\\\"\\n\",\n    \"                        last_folder += \\\"/\\\" + folder\\n\",\n    \"                    \\n\",\n    \"                    # to be an action, the file must have the 2nd directory be lego, and there must be 3 layers of directory\\n\",\n    \"                    if len(temp) ==3 and temp[1] ==\\\"legos\\\":\\n\",\n    \"                        isAction = True\\n\",\n    \"                    #testing\\n\",\n    \"                    \\n\",\n    \"                    \\n\",\n    \"                    \\n\",\n    \"                    #JSON data\\n\",\n    \"                    filename_json = Path(path +\\\"/\\\"+filename_string+ \\\".json\\\")\\n\",\n    \"                    jsonData = json.loads(filename_json.read_text())\\n\",\n    \"                    \\n\",\n    \"                    \\n\",\n    \"                    ##we now have a path.. 
but only really need the root folder\\n\",\n    \"                    ## different ways to generate fior action vs runbook\\n\",\n    \"                    if isAction:\\n\",\n    \"                        actionCount += 1\\n\",\n    \"                        #this is an action folder\\n\",\n    \"                        #find first slash\\n\",\n    \"                        firstslash = last_folder.find(\\\"/\\\",1)\\n\",\n    \"                        root = last_folder[1:firstslash]\\n\",\n    \"                        name = jsonData['action_title']\\n\",\n    \"                        description = jsonData['action_description']\\n\",\n    \"                        if 'action_categories' in jsonData:\\n\",\n    \"                            categories = jsonData['action_categories']\\n\",\n    \"                        else:\\n\",\n    \"                            print(f\\\"{name} is missing categories\\\")\\n\",\n    \"                        polling = jsonData['action_supports_poll']\\n\",\n    \"                        iteration = jsonData['action_supports_iteration']\\n\",\n    \"                        #not the python file - but the readme\\n\",\n    \"                        github_url = f\\\"{github_url_base}{last_folder}/README.md\\\"   \\n\",\n    \"                        \\n\",\n    \"                        \\n\",\n    \"                    else:\\n\",\n    \"                        runBookCount+=1\\n\",\n    \"                        #root folder for notebooks\\n\",\n    \"                        root = last_folder[1:]\\n\",\n    \"                        name = jsonData['name']\\n\",\n    \"                        description = jsonData['description']\\n\",\n    \"                        categories = jsonData['categories']\\n\",\n    \"                        github_url = github_url_base+\\\"/\\\"+file.get(\\\"path\\\")\\n\",\n    \"                    \\n\",\n    \"                    data = {\\n\",\n    \"                        \\\"root\\\": 
root,\\n\",\n    \"                        \\\"filename\\\": file_name,\\n\",\n    \"                        \\\"name\\\": name,\\n\",\n    \"                        \\\"description\\\": description,\\n\",\n    \"                        \\\"categories\\\":categories,\\n\",\n    \"                        \\\"github_url\\\": github_url\\n\",\n    \"                    }\\n\",\n    \"                    \\n\",\n    \"                    if isAction:\\n\",\n    \"                        data['type'] = \\\"Action\\\"\\n\",\n    \"                        data['polling'] = polling\\n\",\n    \"                        data['iteration']=iteration\\n\",\n    \"                        actions.append(data)\\n\",\n    \"                    else:\\n\",\n    \"                        data['type'] = \\\"RunBook\\\"\\n\",\n    \"                        local_url = local_OSS_url+\\\"/\\\"+file.get(\\\"path\\\")\\n\",\n    \"                        data['local_url'] = local_url  \\n\",\n    \"                        notebooks.append(data)\\n\",\n    \"                        \\n\",\n    \"                        \\n\",\n    \"                    #generate the list of runbooks for tha main readme\\n\",\n    \"                    if not isAction:\\n\",\n    \"                        main_readme_chart += f\\\"|{root} |[{name}]({github_url}) | [Open in Browser]({local_url}) | \\\\n\\\"\\n\",\n    \"    \\n\",\n    \"                    #generate the runbook list page\\n\",\n    \"                    if not isAction:\\n\",\n    \"                        #have we created a category yet?\\n\",\n    \"                        if runbook_connector == \\\"\\\":\\n\",\n    \"                            runbook_connector = root\\n\",\n    \"                            runbook_connector_list.append(runbook_connector)\\n\",\n    \"                            runbook_list += f\\\"\\\\n# {runbook_connector}\\\\n\\\"\\n\",\n    \"                        #same category, or new one\\n\",\n    \"         
               if runbook_connector != root:\\n\",\n    \"                            # new category\\n\",\n    \"                            runbook_connector = root\\n\",\n    \"                            runbook_connector_list.append(runbook_connector)\\n\",\n    \"                            runbook_list += f\\\"\\\\n# {runbook_connector}\\\\n\\\"\\n\",\n    \"                            \\n\",\n    \"                            \\n\",\n    \"                        #now add in each runbook\\n\",\n    \"                        runbook_list += f\\\"* [{name}]({github_url}): {description}\\\\n\\\"\\n\",\n    \"            \\n\",\n    \"                    #generate the action list page\\n\",\n    \"                    if isAction:\\n\",\n    \"                        #have we created a category yet?\\n\",\n    \"                        if action_connector == \\\"\\\":\\n\",\n    \"                            action_connector = root\\n\",\n    \"                            action_connector_list.append(action_connector)\\n\",\n    \"                            action_list += f\\\"\\\\n# {action_connector}\\\\n\\\"\\n\",\n    \"                        #same category, or new one\\n\",\n    \"                        if action_connector != root:\\n\",\n    \"                            # new category\\n\",\n    \"                            action_connector = root\\n\",\n    \"                            action_connector_list.append(action_connector)\\n\",\n    \"                            action_list += f\\\"\\\\n# {action_connector}\\\\n\\\"\\n\",\n    \"                            \\n\",\n    \"                        #now add in each Action\\n\",\n    \"                        action_list += f\\\"* [{name}]({github_url}): {description}\\\\n\\\"\\n\",\n    \"                    \\n\",\n    \"                    \\n\",\n    \"\\n\",\n    \"                    #generate the readme for each connector\\n\",\n    \"                    #have we created a category 
yet?\\n\",\n    \"                    if connector_readme_connector == \\\"\\\":\\n\",\n    \"                        connector_readme_connector = root\\n\",\n    \"                        connector_readme = ''\\n\",\n    \"                        hasRunbook = False\\n\",\n    \"                        hasAction = False\\n\",\n    \"                    #same category, or new one\\n\",\n    \"                    if connector_readme_connector != root:\\n\",\n    \"                        # starting a new readme\\n\",\n    \"                        #first let's save the old one:\\n\",\n    \"                        #print(connector_readme)\\n\",\n    \"                        readme_file = f\\\"{connector_readme_connector}/README.md\\\"\\n\",\n    \"                        f  = open(readme_file, \\\"w+\\\")\\n\",\n    \"                        f.write(connector_readme)\\n\",\n    \"                        f.close()\\n\",\n    \"                        #now start building the new readme\\n\",\n    \"                        connector_readme_connector = root\\n\",\n    \"                        connector_readme = ''\\n\",\n    \"                        hasRunbook = False\\n\",\n    \"                        hasAction = False\\n\",\n    \"\\n\",\n    \"                    if data['type'] ==\\\"RunBook\\\":\\n\",\n    \"                        if not hasRunbook:\\n\",\n    \"                            connector_readme += f'# {root} RunBooks\\\\n'\\n\",\n    \"                            hasRunbook = True\\n\",\n    \"                        connector_readme += f\\\"* [{name}]({github_url}): {description}\\\\n\\\"\\n\",\n    \"                    if data['type'] ==\\\"Action\\\":\\n\",\n    \"                        if not hasAction:\\n\",\n    \"                            connector_readme += f'\\\\n# {root} Actions\\\\n'\\n\",\n    \"                            hasAction = True\\n\",\n    \"                        connector_readme += f\\\"* [{name}]({github_url}): 
{description}\\\\n\\\"\\n\",\n    \"    \\n\",\n    \"print(actionCount)\\n\",\n    \"#print(action_list)\\n\",\n    \"#print(action_connector_list)\\n\",\n    \"#print(runbook_connector_list)\\n\",\n    \"#print(runBookCount, actionCount)\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"id\": \"e492845d\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:24.407934Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:24.407433Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.414946Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:24.414227Z\"\n    }\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"## generate category list and category url list for runbooks\\n\",\n    \"\\n\",\n    \"notebook_categories = {}\\n\",\n    \"notebook_category_urls = {}\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"for notebook in notebooks:\\n\",\n    \"    #print(notebook)\\n\",\n    \"    #print(notebook['root'], notebook['name'])\\n\",\n    \"    if len(notebook['categories'])>0:\\n\",\n    \"        #print(notebook['categories'])\\n\",\n    \"        for category in notebook['categories']:\\n\",\n    \"            if not category in notebook_categories:\\n\",\n    \"                notebook_categories[category] = []\\n\",\n    \"            notebook_categories[category].append(notebook)\\n\",\n    \"            category_name = category[14:]\\n\",\n    \"            notebook_category_urls[category_name] = Path(f\\\"runbook_{category_name}.md\\\")\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"68d91e36\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:24.419947Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:24.418609Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.426325Z\",\n     \"shell.execute_reply\": 
\"2023-05-12T17:04:24.425696Z\"\n    }\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##headers for al of the runbook pages \\n\",\n    \"category_header1 =\\\"|\\\"\\n\",\n    \"category_header2 =\\\"|\\\"\\n\",\n    \"category_table =\\\"|\\\"\\n\",\n    \"\\n\",\n    \"connector_header1 =\\\"|\\\"\\n\",\n    \"connector_header2 =\\\"|\\\"\\n\",\n    \"connector_table =\\\"|\\\"\\n\",\n    \"counter =0\\n\",\n    \"for connector in runbook_connector_list:\\n\",\n    \"    counter+= 1\\n\",\n    \"    if counter <3:\\n\",\n    \"        connector_header1 += f\\\" |\\\"\\n\",\n    \"        connector_header2 += f\\\" ---|\\\"\\n\",\n    \"    connector_table += f\\\" [{connector}](xRunBook_List.md#{connector}) |\\\"      \\n\",\n    \"    if counter%3 ==0:\\n\",\n    \"        #start a new row of category\\n\",\n    \"        connector_table += f\\\"\\\\n |\\\"\\n\",\n    \"        \\n\",\n    \"connector_markdown_table = f\\\"{connector_header1} \\\\n {connector_header2} \\\\n {connector_table} \\\\n\\\\n\\\"\\n\",\n    \"\\n\",\n    \"#print(notebook_category_urls)\\n\",\n    \"counter =0\\n\",\n    \"for categoryname in notebook_category_urls:\\n\",\n    \"    counter+= 1\\n\",\n    \"    if counter <3:\\n\",\n    \"        category_header1 += f\\\" |\\\"\\n\",\n    \"        category_header2 += f\\\" ---|\\\"\\n\",\n    \"    category_table += f\\\" [{categoryname}]({notebook_category_urls[categoryname]}) |\\\"\\n\",\n    \"    if counter%3 ==0:\\n\",\n    \"        #start a new row of category\\n\",\n    \"        category_table += f\\\"\\\\n |\\\"    \\n\",\n    \"category_markdown_table = f\\\"{category_header1} \\\\n {category_header2} \\\\n {category_table} \\\\n\\\"\\n\",\n    \"#this builds the Runbook list page\\n\",\n    \"#lets try it without the categories.  There is \\\"too much\\\" category... 
not enough runbooks on their pages\\n\",\n    \"#runbook_list = f\\\"# RunBook Connectors:\\\\n {connector_markdown_table} \\\\n# RunBook Categories:\\\\n {category_markdown_table} \\\\n\\\\n {runbook_list}\\\"\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"6bfed614\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:24.431206Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:24.429886Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.443003Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:24.442338Z\"\n    }\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"## generate category lists from actions\\n\",\n    \"\\n\",\n    \"action_categories = {}\\n\",\n    \"action_category_urls = {}\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"for action in actions:\\n\",\n    \"    #print(action)\\n\",\n    \"    if len(action['categories'])>0:\\n\",\n    \"        #print(notebook['categories'])\\n\",\n    \"        for category in action['categories']:\\n\",\n    \"            if not category in action_categories:\\n\",\n    \"                action_categories[category] = []\\n\",\n    \"            action_categories[category].append(action)\\n\",\n    \"            category_name = category[14:]\\n\",\n    \"            action_category_urls[category_name] = Path(f\\\"action_{category_name}.md\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"id\": \"d25256b7\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:24.447974Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:24.446686Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.456213Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:24.455604Z\"\n    }\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"##headers for all of the Action pages \\n\",\n    
\"category_header1 =\\\"|\\\"\\n\",\n    \"category_header2 =\\\"|\\\"\\n\",\n    \"category_table =\\\"|\\\"\\n\",\n    \"\\n\",\n    \"action_header1 =\\\"|\\\"\\n\",\n    \"action_header2 =\\\"|\\\"\\n\",\n    \"action_table =\\\"|\\\"\\n\",\n    \"gitbooklist = \\\"\\\"\\n\",\n    \"counter =0\\n\",\n    \"for action in action_connector_list:\\n\",\n    \"    counter+= 1\\n\",\n    \"    if counter <3:\\n\",\n    \"        action_header1 += f\\\" |\\\"\\n\",\n    \"        action_header2 += f\\\" ---|\\\"\\n\",\n    \"    action_url = f\\\"action_{action.upper()}.md\\\"\\n\",\n    \"    action_url= action_url.replace(\\\"KUBERNETES\\\", \\\"K8S\\\")\\n\",\n    \"    action_url = action_url.replace(\\\"_MONGO.\\\", \\\"_MONGODB.\\\")\\n\",\n    \"    action_url= action_url.replace(\\\"action_ELASTICSEARCH\\\", \\\"action_ES\\\")\\n\",\n    \"    action_table += f\\\" [{action}]({action_url}) |\\\"\\n\",\n    \"    #print(action_url)\\n\",\n    \"    gitbooklist += f\\\"      * [{action}](action_url) \\\\n\\\"\\n\",\n    \"    if counter%3 ==0:\\n\",\n    \"        #start a new row of category\\n\",\n    \"        action_table += f\\\"\\\\n |\\\"\\n\",\n    \"\\n\",\n    \"action_connector_markdown_table = f\\\"{action_header1} \\\\n {action_header2} \\\\n {action_table} \\\\n\\\\n\\\"\\n\",\n    \"#print(action_connector_markdown_table)\\n\",\n    \"counter = 0\\n\",\n    \"for categoryname in action_category_urls:\\n\",\n    \"    category_printed = categoryname\\n\",\n    \"    counter+= 1\\n\",\n    \"    if counter <3:\\n\",\n    \"        category_header1 += f\\\" |\\\"\\n\",\n    \"        category_header2 += f\\\" ---|\\\"\\n\",\n    \"    category_url = str(action_category_urls[categoryname])\\n\",\n    \"    category_url = category_url.replace(\\\"KUBERNETES\\\", \\\"K8S\\\")\\n\",\n    \"    action_url = action_url.replace(\\\"_MONGO.\\\", \\\"_MONGODB.\\\")\\n\",\n    \"    #category_url = category_url.replace(\\\"POSTGRES\\\", 
\\\"POSTGRESQL\\\")\\n\",\n    \"    category_table += f\\\" [{category_printed}]({category_url}) |\\\"\\n\",\n    \"    gitbooklist += f\\\"      * [{category_printed}](lists/{action_category_urls[categoryname]})\\\\n\\\"\\n\",\n    \"    if counter%3 ==0:\\n\",\n    \"        #start a new row of category\\n\",\n    \"        category_table += f\\\"\\\\n |\\\"\\n\",\n    \"\\n\",\n    \"action_category_markdown_table = f\\\"{category_header1} \\\\n {category_header2} \\\\n {category_table} \\\\n\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"#print(action_connector_markdown_table)\\n\",\n    \"action_list =f\\\"# Actions By Connector:\\\\n{action_connector_markdown_table} \\\\n # Actions By Category: \\\\n{action_category_markdown_table}\\\\n\\\\n\\\\n\\\\n \\\"   \\n\",\n    \"#print(action_list)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"id\": \"4f238d53\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:24.461018Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:24.459745Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.472692Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:24.472044Z\"\n    }\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"\\n\",\n    \"#generate pages for each Runbook category page\\n\",\n    \"category_name =\\\"\\\"\\n\",\n    \"category_listing = \\\"\\\"\\n\",\n    \"for category in notebook_categories:\\n\",\n    \"    #category change - we need a new header\\n\",\n    \"    if category_name ==\\\"\\\":\\n\",\n    \"        #new category\\n\",\n    \"        category_name = category[14:]\\n\",\n    \"        category_listing = f\\\"# RunBook Connectors:\\\\n {connector_markdown_table} \\\\n# RunBook Categories:\\\\n {category_markdown_table} \\\\n# Runbooks in {category_name.replace('_', ' ')}\\\\n\\\"\\n\",\n    \"        #all of the categories was too much. 
Pulling this for now\\n\",\n    \"        category_listing = \\\"\\\"\\n\",\n    \"    elif category_name != category:\\n\",\n    \"        # we have finished off a category\\n\",\n    \"        #save the old category\\n\",\n    \"        #print(category_name, category_listing)\\n\",\n    \"        categoryList_filename = f\\\"lists/{notebook_category_urls[category_name]}\\\"\\n\",\n    \"        f  = open(categoryList_filename, \\\"w+\\\")\\n\",\n    \"        f.write(category_listing)\\n\",\n    \"        f.close()\\n\",\n    \"        category_name = category[14:]\\n\",\n    \"        category_listing = f\\\"# RunBook Connectors:\\\\n {connector_markdown_table} \\\\n# RunBook Categories:\\\\n {category_markdown_table}\\\\n # Runbooks in {category_name.replace('_', ' ')}\\\\n\\\"\\n\",\n    \"        #all of the categories was too much. Pulling this for now\\n\",\n    \"        category_listing = \\\"\\\"\\n\",\n    \"    #print(notebook_categories[category])\\n\",\n    \"    for runbook in notebook_categories[category]:\\n\",\n    \"        category_listing += f\\\"* {runbook['root']} [{runbook['name']}]({runbook['github_url']}): {runbook['description']}\\\\n\\\"\\n\",\n    \"#finished loop - write the last category\\n\",\n    \"#print(category_listing)\\n\",\n    \"f  = open(categoryList_filename, \\\"w+\\\")\\n\",\n    \"f.write(category_listing)\\n\",\n    \"f.close()\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"id\": \"81ea848a\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:24.477765Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:24.476494Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.529900Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:24.529091Z\"\n    }\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# generate all of the action category pages \\n\",\n    \"category_name =\\\"\\\"\\n\",\n    \"category_listing = 
\\\"\\\"\\n\",\n    \"for category in action_categories:\\n\",\n    \"    if category_name ==\\\"\\\":\\n\",\n    \"        #new category\\n\",\n    \"        \\n\",\n    \"        category_name = category[14:]\\n\",\n    \"        if category_name ==\\\"ELASTICSEARCH\\\":\\n\",\n    \"            category_name ==\\\"ES\\\"\\n\",\n    \"        elif category_name ==\\\"KUBERNETES\\\":\\n\",\n    \"            category_name ==\\\"K8S\\\"\\n\",\n    \"        \\n\",\n    \"        elif category_name ==\\\"MONGO\\\":\\n\",\n    \"            print(\\\"MONGO\\\")\\n\",\n    \"            category_name ==\\\"MONGODB\\\"\\n\",\n    \"        category_listing = f\\\"# Actions in the {category_name.replace('_', ' ')} category\\\\n\\\"\\n\",\n    \"        #all of the categories was too much. Pulling this for now\\n\",\n    \"        category_listing = \\\"\\\"        \\n\",\n    \"    elif category_name != category:\\n\",\n    \"        # we have finished off a category\\n\",\n    \"        #save the oldcategory\\n\",\n    \"        #print(category_name, category_listing)\\n\",\n    \"        #print(category)\\n\",\n    \"        #print(category_listing)\\n\",\n    \"        #place the links at the bottom\\n\",\n    \"        #category_listing += f\\\"\\\\n# Actions By Connector:\\\\n{action_connector_markdown_table} \\\\n # Actions By Category: \\\\n{action_category_markdown_table} \\\\n\\\\n\\\"\\n\",\n    \"        categoryList_filename = f\\\"lists/{action_category_urls[category_name]}\\\"\\n\",\n    \"        f  = open(categoryList_filename, \\\"w+\\\")\\n\",\n    \"        f.write(category_listing)\\n\",\n    \"        f.close()\\n\",\n    \"        category_name = category[14:]\\n\",\n    \"        if category_name ==\\\"ELASTICSEARCH\\\":\\n\",\n    \"            category_name ==\\\"ES\\\"\\n\",\n    \"        elif category_name ==\\\"KUBERNETES\\\":\\n\",\n    \"            category_name ==\\\"K8S\\\"\\n\",\n    \"        elif category_name 
==\\\"MONGO\\\":\\n\",\n    \"            print(\\\"mongo\\\")\\n\",\n    \"            category_name ==\\\"MONGODB\\\"\\n\",\n    \"        category_listing = \\\"\\\" #f\\\"# Actions in the {category_name.replace('_', ' ')} category\\\\n\\\"\\n\",\n    \"    \\n\",\n    \"    #print(category_listing)\\n\",\n    \"    for action in action_categories[category]:\\n\",\n    \"        # removing **{action['root']}**: from each listing\\n\",\n    \"        category_listing += f\\\"* [{action['name']}]({action['github_url']}): {action['description']}\\\\n\\\\n\\\"\\n\",\n    \"# last category is compelted when loop ends\\n\",\n    \"\\n\",\n    \"#category_listing += f\\\"\\\\n# Actions By Connector:\\\\n{action_connector_markdown_table} \\\\n # Actions By Category: \\\\n{action_category_markdown_table} \\\\n\\\\n\\\"\\n\",\n    \"categoryList_filename = f\\\"lists/{action_category_urls[category_name]}\\\"\\n\",\n    \"f  = open(categoryList_filename, \\\"w+\\\")\\n\",\n    \"f.write(category_listing)\\n\",\n    \"f.close()\\n\",\n    \"        \\n\",\n    \"        \\n\",\n    \"#print(action_categories)\\n\",\n    \"#print(action_category_urls)\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"aboriginal-responsibility\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Preview the generated list\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"88e075a7-8341-45af-a250-c1594c004579\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Generate readme for github repository\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"id\": \"younger-consensus\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:24.535754Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:24.534275Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.541944Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:24.541322Z\"\n    },\n    \"tags\": []\n   
},\n   \"outputs\": [],\n   \"source\": [\n    \"# Open README template\\n\",\n    \"template = open(readme_template).read()\\n\",\n    \"\\n\",\n    \"# Replace var to get list of templates in markdown format\\n\",\n    \"template = template.replace(replace_var, main_readme_chart)\\n\",\n    \"\\n\",\n    \"#create the action and runbook badges\\n\",\n    \"actionImage = f'https://img.shields.io/static/v1?label=ActionCount&message={actionCount}&color=green'\\n\",\n    \"runbookImage = f'https://img.shields.io/static/v1?label=xRunBookCount&message={runBookCount}&color=orange'\\n\",\n    \"actionbadge = f\\\"<img src={actionImage}>\\\"\\n\",\n    \"runbookbadge =  f\\\"<img src={runbookImage}>\\\"\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"#insert the badges on the readme\\n\",\n    \"badgeCode = actionbadge + runbookbadge\\n\",\n    \"template =  template.replace(badge_var, badgeCode)\\n\",\n    \"\\n\",\n    \"# Save README\\n\",\n    \"f  = open(readme, \\\"w+\\\")\\n\",\n    \"f.write(template)\\n\",\n    \"f.close()\"\n   ]\n  },\n  {\n   \"cell_type\": \"raw\",\n   \"id\": \"4c1ed44c\",\n   \"metadata\": {},\n   \"source\": []\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 11,\n   \"id\": \"2a95cba3-027c-4a57-8bfa-2ee1e9053bb7\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:24.546467Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:24.544956Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.551141Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:24.550347Z\"\n    },\n    \"tags\": []\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"\\n\",\n    \"#generate a page of all the actions that are available\\n\",\n    \"ac_list= readme = \\\"lists/Action_list.md\\\"\\n\",\n    \"# Save README\\n\",\n    \"f  = open(ac_list, \\\"w+\\\")\\n\",\n    \"f.write(action_list)\\n\",\n    \"f.close()\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   
\"execution_count\": 12,\n   \"id\": \"70db7c84-57ad-4a92-a60b-9f20b60b5329\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:24.554996Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:24.554699Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.559737Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:24.559083Z\"\n    }\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#generate a page of all the Runbooks that are available\\n\",\n    \"rb_list = readme = \\\"lists/xRunBook_list.md\\\"\\n\",\n    \"# Save README\\n\",\n    \"f  = open(rb_list, \\\"w+\\\")\\n\",\n    \"f.write(runbook_list)\\n\",\n    \"f.close()\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 13,\n   \"id\": \"ee479c83\",\n   \"metadata\": {\n    \"credentialsJson\": {},\n    \"execution\": {\n     \"iopub.execute_input\": \"2023-05-12T17:04:24.564042Z\",\n     \"iopub.status.busy\": \"2023-05-12T17:04:24.562668Z\",\n     \"iopub.status.idle\": \"2023-05-12T17:04:24.571948Z\",\n     \"shell.execute_reply\": \"2023-05-12T17:04:24.571034Z\"\n    }\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#create JSON files that can be used t build badges for the website showing our Runbook and action counts\\n\",\n    \"\\n\",\n    \"import json\\n\",\n    \"#store Runbookcount json (to create shield on webpage)\\n\",\n    \"rb_list = readme = \\\".github/images/runbookShield.json\\\"\\n\",\n    \"runBookCount = str(runBookCount)\\n\",\n    \"json1 = {\\\"schemaVersion\\\": 1,\\\"label\\\": \\\"RunBook Count\\\",\\\"message\\\": runBookCount,\\\"color\\\": \\\"orange\\\"}\\n\",\n    \"\\n\",\n    \"# Save README\\n\",\n    \"\\n\",\n    \"with open(rb_list, \\\"w\\\") as outfile:\\n\",\n    \"    json.dump(json1, outfile)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"#store action count json (to create shield on webpage)\\n\",\n    \"rb_list = readme = 
\\\".github/images/actionShield.json\\\"\\n\",\n    \"actionCount = str(actionCount)\\n\",\n    \"json2 = {\\\"schemaVersion\\\": 1,\\\"label\\\": \\\"Action Count\\\",\\\"message\\\": actionCount,\\\"color\\\": \\\"green\\\"}\\n\",\n    \"\\n\",\n    \"# Save README\\n\",\n    \"with open(rb_list, \\\"w\\\") as outfile:\\n\",\n    \"    json.dump(json2, outfile)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"076fbbd0\",\n   \"metadata\": {},\n   \"source\": []\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"0041b5a5\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": []\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"af012c24\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": []\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"27801143\",\n   \"metadata\": {\n    \"credentialsJson\": {}\n   },\n   \"outputs\": [],\n   \"source\": []\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Python 3 (ipykernel)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.10.6\"\n  },\n  \"vscode\": {\n   \"interpreter\": {\n    \"hash\": \"31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6\"\n   }\n  },\n  \"widgets\": {\n   \"application/vnd.jupyter.widget-state+json\": {\n    \"state\": {},\n    \"version_major\": 2,\n    \"version_minor\": 0\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "helm/.helmignore",
    "content": "# Patterns to ignore when building packages.\n# This supports shell glob matching, relative path matching, and\n# negation (prefixed with !). Only one pattern per line.\n.DS_Store\n# Common VCS dirs\n.git/\n.gitignore\n.bzr/\n.bzrignore\n.hg/\n.hgignore\n.svn/\n# Common backup files\n*.swp\n*.bak\n*.tmp\n*.orig\n*~\n# Various IDEs\n.project\n.idea/\n*.tmproj\n.vscode/\n"
  },
  {
    "path": "helm/full/Chart.yaml",
    "content": "apiVersion: v2\nname: awesome-runbooks\ndescription: A Helm chart for Kubernetes\n\n# A chart can be either an 'application' or a 'library' chart.\n#\n# Application charts are a collection of templates that can be packaged into versioned archives\n# to be deployed.\n#\n# Library charts provide useful utilities or functions for the chart developer. They're included as\n# a dependency of application charts to inject those utilities and functions into the rendering\n# pipeline. Library charts do not define any templates and therefore cannot be deployed.\ntype: application\n\n# This is the chart version. This version number should be incremented each time you make changes\n# to the chart and its templates, including the app version.\n# Versions are expected to follow Semantic Versioning (https://semver.org/)\nversion: 0.0.1\n\n# This is the version number of the application being deployed. This version number should be\n# incremented each time you make changes to the application. Versions are not expected to\n# follow Semantic Versioning. They should reflect the version the application is using.\n# It is recommended to use it with quotes.\nappVersion: \"0.0.1\"\n"
  },
  {
    "path": "helm/full/README.md",
    "content": "# unSkript Open Source Docker Helm Chart\nA Helm chart for installing and upgrading the [unSkript open source docker](https://github.com/unskript/Awesome-CloudOps-Automation#open-source-docker) image.\n\n# Prerequisites\nThis chart has no installation prerequisites.\n\n# Limitations\nThe current chart does not support [persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). Any credentials or runbooks you create will be lost when the pod dies.\n\n# Note\nWe recommend 1 vCPU and 400 MB of RAM for the best performance.\n"
  },
  {
    "path": "helm/full/templates/NOTES.txt",
    "content": "1. Get the application URL by running these commands:\n{{- if contains \"NodePort\" .Values.service.type }}\n  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath=\"{.spec.ports[0].nodePort}\" services {{ include \"awesome-runbooks.fullname\" . }})\n  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath=\"{.items[0].status.addresses[0].address}\")\n  echo http://$NODE_IP:$NODE_PORT\n{{- else if contains \"LoadBalancer\" .Values.service.type }}\n     NOTE: It may take a few minutes for the LoadBalancer IP to be available.\n           You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include \"awesome-runbooks.fullname\" . }}'\n  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include \"awesome-runbooks.fullname\" . }} --template \"{{\"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}\"}}\")\n  echo http://$SERVICE_IP:{{ .Values.service.port }}\n{{- else if contains \"ClusterIP\" .Values.service.type }}\n  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l \"app.kubernetes.io/name={{ include \"awesome-runbooks.name\" . }},app.kubernetes.io/instance={{ .Release.Name }}\" -o jsonpath=\"{.items[0].metadata.name}\")\n  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath=\"{.spec.containers[0].ports[0].containerPort}\")\n  echo \"Visit http://127.0.0.1:8888 to use your application\"\n  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8888:$CONTAINER_PORT\n{{- end }}\n"
  },
  {
    "path": "helm/full/templates/_helpers.tpl",
    "content": "{{/*\nExpand the name of the chart.\n*/}}\n{{- define \"awesome-runbooks.name\" -}}\n{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix \"-\" }}\n{{- end }}\n\n{{/*\nCreate a default fully qualified app name.\nWe truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).\nIf release name contains chart name it will be used as a full name.\n*/}}\n{{- define \"awesome-runbooks.fullname\" -}}\n{{- if .Values.fullnameOverride }}\n{{- .Values.fullnameOverride | trunc 63 | trimSuffix \"-\" }}\n{{- else }}\n{{- $name := default .Chart.Name .Values.nameOverride }}\n{{- if contains $name .Release.Name }}\n{{- .Release.Name | trunc 63 | trimSuffix \"-\" }}\n{{- else }}\n{{- printf \"%s-%s\" .Release.Name $name | trunc 63 | trimSuffix \"-\" }}\n{{- end }}\n{{- end }}\n{{- end }}\n\n{{/*\nCreate chart name and version as used by the chart label.\n*/}}\n{{- define \"awesome-runbooks.chart\" -}}\n{{- printf \"%s-%s\" .Chart.Name .Chart.Version | replace \"+\" \"_\" | trunc 63 | trimSuffix \"-\" }}\n{{- end }}\n\n{{/*\nCommon labels\n*/}}\n{{- define \"awesome-runbooks.labels\" -}}\nhelm.sh/chart: {{ include \"awesome-runbooks.chart\" . }}\n{{ include \"awesome-runbooks.selectorLabels\" . }}\n{{- if .Chart.AppVersion }}\napp.kubernetes.io/version: {{ .Chart.AppVersion | quote }}\n{{- end }}\napp.kubernetes.io/managed-by: {{ .Release.Service }}\n{{- end }}\n\n{{/*\nSelector labels\n*/}}\n{{- define \"awesome-runbooks.selectorLabels\" -}}\napp.kubernetes.io/name: {{ include \"awesome-runbooks.name\" . }}\napp.kubernetes.io/instance: {{ .Release.Name }}\n{{- end }}\n\n{{/*\nCreate the name of the service account to use\n*/}}\n{{- define \"awesome-runbooks.serviceAccountName\" -}}\n{{- if .Values.serviceAccount.create }}\n{{- default (include \"awesome-runbooks.fullname\" .) .Values.serviceAccount.name }}\n{{- else }}\n{{- default \"default\" .Values.serviceAccount.name }}\n{{- end }}\n{{- end }}\n"
  },
  {
    "path": "helm/full/templates/deployment.yaml",
    "content": "{{- if eq .Values.useStatefulSet false}}\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: {{ include \"awesome-runbooks.fullname\" . }}\n  labels:\n    {{- include \"awesome-runbooks.labels\" . | nindent 4 }}\nspec:\n  {{- if not .Values.autoscaling.enabled }}\n  replicas: {{ .Values.replicaCount }}\n  {{- end }}\n  selector:\n    matchLabels:\n      {{- include \"awesome-runbooks.selectorLabels\" . | nindent 6 }}\n  template:\n    metadata:\n      {{- with .Values.podAnnotations }}\n      annotations:\n        {{- toYaml . | nindent 8 }}\n      {{- end }}\n      labels:\n        {{- include \"awesome-runbooks.selectorLabels\" . | nindent 8 }}\n    spec:\n      {{- with .Values.imagePullSecrets }}\n      imagePullSecrets:\n        {{- toYaml . | nindent 8 }}\n      {{- end }}\n      serviceAccountName: {{ include \"awesome-runbooks.serviceAccountName\" . }}\n      securityContext:\n        {{- toYaml .Values.podSecurityContext | nindent 8 }}\n      containers:\n        - name: {{ .Chart.Name }}\n          securityContext:\n            {{- toYaml .Values.securityContext | nindent 12 }}\n          image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\"\n          imagePullPolicy: {{ .Values.image.pullPolicy }}\n          ports:\n            - name: http\n              containerPort: {{ .Values.service.port }}\n              protocol: TCP\n          livenessProbe:\n            httpGet:\n              path: {{ if .Values.clientId }}/{{ .Values.clientId }}{{ end }}/api/kernelspecs\n              port: http\n            initialDelaySeconds: {{ .Values.common.initialDelaySeconds }}\n            periodSeconds: {{ .Values.common.periodSeconds }}\n            timeoutSeconds: {{ .Values.common.timeoutSeconds }}\n          readinessProbe:\n            httpGet:\n              path: {{ if .Values.clientId }}/{{ .Values.clientId }}{{ end }}/api/kernelspecs\n              port: http\n            initialDelaySeconds: 
{{ .Values.common.initialDelaySeconds }}\n            periodSeconds: {{ .Values.common.periodSeconds }}\n            successThreshold: {{ .Values.common.successThreshold }}\n            timeoutSeconds: {{ .Values.common.timeoutSeconds }}\n          env:\n            - name: CLIENT_ID\n              value: {{ .Values.clientId }}\n          resources:\n            {{- toYaml .Values.resources | nindent 12 }}\n      {{- with .Values.nodeSelector }}\n      nodeSelector:\n        {{- toYaml . | nindent 8 }}\n      {{- end }}\n      {{- with .Values.affinity }}\n      affinity:\n        {{- toYaml . | nindent 8 }}\n      {{- end }}\n      {{- with .Values.tolerations }}\n      tolerations:\n        {{- toYaml . | nindent 8 }}\n      {{- end }}\n\n{{- end }}"
  },
  {
    "path": "helm/full/templates/service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: {{ include \"awesome-runbooks.fullname\" . }}\n  labels:\n    {{- include \"awesome-runbooks.labels\" . | nindent 4 }}\nspec:\n  type: {{ .Values.service.type }}\n  ports:\n    - port: {{ .Values.service.port }}\n      targetPort: http\n      protocol: TCP\n      name: http\n  selector:\n    {{- include \"awesome-runbooks.selectorLabels\" . | nindent 4 }}\n"
  },
  {
    "path": "helm/full/templates/serviceaccount.yaml",
    "content": "{{- if .Values.serviceAccount.create -}}\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: {{ include \"awesome-runbooks.serviceAccountName\" . }}\n  labels:\n    {{- include \"awesome-runbooks.labels\" . | nindent 4 }}\n  {{- with .Values.serviceAccount.annotations }}\n  annotations:\n    {{- toYaml . | nindent 4 }}\n  {{- end }}\n{{- end }}\n"
  },
  {
    "path": "helm/full/templates/statefulset.yaml",
    "content": "{{- if .Values.useStatefulSet }}\n{{- if .Values.persistence.enabled }}\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n  name: {{ include \"awesome-runbooks.fullname\" . }}\n  namespace: {{ .Values.common.namespace }}\n  labels:\n    {{ .Values.common.labels | nindent 4}}\nspec:\n  podManagementPolicy: Parallel\n  replicas: {{ .Values.replicaCount }}\n  revisionHistoryLimit: 10\n  selector:\n    matchLabels:\n      {{ .Values.common.labels | nindent 8 }}\n  serviceName: {{ include \"awesome-runbooks.fullname\" . }}\n  template:\n    metadata:\n      labels:\n        {{ .Values.common.labels | nindent 8 }}\n    spec:\n      dnsPolicy: ClusterFirst\n      restartPolicy: Always\n      schedulerName: default-scheduler\n      securityContext: {}\n      terminationGracePeriodSeconds: {{ .Values.common.terminationGracePeriodSeconds }}\n      containers:\n      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}\n        imagePullPolicy: {{ .Values.image.pullPolicy }}\n        name: {{ include \"awesome-runbooks.fullname\" . 
}}\n        ports:\n        - containerPort: {{ .Values.common.port }}\n          protocol: TCP\n          name: jupyterlab-http\n        livenessProbe:\n          httpGet:\n            path: {{ if .Values.clientId }}/{{ .Values.clientId }}{{ end }}/api/kernelspecs\n            port: jupyterlab-http\n          initialDelaySeconds: {{ .Values.common.initialDelaySeconds }}\n          periodSeconds: {{ .Values.common.periodSeconds }}\n          timeoutSeconds: {{ .Values.common.timeoutSeconds }}\n        readinessProbe:\n          failureThreshold: 3\n          httpGet:\n            path: {{ if .Values.clientId }}/{{ .Values.clientId }}{{ end }}/api/kernelspecs\n            port: jupyterlab-http\n          initialDelaySeconds: {{ .Values.common.initialDelaySeconds }}\n          periodSeconds: {{ .Values.common.periodSeconds }}\n          successThreshold: {{ .Values.common.successThreshold }}\n          timeoutSeconds: {{ .Values.common.timeoutSeconds }}\n        env:\n          - name: CLIENT_ID\n            value: {{ .Values.clientId }}\n        resources:\n          {{- toYaml .Values.resources | nindent 10 }}\n        securityContext:\n          {{- toYaml .Values.securityContext | nindent 10 }}\n        volumeMounts:\n          - mountPath: {{ .Values.persistence.mountPath }}\n            name: {{ .Values.persistence.name }}\n        terminationMessagePath: /dev/termination-log\n        terminationMessagePolicy: File\n\n  updateStrategy:\n    rollingUpdate:\n      partition: 0\n    type: RollingUpdate\n  volumeClaimTemplates:\n  - metadata:\n      name: {{ .Values.persistence.name }}\n    spec:\n      accessModes:\n        {{- range .Values.persistence.accessModes }}\n          - {{ . | quote }}\n        {{- end }}\n      resources:\n        requests:\n          storage: {{ .Values.persistence.size }}\n\n{{- end }}\n{{- end }}\n"
  },
  {
    "path": "helm/full/values.yaml",
    "content": "# Default values for awesome-runbooks.\n# This is a YAML-formatted file.\n# Declare variables to be passed into your templates.\n\nreplicaCount: 1\n\n# Identifier which ca be used to differentiate between multiple instances.\n# Will use the value to run server with base_url. If empty then no base_url applied\nclientId: \"\"\n\nimage:\n  repository: unskript/awesome-runbooks\n  pullPolicy: Always\n  # Overrides the image tag whose default is the chart appVersion.\n  tag: \"latest\"\n\nimagePullSecrets: []\nnameOverride: \"\"\nfullnameOverride: \"\"\n\nserviceAccount:\n  # Specifies whether a service account should be created\n  create: true\n  # Annotations to add to the service account\n  annotations: {}\n  # The name of the service account to use.\n  # If not set and create is true, a name is generated using the fullname template\n  name: \"\"\n\npodAnnotations: {}\n\npodSecurityContext: {}\n  # fsGroup: 2000\n\n# Flag to indicate whether to use stateful-set or not\nuseStatefulSet: true\n\n# This section is common variables defined\n# Which is used in the template/*.yaml files\ncommon:\n  # Default namespace to be used. 
Change it as you see fit\n  namespace: \"awesome-ops\"\n  # ContainerPort to listen on\n  port: 8888\n  # Labels that should be attached to the POD\n  labels: \"app: awesome-runbooks\"\n  # These settings are recommended for optimal operation\n  # of the POD\n  initialDelaySeconds: 5\n  periodSeconds: 30\n  successThreshold: 3\n  timeoutSeconds: 15\n  terminationGracePeriodSeconds: 30\n\n# Persistence\npersistence:\n  # Flag to enable creation of PVC\n  enabled: true\n  # Name of the PVC\n  name: \"awesome-runbooks-pvc\"\n  # PVC Storage Class, by default not set\n  storageClassName: \"\"\n  # PV Access Mode.\n  accessModes:\n    - ReadWriteOnce\n  # Size of the PV\n  size: 1Gi\n  # Any Annotations for the PVC\n  annotations: {}\n  # MountPath that will be visible on the POD\n  mountPath: /unskript\n  # Any subdirectory under the mounthPath, default is at the root level\n  subPath: \"\"\n  # You can finetune volume template fields below\n  volumeClaimTemplate:\n    selector: {}\n    requests: {}\n    dataSource: {}\n\nsecurityContext:\n  capabilities:\n    add:\n      - NET_ADMIN\n  privileged: true\n  # capabilities:\n  #   drop:\n  #   - ALL\n  # readOnlyRootFilesystem: true\n  # runAsNonRoot: true\n  # runAsUser: 1000\n\nservice:\n  type: ClusterIP\n  port: 8888\n\nresources:\n  # We recommend using 1vCPU and 2Gb of RAM to get the best performance\n  limits:\n     cpu: \"1.0\"\n     memory: \"2Gi\"\n  requests:\n     cpu: \"1.0\"\n     memory: \"2Gi\"\n\n\nautoscaling:\n  enabled: false\n  minReplicas: 1\n  maxReplicas: 3\n  targetCPUUtilizationPercentage: 80\n\nnodeSelector: {}\n\ntolerations: []\n\naffinity: {}\n"
  },
  {
    "path": "helm/minimal/Chart.yaml",
    "content": "apiVersion: v2\nname: awesome-runbooks\ndescription: A Helm chart for Kubernetes\n\n# A chart can be either an 'application' or a 'library' chart.\n#\n# Application charts are a collection of templates that can be packaged into versioned archives\n# to be deployed.\n#\n# Library charts provide useful utilities or functions for the chart developer. They're included as\n# a dependency of application charts to inject those utilities and functions into the rendering\n# pipeline. Library charts do not define any templates and therefore cannot be deployed.\ntype: application\n\n# This is the chart version. This version number should be incremented each time you make changes\n# to the chart and its templates, including the app version.\n# Versions are expected to follow Semantic Versioning (https://semver.org/)\nversion: 0.0.1\n\n# This is the version number of the application being deployed. This version number should be\n# incremented each time you make changes to the application. Versions are not expected to\n# follow Semantic Versioning. They should reflect the version the application is using.\n# It is recommended to use it with quotes.\nappVersion: \"0.0.1\"\n"
  },
  {
    "path": "helm/minimal/README.md",
    "content": "# unSkript Open source docker Helm Chart\nA Helm Chart for installing and upgrading [unSkript open source docker](https://github.com/unskript/Awesome-CloudOps-Automation#open-source-docker)\n\n# Prerequisites\nThe installation of this Chart does not have prerequisites.\n\n# Limitations\nThe current chart does not have support for [persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). Any credential/runbook created would be lost once the pod dies.\n\n\n# Note\nWe recommend using 1vCPU and 400Mb of RAM to get the best performance\n"
  },
  {
    "path": "helm/minimal/templates/NOTES.txt",
    "content": "1. Get the application URL by running these commands:\n{{- if contains \"NodePort\" .Values.service.type }}\n  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath=\"{.spec.ports[0].nodePort}\" services {{ include \"awesome-runbooks.fullname\" . }})\n  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath=\"{.items[0].status.addresses[0].address}\")\n  echo http://$NODE_IP:$NODE_PORT\n{{- else if contains \"LoadBalancer\" .Values.service.type }}\n     NOTE: It may take a few minutes for the LoadBalancer IP to be available.\n           You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include \"awesome-runbooks.fullname\" . }}'\n  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include \"awesome-runbooks.fullname\" . }} --template \"{{\"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}\"}}\")\n  echo http://$SERVICE_IP:{{ .Values.service.port }}\n{{- else if contains \"ClusterIP\" .Values.service.type }}\n  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l \"app.kubernetes.io/name={{ include \"awesome-runbooks.name\" . }},app.kubernetes.io/instance={{ .Release.Name }}\" -o jsonpath=\"{.items[0].metadata.name}\")\n  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath=\"{.spec.containers[0].ports[0].containerPort}\")\n  echo \"Visit http://127.0.0.1:8888 to use your application\"\n  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8888:$CONTAINER_PORT\n{{- end }}\n"
  },
  {
    "path": "helm/minimal/templates/_helpers.tpl",
    "content": "{{/*\nExpand the name of the chart.\n*/}}\n{{- define \"awesome-runbooks.name\" -}}\n{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix \"-\" }}\n{{- end }}\n\n{{/*\nCreate a default fully qualified app name.\nWe truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).\nIf release name contains chart name it will be used as a full name.\n*/}}\n{{- define \"awesome-runbooks.fullname\" -}}\n{{- if .Values.fullnameOverride }}\n{{- .Values.fullnameOverride | trunc 63 | trimSuffix \"-\" }}\n{{- else }}\n{{- $name := default .Chart.Name .Values.nameOverride }}\n{{- if contains $name .Release.Name }}\n{{- .Release.Name | trunc 63 | trimSuffix \"-\" }}\n{{- else }}\n{{- printf \"%s-%s\" .Release.Name $name | trunc 63 | trimSuffix \"-\" }}\n{{- end }}\n{{- end }}\n{{- end }}\n\n{{/*\nCreate chart name and version as used by the chart label.\n*/}}\n{{- define \"awesome-runbooks.chart\" -}}\n{{- printf \"%s-%s\" .Chart.Name .Chart.Version | replace \"+\" \"_\" | trunc 63 | trimSuffix \"-\" }}\n{{- end }}\n\n{{/*\nCommon labels\n*/}}\n{{- define \"awesome-runbooks.labels\" -}}\nhelm.sh/chart: {{ include \"awesome-runbooks.chart\" . }}\n{{ include \"awesome-runbooks.selectorLabels\" . }}\n{{- if .Chart.AppVersion }}\napp.kubernetes.io/version: {{ .Chart.AppVersion | quote }}\n{{- end }}\napp.kubernetes.io/managed-by: {{ .Release.Service }}\n{{- end }}\n\n{{/*\nSelector labels\n*/}}\n{{- define \"awesome-runbooks.selectorLabels\" -}}\napp.kubernetes.io/name: {{ include \"awesome-runbooks.name\" . }}\napp.kubernetes.io/instance: {{ .Release.Name }}\n{{- end }}\n\n{{/*\nCreate the name of the service account to use\n*/}}\n{{- define \"awesome-runbooks.serviceAccountName\" -}}\n{{- if .Values.serviceAccount.create }}\n{{- default (include \"awesome-runbooks.fullname\" .) .Values.serviceAccount.name }}\n{{- else }}\n{{- default \"default\" .Values.serviceAccount.name }}\n{{- end }}\n{{- end }}\n"
  },
  {
    "path": "helm/minimal/templates/deployment.yaml",
    "content": "{{- if eq .Values.useStatefulSet false}}\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: {{ include \"awesome-runbooks.fullname\" . }}\n  labels:\n    {{- include \"awesome-runbooks.labels\" . | nindent 4 }}\nspec:\n  {{- if not .Values.autoscaling.enabled }}\n  replicas: {{ .Values.replicaCount }}\n  {{- end }}\n  selector:\n    matchLabels:\n      {{- include \"awesome-runbooks.selectorLabels\" . | nindent 6 }}\n  template:\n    metadata:\n      {{- with .Values.podAnnotations }}\n      annotations:\n        {{- toYaml . | nindent 8 }}\n      {{- end }}\n      labels:\n        {{- include \"awesome-runbooks.selectorLabels\" . | nindent 8 }}\n    spec:\n      {{- with .Values.imagePullSecrets }}\n      imagePullSecrets:\n        {{- toYaml . | nindent 8 }}\n      {{- end }}\n      serviceAccountName: {{ include \"awesome-runbooks.serviceAccountName\" . }}\n      securityContext:\n        {{- toYaml .Values.podSecurityContext | nindent 8 }}\n      containers:\n        - name: {{ .Chart.Name }}\n          securityContext:\n            {{- toYaml .Values.securityContext | nindent 12 }}\n          image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\"\n          imagePullPolicy: {{ .Values.image.pullPolicy }}\n          ports:\n            - name: http\n              containerPort: {{ .Values.service.port }}\n              protocol: TCP\n          livenessProbe:\n            httpGet:\n              path: / \n              port: http\n            initialDelaySeconds: {{ .Values.common.initialDelaySeconds }}\n            periodSeconds: {{ .Values.common.periodSeconds }}\n            timeoutSeconds: {{ .Values.common.timeoutSeconds }}\n          readinessProbe:\n            httpGet:\n              path: / \n              port: http\n            initialDelaySeconds: {{ .Values.common.initialDelaySeconds }}\n            periodSeconds: {{ .Values.common.periodSeconds }}\n            successThreshold: {{ 
.Values.common.successThreshold }}\n            timeoutSeconds: {{ .Values.common.timeoutSeconds }}\n          resources:\n            {{- toYaml .Values.resources | nindent 12 }}\n      {{- with .Values.nodeSelector }}\n      nodeSelector:\n        {{- toYaml . | nindent 8 }}\n      {{- end }}\n      {{- with .Values.affinity }}\n      affinity:\n        {{- toYaml . | nindent 8 }}\n      {{- end }}\n      {{- with .Values.tolerations }}\n      tolerations:\n        {{- toYaml . | nindent 8 }}\n      {{- end }}\n\n{{- end }}\n"
  },
  {
    "path": "helm/minimal/templates/service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: {{ include \"awesome-runbooks.fullname\" . }}\n  labels:\n    {{- include \"awesome-runbooks.labels\" . | nindent 4 }}\nspec:\n  type: {{ .Values.service.type }}\n  ports:\n    - port: {{ .Values.service.port }}\n      targetPort: http\n      protocol: TCP\n      name: http\n  selector:\n    {{- include \"awesome-runbooks.selectorLabels\" . | nindent 4 }}\n"
  },
  {
    "path": "helm/minimal/templates/serviceaccount.yaml",
    "content": "{{- if .Values.serviceAccount.create -}}\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: {{ include \"awesome-runbooks.serviceAccountName\" . }}\n  labels:\n    {{- include \"awesome-runbooks.labels\" . | nindent 4 }}\n  {{- with .Values.serviceAccount.annotations }}\n  annotations:\n    {{- toYaml . | nindent 4 }}\n  {{- end }}\n{{- end }}\n"
  },
  {
    "path": "helm/minimal/templates/statefulset.yaml",
    "content": "{{- if .Values.useStatefulSet }}\n{{- if .Values.persistence.enabled }}\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n  name: {{ include \"awesome-runbooks.fullname\" . }}\n  namespace: {{ .Values.common.namespace }}\n  labels:\n    {{ .Values.common.labels | nindent 4}}\nspec:\n  podManagementPolicy: Parallel\n  replicas: {{ .Values.replicaCount }}\n  revisionHistoryLimit: 10\n  selector:\n    matchLabels:\n      {{ .Values.common.labels | nindent 8 }}\n  serviceName: {{ include \"awesome-runbooks.fullname\" . }}\n  template:\n    metadata:\n      labels:\n        {{ .Values.common.labels | nindent 8 }}\n    spec:\n      dnsPolicy: ClusterFirst\n      restartPolicy: Always\n      schedulerName: default-scheduler\n      securityContext: {}\n      terminationGracePeriodSeconds: {{ .Values.common.terminationGracePeriodSeconds }}\n      containers:\n      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}\n        imagePullPolicy: {{ .Values.image.pullPolicy }}\n        name: {{ include \"awesome-runbooks.fullname\" . 
}}\n        ports:\n        - containerPort: {{ .Values.common.port }}\n          protocol: TCP\n          name: jupyterlab-http\n        livenessProbe:\n          httpGet:\n            path: / \n            port: jupyterlab-http\n          initialDelaySeconds: {{ .Values.common.initialDelaySeconds }}\n          periodSeconds: {{ .Values.common.periodSeconds }}\n          timeoutSeconds: {{ .Values.common.timeoutSeconds }}\n        readinessProbe:\n          failureThreshold: 3\n          httpGet:\n            path: / \n            port: jupyterlab-http\n          initialDelaySeconds: {{ .Values.common.initialDelaySeconds }}\n          periodSeconds: {{ .Values.common.periodSeconds }}\n          successThreshold: {{ .Values.common.successThreshold }}\n          timeoutSeconds: {{ .Values.common.timeoutSeconds }}\n        resources:\n          {{- toYaml .Values.resources | nindent 10 }}\n        securityContext:\n          {{- toYaml .Values.securityContext | nindent 10 }}\n        volumeMounts:\n          - mountPath: {{ .Values.persistence.mountPath }}\n            name: {{ .Values.persistence.name }}\n        terminationMessagePath: /dev/termination-log\n        terminationMessagePolicy: File\n\n  updateStrategy:\n    rollingUpdate:\n      partition: 0\n    type: RollingUpdate\n  volumeClaimTemplates:\n  - metadata:\n      name: {{ .Values.persistence.name }}\n    spec:\n      accessModes:\n        {{- range .Values.persistence.accessModes }}\n          - {{ . | quote }}\n        {{- end }}\n      resources:\n        requests:\n          storage: {{ .Values.persistence.size }}\n\n{{- end }}\n{{- end }}\n"
  },
  {
    "path": "helm/minimal/values.yaml",
    "content": "# Default values for awesome-runbooks.\n# This is a YAML-formatted file.\n# Declare variables to be passed into your templates.\n\nreplicaCount: 1\n\nimage:\n  repository: unskript/awesome-runbooks\n  pullPolicy: Always\n  # Overrides the image tag whose default is the chart appVersion.\n  tag: \"latest\"\n\nimagePullSecrets: []\nnameOverride: \"\"\nfullnameOverride: \"\"\n\nserviceAccount:\n  # Specifies whether a service account should be created\n  create: true\n  # Annotations to add to the service account\n  annotations: {}\n  # The name of the service account to use.\n  # If not set and create is true, a name is generated using the fullname template\n  name: \"\"\n\npodAnnotations: {}\n\npodSecurityContext: {}\n  # fsGroup: 2000\n\n# Flag to indicate whether to use stateful-set or not\nuseStatefulSet: true\n\n# This section is common variables defined\n# Which is used in the template/*.yaml files\ncommon:\n  # Default namespace to be used. Change it as you see fit\n  namespace: \"awesome-ops\"\n  # ContainerPort to listen on\n  port: 8888\n  # Labels that should be attached to the POD\n  labels: \"app: awesome-runbooks\"\n  # These settings are recommended for optimal operation\n  # of the POD\n  initialDelaySeconds: 5\n  periodSeconds: 30\n  successThreshold: 3\n  timeoutSeconds: 15\n  terminationGracePeriodSeconds: 30\n\n# Persistence\npersistence:\n  # Flag to enable creation of PVC\n  enabled: true\n  # Name of the PVC\n  name: \"awesome-runbooks-pvc\"\n  # PVC Storage Class, by default not set\n  storageClassName: \"\"\n  # PV Access Mode.\n  accessModes:\n    - ReadWriteOnce\n  # Size of the PV\n  size: 1Gi\n  # Any Annotations for the PVC\n  annotations: {}\n  # MountPath that will be visible on the POD\n  mountPath: /unskript\n  # Any subdirectory under the mounthPath, default is at the root level\n  subPath: \"\"\n  # You can finetune volume template fields below\n  volumeClaimTemplate:\n    selector: {}\n    requests: {}\n    
dataSource: {}\n\nsecurityContext:\n  capabilities:\n    add:\n      - NET_ADMIN\n  privileged: true\n  # capabilities:\n  #   drop:\n  #   - ALL\n  # readOnlyRootFilesystem: true\n  # runAsNonRoot: true\n  # runAsUser: 1000\n\nservice:\n  type: ClusterIP\n  port: 8888\n\nresources:\n  # We recommend using 1vCPU and 2Gb of RAM to get the best performance\n  limits:\n     cpu: \"1.0\"\n     memory: \"2Gi\"\n  requests:\n     cpu: \"1.0\"\n     memory: \"2Gi\"\n\n\nautoscaling:\n  enabled: false\n  minReplicas: 1\n  maxReplicas: 3\n  targetCPUUtilizationPercentage: 80\n\nnodeSelector: {}\n\ntolerations: []\n\naffinity: {}\n"
  },
  {
    "path": "infra/README.md",
    "content": "\n# infra Actions\n* [Infra: Execute runbook](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/infra_execute_runbook/README.md): Infra: use this action to execute particular runbook with given input parameters.\n* [Infra: Finish runbook execution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/infra_workflow_done/README.md): Infra: use this action to finish the execution of a runbook. Once this is set, no more tasks will be executed\n* [Infra: Append values for a key in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_append_keys/README.md): Infra: use this action to append values for a key in a state store provided by the workflow.\n* [Infra: Store keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_create_keys/README.md): Infra: use this action to persist keys in a state store provided by the workflow.\n* [Infra: Delete keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_delete_keys/README.md): Infra: use this action to delete keys from a state store provided by the workflow.\n* [Infra: Fetch keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_get_keys/README.md): Infra: use this action to retrieve keys in a state store provided by the workflow.\n* [Infra: Rename keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_rename_keys/README.md): Infra: use this action to rename keys in a state store provided by the workflow.\n* [Infra: Update keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_update_keys/README.md): Infra: use this action to update keys in a state store provided 
by the workflow.\n"
  },
  {
    "path": "infra/__init__.py",
    "content": "#\n# Copyright (c) 2021 unSkript.com\n# All rights reserved.\n#"
  },
  {
    "path": "infra/legos/__init__.py",
    "content": ""
  },
  {
    "path": "infra/legos/infra_execute_runbook/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Infra: Execute runbook</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego use this action to execute particular runbook with given input parameters.\r\n\r\n\r\n## Lego Details\r\n\r\n    infra_execute_runbook(handle: object, runbook_id: str, params: str)\r\n\r\n        handle: Object of type unSkript infra Connector\r\n        runbook_id: ID of the runbook to execute.\r\n        params: JSON string of runbook input parameters.\r\n\r\n        \r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, runbook_id and params.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n<img src=\"./1.png\">\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "infra/legos/infra_execute_runbook/__init__.py",
    "content": ""
  },
  {
    "path": "infra/legos/infra_execute_runbook/infra_execute_runbook.json",
    "content": "{\r\n    \"action_title\": \"Infra: Execute runbook\",\r\n    \"action_description\": \"Infra: use this action to execute particular runbook with given input parameters.\",\r\n    \"action_type\": \"LEGO_TYPE_INFRA\",\r\n    \"action_entry_function\": \"infra_execute_runbook\",\r\n    \"action_needs_credential\": false,\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_STR\",\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_INFRA\" ]\r\n}\r\n"
  },
  {
    "path": "infra/legos/infra_execute_runbook/infra_execute_runbook.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom typing import Optional\nimport json\nfrom pydantic import BaseModel, Field\n\n\nclass unSkriptCustomType(str):\n    @classmethod\n    def __get_validators__(cls):\n        # one or more validators may be yielded which will be called in the\n        # order to validate the input, each validator will receive as an input\n        # the value returned from the previous validator\n        yield cls.validate\n\n    @classmethod\n    def __modify_schema__(cls, field_schema):\n        # __modify_schema__ should mutate the dict it receives in place,\n        # the returned value will be ignored\n        field_schema.update(\n            fetch_runbook_list='true'\n        )\n\n    @classmethod\n    def validate(cls, v):\n        if not isinstance(v, str):\n            raise TypeError('string required')\n        return cls(f'{v}')\n\n    def __repr__(self):\n        return f'{super().__repr__()}'\n\nclass InputSchema(BaseModel):\n    runbook_id: unSkriptCustomType = Field(\n        title='Runbook ID',\n        description='ID of the runbook'\n    )\n    params: Optional[dict] = Field(\n        title='Runbook parameters',\n        description='Parameters to the runbook as a dictionary.'\n    )\n\ndef infra_execute_runbook_printer(output):\n    if output is not None:\n        pprint.pprint(f\"Runbook execution status: {output}\")\n\ndef infra_execute_runbook(handle, runbook_id: str, params: dict = None) -> str:\n    \"\"\"execute_runbook executes particular runbook annd return execution status\n\n        :type runbook_id: str.\n        :param runbook_id: ID of the runbook to execute.\n\n        :type params: dict.\n        :param params: dictionary of runbook input parameters.\n\n        :rtype: str.\n    \"\"\"\n    try:\n        execution_status = handle.execute_runbook(runbook_id, json.dumps(params))\n        return execution_status\n    except Exception as e:\n        
raise e\n"
  },
  {
    "path": "infra/legos/infra_workflow_done/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Infra: Finish runbook execution</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego use to finish the execution of a runbook. Once this is set, no more tasks will be executed.\r\n\r\n\r\n## Lego Details\r\n\r\n    infra_workflow_done(handle: object)\r\n\r\n        handle: Object of type unSkript infra Connector\r\n\r\n## Lego Input\r\nThis Lego take one input handle.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "infra/legos/infra_workflow_done/__init__.py",
    "content": ""
  },
  {
    "path": "infra/legos/infra_workflow_done/infra_workflow_done.json",
    "content": "{\r\n    \"action_title\": \"Infra: Finish runbook execution\",\r\n    \"action_description\": \"Infra: use this action to finish the execution of a runbook. Once this is set, no more tasks will be executed\",\r\n    \"action_type\": \"LEGO_TYPE_INFRA\",\r\n    \"action_entry_function\": \"infra_workflow_done\",\r\n    \"action_needs_credential\": false,\r\n    \"action_supports_poll\": false,\r\n    \"action_supports_iteration\": false,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_NONE\",\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_INFRA\" ]\r\n}\r\n  \r\n"
  },
  {
    "path": "infra/legos/infra_workflow_done/infra_workflow_done.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel\nfrom unskript.connectors.infra import InfraConnector\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef infra_workflow_done(handle: InfraConnector):\n    \"\"\"infra_workflow_done stops workflow execution (Not implemented).\n        :rtype: None.\n    \"\"\"\n    return handle.done(\"Success\")\n"
  },
  {
    "path": "infra/legos/workflow_ss_append_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Infra: Append values for a key in workflow state store</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego use to append values for a key in a state store provided by the workflow..\r\n\r\n\r\n## Lego Details\r\n\r\n    workflow_ss_append_keys(handle: object, key, value)\r\n\r\n        handle: Object of type unSkript infra Connector\r\n        key: Name of the key to create.\r\n        value: Value to persist.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, key and value.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "infra/legos/workflow_ss_append_keys/__init__.py",
    "content": ""
  },
  {
    "path": "infra/legos/workflow_ss_append_keys/workflow_ss_append_keys.json",
    "content": "{\r\n    \"action_title\": \"Infra: Append values for a key in workflow state store\",\r\n    \"action_description\": \"Infra: use this action to append values for a key in a state store provided by the workflow.\",\r\n    \"action_type\": \"LEGO_TYPE_INFRA\",\r\n    \"action_entry_function\": \"workflow_ss_append_keys\",\r\n    \"action_needs_credential\": false,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_BOOL\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_INFRA\" ]\r\n}"
  },
  {
    "path": "infra/legos/workflow_ss_append_keys/workflow_ss_append_keys.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    key: str = Field(\n        title='Key',\n        description='Name of the key to create'\n    )\n    value: str = Field(\n        title='Value',\n        description='Value to persist'\n    )\n\ndef workflow_ss_append_keys_printer(output):\n    if output is None:\n        return\n    pprint.pprint(\"The workflow key appended successfully!\")\n\ndef workflow_ss_append_keys(handle, key, value) -> bool:\n    \"\"\"workflow_ss_append_keys append the values for that key\n\n        :type key: str.\n        :param key: Name of the key to create.\n\n        :type value: str.\n        :param value: Value to persist.\n        \n        :rtype: String confirming the successful append of the key.\n    \"\"\"\n    try:\n        handle.append_workflow_key(key, value)\n    except Exception as e:\n        raise e\n\n    return True\n"
  },
  {
    "path": "infra/legos/workflow_ss_create_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Infra: Store keys in workflow state store</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego use to persist keys in a state store provided by the workflow.\r\n\r\n\r\n## Lego Details\r\n\r\n    workflow_ss_create_keys(handle: object, key, value)\r\n\r\n        handle: Object of type unSkript infra Connector\r\n        key: Name of the key to create.\r\n        value: Value to persist.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, key and value.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "infra/legos/workflow_ss_create_keys/__init__.py",
    "content": ""
  },
  {
    "path": "infra/legos/workflow_ss_create_keys/workflow_ss_create_keys.json",
    "content": "{\r\n    \"action_title\": \"Infra: Store keys in workflow state store\",\r\n    \"action_description\": \"Infra: use this action to persist keys in a state store provided by the workflow.\",\r\n    \"action_type\": \"LEGO_TYPE_INFRA\",\r\n    \"action_entry_function\": \"workflow_ss_create_keys\",\r\n    \"action_needs_credential\": false,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_BOOL\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_INFRA\" ]\r\n}"
  },
  {
    "path": "infra/legos/workflow_ss_create_keys/workflow_ss_create_keys.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\n\nfrom pydantic import BaseModel, Field\n\n\nclass InputSchema(BaseModel):\n    key: str = Field(\n        title='Key',\n        description='Name of the key to create'\n    )\n    value: str = Field(\n        title='Value',\n        description='Value to persist'\n    )\n\ndef workflow_ss_create_keys_printer(output):\n    if output is None:\n        return\n    if output:\n        pprint.pprint(\"The workflow key set successfully!\")\n\ndef workflow_ss_create_keys(handle, key, value) -> bool:\n    \"\"\"workflow_ss_create_keys create new workflow key.\n        :type key: str.\n        :param key: Name of the key to create.\n        :type value: str.\n        :param value: Value to persist.\n        :rtype: String confirming the successful creation of the key.\n    \"\"\"\n    try:\n        handle.set_workflow_key(key, value)\n    except Exception as e:\n        raise e\n\n    return True\n"
  },
  {
    "path": "infra/legos/workflow_ss_delete_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Infra: Delete keys from workflow state store</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego use to delete keys from a state store provided by the workflow.\r\n\r\n\r\n## Lego Details\r\n\r\n    workflow_ss_delete_keys(handle: object, key)\r\n\r\n        handle: Object of type unSkript infra Connector\r\n        key: Name of the key to create.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and key.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "infra/legos/workflow_ss_delete_keys/__init__.py",
    "content": ""
  },
  {
    "path": "infra/legos/workflow_ss_delete_keys/workflow_ss_delete_keys.json",
    "content": "{\r\n    \"action_title\": \"Infra: Delete keys from workflow state store\",\r\n    \"action_description\": \"Infra: use this action to delete keys from a state store provided by the workflow.\",\r\n    \"action_type\": \"LEGO_TYPE_INFRA\",\r\n    \"action_entry_function\": \"workflow_ss_delete_keys\",\r\n    \"action_needs_credential\": false,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_BOOL\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_INFRA\" ]\r\n}\r\n    "
  },
  {
    "path": "infra/legos/workflow_ss_delete_keys/workflow_ss_delete_keys.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\n\nfrom pydantic import BaseModel, Field\n\nfrom unskript.connectors.infra import InfraConnector\n\n\nclass InputSchema(BaseModel):\n    key: str = Field(\n        title='Key',\n        description='Name of the key to delete'\n    )\n\ndef workflow_ss_delete_keys_printer(output):\n    if output is None:\n        return\n    if output:\n        pprint.pprint(\"The workflow key deleted successfully!\")\n\ndef workflow_ss_delete_keys(handle: InfraConnector, key) -> bool:\n    \"\"\"workflow_ss_delete_keys delete workflow key.\n        :type key: str.\n        :param key: Name of the key to delete.\n        :rtype: String confirming the successful deleting of the key.\n    \"\"\"\n\n    try:\n        handle.del_workflow_key(key)\n    except Exception as e:\n        raise e\n\n    return True\n"
  },
  {
    "path": "infra/legos/workflow_ss_get_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Infra: Fetch keys from workflow state store</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego use to retreive keys in a state store provided by the workflow.\r\n\r\n\r\n## Lego Details\r\n\r\n    workflow_ss_get_keys(handle: object, key)\r\n\r\n        handle: Object of type unSkript infra Connector\r\n        key: Name of the key to create.\r\n\r\n## Lego Input\r\nThis Lego take two inputs handle and key.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "infra/legos/workflow_ss_get_keys/__init__.py",
    "content": ""
  },
  {
    "path": "infra/legos/workflow_ss_get_keys/workflow_ss_get_keys.json",
    "content": "{\r\n    \"action_title\": \"Infra: Fetch keys from workflow state store\",\r\n    \"action_description\": \"Infra: use this action to retrieve keys in a state store provided by the workflow.\",\r\n    \"action_type\": \"LEGO_TYPE_INFRA\",\r\n    \"action_entry_function\": \"workflow_ss_get_keys\",\r\n    \"action_needs_credential\": false,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_BYTES\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_INFRA\" ]\r\n}\r\n    "
  },
  {
    "path": "infra/legos/workflow_ss_get_keys/workflow_ss_get_keys.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.infra import InfraConnector\n\nclass InputSchema(BaseModel):\n    key: str = Field(\n        title='Key',\n        description='Name of the key to fetch'\n    )\n\ndef workflow_ss_get_keys_printer(output):\n    if output is None:\n        return\n    pprint.pprint(output)\n\ndef workflow_ss_get_keys(handle: InfraConnector, key) -> bytes:\n    \"\"\"workflow_ss_get_keys get workflow key.\n        :type key: str.\n        :param key: Name of the key to fetch.\n        :rtype: bytes with the key value.\n    \"\"\"\n\n    try:\n        v = handle.get_workflow_key(key)\n    except Exception as e:\n        raise e\n\n    return v\n"
  },
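  {
    "path": "infra/legos/workflow_ss_get_keys/example_usage.md",
    "content": "The create and fetch Legos above both delegate to methods on the infra connector handle (`set_workflow_key`, `get_workflow_key`). A minimal sketch of that round trip, using a hypothetical in-memory stand-in for `InfraConnector` (the real handle is supplied by the unSkript runtime; `FakeInfraConnector` and the key names are illustrative assumptions):"
  },

```python
# Hypothetical in-memory stand-in for the runtime-provided InfraConnector.
# Method names mirror the workflow state-store Legos; storage is a plain dict.
class FakeInfraConnector:
    def __init__(self):
        self._store = {}

    def set_workflow_key(self, key: str, value: str) -> None:
        # Equivalent of workflow_ss_create_keys: persist a key.
        self._store[key] = value

    def get_workflow_key(self, key: str) -> bytes:
        # Equivalent of workflow_ss_get_keys: the Lego's return type is bytes.
        return self._store[key].encode()

handle = FakeInfraConnector()
handle.set_workflow_key("incident_id", "INC-1234")
print(handle.get_workflow_key("incident_id"))  # b'INC-1234'
```

In a real runbook the same calls run inside two separate Actions, with the workflow state store carrying the key between them.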
  {
    "path": "infra/legos/workflow_ss_rename_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Infra: Rename keys in workflow state store</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego use to rename keys in a state store provided by the workflow.\r\n\r\n\r\n## Lego Details\r\n\r\n    workflow_ss_append_keys(handle: object, old_key, new_key)\r\n\r\n        handle: Object of type unSkript infra Connector\r\n        old_key: Name of the key to update.\r\n        new_key: key to update.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, old_key and new_key.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "infra/legos/workflow_ss_rename_keys/__init__.py",
    "content": ""
  },
  {
    "path": "infra/legos/workflow_ss_rename_keys/workflow_ss_rename_keys.json",
    "content": "{\r\n    \"action_title\": \"Infra: Rename keys in workflow state store\",\r\n    \"action_description\": \"Infra: use this action to rename keys in a state store provided by the workflow.\",\r\n    \"action_type\": \"LEGO_TYPE_INFRA\",\r\n    \"action_entry_function\": \"workflow_ss_rename_keys\",\r\n    \"action_needs_credential\": false,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_BOOL\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_INFRA\" ]\r\n}\r\n    "
  },
  {
    "path": "infra/legos/workflow_ss_rename_keys/workflow_ss_rename_keys.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.infra import InfraConnector\n\nclass InputSchema(BaseModel):\n    old_key: str = Field(\n        title='Old Key',\n        description='Name of the old key'\n    )\n    new_key: str = Field(\n        title='New Key',\n        description='Key to update'\n    )\n\ndef workflow_ss_rename_keys_printer(output):\n    if output is None:\n        return\n    if output:\n        pprint.pprint(\"The workflow key renamed successfully!\")\n\ndef workflow_ss_rename_keys(handle: InfraConnector, old_key, new_key) -> bool:\n    \"\"\"workflow_ss_rename_keys rename workflow key.\n\n        :type old_key: str.\n        :param old_key: Name of the key to update.\n\n        :type new_key: str.\n        :param new_key: key to update.\n        \n        :rtype: String confirming the successful renaming of the key.\n    \"\"\"\n\n    try:\n        handle.rename_workflow_key(old_key, new_key)\n    except Exception as e:\n        raise e\n\n    return True\n"
  },
  {
    "path": "infra/legos/workflow_ss_update_keys/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Infra: Update keys in workflow state store</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego use to update keys in a state store provided by the workflow.\r\n\r\n\r\n## Lego Details\r\n\r\n    workflow_ss_update_keys(handle: object, key, value)\r\n\r\n        handle: Object of type unSkript infra Connector\r\n        key: Name of the key to update.\r\n        value: Value to update.\r\n\r\n## Lego Input\r\nThis Lego take three inputs handle, key and value.\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action following this link [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "infra/legos/workflow_ss_update_keys/__init__.py",
    "content": ""
  },
  {
    "path": "infra/legos/workflow_ss_update_keys/workflow_ss_update_keys.json",
    "content": "{\r\n    \"action_title\": \"Infra: Update keys in workflow state store\",\r\n    \"action_description\": \"Infra: use this action to update keys in a state store provided by the workflow.\",\r\n    \"action_type\": \"LEGO_TYPE_INFRA\",\r\n    \"action_entry_function\": \"workflow_ss_update_keys\",\r\n    \"action_needs_credential\": false,\r\n    \"action_supports_poll\": true,\r\n    \"action_output_type\": \"ACTION_OUTPUT_TYPE_BOOL\",\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [  \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_INFRA\" ]\r\n}\r\n    "
  },
  {
    "path": "infra/legos/workflow_ss_update_keys/workflow_ss_update_keys.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nimport pprint\nfrom pydantic import BaseModel, Field\nfrom unskript.connectors.infra import InfraConnector\n\n\nclass InputSchema(BaseModel):\n    key: str = Field(\n        title='Key',\n        description='Name of the key to update'\n    )\n    value: str = Field(\n        title='Value',\n        description='Value to update'\n    )\n\ndef workflow_ss_update_keys_printer(output):\n    if output is None:\n        return\n    if output:\n        pprint.pprint(\"The workflow key updated successfully!\")\n\ndef workflow_ss_update_keys(handle: InfraConnector, key, value) -> bool:\n    \"\"\"workflow_ss_update_keys updates workflow key.\n\n        :type key: str.\n        :param key: Name of the key to update.\n\n        :type value: str.\n        :param value: Value to update.\n        \n        :rtype: String confirming the successful updating of the key.\n    \"\"\"\n\n    try:\n        handle.upd_workflow_key(key, value)\n    except Exception as e:\n        raise e\n\n    return True\n"
  },
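  {
    "path": "infra/legos/workflow_ss_update_keys/example_usage.md",
    "content": "Taken together, the state-store Legos expose a full key lifecycle on the handle: `set_workflow_key`, `append_workflow_key`, `upd_workflow_key`, `rename_workflow_key` and `del_workflow_key`. A rough sketch of that lifecycle, again against a hypothetical in-memory handle (`FakeInfraHandle` and the key names are illustrative assumptions, not the runtime connector):"
  },

```python
# Hypothetical in-memory handle mirroring the state-store Lego method names.
class FakeInfraHandle:
    def __init__(self):
        self.store = {}

    def set_workflow_key(self, key: str, value: str) -> None:
        self.store[key] = value

    def append_workflow_key(self, key: str, value: str) -> None:
        # Append to the existing value (empty string if the key is new).
        self.store[key] = self.store.get(key, "") + value

    def upd_workflow_key(self, key: str, value: str) -> None:
        self.store[key] = value

    def rename_workflow_key(self, old_key: str, new_key: str) -> None:
        self.store[new_key] = self.store.pop(old_key)

    def del_workflow_key(self, key: str) -> None:
        self.store.pop(key)

h = FakeInfraHandle()
h.set_workflow_key("status", "open")        # workflow_ss_create_keys
h.upd_workflow_key("status", "resolved")    # workflow_ss_update_keys
h.rename_workflow_key("status", "final")    # workflow_ss_rename_keys
h.del_workflow_key("final")                 # workflow_ss_delete_keys
print(h.store)  # {}
```

Each call corresponds to one Lego invocation; in a runbook these would typically run as successive Actions sharing the workflow's state store.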
  {
    "path": "lists/Action_list.md",
    "content": "# Actions By Connector:\n| | | \n | ---| ---| \n | [AWS](action_AWS.md) | [Airflow](action_AIRFLOW.md) | [Azure](action_AZURE.md) |\n | [Datadog](action_DATADOG.md) | [ElasticSearch](action_ES.md) | [GCP](action_GCP.md) |\n | [Github](action_GITHUB.md) | [Grafana](action_GRAFANA.md) | [Hadoop](action_HADOOP.md) |\n | [Jenkins](action_JENKINS.md) | [Jira](action_JIRA.md) | [Kafka](action_KAFKA.md) |\n | [Kubernetes](action_K8S.md) | [Mantishub](action_MANTISHUB.md) | [Mongo](action_MONGODB.md) |\n | [MsSQL](action_MSSQL.md) | [MySQL](action_MYSQL.md) | [Netbox](action_NETBOX.md) |\n | [Nomad](action_NOMAD.md) | [Opsgenie](action_OPSGENIE.md) | [Pingdom](action_PINGDOM.md) |\n | [Postgresql](action_POSTGRESQL.md) | [Prometheus](action_PROMETHEUS.md) | [Redis](action_REDIS.md) |\n | [Rest](action_REST.md) | [SSH](action_SSH.md) | [SalesForce](action_SALESFORCE.md) |\n | [Slack](action_SLACK.md) | [Snowflake](action_SNOWFLAKE.md) | [Splunk](action_SPLUNK.md) |\n | [Stripe](action_STRIPE.md) | [Terraform](action_TERRAFORM.md) | [Zabbix](action_ZABBIX.md) |\n | [infra](action_INFRA.md) | [opensearch](action_OPENSEARCH.md) | \n\n \n # Actions By Category: \n| | | \n | ---| ---| \n | [CLOUDOPS](action_CLOUDOPS.md) | [COST_OPT](action_COST_OPT.md) | [AWS](action_AWS.md) |\n | [AWS_IAM](action_AWS_IAM.md) | [AWS_S3](action_AWS_S3.md) | [SECOPS](action_SECOPS.md) |\n | [DEVOPS](action_DEVOPS.md) | [SRE](action_SRE.md) | [AWS_EC2](action_AWS_EC2.md) |\n | [IAM](action_IAM.md) | [AWS_RDS](action_AWS_RDS.md) | [AWS_ACM](action_AWS_ACM.md) |\n | [AWS_CLOUDWATCH](action_AWS_CLOUDWATCH.md) | [AWS_REDSHIFT](action_AWS_REDSHIFT.md) | [EBS](action_EBS.md) |\n | [AWS_ELB](action_AWS_ELB.md) | [AWS_EBS](action_AWS_EBS.md) | [AWS_ECS](action_AWS_ECS.md) |\n | [AWS_EKS](action_AWS_EKS.md) | [AWS_EMR](action_AWS_EMR.md) | [AWS_CLI](action_AWS_CLI.md) |\n | [AWS_SSM](action_AWS_SSM.md) | [DB](action_DB.md) | [AWS_EBC](action_AWS_EBC.md) |\n | [AWS_VPC](action_AWS_VPC.md) | 
[AWS_ASG](action_AWS_ASG.md) | [AWS_LOGS](action_AWS_LOGS.md) |\n | [AWS_NAT_GATEWAY](action_AWS_NAT_GATEWAY.md) | [AWS_CLOUDTRAIL](action_AWS_CLOUDTRAIL.md) | [AWS_DYNAMODB](action_AWS_DYNAMODB.md) |\n | [AWS_LAMBDA](action_AWS_LAMBDA.md) | [AWS_SQS](action_AWS_SQS.md) | [AWS_COST_EXPLORER](action_AWS_COST_EXPLORER.md) |\n | [ECS](action_ECS.md) | [AWS_ROUTE53](action_AWS_ROUTE53.md) | [AWS_ELASTICACHE](action_AWS_ELASTICACHE.md) |\n | [TROUBLESHOOTING](action_TROUBLESHOOTING.md) | [AWS_SECRET_MANAGER](action_AWS_SECRET_MANAGER.md) | [AWS_STS](action_AWS_STS.md) |\n | [AWS_POSTGRES](action_AWS_POSTGRES.md) | [AIRFLOW](action_AIRFLOW.md) | [AZURE](action_AZURE.md) |\n | [DATADOG](action_DATADOG.md) | [DATADOG_INCIDENT](action_DATADOG_INCIDENT.md) | [DATADOG_EVENT](action_DATADOG_EVENT.md) |\n | [DATADOG_METRICS](action_DATADOG_METRICS.md) | [DATADOG_MONITOR](action_DATADOG_MONITOR.md) | [DATADOG_ALERTS](action_DATADOG_ALERTS.md) |\n | [ES](action_ES.md) | [GCP](action_GCP.md) | [GCP_STORAGE](action_GCP_STORAGE.md) |\n | [GCP_IAM](action_GCP_IAM.md) | [GCP_BUCKET](action_GCP_BUCKET.md) | [GCP_VM](action_GCP_VM.md) |\n | [GCP_FILE_STORE](action_GCP_FILE_STORE.md) | [GCP_GKE](action_GCP_GKE.md) | [GCP_VPC](action_GCP_VPC.md) |\n | [GCP_SECRET](action_GCP_SECRET.md) | [GCP_VMS](action_GCP_VMS.md) | [GCP_SHEETS](action_GCP_SHEETS.md) |\n | [GITHUB](action_GITHUB.md) | [GITHUB_ISSUE](action_GITHUB_ISSUE.md) | [GITHUB_PR](action_GITHUB_PR.md) |\n | [GITHUB_REPO](action_GITHUB_REPO.md) | [GITHUB_TEAM](action_GITHUB_TEAM.md) | [GITHUB_USER](action_GITHUB_USER.md) |\n | [GITHUB_ORG](action_GITHUB_ORG.md) | [GRAFANA](action_GRAFANA.md) | [HADOOP](action_HADOOP.md) |\n | [JENKINS](action_JENKINS.md) | [JIRA](action_JIRA.md) | [KAFKA](action_KAFKA.md) |\n | [K8S](action_K8S.md) | [K8S_CLUSTER](action_K8S_CLUSTER.md) | [K8S_NODE](action_K8S_NODE.md) |\n | [K8S_POD](action_K8S_POD.md) | [K8S_KUBECTL](action_K8S_KUBECTL.md) | [K8S_PVC](action_K8S_PVC.md) |\n | 
[K8S_NAMESPACE](action_K8S_NAMESPACE.md) | [MANTISHUB](action_MANTISHUB.md) | [MONGODB](action_MONGODB.md) |\n | [MONGODB_COLLECTION](action_MONGODB_COLLECTION.md) | [MONGODB_CLUSTER](action_MONGODB_CLUSTER.md) | [MONGODB_DOCUMENT](action_MONGODB_DOCUMENT.md) |\n | [MONGODB_QUERY](action_MONGODB_QUERY.md) | [MSSQL](action_MSSQL.md) | [MSSQL_QUERY](action_MSSQL_QUERY.md) |\n | [MYSQL](action_MYSQL.md) | [MYSQL_QUERY](action_MYSQL_QUERY.md) | [NETBOX](action_NETBOX.md) |\n | [NOMAD](action_NOMAD.md) | [PINGDOM](action_PINGDOM.md) | [POSTGRESQL](action_POSTGRESQL.md) |\n | [POSTGRESQL_QUERY](action_POSTGRESQL_QUERY.md) | [POSTGRESQL_TABLE](action_POSTGRESQL_TABLE.md) | [PROMETHEUS](action_PROMETHEUS.md) |\n | [REDIS](action_REDIS.md) | [REST](action_REST.md) | [SSH](action_SSH.md) |\n | [SALESFORCE](action_SALESFORCE.md) | [SLACK](action_SLACK.md) | [SNOWFLAKE](action_SNOWFLAKE.md) |\n | [SPLUNK](action_SPLUNK.md) | [STRIPE](action_STRIPE.md) | [STRIPE_CHARGE](action_STRIPE_CHARGE.md) |\n | [STRIPE_DISPUTE](action_STRIPE_DISPUTE.md) | [STRIPE_REFUND](action_STRIPE_REFUND.md) | [TERRAFORM](action_TERRAFORM.md) |\n | [ZABBIX](action_ZABBIX.md) | [INFRA](action_INFRA.md) | [OPENSEARCH](action_OPENSEARCH.md) |\n | \n\n\n\n\n "
  },
  {
    "path": "lists/action_AIRFLOW.md",
    "content": "* [Get Status for given DAG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_check_dag_status/README.md): Get Status for given DAG\n\n* [Get Airflow handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_get_handle/README.md): Get Airflow handle\n\n* [List DAG runs for given DagID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_list_DAG_runs/README.md): List DAG runs for given DagID\n\n* [Airflow trigger DAG run](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_trigger_dag_run/README.md): Airflow trigger DAG run\n\n"
  },
  {
    "path": "lists/action_AWS.md",
    "content": "* [AWS Start IAM Policy Generation ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/AWS_Start_IAM_Policy_Generation/README.md): Given a region, a CloudTrail ARN (where the logs are being recorded), a reference IAM ARN (whose usage we will parse), and a Service role, this will begin the generation of a IAM policy.  The output is a String of the generation Id.\n\n* [Add Lifecycle Configuration to AWS S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_add_lifecycle_configuration_to_s3_bucket/README.md): Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration.\n\n* [Apply AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_apply_default_encryption_for_s3_buckets/README.md): Apply AWS Default Encryption for S3 Bucket\n\n* [Attach an EBS volume to an AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_ebs_to_instances/README.md): Attach an EBS volume to an AWS EC2 Instance\n\n* [AWS Attach New Policy to User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_iam_policy/README.md): AWS Attach New Policy to User\n\n* [AWS Attach Tags to Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_tags_to_resources/README.md): AWS Attach Tags to Resources\n\n* [AWS Change ACL Permission of public S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_change_acl_permissions_of_buckets/README.md): AWS Change ACL Permission public S3 Bucket\n\n* [AWS Check if RDS instances are not M5 or T3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_rds_non_m5_t3_instances/README.md): AWS Check if RDS instances are not M5 or T3\n\n* [Check SSL Certificate 
Expiry](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_ssl_certificate_expiry/README.md): Check ACM SSL Certificate expiry date\n\n* [Attach a webhook endpoint to AWS Cloudwatch alarm](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_cloudwatch_attach_webhook_notification_to_alarm/README.md): Attach a webhook endpoint to one of the SNS attached to the AWS Cloudwatch alarm.\n\n* [AWS Create IAM Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_IAMpolicy/README.md): Given an AWS policy (as a string), and the name for the policy, this will create an IAM policy.\n\n* [AWS Create Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_access_key/README.md): Create a new Access Key for the User\n\n* [Create AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_bucket/README.md): Create a new AWS S3 Bucket\n\n* [Create New IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_iam_user/README.md): Create New IAM User\n\n* [AWS Redshift Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_redshift_query/README.md): Make a SQL Query to the given AWS Redshift database\n\n* [Create Login profile for IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_user_login_profile/README.md): Create Login profile for IAM User\n\n* [AWS Create Snapshot For Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_volumes_snapshot/README.md): Create a snapshot for EBS volume of the EC2 Instance for backing up the data stored in EBS\n\n* [AWS Delete Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_access_key/README.md): Delete an Access Key for a User\n\n* 
[Delete AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_bucket/README.md): Delete an AWS S3 Bucket\n\n* [AWS Delete Classic Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_classic_load_balancer/README.md): Delete Classic Elastic Load Balancers\n\n* [AWS Delete EBS Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ebs_snapshot/README.md): Delete EBS Snapshot for an EC2 instance\n\n* [AWS Delete ECS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ecs_cluster/README.md): Delete AWS ECS Cluster\n\n* [AWS Delete Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_load_balancer/README.md): AWS Delete Load Balancer\n\n* [AWS Delete Log Stream](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_log_stream/README.md): AWS Delete Log Stream\n\n* [AWS Delete NAT Gateway](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_nat_gateway/README.md): AWS Delete NAT Gateway\n\n* [AWS Delete RDS Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_rds_instance/README.md): Delete AWS RDS Instance\n\n* [AWS Delete Redshift Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_redshift_cluster/README.md): Delete AWS Redshift Cluster\n\n* [AWS Delete Route 53 HealthCheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_route53_health_check/README.md): AWS Delete Route 53 HealthCheck\n\n* [Delete AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_s3_bucket_encryption/README.md): Delete AWS Default Encryption for S3 Bucket\n\n* [AWS Delete 
Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_secret/README.md): AWS Delete Secret\n\n* [Delete AWS EBS Volume by Volume ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_volume_by_id/README.md): Delete AWS Volume by Volume ID\n\n* [ Deregisters AWS Instances from a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_deregister_instances/README.md):  Deregisters AWS Instances from a Load Balancer\n\n* [AWS Describe Cloudtrails ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_describe_cloudtrail/README.md): Given an AWS Region, this Action returns a Dict with all of the Cloudtrail logs being recorded\n\n* [ Detach as AWS Instance with a Elastic Block Store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_ebs_to_instances/README.md):  Detach as AWS Instance with a Elastic Block Store.\n\n* [AWS Detach Instances From AutoScaling Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_instances_from_autoscaling_group/README.md): Use This Action to AWS Detach Instances From AutoScaling Group\n\n* [EBS Modify Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ebs_modify_volume/README.md): Modify/Resize volume for Elastic Block Storage (EBS).\n\n* [AWS ECS Describe Task Definition.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_describe_task_definition/README.md): Describe AWS ECS Task Definition.\n\n* [ECS detect failed deployment ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_detect_failed_deployment/README.md): List of stopped tasks, associated with a deployment, along with their stopped reason\n\n* [Restart AWS ECS 
Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_service_restart/README.md): Restart an AWS ECS Service\n\n* [Update AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_update_service/README.md): Update AWS ECS Service\n\n* [ Copy EKS Pod logs to bucket.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_copy_pod_logs_to_bucket/README.md):  Copy given EKS pod logs to given S3 Bucket.\n\n* [ Delete EKS POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_delete_pod/README.md):  Delete a EKS POD in a given Namespace\n\n* [List of EKS dead pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_dead_pods/README.md): Get list of all dead pods in a given EKS cluster\n\n* [List of EKS Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_namespaces/README.md): Get list of all Namespaces in a given EKS cluster\n\n* [List of EKS pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_pods/README.md): Get list of all pods in a given EKS cluster\n\n* [ List of EKS deployment for given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_deployments_name/README.md):  Get list of EKS deployment names for given Namespace\n\n* [Get CPU and memory utilization of node.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_node_cpu_memory/README.md):  Get CPU and memory utilization of given node.\n\n* [ Get EKS Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_nodes/README.md):  Get EKS Nodes\n\n* [ List of EKS pods not in RUNNING 
State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_not_running_pods/README.md): Get a list of all pods in a given EKS cluster that are not running.\n\n* [Get pod CPU and Memory usage from given namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_cpu_memory/README.md): Get all pod CPU and Memory usage from a given namespace\n\n* [EKS Get Pod Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_status/README.md): Get the status of a given pod in a given Namespace and EKS cluster\n\n* [EKS Get Running Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_running_pods/README.md): Get a list of running pods from a given namespace and EKS cluster\n\n* [Run Kubectl commands on EKS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_run_kubectl_cmd/README.md): This action runs a kubectl command on an AWS EKS Cluster\n\n* [Get AWS EMR Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_emr_get_instances/README.md): Get a list of EC2 Instances for an EMR cluster. 
Filtered by node type (MASTER|CORE|TASK)\n\n* [Run Command via AWS CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_cli_command/README.md): Execute a command using the AWS CLI\n\n* [Run Command via SSM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_command_ssm/README.md): Execute a command on EC2 instance(s) using SSM\n\n* [AWS Filter All Manual Database Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_all_manual_database_snapshots/README.md): Use this Action to filter all AWS manual database snapshots\n\n* [Filter AWS Unattached EBS Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_unattached_volumes/README.md): Filter AWS Unattached EBS Volumes\n\n* [Filter AWS EC2 Instances by Tags](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_tags/README.md): Filter AWS EC2 Instances by tags\n\n* [Filter AWS EC2 instances by VPC Ids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_vpc/README.md): Use this Action to filter AWS EC2 Instances by VPC Ids\n\n* [Filter All AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_instances/README.md): Filter All AWS EC2 Instances\n\n* [Filter AWS EC2 Instances Without Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_without_lifetime_tag/README.md): Filter AWS EC2 Instances Without Lifetime Tag\n\n* [Filter AWS EC2 Instances Without Termination and Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/README.md): Filter AWS EC2 Instances Without Termination and Lifetime Tag and check if they are valid\n\n* [AWS Filter Large EC2 
Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_large_ec2_instances/README.md): This Action filters all instances whose instanceType contains Large or xLarge and that do not have the largetag key/value.\n\n* [AWS Find Long Running EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_long_running_instances/README.md): This action lists all instances that are older than the threshold\n\n* [AWS Filter Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_old_ebs_snapshots/README.md): This action lists all snapshot details that are older than the threshold\n\n* [Get AWS public S3 Buckets using ACL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_public_s3_buckets_by_acl/README.md): Get AWS public S3 Buckets using ACL\n\n* [Filter AWS Target groups by tag name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_target_groups_by_tags/README.md): Filter AWS Target groups which have the provided tag attached to them. 
It also returns the value of that tag for each target group\n\n* [Filter AWS Unencrypted S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unencrypted_s3_buckets/README.md): Filter AWS Unencrypted S3 Buckets\n\n* [Get Unhealthy instances from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unhealthy_instances_from_asg/README.md): Get Unhealthy instances from Auto Scaling Group\n\n* [Filter AWS Untagged EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_untagged_ec2_instances/README.md): Filter AWS Untagged EC2 Instances\n\n* [Filter AWS Unused Keypairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_keypairs/README.md): Filter AWS Unused Keypairs\n\n* [AWS Filter Unused Log Streams](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_log_streams/README.md): This action lists all log streams, across all log groups, that have been unused for longer than the given threshold.\n\n* [AWS Find Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_nat_gateway/README.md): This action gets all of the NAT gateways that have zero traffic\n\n* [Find AWS ELBs with no targets or instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_elbs_with_no_targets_or_instances/README.md): Find AWS ELBs with no targets or instances attached to them.\n\n* [AWS Find Idle Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_idle_instances/README.md): Find Idle EC2 instances\n\n* [AWS Filter Lambdas with Long Runtime](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_long_running_lambdas/README.md): This action retrieves a list of all Lambda functions and searches for log events 
for each function for a given runtime (duration).\n\n* [AWS Find Low Connections RDS instances Per Day](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_low_connection_rds_instances/README.md): This action will find RDS DB instances with a number of connections below the specified minimum in the specified region.\n\n* [AWS Find EMR Clusters of Old Generation Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_old_gen_emr_clusters/README.md): This action lists EMR clusters running old-generation instances.\n\n* [AWS Find RDS Instances with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_rds_instances_with_low_cpu_utilization/README.md): This lego finds RDS instances that are not utilizing their CPU resources to their full potential.\n\n* [AWS Find Redshift Cluster without Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/README.md): Use this Action to find Redshift clusters for which pause/resume is not enabled\n\n* [AWS Find Redshift Clusters with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_clusters_with_low_cpu_utilization/README.md): Find underutilized Redshift clusters in terms of CPU utilization.\n\n* [AWS Find S3 Buckets without Lifecycle Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_s3_buckets_without_lifecycle_policies/README.md): S3 lifecycle policies enable you to automatically transition objects to different storage classes or delete them when they are no longer needed. This action finds all S3 buckets without lifecycle policies. 
\n\n* [Finding Redundant Trails in AWS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_finding_redundant_trails/README.md): This action finds redundant CloudTrail trails by checking for multiple trails that have the IncludeGlobalServiceEvents attribute set to true.\n\n* [Get AWS Account Number](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_acount_number/README.md): Some AWS functions require the AWS Account number. This Action programmatically retrieves it.\n\n* [Get AWS CloudWatch Alarms List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alarms_list/README.md): Get AWS CloudWatch Alarms List\n\n* [Get AWS ALB Listeners Without HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alb_listeners_without_http_redirect/README.md): Get AWS ALB Listeners Without HTTP Redirection\n\n* [Get All AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_ec2_instances/README.md): Use this Action to get all AWS EC2 Instances\n\n* [AWS Get All Load Balancers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_load_balancers/README.md): AWS Get All Load Balancers\n\n* [AWS Get All Service Names v3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_service_names/README.md): Get a list of all service names in a region\n\n* [AWS Get Untagged Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_untagged_resources/README.md): AWS Get Untagged Resources\n\n* [Get AWS AutoScaling Group Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_auto_scaling_instances/README.md): Use this Action to get AWS AutoScaling Group Instances\n\n* [Get AWS Bucket 
Size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_bucket_size/README.md): Get an AWS Bucket Size\n\n* [Get AWS EBS Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ebs/README.md): Get AWS CloudWatch Statistics for EBS volumes\n\n* [Get AWS EC2 Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2/README.md): Get AWS CloudWatch Metrics for EC2 instances. These could be CPU, Network, Disk based measurements\n\n* [Get AWS EC2 CPU Utilization Statistics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2_cpuutil/README.md): Get AWS CloudWatch Statistics for cpu utilization for EC2 instances\n\n* [Get AWS CloudWatch Metrics for AWS/ApplicationELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_applicationelb/README.md): Get AWS CloudWatch Metrics for AWS/ApplicationELB\n\n* [Get AWS CloudWatch Metrics for AWS/ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_classic_elb/README.md): Get AWS CloudWatch Metrics for Classic Loadbalancer\n\n* [Get AWS CloudWatch Metrics for AWS/DynamoDB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_dynamodb/README.md): Get AWS CloudWatch Metrics for AWS DynamoDB\n\n* [Get AWS CloudWatch Metrics for AWS/AutoScaling](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/README.md): Get AWS CloudWatch Metrics for AWS EC2 AutoScaling groups\n\n* [Get AWS CloudWatch Metrics for AWS/GatewayELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/README.md): Get AWS CloudWatch Metrics for 
AWS/GatewayELB\n\n* [Get AWS CloudWatch Metrics for AWS/Lambda](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_lambda/README.md): Get AWS CloudWatch Metrics for AWS/Lambda\n\n* [Get AWS CloudWatch Metrics for AWS/NetworkELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_network_elb/README.md): Get AWS CloudWatch Metrics for Network Loadbalancer\n\n* [Get AWS CloudWatch Metrics for AWS/RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_rds/README.md): Get AWS CloudWatch Metrics for AWS/RDS\n\n* [Get AWS CloudWatch Metrics for AWS/Redshift](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_redshift/README.md): Get AWS CloudWatch Metrics for AWS/Redshift\n\n* [Get AWS CloudWatch Metrics for AWS/SQS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_sqs/README.md): Get AWS CloudWatch Metrics for AWS/SQS\n\n* [Get AWS CloudWatch Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_statistics/README.md): Get AWS CloudWatch Statistics\n\n* [AWS Get Costs For All Services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_all_services/README.md): Get Costs for all AWS services in a given time period.\n\n* [AWS Get Costs For Data Transfer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_data_transfer/README.md): Get daily cost for Data Transfer in AWS\n\n* [AWS Get Daily Total Spend](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_daily_total_spend/README.md): AWS get daily total spend from Cost Explorer\n\n* [Get EBS Volumes By 
Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_by_type/README.md): Get EBS Volumes By Type\n\n* [Get EC2 CPU Consumption For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_cpu_consumption/README.md): Get EC2 CPU Consumption For All Instances\n\n* [Get EC2 Data Traffic In and Out For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_data_traffic/README.md): Get EC2 Data Traffic In and Out For All Instances\n\n* [Get Age of all EC2 Instances in Days](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_instance_age/README.md): Get Age of all EC2 Instances in Days\n\n* [Get AWS ECS Service Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_services_status/README.md): Get the Status of an AWS ECS Service\n\n* [AWS Get Generated Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_generated_policy/README.md): Given a Region and the ID of a policy generation job, this Action will return the policy (once it has been completed).\n\n* [Get AWS boto3 handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_handle/README.md): Get AWS boto3 handle\n\n* [AWS List IAM users without password policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_iam_users_without_password_policies/README.md): Get a list of all IAM users that have no password policy attached to them.\n\n* [AWS Get Idle EMR Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_idle_emr_clusters/README.md): This action lists EMR clusters that have been idle for more than the specified time.\n\n* [Get AWS Instance Details with Matching Private DNS 
Name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_detail_with_private_dns_name/README.md): Use this action to get details of an AWS EC2 Instance that matches a Private DNS Name\n\n* [Get AWS Instances Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_details/README.md): Get AWS Instances Details\n\n* [List All AWS EC2 Instances Under the ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instances/README.md):  Get a list of all AWS EC2 Instances from given ELB\n\n* [AWS Get Internet Gateway by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_internet_gateway_by_vpc/README.md): AWS Get Internet Gateway by VPC ID\n\n* [Find AWS Lambdas Not Using ARM64 Graviton2 Processor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_not_using_arm_graviton2_processor/README.md): Find all AWS Lambda functions that are not using the Arm-based AWS Graviton2 processor for their runtime architecture\n\n* [Get AWS Lambdas With High Error Rate](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_with_high_error_rate/README.md): Get AWS Lambda Functions that exceed a given threshold error rate.\n\n* [AWS Get Long Running ElastiCache clusters Without Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/README.md): This action gets information about long running ElastiCache clusters and their status, and checks if they have any reserved nodes associated with them.\n\n* [AWS Get Long Running RDS Instances Without Reserved Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_rds_instances_without_reserved_instances/README.md): This action gets information 
about long running instances and their status, and checks if they have any reserved instances associated with them.\n\n* [AWS Get Long Running Redshift Clusters Without Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_redshift_clusters_without_reserved_nodes/README.md): This action gets information about running clusters and their status, and checks if they have any reserved nodes associated with them.\n\n* [AWS Get NAT Gateway Info by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nat_gateway_by_vpc/README.md): This action is used to get the details about NAT gateways configured for a VPC.\n\n* [Get all Targets for Network Load Balancer (NLB)](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlb_targets/README.md): Use this action to get all targets for a Network Load Balancer (NLB)\n\n* [AWS Get Network Load Balancer (NLB) without Targets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlbs_without_targets/README.md): Use this action to get AWS Network Load Balancers (NLB) without Targets\n\n* [AWS Get Older Generation RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_older_generation_rds_instances/README.md): The AWS Get Older Generation RDS Instances action retrieves information about RDS instances using older generation instance types.\n\n* [AWS Get Private Address from NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_private_address_from_nat_gateways/README.md): This action is used to get private addresses from NAT gateways.\n\n* [Get AWS EC2 Instances with a public IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_public_ec2_instances/README.md): Lists all EC2 instances with a public IP\n\n* [AWS Get Publicly Accessible RDS 
Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_instances/README.md): AWS Get Publicly Accessible RDS Instances\n\n* [AWS Get Publicly Accessible DB Snapshots in RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_snapshots/README.md): AWS Get Publicly Accessible DB Snapshots in RDS\n\n* [Get AWS RDS automated db snapshots above retention period](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_rds_automated_snapshots_above_retention_period/README.md): This Action gets the snapshots above a certain retention period.\n\n* [AWS Get Redshift Query Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_query_details/README.md): Given a QueryId, this Action will give you the status of the Query, along with other data like the number of lines\n\n* [AWS Get Redshift Result](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_result/README.md): Given a QueryId, Get the Query Result, and format into a List\n\n* [AWS Get EC2 Instances About To Be Retired](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_reserved_instances_about_to_retired/README.md): AWS Get EC2 Instances About To Be Retired\n\n* [AWS Get Resources Missing Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_missing_tag/README.md): Gets a list of all AWS resources that are missing the tag in the input parameters.\n\n* [AWS Get Resources With Expiration Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_expiration_tag/README.md): AWS Get all Resources with an expiration tag\n\n* [AWS Get Resources With 
Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_tag/README.md): For a given tag and region, get every AWS resource with that tag.\n\n* [Get AWS S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_s3_buckets/README.md): Get AWS S3 Buckets\n\n* [Get Schedule To Retire AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_schedule_to_retire_instances/README.md): Get Schedule To Retire AWS EC2 Instance\n\n* [ Get secrets from secretsmanager](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secret_from_secretmanager/README.md):  Get secrets from AWS secretsmanager\n\n* [AWS Get Secrets Manager Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secret/README.md): Get string (of JSON) containing Secret details\n\n* [AWS Get Secrets Manager SecretARN](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secretARN/README.md): Given a Secret Name - this Action returns the Secret ARN\n\n* [Get AWS Security Group Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_security_group_details/README.md): Get details about a security group, given its ID.\n\n* [AWS Get Service Quota for a Specific ServiceName](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quota_details/README.md): Given an AWS Region, Service Code and Quota Code, this Action will output the quota information for the specified service.\n\n* [AWS Get Quotas for a Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quotas/README.md): Given inputs of the AWS Region, and the Service_Code for a service, this Action will output all of the Service Quotas and limits.\n\n* [Get Stopped Instance 
Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_stopped_instance_volumes/README.md): This action helps to list the volumes that are attached to stopped instances.\n\n* [Get STS Caller Identity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_sts_caller_identity/README.md): Get STS Caller Identity\n\n* [AWS Get Tags of All Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_tags_of_all_resources/README.md): AWS Get Tags of All Resources\n\n* [Get Timed Out AWS Lambdas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_timed_out_lambdas/README.md): Get AWS Lambda functions that have exceeded the maximum amount of time in seconds that a Lambda function can run.\n\n* [AWS Get TTL For Route53 Records](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_for_route53_records/README.md): Get TTL for Route53 records for a hosted zone.\n\n* [AWS: Check for short Route 53 TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_under_given_hours/README.md): AWS: Check for short Route 53 TTL\n\n* [Get UnHealthy EC2 Instances for Classic ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances/README.md): Get UnHealthy EC2 Instances for Classic ELB\n\n* [Get Unhealthy instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances_from_elb/README.md): Get Unhealthy instances from Elastic Load Balancer\n\n* [AWS get Unused Route53 Health Checks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unused_route53_health_checks/README.md): AWS get Unused Route53 Health Checks\n\n* [AWS Get IAM Users with Old Access 
Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_users_with_old_access_keys/README.md): This Lego collects access keys that have never been used, or that have been used but are older than the threshold.\n\n* [Launch AWS EC2 Instance From an AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_launch_instance_from_ami/README.md): Use this Action to launch an AWS EC2 instance from an AMI\n\n* [AWS List Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_access_keys/README.md): List all Access Keys for the User\n\n* [AWS List All IAM Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_iam_users/README.md): List all AWS IAM Users\n\n* [AWS List All Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_regions/README.md): List all available AWS Regions\n\n* [AWS List Application LoadBalancers ARNs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_application_loadbalancers/README.md): AWS List Application LoadBalancers ARNs\n\n* [AWS List Attached User Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_attached_user_policies/README.md): AWS List Attached User Policies\n\n* [AWS List ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_clusters_with_low_utilization/README.md): This action searches for clusters that have low CPU utilization.\n\n* [AWS List Expiring Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_access_keys/README.md): List Expiring IAM User Access Keys\n\n* [List Expiring ACM 
Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_acm_certificates/README.md): List All Expiring ACM Certificates\n\n* [AWS List Hosted Zones](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_hosted_zones/README.md): List all AWS Hosted zones\n\n* [AWS List Unattached Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unattached_elastic_ips/README.md): This action lists Elastic IP addresses and checks whether each is associated with an instance or network interface.\n\n* [AWS List Unhealthy Instances in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unhealthy_instances_in_target_group/README.md): List Unhealthy Instances in a target group\n\n* [AWS List IAM Users With Old Passwords](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_users_with_old_passwords/README.md): This Lego gets all the IAM users' login profiles and, where a login profile exists, lists the users whose last password change is older than the given threshold.\n\n* [AWS List Instances behind a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_loadbalancer_list_instances/README.md): List AWS Instances behind a Load Balancer\n\n* [Make AWS Bucket Public](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_make_bucket_public/README.md): Make an AWS Bucket Public!\n\n* [AWS Modify EBS Volume to GP3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_ebs_volume_to_gp3/README.md): AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\n\n* [AWS Modify ALB Listeners HTTP 
Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_listener_for_http_redirection/README.md): AWS Modify ALB Listeners HTTP Redirection\n\n* [AWS Modify Publicly Accessible RDS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_public_db_snapshots/README.md): AWS Modify Publicly Accessible RDS Snapshots\n\n* [Get AWS Postgresql Max Configured Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_get_configured_max_connections/README.md): Get AWS Postgresql Max Configured Connections\n\n* [Plot AWS PostgreSQL Active Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_plot_active_connections/README.md): Plot AWS PostgreSQL Active Connections\n\n* [AWS Purchase ElastiCache Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_elasticcache_reserved_node/README.md): This action purchases a reserved cache node offering.\n\n* [AWS Purchase RDS Reserved Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_rds_reserved_instance/README.md): This action purchases a reserved DB instance offering.\n\n* [AWS Purchase Redshift Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_redshift_reserved_node/README.md): This action purchases reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. 
You can purchase one or more of the offerings.\n\n* [ Apply CORS Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_cors/README.md):  Apply CORS Policy for S3 Bucket\n\n* [Apply AWS New Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_policy/README.md): Apply a New AWS Policy for S3 Bucket\n\n* [Read AWS S3 Object](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_read_object/README.md): Read an AWS S3 Object\n\n* [ Register AWS Instances with a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_register_instances/README.md):  Register AWS Instances with a Load Balancer\n\n* [AWS Release Elastic IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_release_elastic_ip/README.md): AWS Release Elastic IP for both VPC and Standard\n\n* [Renew Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_renew_expiring_acm_certificates/README.md): Renew Expiring ACM Certificates\n\n* [AWS_Request_Service_Quota_Increase](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_request_service_quota_increase/README.md): Given an AWS Region, Service Code, quota code and a new value for the quota, this Action sends a request to AWS for a new value. 
Your Connector must have servicequotas:RequestServiceQuotaIncrease enabled for this to work.\n\n* [Restart AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_restart_ec2_instances/README.md): Restart AWS EC2 Instances\n\n* [AWS Revoke Policy from IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_revoke_policy_from_iam_user/README.md): AWS Revoke Policy from IAM User\n\n* [Start AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_run_instances/README.md): Start AWS EC2 Instances\n\n* [AWS Schedule Redshift Cluster Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_schedule_pause_resume_enabled/README.md): AWS Schedule Redshift Cluster Pause Resume Enabled\n\n* [AWS Service Quota Limits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits/README.md): Input a List of Service Quotas, and get back which of your instances are above the warning percentage of the quota\n\n* [AWS VPC service quota limit](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits_vpc/README.md): This Action queries all VPC Storage quotas, and returns all usage over warning_percentage.\n\n* [Stop AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_stop_instances/README.md): Stop an AWS Instance\n\n* [Tag AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_tag_ec2_instances/README.md): Tag AWS Instances\n\n* [AWS List Instances in an ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_instances/README.md): List AWS Instances in an ELBv2 Target Group\n\n* [ AWS List Unhealthy Instances in an ELBV2 Target 
Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_unhealthy_instances/README.md):  List AWS Unhealthy Instances in an ELBv2 Target Group\n\n* [AWS Register/Unregister Instances from a Target Group.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_register_unregister_instances/README.md): Register/Unregister AWS Instances from a Target Group\n\n* [Terminate AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_terminate_ec2_instances/README.md): This Action will Terminate AWS EC2 Instances\n\n* [AWS Update Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_access_key/README.md): Update the status of an Access Key\n\n* [AWS Update TTL for Route53 Record](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_ttl_for_route53_records/README.md): Update TTL for an existing record in a hosted zone.\n\n* [Upload file to S3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_upload_file_to_s3/README.md): Upload a local file to S3\n\n* [AWS_VPC_service_quota_warning](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_vpc_service_quota_warning/README.md): Given an AWS Region and a warning percentage, this Action queries all VPC quota limits, and returns any Quotas that are over the alert value.\n\n* [Datadog delete incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_delete_incident/README.md): Delete an incident given its id\n\n"
  },
  {
    "path": "lists/action_AWS_ACM.md",
    "content": "* [Check SSL Certificate Expiry](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_ssl_certificate_expiry/README.md): Check ACM SSL Certificate expiry date\n\n* [List Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_acm_certificates/README.md): List All Expiring ACM Certificates\n\n* [Renew Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_renew_expiring_acm_certificates/README.md): Renew Expiring ACM Certificates\n\n"
  },
  {
    "path": "lists/action_AWS_ASG.md",
    "content": "* [Get Unhealthy instances from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unhealthy_instances_from_asg/README.md): Get Unhealthy instances from Auto Scaling Group\n\n* [Get AWS AutoScaling Group Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_auto_scaling_instances/README.md): Use This Action to Get AWS AutoScaling Group Instances\n\n"
  },
  {
    "path": "lists/action_AWS_CLI.md",
    "content": "* [Run Command via AWS CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_cli_command/README.md): Execute command using AWS CLI\n\n"
  },
  {
    "path": "lists/action_AWS_CLOUDTRAIL.md",
    "content": "* [Finding Redundant Trails in AWS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_finding_redundant_trails/README.md): This action finds redundant CloudTrail trails by checking whether the IncludeGlobalServiceEvents attribute is true and then looking for duplicate trails.\n\n"
  },
  {
    "path": "lists/action_AWS_CLOUDWATCH.md",
    "content": "* [Attach a webhook endpoint to AWS Cloudwatch alarm](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_cloudwatch_attach_webhook_notification_to_alarm/README.md): Attach a webhook endpoint to one of the SNS attached to the AWS Cloudwatch alarm.\n\n* [AWS Find Redshift Clusters with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_clusters_with_low_cpu_utilization/README.md): Find underutilized Redshift clusters in terms of CPU utilization.\n\n* [Get AWS CloudWatch Alarms List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alarms_list/README.md): Get AWS CloudWatch Alarms List\n\n* [Get AWS EBS Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ebs/README.md): Get AWS CloudWatch Statistics for EBS volumes\n\n* [Get AWS EC2 Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2/README.md): Get AWS CloudWatch Metrics for EC2 instances. 
These could be CPU, Network, Disk based measurements\n\n* [Get AWS EC2 CPU Utilization Statistics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2_cpuutil/README.md): Get AWS CloudWatch Statistics for cpu utilization for EC2 instances\n\n* [Get AWS CloudWatch Metrics for AWS/ApplicationELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_applicationelb/README.md): Get AWS CloudWatch Metrics for AWS/ApplicationELB\n\n* [Get AWS CloudWatch Metrics for AWS/ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_classic_elb/README.md): Get AWS CloudWatch Metrics for Classic Loadbalancer\n\n* [Get AWS CloudWatch Metrics for AWS/DynamoDB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_dynamodb/README.md): Get AWS CloudWatch Metrics for AWS DynamoDB\n\n* [Get AWS CloudWatch Metrics for AWS/AutoScaling](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/README.md): Get AWS CloudWatch Metrics for AWS EC2 AutoScaling groups\n\n* [Get AWS CloudWatch Metrics for AWS/GatewayELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/README.md): Get AWS CloudWatch Metrics for AWS/GatewayELB\n\n* [Get AWS CloudWatch Metrics for AWS/Lambda](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_lambda/README.md): Get AWS CloudWatch Metrics for AWS/Lambda\n\n* [Get AWS CloudWatch Metrics for AWS/NetworkELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_network_elb/README.md): Get AWS CloudWatch Metrics for Network Loadbalancer\n\n* [Get AWS CloudWatch Metrics for 
AWS/RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_rds/README.md): Get AWS CloudWatch Metrics for AWS/RDS\n\n* [Get AWS CloudWatch Metrics for AWS/Redshift](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_redshift/README.md): Get AWS CloudWatch Metrics for AWS/Redshift\n\n* [Get AWS CloudWatch Metrics for AWS/SQS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_sqs/README.md): Get AWS CloudWatch Metrics for AWS/SQS\n\n* [Get AWS CloudWatch Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_statistics/README.md): Get AWS CloudWatch Statistics\n\n"
  },
  {
    "path": "lists/action_AWS_COST_EXPLORER.md",
    "content": "* [AWS Get Costs For All Services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_all_services/README.md): Get Costs for all AWS services in a given time period.\n\n* [AWS Get Costs For Data Transfer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_data_transfer/README.md): Get daily cost for Data Transfer in AWS\n\n* [AWS Get Daily Total Spend](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_daily_total_spend/README.md): AWS get daily total spend from Cost Explorer\n\n"
  },
  {
    "path": "lists/action_AWS_DYNAMODB.md",
    "content": "* [Get AWS CloudWatch Metrics for AWS/DynamoDB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_dynamodb/README.md): Get AWS CloudWatch Metrics for AWS DynamoDB\n\n"
  },
  {
    "path": "lists/action_AWS_EBC.md",
    "content": "* [Filter AWS Unattached EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_unattached_volumes/README.md): Filter AWS Unattached EBS Volume\n\n* [Get EBS Volumes By Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_by_type/README.md): Get EBS Volumes By Type\n\n* [AWS List ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_clusters_with_low_utilization/README.md): This action searches for clusters that have low CPU utilization.\n\n"
  },
  {
    "path": "lists/action_AWS_EBS.md",
    "content": "* [ Detach an AWS Instance from an Elastic Block Store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_ebs_to_instances/README.md):  Detach an AWS Instance from an Elastic Block Store.\n\n* [AWS Filter Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_old_ebs_snapshots/README.md): This action lists all snapshot details that are older than the threshold\n\n* [Get AWS EBS Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ebs/README.md): Get AWS CloudWatch Statistics for EBS volumes\n\n* [Get Stopped Instance Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_stopped_instance_volumes/README.md): This action helps to list the volumes that are attached to stopped instances.\n\n"
  },
  {
    "path": "lists/action_AWS_EC2.md",
    "content": "* [Attach an EBS volume to an AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_ebs_to_instances/README.md): Attach an EBS volume to an AWS EC2 Instance\n\n* [AWS Create Snapshot For Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_volumes_snapshot/README.md): Create a snapshot of an EC2 Instance's EBS volume to back up the data stored in EBS\n\n* [Delete AWS EBS Volume by Volume ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_volume_by_id/README.md): Delete AWS Volume by Volume ID\n\n* [ Detach an AWS Instance from an Elastic Block Store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_ebs_to_instances/README.md):  Detach an AWS Instance from an Elastic Block Store.\n\n* [AWS Detach Instances From AutoScaling Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_instances_from_autoscaling_group/README.md): Use This Action to AWS Detach Instances From AutoScaling Group\n\n* [EBS Modify Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ebs_modify_volume/README.md): Modify/Resize volume for Elastic Block Storage (EBS).\n\n* [AWS Filter All Manual Database Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_all_manual_database_snapshots/README.md): Use This Action to AWS Filter All Manual Database Snapshots\n\n* [Filter AWS Unattached EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_unattached_volumes/README.md): Filter AWS Unattached EBS Volume\n\n* [Filter AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_tags/README.md): Filter AWS EC2 Instance\n\n* [Filter AWS EC2 instance by VPC 
Ids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_vpc/README.md): Use this Action to Filter AWS EC2 Instance by VPC Ids\n\n* [Filter All AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_instances/README.md): Filter All AWS EC2 Instance\n\n* [Filter AWS EC2 Instances Without Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_without_lifetime_tag/README.md): Filter AWS EC2 Instances Without Lifetime Tag\n\n* [Filter AWS EC2 Instances Without Termination and Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/README.md): Filter AWS EC2 Instances Without Termination and Lifetime Tag and Check if they are valid\n\n* [AWS Filter Large EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_large_ec2_instances/README.md): This Action filters all instances whose instanceType contains Large or xLarge, and that DO NOT have the largetag key/value.\n\n* [AWS Find Long Running EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_long_running_instances/README.md): This action lists all instances that are older than the threshold\n\n* [Get Unhealthy instances from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unhealthy_instances_from_asg/README.md): Get Unhealthy instances from Auto Scaling Group\n\n* [Filter AWS Untagged EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_untagged_ec2_instances/README.md): Filter AWS Untagged EC2 Instances\n\n* [Filter AWS Unused Keypairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_keypairs/README.md): Filter AWS Unused Keypairs\n\n* 
[AWS Find Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_nat_gateway/README.md): This action gets all NAT Gateways that have zero traffic.\n\n* [AWS Find Idle Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_idle_instances/README.md): Find Idle EC2 instances\n\n* [AWS Find Redshift Cluster without Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/README.md): Use This Action to find Redshift clusters for which Pause/Resume is not enabled\n\n* [Get AWS EC2 Instances All ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_ec2_instances/README.md): Use This Action to Get All AWS EC2 Instances\n\n* [Get AWS AutoScaling Group Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_auto_scaling_instances/README.md): Use This Action to Get AWS AutoScaling Group Instances\n\n* [Get AWS EC2 Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2/README.md): Get AWS CloudWatch Metrics for EC2 instances. 
These could be CPU, Network, Disk based measurements\n\n* [Get AWS EC2 CPU Utilization Statistics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2_cpuutil/README.md): Get AWS CloudWatch Statistics for cpu utilization for EC2 instances\n\n* [Get AWS CloudWatch Metrics for AWS/AutoScaling](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/README.md): Get AWS CloudWatch Metrics for AWS EC2 AutoScaling groups\n\n* [Get EBS Volumes By Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_by_type/README.md): Get EBS Volumes By Type\n\n* [Get EC2 CPU Consumption For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_cpu_consumption/README.md): Get EC2 CPU Consumption For All Instances\n\n* [Get EC2 Data Traffic In and Out For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_data_traffic/README.md): Get EC2 Data Traffic In and Out For All Instances\n\n* [Get Age of all EC2 Instances in Days](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_instance_age/README.md): Get Age of all EC2 Instances in Days\n\n* [Get AWS Instance Details with Matching Private DNS Name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_detail_with_private_dns_name/README.md): Use this action to get details of an AWS EC2 Instance that matches a Private DNS Name\n\n* [Get AWS Instances Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_details/README.md): Get AWS Instances Details\n\n* [List All AWS EC2 Instances Under the ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instances/README.md):  Get a list of all AWS EC2 Instances 
from given ELB\n\n* [Get all Targets for Network Load Balancer (NLB)](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlb_targets/README.md): Use this action to get all targets for Network Load Balancer (NLB)\n\n* [Get AWS EC2 Instances with a public IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_public_ec2_instances/README.md): Lists all EC2 instances with a public IP\n\n* [AWS Get EC2 Instances About To Be Retired](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_reserved_instances_about_to_retired/README.md): AWS Get EC2 Instances About To Be Retired\n\n* [AWS Get Resources Missing Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_missing_tag/README.md): Gets a list of all AWS resources that are missing the tag in the input parameters.\n\n* [AWS Get Resources With Expiration Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_expiration_tag/README.md): AWS Get all Resources with an expiration tag\n\n* [AWS Get Resources With Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_tag/README.md): For a given tag and region, get every AWS resource with that tag.\n\n* [Get Schedule To Retire AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_schedule_to_retire_instances/README.md): Get Schedule To Retire AWS EC2 Instance\n\n* [Get AWS Security Group Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_security_group_details/README.md): Get details about a security group, given its ID.\n\n* [Get Stopped Instance Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_stopped_instance_volumes/README.md): This action helps to list the volumes that are attached to stopped 
instances.\n\n* [Launch AWS EC2 Instance From an AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_launch_instance_from_ami/README.md): Use this Action to Launch an AWS EC2 instance from an AMI\n\n* [AWS List ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_clusters_with_low_utilization/README.md): This action searches for clusters that have low CPU utilization.\n\n* [AWS List Unattached Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unattached_elastic_ips/README.md): This action lists Elastic IP addresses and checks whether each is associated with an instance or network interface.\n\n* [AWS Modify EBS Volume to GP3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_ebs_volume_to_gp3/README.md): AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\n\n* [AWS Modify ALB Listeners HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_listener_for_http_redirection/README.md): AWS Modify ALB Listeners HTTP Redirection\n\n* [AWS Modify Publicly Accessible RDS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_public_db_snapshots/README.md): AWS Modify Publicly Accessible RDS Snapshots\n\n* [Get AWS Postgresql Max Configured Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_get_configured_max_connections/README.md): Get AWS Postgresql Max Configured Connections\n\n* [Plot AWS PostgreSQL Active Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_plot_active_connections/README.md): Plot AWS PostgreSQL Active Connections\n\n* [Restart AWS EC2 
Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_restart_ec2_instances/README.md): Restart AWS EC2 Instances\n\n* [Start AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_run_instances/README.md): Start AWS EC2 Instances\n\n* [Stop AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_stop_instances/README.md): Stop an AWS Instance\n\n* [Tag AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_tag_ec2_instances/README.md): Tag AWS Instances\n\n* [AWS List Instances in an ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_instances/README.md): List AWS Instances in an ELBv2 Target Group\n\n* [Terminate AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_terminate_ec2_instances/README.md): This Action will Terminate AWS EC2 Instances\n\n"
  },
  {
    "path": "lists/action_AWS_ECS.md",
    "content": "* [AWS ECS Describe Task Definition.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_describe_task_definition/README.md): Describe AWS ECS Task Definition.\n\n* [ECS detect failed deployment ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_detect_failed_deployment/README.md): List of stopped tasks, associated with a deployment, along with their stopped reason\n\n* [Restart AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_service_restart/README.md): Restart an AWS ECS Service\n\n* [Update AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_update_service/README.md): Update AWS ECS Service\n\n"
  },
  {
    "path": "lists/action_AWS_EKS.md",
    "content": "* [ Copy EKS Pod logs to bucket.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_copy_pod_logs_to_bucket/README.md):  Copy given EKS pod logs to given S3 Bucket.\n\n* [ Delete EKS POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_delete_pod/README.md):  Delete a EKS POD in a given Namespace\n\n* [List of EKS dead pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_dead_pods/README.md): Get list of all dead pods in a given EKS cluster\n\n* [List of EKS Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_namespaces/README.md): Get list of all Namespaces in a given EKS cluster\n\n* [List of EKS pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_pods/README.md): Get list of all pods in a given EKS cluster\n\n* [ List of EKS deployment for given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_deployments_name/README.md):  Get list of EKS deployment names for given Namespace\n\n* [Get CPU and memory utilization of node.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_node_cpu_memory/README.md):  Get CPU and memory utilization of given node.\n\n* [ Get EKS Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_nodes/README.md):  Get EKS Nodes\n\n* [ List of EKS pods not in RUNNING State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_not_running_pods/README.md):  Get list of all pods in a given EKS cluster that are not running.\n\n* [Get pod CPU and Memory usage from given namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_cpu_memory/README.md): Get all pod CPU and Memory usage 
from given namespace\n\n* [ EKS Get pod status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_status/README.md):  Get the status of a given Pod in a given Namespace and EKS cluster\n\n* [ EKS Get Running Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_running_pods/README.md):  Get a list of running pods from a given namespace and EKS cluster\n\n* [ Run Kubectl commands on EKS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_run_kubectl_cmd/README.md): This action runs a kubectl command on an AWS EKS Cluster\n\n"
  },
  {
    "path": "lists/action_AWS_ELASTICACHE.md",
    "content": "* [AWS Get Long Running ElastiCache clusters Without Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/README.md): This action gets information about long running ElastiCache clusters and their status, and checks if they have any reserved nodes associated with them.\n\n* [AWS Purchase ElastiCache Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_elasticcache_reserved_node/README.md): This action purchases a reserved cache node offering.\n\n"
  },
  {
    "path": "lists/action_AWS_ELASTICCACHE.md",
    "content": "* [AWS Get Long Running ElastiCache clusters Without Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/README.md): This action gets information about long running ElastiCache clusters and their status, and checks if they have any reserved nodes associated with them.\n\n"
  },
  {
    "path": "lists/action_AWS_ELB.md",
    "content": "* [ Deregisters AWS Instances from a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_deregister_instances/README.md):  Deregisters AWS Instances from a Load Balancer\n\n* [Filter AWS Target groups by tag name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_target_groups_by_tags/README.md): Filter AWS Target groups which have the provided tag attached to it. It also returns the value of that tag for each target group\n\n* [Find AWS ELBs with no targets or instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_elbs_with_no_targets_or_instances/README.md): Find AWS ELBs with no targets or instances attached to them.\n\n* [Get AWS ALB Listeners Without HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alb_listeners_without_http_redirect/README.md): Get AWS ALB Listeners Without HTTP Redirection\n\n* [AWS Get All Load Balancers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_load_balancers/README.md): AWS Get All Load Balancers\n\n* [Get AWS CloudWatch Metrics for AWS/ApplicationELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_applicationelb/README.md): Get AWS CloudWatch Metrics for AWS/ApplicationELB\n\n* [Get AWS CloudWatch Metrics for AWS/ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_classic_elb/README.md): Get AWS CloudWatch Metrics for Classic Loadbalancer\n\n* [Get AWS CloudWatch Metrics for AWS/GatewayELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/README.md): Get AWS CloudWatch Metrics for AWS/GatewayELB\n\n* [Get AWS CloudWatch Metrics for 
AWS/NetworkELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_network_elb/README.md): Get AWS CloudWatch Metrics for Network Loadbalancer\n\n* [Get Timed Out AWS Lambdas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_timed_out_lambdas/README.md): Get AWS Lambda functions that have exceeded the maximum amount of time in seconds that a Lambda function can run.\n\n* [Get UnHealthy EC2 Instances for Classic ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances/README.md): Get UnHealthy EC2 Instances for Classic ELB\n\n* [Get Unhealthy instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances_from_elb/README.md): Get Unhealthy instances from Elastic Load Balancer\n\n* [AWS List Application LoadBalancers ARNs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_application_loadbalancers/README.md): AWS List Application LoadBalancers ARNs\n\n* [AWS List Unhealthy Instances in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unhealthy_instances_in_target_group/README.md): List Unhealthy Instances in a target group\n\n* [AWS List Instances behind a Load Balancer.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_loadbalancer_list_instances/README.md): List AWS Instances behind a Load Balancer\n\n* [AWS Modify ALB Listeners HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_listener_for_http_redirection/README.md): AWS Modify ALB Listeners HTTP Redirection\n\n* [ Register AWS Instances with a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_register_instances/README.md):  Register AWS Instances with a Load Balancer\n\n* [AWS List 
Instances in an ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_instances/README.md): List AWS Instances in an ELBv2 Target Group\n\n* [ AWS List Unhealthy Instances in an ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_unhealthy_instances/README.md):  List AWS Unhealthy Instances in an ELBv2 Target Group\n\n* [AWS Register/Unregister Instances from a Target Group.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_register_unregister_instances/README.md): Register/Unregister AWS Instances from a Target Group\n\n"
  },
  {
    "path": "lists/action_AWS_EMR.md",
    "content": "* [Get AWS EMR Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_emr_get_instances/README.md): Get a list of EC2 Instances for an EMR cluster. Filtered by node type (MASTER|CORE|TASK)\n\n"
  },
  {
    "path": "lists/action_AWS_IAM.md",
    "content": "* [AWS Start IAM Policy Generation ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/AWS_Start_IAM_Policy_Generation/README.md): Given a region, a CloudTrail ARN (where the logs are being recorded), a reference IAM ARN (whose usage we will parse), and a Service role, this will begin the generation of a IAM policy.  The output is a String of the generation Id.\n\n* [AWS Attach New Policy to User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_iam_policy/README.md): AWS Attach New Policy to User\n\n* [AWS Create IAM Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_IAMpolicy/README.md): Given an AWS policy (as a string), and the name for the policy, this will create an IAM policy.\n\n* [AWS Create Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_access_key/README.md): Create a new Access Key for the User\n\n* [Create New IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_iam_user/README.md): Create New IAM User\n\n* [Create Login profile for IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_user_login_profile/README.md): Create Login profile for IAM User\n\n* [AWS Delete Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_access_key/README.md): Delete an Access Key for a User\n\n* [AWS Get Generated Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_generated_policy/README.md): Given a Region and the ID of a policy generation job, this Action will return the policy (once it has been completed).\n\n* [AWS List IAM users without password policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_iam_users_without_password_policies/README.md): Get a list of all IAM 
users that have no password policy attached to them.\n\n* [AWS Get IAM Users with Old Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_users_with_old_access_keys/README.md): This Lego collects the access keys that have never been used, as well as those that have been used but are older than the threshold.\n\n* [AWS List Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_access_keys/README.md): List all Access Keys for the User\n\n* [AWS List All IAM Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_iam_users/README.md): List all AWS IAM Users\n\n* [AWS List Attached User Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_attached_user_policies/README.md): AWS List Attached User Policies\n\n* [AWS List Expiring Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_access_keys/README.md): List Expiring IAM User Access Keys\n\n* [AWS List IAM Users With Old Passwords](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_users_with_old_passwords/README.md): This Lego gets each IAM user's login profile and, where a login profile exists, checks the date of the last password change; it lists the users whose passwords are older than the given threshold.\n\n* [AWS Update Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_access_key/README.md): Update status of the Access Key\n\n"
  },
  {
    "path": "lists/action_AWS_LAMBDA.md",
    "content": "* [Get AWS CloudWatch Metrics for AWS/Lambda](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_lambda/README.md): Get AWS CloudWatch Metrics for AWS/Lambda\n\n* [Find AWS Lambdas Not Using ARM64 Graviton2 Processor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_not_using_arm_graviton2_processor/README.md): Find all AWS Lambda functions that are not using the Arm-based AWS Graviton2 processor for their runtime architecture\n\n"
  },
  {
    "path": "lists/action_AWS_LOGS.md",
    "content": "* [AWS Filter Unused Log Stream](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_log_streams/README.md): This action lists all log streams that are unused for all the log groups by the given threshold.\n\n"
  },
  {
    "path": "lists/action_AWS_NAT_GATEWAY.md",
    "content": "* [AWS Find Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_nat_gateway/README.md): This action to get all of the Nat gateways that have zero traffic over those\n\n* [AWS Get Private Address from NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_private_address_from_nat_gateways/README.md): This action is used to get private address from NAT gateways.\n\n"
  },
  {
    "path": "lists/action_AWS_POSTGRES.md",
    "content": "* [Get AWS Postgresql Max Configured Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_get_configured_max_connections/README.md): Get AWS Postgresql Max Configured Connections\n\n* [Plot AWS PostgreSQL Active Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_plot_active_connections/README.md): Plot AWS PostgreSQL Action Connections\n\n"
  },
  {
    "path": "lists/action_AWS_RDS.md",
    "content": "* [AWS Check if RDS instances are not M5 or T3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_rds_non_m5_t3_instances/README.md): AWS Check if RDS instances are not M5 or T3\n\n* [AWS Delete RDS Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_rds_instance/README.md): Delete AWS RDS Instance\n\n* [AWS Find Low Connections RDS instances Per Day](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_low_connection_rds_instances/README.md): This action will find RDS DB instances with a number of connections below the specified minimum in the specified region.\n\n* [AWS Find EMR Clusters of Old Generation Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_old_gen_emr_clusters/README.md): This action list of EMR clusters of old generation instances.\n\n* [AWS Find RDS Instances with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_rds_instances_with_low_cpu_utilization/README.md): This lego finds RDS instances are not utilizing their CPU resources to their full potential.\n\n* [Get AWS CloudWatch Metrics for AWS/RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_rds/README.md): Get AWS CloudWatch Metrics for AWS/RDS\n\n* [AWS Get Idle EMR Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_idle_emr_clusters/README.md): This action list of EMR clusters that have been idle for more than the specified time.\n\n* [AWS Get Long Running RDS Instances Without Reserved Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_rds_instances_without_reserved_instances/README.md): This action gets information about long running instances and their status, and checks if they have any 
reserved nodes associated with them.\n\n* [AWS Get Older Generation RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_older_generation_rds_instances/README.md): AWS Get Older Generation RDS Instances action retrieves information about RDS instances using older generation instance types.\n\n* [AWS Get Publicly Accessible RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_instances/README.md): AWS Get Publicly Accessible RDS Instances\n\n* [AWS Get Publicly Accessible DB Snapshots in RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_snapshots/README.md): AWS Get Publicly Accessible DB Snapshots in RDS\n\n* [Get AWS RDS automated db snapshots above retention period](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_rds_automated_snapshots_above_retention_period/README.md): This Action gets the snapshots above a certain retention period.\n\n* [AWS Modify Publicly Accessible RDS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_public_db_snapshots/README.md): AWS Modify Publicly Accessible RDS Snapshots\n\n* [AWS Purchase RDS Reserved Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_rds_reserved_instance/README.md): This action purchases a reserved DB instance offering.\n\n"
  },
  {
    "path": "lists/action_AWS_REDSHIFT.md",
    "content": "* [AWS Redshift Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_redshift_query/README.md): Make a SQL Query to the given AWS Redshift database\n\n* [AWS Delete Redshift Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_redshift_cluster/README.md): Delete AWS Redshift Cluster\n\n* [AWS Find Redshift Clusters with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_clusters_with_low_cpu_utilization/README.md): Find underutilized Redshift clusters in terms of CPU utilization.\n\n* [Get AWS CloudWatch Metrics for AWS/Redshift](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_redshift/README.md): Get AWS CloudWatch Metrics for AWS/Redshift\n\n* [AWS Get Long Running Redshift Clusters Without Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_redshift_clusters_without_reserved_nodes/README.md): This action gets information about running clusters and their status, and checks if they have any reserved nodes associated with them.\n\n* [AWS Get Redshift Query Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_query_details/README.md): Given an QueryId, this Action will give you the status of the Query, along with other data like  the number of lines/\n\n* [AWS Get Redshift Result](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_result/README.md): Given a QueryId, Get the Query Result, and format into a List\n\n* [AWS Purchase Redshift Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_redshift_reserved_node/README.md): This action purchases reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. 
You can purchase one or more of the offerings.\n\n"
  },
  {
    "path": "lists/action_AWS_ROUTE53.md",
    "content": "* [Get AWS Lambdas With High Error Rate](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_with_high_error_rate/README.md): Get AWS Lambda Functions that exceed a given threshold error rate.\n\n* [AWS Get TTL For Route53 Records](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_for_route53_records/README.md): Get TTL for Route53 records for a hosted zone.\n\n* [AWS: Check for short Route 53 TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_under_given_hours/README.md): AWS: Check for short Route 53 TTL\n\n* [AWS get Unused Route53 Health Checks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unused_route53_health_checks/README.md): AWS get Unused Route53 Health Checks\n\n* [AWS List Hosted Zones](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_hosted_zones/README.md): List all AWS Hosted zones\n\n* [AWS Update TTL for Route53 Record](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_ttl_for_route53_records/README.md): Update TTL for an existing record in a hosted zone.\n\n"
  },
  {
    "path": "lists/action_AWS_S3.md",
    "content": "* [Add Lifecycle Configuration to AWS S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_add_lifecycle_configuration_to_s3_bucket/README.md): Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration.\n\n* [Apply AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_apply_default_encryption_for_s3_buckets/README.md): Apply AWS Default Encryption for S3 Bucket\n\n* [AWS Change ACL Permission of public S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_change_acl_permissions_of_buckets/README.md): AWS Change ACL Permission public S3 Bucket\n\n* [Create AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_bucket/README.md): Create a new AWS S3 Bucket\n\n* [Delete AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_bucket/README.md): Delete an AWS S3 Bucket\n\n* [Delete AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_s3_bucket_encryption/README.md): Delete AWS Default Encryption for S3 Bucket\n\n* [Get AWS public S3 Buckets using ACL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_public_s3_buckets_by_acl/README.md): Get AWS public S3 Buckets using ACL\n\n* [Filter AWS Unencrypted S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unencrypted_s3_buckets/README.md): Filter AWS Unencrypted S3 Buckets\n\n* [AWS Find S3 Buckets without Lifecycle Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_s3_buckets_without_lifecycle_policies/README.md): S3 lifecycle policies enable you to automatically transition objects to different storage classes or delete them when 
they are no longer needed. This action finds all S3 buckets without lifecycle policies. \n\n* [Get AWS Bucket Size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_bucket_size/README.md): Get an AWS Bucket Size\n\n* [Get AWS S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_s3_buckets/README.md): Get AWS S3 Buckets\n\n* [Make AWS Bucket Public](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_make_bucket_public/README.md): Make an AWS Bucket Public!\n\n* [ Apply CORS Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_cors/README.md):  Apply CORS Policy for S3 Bucket\n\n* [Apply AWS New Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_policy/README.md): Apply a New AWS Policy for S3 Bucket\n\n* [Read AWS S3 Object](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_read_object/README.md): Read an AWS S3 Object\n\n* [Upload file to S3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_upload_file_to_s3/README.md): Upload a local file to S3\n\n"
  },
  {
    "path": "lists/action_AWS_SECRET_MANAGER.md",
    "content": "* [ Get secrets from secretsmanager](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secret_from_secretmanager/README.md):  Get secrets from AWS secretsmanager\n\n* [AWS Get Secrets Manager Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secret/README.md): Get string (of JSON) containing Secret details\n\n* [AWS Get Secrets Manager SecretARN](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secretARN/README.md): Given a Secret Name - this Action returns the Secret ARN\n\n"
  },
  {
    "path": "lists/action_AWS_SQS.md",
    "content": "* [Get AWS CloudWatch Metrics for AWS/SQS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_sqs/README.md): Get AWS CloudWatch Metrics for AWS/SQS\n\n"
  },
  {
    "path": "lists/action_AWS_SSM.md",
    "content": "* [ Run Command via SSM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_command_ssm/README.md):  Execute command on EC2 instance(s) using SSM\n\n"
  },
  {
    "path": "lists/action_AWS_STS.md",
    "content": "* [Get STS Caller Identity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_sts_caller_identity/README.md): Get STS Caller Identity\n\n"
  },
  {
    "path": "lists/action_AWS_VPC.md",
    "content": "* [Filter AWS EC2 instance by VPC Ids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_vpc/README.md): Use this Action to Filter AWS EC2 Instance by VPC Ids\n\n* [Filter AWS Target groups by tag name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_target_groups_by_tags/README.md): Filter AWS Target groups which have the provided tag attached to it. It also returns the value of that tag for each target group\n\n* [AWS Get Internet Gateway by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_internet_gateway_by_vpc/README.md): AWS Get Internet Gateway by VPC ID\n\n* [AWS Get NAT Gateway Info by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nat_gateway_by_vpc/README.md): This action is used to get the details about nat gateways configured for VPC.\n\n* [AWS VPC service quota limit](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits_vpc/README.md): This Action queries all VPC Storage quotas, and returns all usage over warning_percentage.\n\n* [AWS_VPC_service_quota_warning](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_vpc_service_quota_warning/README.md): Given an AWS Region and a warning percentage, this Action queries all VPC quota limits, and returns any of Quotas that are over the alert value.\n\n"
  },
  {
    "path": "lists/action_AZURE.md",
    "content": "* [Get Azure Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Azure/legos/azure_get_handle/README.md): Get Azure Handle\n\n"
  },
  {
    "path": "lists/action_CHATGPT.md",
    "content": ""
  },
  {
    "path": "lists/action_CLOUDOPS.md",
    "content": "* [AWS Start IAM Policy Generation ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/AWS_Start_IAM_Policy_Generation/README.md): Given a region, a CloudTrail ARN (where the logs are being recorded), a reference IAM ARN (whose usage we will parse), and a Service role, this will begin the generation of a IAM policy.  The output is a String of the generation Id.\n\n* [Add Lifecycle Configuration to AWS S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_add_lifecycle_configuration_to_s3_bucket/README.md): Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration.\n\n* [Filter AWS EC2 Instances Without Termination and Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/README.md): Filter AWS EC2 Instances Without Termination and Lifetime Tag and Check of they are valid\n\n* [Get AWS public S3 Buckets using ACL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_public_s3_buckets_by_acl/README.md): Get AWS public S3 Buckets using ACL\n\n* [Filter AWS Target groups by tag name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_target_groups_by_tags/README.md): Filter AWS Target groups which have the provided tag attached to it. 
It also returns the value of that tag for each target group\n\n* [Filter AWS Unencrypted S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unencrypted_s3_buckets/README.md): Filter AWS Unencrypted S3 Buckets\n\n* [Get Unhealthy instances from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unhealthy_instances_from_asg/README.md): Get Unhealthy instances from Auto Scaling Group\n\n* [Filter AWS Untagged EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_untagged_ec2_instances/README.md): Filter AWS Untagged EC2 Instances\n\n* [Filter AWS Unused Keypairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_keypairs/README.md): Filter AWS Unused Keypairs\n\n* [AWS Filter Unused Log Stream](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_log_streams/README.md): This action lists, across all log groups, the log streams that have been unused for longer than the given threshold.\n\n* [AWS Find Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_nat_gateway/README.md): This action gets all of the NAT gateways that have zero traffic.\n\n* [Find AWS ELBs with no targets or instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_elbs_with_no_targets_or_instances/README.md): Find AWS ELBs with no targets or instances attached to them.\n\n* [AWS Find S3 Buckets without Lifecycle Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_s3_buckets_without_lifecycle_policies/README.md): S3 lifecycle policies enable you to automatically transition objects to different storage classes or delete them when they are no longer needed. This action finds all S3 buckets without lifecycle policies. 
\n\n* [Finding Redundant Trails in AWS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_finding_redundant_trails/README.md): This action finds redundant cloud trails: when the attribute IncludeGlobalServiceEvents is true, it looks for multiple trails that duplicate each other.\n\n* [AWS Get AWS Account Number](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_acount_number/README.md): Some AWS functions require the AWS Account number. This programmatically retrieves it.\n\n* [Get AWS CloudWatch Alarms List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alarms_list/README.md): Get AWS CloudWatch Alarms List\n\n* [Get AWS EC2 Instances All ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_ec2_instances/README.md): Use This Action to Get All AWS EC2 Instances\n\n* [AWS Get All Load Balancers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_load_balancers/README.md): AWS Get All Load Balancers\n\n* [AWS Get All Service Names v3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_service_names/README.md): Get a list of all service names in a region\n\n* [AWS Get Untagged Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_untagged_resources/README.md): AWS Get Untagged Resources\n\n* [Get AWS AutoScaling Group Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_auto_scaling_instances/README.md): Use This Action to Get AWS AutoScaling Group Instances\n\n* [Get AWS Bucket Size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_bucket_size/README.md): Get an AWS Bucket Size\n\n* [Get AWS EBS Metrics from 
Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ebs/README.md): Get AWS CloudWatch Statistics for EBS volumes\n\n* [Get AWS EC2 Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2/README.md): Get AWS CloudWatch Metrics for EC2 instances. These could be CPU, Network, Disk based measurements\n\n* [Get AWS EC2 CPU Utilization Statistics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2_cpuutil/README.md): Get AWS CloudWatch Statistics for cpu utilization for EC2 instances\n\n* [Get AWS CloudWatch Metrics for AWS/ApplicationELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_applicationelb/README.md): Get AWS CloudWatch Metrics for AWS/ApplicationELB\n\n* [Get AWS CloudWatch Metrics for AWS/ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_classic_elb/README.md): Get AWS CloudWatch Metrics for Classic Loadbalancer\n\n* [Get AWS CloudWatch Metrics for AWS/DynamoDB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_dynamodb/README.md): Get AWS CloudWatch Metrics for AWS DynamoDB\n\n* [Get AWS CloudWatch Metrics for AWS/AutoScaling](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/README.md): Get AWS CloudWatch Metrics for AWS EC2 AutoScaling groups\n\n* [Get AWS CloudWatch Metrics for AWS/GatewayELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/README.md): Get AWS CloudWatch Metrics for AWS/GatewayELB\n\n* [Get AWS CloudWatch Metrics for 
AWS/Lambda](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_lambda/README.md): Get AWS CloudWatch Metrics for AWS/Lambda\n\n* [Get AWS CloudWatch Metrics for AWS/NetworkELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_network_elb/README.md): Get AWS CloudWatch Metrics for Network Loadbalancer\n\n* [Get AWS CloudWatch Metrics for AWS/RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_rds/README.md): Get AWS CloudWatch Metrics for AWS/RDS\n\n* [Get AWS CloudWatch Metrics for AWS/Redshift](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_redshift/README.md): Get AWS CloudWatch Metrics for AWS/Redshift\n\n* [Get AWS CloudWatch Metrics for AWS/SQS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_sqs/README.md): Get AWS CloudWatch Metrics for AWS/SQS\n\n* [Get AWS CloudWatch Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_statistics/README.md): Get AWS CloudWatch Statistics\n\n* [Get EC2 CPU Consumption For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_cpu_consumption/README.md): Get EC2 CPU Consumption For All Instances\n\n* [Get EC2 Data Traffic In and Out For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_data_traffic/README.md): Get EC2 Data Traffic In and Out For All Instances\n\n* [Get Age of all EC2 Instances in Days](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_instance_age/README.md): Get Age of all EC2 Instances in Days\n\n* [Get AWS ECS Service 
Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_services_status/README.md): Get the Status of an AWS ECS Service\n\n* [AWS Get Generated Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_generated_policy/README.md): Given a Region and the ID of a policy generation job, this Action will return the policy (once it has been completed).\n\n* [AWS List IAM users without password policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_iam_users_without_password_policies/README.md): Get a list of all IAM users that have no password policy attached to them.\n\n* [Get AWS Instance Details with Matching Private DNS Name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_detail_with_private_dns_name/README.md): Use this action to get details of an AWS EC2 Instance that matches a Private DNS Name\n\n* [Get AWS Instances Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_details/README.md): Get AWS Instances Details\n\n* [List All AWS EC2 Instances Under the ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instances/README.md):  Get a list of all AWS EC2 Instances from given ELB\n\n* [AWS Get Internet Gateway by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_internet_gateway_by_vpc/README.md): AWS Get Internet Gateway by VPC ID\n\n* [Find AWS Lambdas Not Using ARM64 Graviton2 Processor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_not_using_arm_graviton2_processor/README.md): Find all AWS Lambda functions that are not using the Arm-based AWS Graviton2 processor for their runtime architecture\n\n* [AWS Get Long Running ElastiCache clusters Without Reserved 
Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/README.md): This action gets information about long running ElastiCache clusters and their status, and checks if they have any reserved nodes associated with them.\n\n* [AWS Get NAT Gateway Info by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nat_gateway_by_vpc/README.md): This action is used to get the details about nat gateways configured for VPC.\n\n* [Get all Targets for Network Load Balancer (NLB)](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlb_targets/README.md): Use this action to get all targets for Network Load Balancer (NLB)\n\n* [AWS Get Network Load Balancer (NLB) without Targets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlbs_without_targets/README.md): Use this action to get AWS Network Load Balancer (NLB) without Targets\n\n* [AWS Get Private Address from NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_private_address_from_nat_gateways/README.md): This action is used to get private address from NAT gateways.\n\n* [Get AWS EC2 Instances with a public IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_public_ec2_instances/README.md): lists all EC2 instances with a public IP\n\n* [AWS Get Publicly Accessible DB Snapshots in RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_snapshots/README.md): AWS Get Publicly Accessible DB Snapshots in RDS\n\n* [Get AWS RDS automated db snapshots above retention period](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_rds_automated_snapshots_above_retention_period/README.md): This Action gets the snapshots above a certain retention period.\n\n* 
[AWS Get Redshift Query Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_query_details/README.md): Given a QueryId, this Action will give you the status of the Query, along with other data like the number of lines/\n\n* [AWS Get Redshift Result](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_result/README.md): Given a QueryId, get the Query Result and format it into a List\n\n* [Get AWS S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_s3_buckets/README.md): Get AWS S3 Buckets\n\n* [Get Schedule To Retire AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_schedule_to_retire_instances/README.md): Get Schedule To Retire AWS EC2 Instance\n\n* [Get secrets from secretsmanager](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secret_from_secretmanager/README.md): Get secrets from AWS Secrets Manager\n\n* [AWS Get Secrets Manager Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secret/README.md): Get a string (of JSON) containing Secret details\n\n* [AWS Get Secrets Manager SecretARN](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secretARN/README.md): Given a Secret Name, this Action returns the Secret ARN\n\n* [Get AWS Security Group Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_security_group_details/README.md): Get details about a security group, given its ID.\n\n* [AWS Get Service Quota for a Specific ServiceName](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quota_details/README.md): Given an AWS Region, Service Code and Quota Code, this Action will output the quota information for the specified service.\n\n* [AWS Get 
Quotas for a Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quotas/README.md): Given inputs of the AWS Region and the Service_Code for a service, this Action will output all of the Service Quotas and limits.\n\n* [Get STS Caller Identity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_sts_caller_identity/README.md): Get STS Caller Identity\n\n* [AWS Get Tags of All Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_tags_of_all_resources/README.md): AWS Get Tags of All Resources\n\n* [Get UnHealthy EC2 Instances for Classic ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances/README.md): Get UnHealthy EC2 Instances for Classic ELB\n\n* [Get Unhealthy instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances_from_elb/README.md): Get Unhealthy instances from Elastic Load Balancer\n\n* [AWS Get IAM Users with Old Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_users_with_old_access_keys/README.md): This Lego collects the access keys that have never been used, or the access keys that have been used but are older than the threshold.\n\n* [Launch AWS EC2 Instance From an AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_launch_instance_from_ami/README.md): Use this action to launch an AWS EC2 instance from an AMI\n\n* [AWS List Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_access_keys/README.md): List all Access Keys for the User\n\n* [AWS List All IAM Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_iam_users/README.md): List all AWS IAM Users\n\n* [AWS List All 
Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_regions/README.md): List all available AWS Regions\n\n* [AWS List Application LoadBalancers ARNs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_application_loadbalancers/README.md): AWS List Application LoadBalancers ARNs\n\n* [AWS List Attached User Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_attached_user_policies/README.md): AWS List Attached User Policies\n\n* [AWS List Expiring Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_access_keys/README.md): List Expiring IAM User Access Keys\n\n* [List Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_acm_certificates/README.md): List All Expiring ACM Certificates\n\n* [AWS List Unhealthy Instances in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unhealthy_instances_in_target_group/README.md): List Unhealthy Instances in a target group\n\n* [Add lifecycle policy to GCP storage bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_lifecycle_policy_to_bucket/README.md): The action adds a lifecycle policy to a Google Cloud Platform (GCP) storage bucket.\n\n* [Get GCP storage buckets without lifecycle policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_buckets_without_lifecycle_policies/README.md): The action retrieves a list of Google Cloud Platform (GCP) storage buckets that do not have any lifecycle policies applied.\n\n* [Get details of GCP forwarding rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_forwarding_rules_details/README.md): Get details of forwarding rules associated with a backend service.\n\n* [Get GCP 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_handle/README.md): Get GCP Handle\n\n* [Get unused GCP backend services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_unused_backend_services/README.md): Get unused backend service for an application load balancer that has no instances in its target group.\n\n* [Get Jenkins Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_handle/README.md): Get Jenkins Handle\n\n* [Execute local script on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_execute_local_script_on_a_pod/README.md): Execute local script on a pod in a namespace\n\n* [Gather Data for POD Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/README.md): Gather Data for POD Troubleshoot\n\n* [Gather Data for K8S Service Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_service_troubleshoot/README.md): Gather Data for K8S Service Troubleshoot\n\n* [Get All Evicted PODS From Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/README.md): This action gets all evicted PODs from a given namespace. 
If no namespace is given, it will get all pods from all namespaces.\n\n* [Get Deployment Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment_status/README.md): This action searches for failed deployments and returns them as a list.\n\n* [Get Kubernetes Error PODs from All Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_error_pods_from_all_jobs/README.md): Get Kubernetes Error PODs from All Jobs\n\n* [Get expiring K8s certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_expiring_certificates/README.md): Get the expiring certificates for a K8s cluster.\n\n* [Get Kubernetes Failed Deployments](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_failed_deployments/README.md): Get Kubernetes Failed Deployments\n\n* [Get frequently restarting K8s pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_frequently_restarting_pods/README.md): Get Kubernetes pods from all namespaces that are restarting too often.\n\n* [Get Kubernetes Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_handle/README.md): Get Kubernetes Handle\n\n* [Get All Kubernetes Healthy PODS in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_healthy_pods/README.md): Get All Kubernetes Healthy PODS in a given Namespace\n\n* [Get memory utilization for K8s services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_memory_utilization_of_services/README.md): This action executes the given kubectl commands to find the memory utilization of the specified services in a particular namespace and compares it with a given threshold.\n\n* [Get Kubernetes Nodes that have insufficient 
resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_with_insufficient_resources/README.md): Get Kubernetes Nodes that have insufficient resources\n\n* [Get K8S OOMKilled Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_oomkilled_pods/README.md): Get K8S Pods that were OOMKilled, based on the containers' last states.\n\n* [Get Kubernetes POD Configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_config/README.md): Get Kubernetes POD Configuration\n\n* [Get Kubernetes Logs for a given POD in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs/README.md): Get Kubernetes Logs for a given POD in a Namespace\n\n* [Get Kubernetes Logs for a list of PODs & Filter in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs_and_filter/README.md): Get Kubernetes Logs for a list of PODs and Filter in a Namespace\n\n* [Get Kubernetes Status for a POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_status/README.md): Get Kubernetes Status for a POD in a given Namespace\n\n* [Get pods attached to Kubernetes PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_attached_to_pvc/README.md): Get pods attached to Kubernetes PVC\n\n* [Get all K8s Pods in CrashLoopBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/README.md): Get all K8s pods in CrashLoopBackOff State\n\n* [Get all K8s Pods in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/README.md): Get all K8s pods in ImagePullBackOff State\n\n* [Get 
Kubernetes PODs in not Running State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_not_running_state/README.md): Get Kubernetes PODs in not Running State\n\n* [Get all K8s Pods in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_terminating_state/README.md): Get all K8s pods in Terminating State\n\n* [Get Kubernetes PODS with high restart](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_with_high_restart/README.md): Get Kubernetes PODS with high restart\n\n* [Get K8S Service with no associated endpoints](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_service_with_no_associated_endpoints/README.md): Get K8S Service with no associated endpoints\n\n* [Get Kubernetes Services for a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_services/README.md): Get Kubernetes Services for a given Namespace\n\n* [Get Kubernetes Unbound PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_unbound_pvcs/README.md): Get Kubernetes Unbound PVCs\n\n* [Kubectl command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_command/README.md): Execute kubectl command.\n\n* [Kubectl set context entry in kubeconfig](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_set_context/README.md): Kubectl set context entry in kubeconfig\n\n* [Kubectl display merged kubeconfig settings](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_view/README.md): Kubectl display merged kubeconfig settings\n\n* [Kubectl delete a 
pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_delete_pod/README.md): Kubectl delete a pod\n\n* [Kubectl describe a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_node/README.md): Kubectl describe a node\n\n* [Kubectl describe a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_pod/README.md): Kubectl describe a pod\n\n* [Kubectl drain a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_drain_node/README.md): Kubectl drain a node\n\n* [Execute command on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_exec_command/README.md): Execute command on a pod\n\n* [Kubectl get api resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_api_resources/README.md): Kubectl get api resources\n\n* [Kubectl get logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_logs/README.md): Kubectl get logs for a given pod\n\n* [Kubectl get services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_service_namespace/README.md): Kubectl get services in a given namespace\n\n* [Kubectl list pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_list_pods/README.md): Kubectl list pods in given namespace\n\n* [Kubectl update field](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_patch_pod/README.md): Kubectl update field of a resource using strategic merge patch\n\n* [Kubectl rollout deployment history](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_rollout_deployment/README.md): Kubectl rollout deployment 
history\n\n* [Kubectl scale deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_scale_deployment/README.md): Kubectl scale a given deployment\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_node/README.md): Kubectl show metrics for a given node\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_pod/README.md): Kubectl show metrics for a given pod\n\n* [List matching name pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_all_matching_pods/README.md): List all pods matching a particular name string. The matching string can be a regular expression too\n\n* [List pvcs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_pvcs/README.md): List pvcs by namespace. By default, it will list all pvcs in all namespaces.\n\n* [Remove POD from Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_remove_pod_from_deployment/README.md): Remove POD from Deployment\n\n* [Update Commands in a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_update_command_in_pod_spec/README.md): Update Commands in a Kubernetes POD in a given Namespace\n\n* [Get Mantishub handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mantishub/legos/mantishub_get_handle/README.md): Get Mantishub handle\n\n* [MongoDB add new field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_add_new_field_in_collections/README.md): MongoDB add new field in all collections\n\n* [MongoDB Aggregate 
Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_aggregate_command/README.md): MongoDB Aggregate Command\n\n* [MongoDB Atlas cluster cloud backup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_atlas_cluster_backup/README.md): Trigger on-demand Atlas cloud backup\n\n* [Get large MongoDB indices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_check_large_index_size/README.md): This action compares the size of each index with a given threshold and returns any indexes that exceed the threshold.\n\n* [Get MongoDB large databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_compare_disk_size_to_threshold/README.md): This action compares the total disk size used by MongoDB to a given threshold.\n\n* [MongoDB Count Documents](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_count_documents/README.md): MongoDB Count Documents\n\n* [MongoDB Create Collection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_collection/README.md): MongoDB Create Collection\n\n* [MongoDB Create Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_database/README.md): MongoDB Create Database\n\n* [Delete collection from MongoDB database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_collection/README.md): Delete collection from MongoDB database\n\n* [MongoDB Delete Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_database/README.md): MongoDB Delete Database\n\n* [MongoDB Delete Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_document/README.md): MongoDB Delete Document\n\n* [MongoDB Distinct 
Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_distinct_command/README.md): MongoDB Distinct Command\n\n* [MongoDB Find Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_document/README.md): MongoDB Find Document\n\n* [MongoDB Find One](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_one/README.md): MongoDB Find One returns a single entry that matches the query.\n\n* [Get MongoDB Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_handle/README.md): Get MongoDB Handle\n\n* [MongoDB get metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_metrics/README.md): This action retrieves various metrics such as index size, disk size per collection for all databases and collections.\n\n* [Get Mongo Server Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_server_status/README.md): Get Mongo Server Status and check for any abnormalities.\n\n* [MongoDB Insert Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_insert_document/README.md): MongoDB Insert Document\n\n* [MongoDB kill queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_kill_queries/README.md): MongoDB kill queries\n\n* [Get list of collections in MongoDB Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_collections/README.md): Get list of collections in MongoDB Database\n\n* [Get list of MongoDB Databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_databases/README.md): Get list of MongoDB Databases\n\n* [MongoDB list queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_queries/README.md): 
MongoDB list queries\n\n* [MongoDB Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_read_query/README.md): MongoDB Read Query\n\n* [MongoDB remove a field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_remove_field_in_collections/README.md): MongoDB remove a field in all collections\n\n* [MongoDB Rename Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_rename_database/README.md): MongoDB Rename Database\n\n* [MongoDB Update Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_update_document/README.md): MongoDB Update Document\n\n* [MongoDB Upsert Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_write_query/README.md): MongoDB Upsert Query\n\n* [Get MS-SQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_get_handle/README.md): Get MS-SQL Handle\n\n* [MS-SQL Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_read_query/README.md): MS-SQL Read Query\n\n* [MS-SQL Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_write_query/README.md): MS-SQL Write Query\n\n* [Get MySQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_handle/README.md): Get MySQL Handle\n\n* [MySQL Get Long Running Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_long_run_queries/README.md): MySQL Get Long Running Queries\n\n* [MySQL Kill Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_kill_query/README.md): MySQL Kill Query\n\n* [Run MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_read_query/README.md): Run MySQL Query\n\n* 
[Create a MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_write_query/README.md): Create a MySQL Query\n\n* [Netbox Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_get_handle/README.md): Get Netbox Handle\n\n* [Netbox List Devices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_list_devices/README.md): List all Netbox devices\n\n* [Nomad Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_get_handle/README.md): Get Nomad Handle\n\n* [Nomad List Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_list_jobs/README.md): List all Nomad jobs\n\n* [Get Opsgenie Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Opsgenie/legos/opsgenie_get_handle/README.md): Get Opsgenie Handle\n\n* [Create new maintenance window.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_create_new_maintenance_window/README.md): Create new maintenance window.\n\n* [Perform Pingdom single check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_do_single_check/README.md): Perform Pingdom Single Check\n\n* [Get Pingdom Analysis Results for a specified Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_analysis/README.md): Get Pingdom Analysis Results for a specified Check\n\n* [Get list of checkIDs given a hostname](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids/README.md): Get list of checkIDs given a hostname. If no hostname provided, it lists all checkIDs.\n\n* [Get list of checkIDs given a name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids_by_name/README.md): Get list of checkIDs given a name. 
If name is not given, it gives all checkIDs. If transaction is set to true, it returns transaction checkIDs\n\n* [Get Pingdom Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_handle/README.md): Get Pingdom Handle\n\n* [Pingdom Get Maintenance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_maintenance/README.md): Pingdom Get Maintenance\n\n* [Get Pingdom Results](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_results/README.md): Get Pingdom Results\n\n* [Get Pingdom TMS Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_tmscheck/README.md): Get Pingdom TMS Check\n\n* [Pingdom lego to pause/unpause checkids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_pause_or_unpause_checkids/README.md): Pingdom lego to pause/unpause checkids\n\n* [Perform Pingdom Traceroute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_traceroute/README.md): Perform Pingdom Traceroute\n\n* [PostgreSQL Calculate Bloat](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgres_calculate_bloat/README.md): This Lego calculates bloat for tables in Postgres\n\n* [Calling a PostgreSQL function](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_call_function/README.md): Calling a PostgreSQL function\n\n* [PostgreSQL Check Unused Indexes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_check_unused_indexes/README.md): Find unused Indexes in a database in PostgreSQL\n\n* [Create Tables in PostgreSQL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_create_table/README.md): Create Tables in PostgreSQL\n\n* [Delete PostgreSQL 
Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_delete_query/README.md): Delete PostgreSQL Query\n\n* [PostgreSQL Get Cache Hit Ratio](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_cache_hit_ratio/README.md): The result of the action will show the total number of blocks read from disk, the total number of blocks found in the buffer cache, and the cache hit ratio as a percentage. For example, if the cache hit ratio is 99%, it means that 99% of all data requests were served from the buffer cache, and only 1% required reading data from disk.\n\n* [Get PostgreSQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_handle/README.md): Get PostgreSQL Handle\n\n* [PostgreSQL Get Index Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_index_usage/README.md): The action result shows the data for table name, the percentage of times an index was used for that table, and the number of live rows in the table.\n\n* [PostgreSQL get service status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_server_status/README.md): This action checks the status of each database.\n\n* [Execute commands in a PostgreSQL transaction.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_handling_transaction/README.md): Given a set of PostgreSQL commands, this action runs them inside a transaction.\n\n* [Long Running PostgreSQL Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_long_running_queries/README.md): Long Running PostgreSQL Queries\n\n* [Read PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_read_query/README.md): Read PostgreSQL Query\n\n* [Show tables in 
PostgreSQL Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_show_tables/README.md): Show the tables existing in a PostgreSQL Database. We execute the following query to fetch this information: `SELECT * FROM pg_catalog.pg_tables WHERE schemaname != 'pg_catalog' AND schemaname != 'information_schema';`\n\n* [Call PostgreSQL Stored Procedure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_stored_procedures/README.md): Call PostgreSQL Stored Procedure\n\n* [Write PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_write_query/README.md): Write PostgreSQL Query\n\n* [Get Prometheus rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_alerts_list/README.md): Get Prometheus rules\n\n* [Get All Prometheus Metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_all_metrics/README.md): Get All Prometheus Metrics\n\n* [Get Prometheus handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_handle/README.md): Get Prometheus handle\n\n* [Get Prometheus Metric Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_metric_statistics/README.md): Get Prometheus Metric Statistics\n\n* [Delete All Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_all_keys/README.md): Delete All Redis keys\n\n* [Delete Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_keys/README.md): Delete Redis keys matching pattern\n\n* [Delete Redis Unused keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_stale_keys/README.md): Delete Redis Unused keys given a time threshold in 
seconds\n\n* [Get Redis cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_cluster_health/README.md): This action gets the Redis cluster health.\n\n* [Get Redis Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_handle/README.md): Get Redis Handle\n\n* [Get Redis keys count](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_keys_count/README.md): Get Redis keys count matching pattern (default: '*')\n\n* [Get Redis metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_metrics/README.md): This action fetches Redis metrics such as index size and memory utilization.\n\n* [List Redis Large keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_list_large_keys/README.md): Find Redis Large keys given a size threshold in bytes\n\n* [Get REST handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_get_handle/README.md): Get REST handle\n\n* [Call REST Methods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_methods/README.md): Call REST Methods.\n\n* [SSH Execute Remote Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_execute_remote_command/README.md): SSH Execute Remote Command\n\n* [SSH: Locate large files on host](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_find_large_files/README.md): This action scans the file system on a given host and returns a dict of large files. 
The command used to perform the scan is \"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\"\n\n* [Get SSH handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_get_handle/README.md): Get SSH handle\n\n* [SSH Restart Service Using sysctl](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_restart_service_using_sysctl/README.md): SSH Restart Service Using sysctl\n\n* [SCP: Remote file transfer over SSH](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_scp/README.md): Copy files from or to remote host. Files are copied over SCP. \n\n* [Assign Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_assign_case/README.md): Assign a Salesforce case\n\n* [Change Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_case_change_status/README.md): Change Salesforce Case Status\n\n* [Create Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_create_case/README.md): Create a Salesforce case\n\n* [Delete Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_delete_case/README.md): Delete a Salesforce case\n\n* [Get Salesforce Case Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case/README.md): Get a Salesforce case info\n\n* [Get Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case_status/README.md): Get a Salesforce case status\n\n* [Get Salesforce handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_handle/README.md): Get Salesforce handle\n\n* [Search Salesforce 
Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_search_case/README.md): Search a Salesforce case\n\n* [Update Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_update_case/README.md): Update a Salesforce case\n\n* [Create Slack Channel and Invite Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_create_channel_invite_users/README.md): Create a Slack Channel with a given name, and invite a list of userIds to the channel.\n\n* [Get Slack SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_get_handle/README.md): Get Slack SDK Handle\n\n* [Slack Lookup User by Email](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_lookup_user_by_email/README.md): Given an email address, find the Slack user in the workspace.\nYou can then extract their profile picture, or retrieve their userid (which you can use to send messages) from the output.\n\n* [Post Slack Image](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_image/README.md): Post Slack Image\n\n* [Post Slack Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_message/README.md): Post Slack Message\n\n* [Slack Send DM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_send_DM/README.md): Given a list of Slack IDs, this Action will create a DM (one user) or group chat (multiple users), and send a message to the chat\n\n* [Snowflake Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_read_query/README.md): Snowflake Read Query\n\n* [Snowflake Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_write_query/README.md): Snowflake Write Query\n\n* [Get Splunk SDK 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Splunk/legos/splunk_get_handle/README.md): Get Splunk SDK Handle\n\n* [ Capture a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_capture_charge/README.md):  Capture the payment of an existing, uncaptured, charge\n\n* [Close Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_close_dispute/README.md): Close Dispute\n\n* [Create a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_charge/README.md): Create a Charge\n\n* [Create a Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_refund/README.md): Create a Refund\n\n* [Get list of charges previously created](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_charges/README.md): Get list of charges previously created\n\n* [Get list of disputes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_disputes/README.md): Get list of disputes\n\n* [Get list of refunds](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_refunds/README.md):  Get list of refunds for the given threshold.\n\n* [Get Stripe Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_handle/README.md): Get Stripe Handle\n\n* [Retrieve a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_charge/README.md):  Retrieve a Charge\n\n* [Retrieve details of a dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_dispute/README.md): Retrieve details of a dispute\n\n* [Retrieve a refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_refund/README.md): Retrieve a 
refund\n\n* [Update a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_charge/README.md): Update a Charge\n\n* [Update Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_dispute/README.md): Update Dispute\n\n* [Update Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_refund/README.md): Updates the specified refund by setting the values of the parameters passed.\n\n* [Execute Terraform Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_exec_command/README.md): Execute Terraform Command\n\n* [Get terraform handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_get_handle/README.md): Get terraform handle\n\n* [Get Zabbix Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Zabbix/legos/zabbix_get_handle/README.md): Get Zabbix Handle\n\n* [Opensearch Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/opensearch/legos/opensearch_get_handle/README.md): Opensearch Get Handle\n\n* [Opensearch search](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/opensearch/legos/opensearch_search/README.md): Opensearch Search\n\n"
  },
  {
    "path": "lists/action_COST_OPT,CATEGORY_TYPE_SRE.md",
    "content": "* [Get AWS Lambdas With High Error Rate](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_with_high_error_rate/README.md): Get AWS Lambda Functions that exceed a given threshold error rate.\n\n"
  },
  {
    "path": "lists/action_COST_OPT.md",
    "content": "* [AWS Start IAM Policy Generation ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/AWS_Start_IAM_Policy_Generation/README.md): Given a region, a CloudTrail ARN (where the logs are being recorded), a reference IAM ARN (whose usage we will parse), and a Service role, this will begin the generation of an IAM policy. The output is a String of the generation Id.\n\n* [AWS Attach Tags to Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_tags_to_resources/README.md): AWS Attach Tags to Resources\n\n* [AWS Delete EBS Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ebs_snapshot/README.md): Delete EBS Snapshot for an EC2 instance\n\n* [Filter AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_tags/README.md): Filter AWS EC2 Instance\n\n* [Filter AWS EC2 Instances Without Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_without_lifetime_tag/README.md): Filter AWS EC2 Instances Without Lifetime Tag\n\n* [Filter AWS EC2 Instances Without Termination and Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/README.md): Filter AWS EC2 Instances Without Termination and Lifetime Tag and check if they are valid\n\n* [AWS Filter Large EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_large_ec2_instances/README.md): This Action filters all instances whose instanceType contains Large or xLarge, and that DO NOT have the largetag key/value.\n\n* [AWS Find Long Running EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_long_running_instances/README.md): This action lists all instances that are older than the 
threshold\n\n* [AWS Filter Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_old_ebs_snapshots/README.md): This action lists all snapshot details that are older than the threshold\n\n* [Filter AWS Untagged EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_untagged_ec2_instances/README.md): Filter AWS Untagged EC2 Instances\n\n* [Find AWS ELBs with no targets or instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_elbs_with_no_targets_or_instances/README.md): Find AWS ELBs with no targets or instances attached to them.\n\n* [AWS Find Idle Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_idle_instances/README.md): Find Idle EC2 instances\n\n* [AWS Filter Lambdas with Long Runtime](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_long_running_lambdas/README.md): This action retrieves a list of all Lambda functions and searches the log events of each function for a given runtime (duration).\n\n* [AWS Find RDS Instances with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_rds_instances_with_low_cpu_utilization/README.md): This lego finds RDS instances that are not utilizing their CPU resources to their full potential.\n\n* [AWS Find Redshift Clusters with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_clusters_with_low_cpu_utilization/README.md): Find underutilized Redshift clusters in terms of CPU utilization.\n\n* [Finding Redundant Trails in AWS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_finding_redundant_trails/README.md): This action finds redundant CloudTrail trails: if the attribute IncludeGlobalServiceEvents is true, it checks for multiple 
duplications.\n\n* [AWS Get AWS Account Number](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_acount_number/README.md): Some AWS functions require the AWS Account number. This programmatically retrieves it.\n\n* [AWS Get Untagged Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_untagged_resources/README.md): AWS Get Untagged Resources\n\n* [AWS Get Costs For All Services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_all_services/README.md): Get Costs for all AWS services in a given time period.\n\n* [AWS Get Costs For Data Transfer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_data_transfer/README.md): Get daily cost for Data Transfer in AWS\n\n* [AWS Get Daily Total Spend](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_daily_total_spend/README.md): AWS get daily total spend from Cost Explorer\n\n* [Get EBS Volumes By Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_by_type/README.md): Get EBS Volumes By Type\n\n* [AWS Get Generated Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_generated_policy/README.md): Given a Region and the ID of a policy generation job, this Action will return the policy (once it has been completed).\n\n* [Find AWS Lambdas Not Using ARM64 Graviton2 Processor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_not_using_arm_graviton2_processor/README.md): Find all AWS Lambda functions that are not using the Arm-based AWS Graviton2 processor for their runtime architecture\n\n* [Get AWS Lambdas With High Error Rate](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_with_high_error_rate/README.md): Get AWS Lambda Functions that 
exceed a given threshold error rate.\n\n* [AWS Get Long Running RDS Instances Without Reserved Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_rds_instances_without_reserved_instances/README.md): This action gets information about long running instances and their status, and checks if they have any reserved nodes associated with them.\n\n* [AWS Get Long Running Redshift Clusters Without Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_redshift_clusters_without_reserved_nodes/README.md): This action gets information about running clusters and their status, and checks if they have any reserved nodes associated with them.\n\n* [AWS Get Older Generation RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_older_generation_rds_instances/README.md): AWS Get Older Generation RDS Instances action retrieves information about RDS instances using older generation instance types.\n\n* [AWS Get EC2 Instances About To Be Retired](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_reserved_instances_about_to_retired/README.md): AWS Get EC2 Instances About To Be Retired\n\n* [AWS Get Resources Missing Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_missing_tag/README.md): Gets a list of all AWS resources that are missing the tag in the input parameters.\n\n* [AWS Get Resources With Expiration Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_expiration_tag/README.md): AWS Get all Resources with an expiration tag\n\n* [AWS Get Resources With Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_tag/README.md): For a given tag and region, get every AWS resource with that tag.\n\n* [Get Timed Out AWS 
Lambdas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_timed_out_lambdas/README.md): Get AWS Lambda functions that have exceeded the maximum amount of time in seconds that a Lambda function can run.\n\n* [AWS Get TTL For Route53 Records](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_for_route53_records/README.md): Get TTL for Route53 records for a hosted zone.\n\n* [AWS: Check for short Route 53 TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_under_given_hours/README.md): AWS: Check for short Route 53 TTL\n\n* [AWS get Unused Route53 Health Checks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unused_route53_health_checks/README.md): AWS get Unused Route53 Health Checks\n\n* [AWS List Unused Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unused_secrets/README.md): This action lists all the unused secrets from AWS by comparing the last used date with the given threshold.\n\n* [AWS Update TTL for Route53 Record](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_ttl_for_route53_records/README.md): Update TTL for an existing record in a hosted zone.\n\n"
  },
  {
    "path": "lists/action_DATADOG.md",
    "content": "* [Datadog delete incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_delete_incident/README.md): Delete an incident given its id\n\n* [Datadog get event](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_event/README.md): Get an event given its id\n\n* [Get Datadog Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_handle/README.md): Get Datadog Handle\n\n* [Datadog get incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_incident/README.md): Get an incident given its id\n\n* [Datadog get metric metadata](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_metric_metadata/README.md): Get the metadata of a metric.\n\n* [Datadog get monitor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitor/README.md): Get details about a monitor\n\n* [Datadog get monitorID given the name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitorid/README.md): Get monitorID given the name\n\n* [Datadog list active metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_active_metrics/README.md): Get the list of actively reporting metrics from a given time until now.\n\n* [Datadog list all monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_all_monitors/README.md): List all monitors\n\n* [Datadog list metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_metrics/README.md): Lists metrics from the last 24 hours in Datadog.\n\n* [Datadog mute/unmute monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_mute_or_unmute_alerts/README.md): Mute/unmute 
monitors\n\n* [Datadog query metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_query_metrics/README.md): Query timeseries points for a metric.\n\n* [Schedule downtime](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_schedule_downtime/README.md): Schedule downtime\n\n* [Datadog search monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_search_monitors/README.md): Search monitors in Datadog based on filters\n\n"
  },
  {
    "path": "lists/action_DATADOG_ALERTS.md",
    "content": "* [Datadog mute/unmute monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_mute_or_unmute_alerts/README.md): Mute/unmute monitors\n\n"
  },
  {
    "path": "lists/action_DATADOG_EVENT.md",
    "content": "* [Datadog get event](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_event/README.md): Get an event given its id\n\n* [Get Datadog Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_handle/README.md): Get Datadog Handle\n\n"
  },
  {
    "path": "lists/action_DATADOG_INCIDENT.md",
    "content": "* [Datadog delete incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_delete_incident/README.md): Delete an incident given its id\n\n* [Datadog get incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_incident/README.md): Get an incident given its id\n\n"
  },
  {
    "path": "lists/action_DATADOG_METRICS.md",
    "content": "* [Datadog get metric metadata](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_metric_metadata/README.md): Get the metadata of a metric.\n\n* [Datadog list active metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_active_metrics/README.md): Get the list of actively reporting metrics from a given time until now.\n\n* [Datadog list metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_metrics/README.md): Lists metrics from the last 24 hours in Datadog.\n\n* [Datadog query metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_query_metrics/README.md): Query timeseries points for a metric.\n\n"
  },
  {
    "path": "lists/action_DATADOG_MONITOR.md",
    "content": "* [Datadog get monitor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitor/README.md): Get details about a monitor\n\n* [Datadog get monitorID given the name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitorid/README.md): Get monitorID given the name\n\n* [Datadog list all monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_all_monitors/README.md): List all monitors\n\n* [Datadog search monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_search_monitors/README.md): Search monitors in Datadog based on filters\n\n"
  },
  {
    "path": "lists/action_DB.md",
    "content": "* [AWS Filter All Manual Database Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_all_manual_database_snapshots/README.md): Use this Action to filter all manual AWS database snapshots\n\n* [AWS Find Redshift Cluster without Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/README.md): Use this Action to find AWS Redshift clusters for which Pause/Resume is not enabled\n\n"
  },
  {
    "path": "lists/action_DEVOPS.md",
    "content": "* [Apply AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_apply_default_encryption_for_s3_buckets/README.md): Apply AWS Default Encryption for S3 Bucket\n\n* [Attach an EBS volume to an AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_ebs_to_instances/README.md): Attach an EBS volume to an AWS EC2 Instance\n\n* [AWS Attach Tags to Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_tags_to_resources/README.md): AWS Attach Tags to Resources\n\n* [Attach a webhook endpoint to AWS Cloudwatch alarm](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_cloudwatch_attach_webhook_notification_to_alarm/README.md): Attach a webhook endpoint to one of the SNS attached to the AWS Cloudwatch alarm.\n\n* [Create AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_bucket/README.md): Create a new AWS S3 Bucket\n\n* [AWS Create Snapshot For Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_volumes_snapshot/README.md): Create a snapshot for EBS volume of the EC2 Instance for backing up the data stored in EBS\n\n* [Delete AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_bucket/README.md): Delete an AWS S3 Bucket\n\n* [AWS Delete Classic Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_classic_load_balancer/README.md): Delete Classic Elastic Load Balancers\n\n* [AWS Delete ECS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ecs_cluster/README.md): Delete AWS ECS Cluster\n\n* [AWS Delete Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_load_balancer/README.md): AWS 
Delete Load Balancer\n\n* [AWS Delete Log Stream](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_log_stream/README.md): AWS Delete Log Stream\n\n* [AWS Delete NAT Gateway](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_nat_gateway/README.md): AWS Delete NAT Gateway\n\n* [AWS Delete RDS Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_rds_instance/README.md): Delete AWS RDS Instance\n\n* [AWS Delete Redshift Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_redshift_cluster/README.md): Delete AWS Redshift Cluster\n\n* [AWS Delete Route 53 HealthCheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_route53_health_check/README.md): AWS Delete Route 53 HealthCheck\n\n* [AWS Delete Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_secret/README.md): AWS Delete Secret\n\n* [Delete AWS EBS Volume by Volume ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_volume_by_id/README.md): Delete AWS Volume by Volume ID\n\n* [Deregister AWS Instances from a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_deregister_instances/README.md): Deregister AWS Instances from a Load Balancer\n\n* [AWS Describe Cloudtrails](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_describe_cloudtrail/README.md): Given an AWS Region, this Action returns a Dict with all of the Cloudtrail logs being recorded\n\n* [Detach an EBS volume from an AWS Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_ebs_to_instances/README.md): Detach an EBS volume from an AWS Instance.\n\n* [AWS Detach Instances From AutoScaling 
Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_instances_from_autoscaling_group/README.md): Use this Action to detach instances from an AWS AutoScaling Group\n\n* [EBS Modify Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ebs_modify_volume/README.md): Modify/Resize volume for Elastic Block Storage (EBS).\n\n* [AWS ECS Describe Task Definition.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_describe_task_definition/README.md): Describe AWS ECS Task Definition.\n\n* [ECS detect failed deployment ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_detect_failed_deployment/README.md): List of stopped tasks, associated with a deployment, along with their stopped reason\n\n* [Restart AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_service_restart/README.md): Restart an AWS ECS Service\n\n* [Update AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_update_service/README.md): Update AWS ECS Service\n\n* [Copy EKS Pod logs to bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_copy_pod_logs_to_bucket/README.md): Copy given EKS pod logs to a given S3 Bucket.\n\n* [Delete EKS POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_delete_pod/README.md): Delete an EKS POD in a given Namespace\n\n* [List of EKS dead pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_dead_pods/README.md): Get list of all dead pods in a given EKS cluster\n\n* [List of EKS Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_namespaces/README.md): Get list of all Namespaces in a given EKS cluster\n\n* [List of EKS 
pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_pods/README.md): Get list of all pods in a given EKS cluster\n\n* [ List of EKS deployment for given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_deployments_name/README.md):  Get list of EKS deployment names for given Namespace\n\n* [Get CPU and memory utilization of node.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_node_cpu_memory/README.md):  Get CPU and memory utilization of given node.\n\n* [ Get EKS Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_nodes/README.md):  Get EKS Nodes\n\n* [ List of EKS pods not in RUNNING State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_not_running_pods/README.md):  Get list of all pods in a given EKS cluster that are not running.\n\n* [Get pod CPU and Memory usage from given namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_cpu_memory/README.md): Get all pod CPU and Memory usage from given namespace\n\n* [ EKS Get pod status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_status/README.md):  Get a Status of given POD in a given Namespace and EKS cluster name\n\n* [ EKS Get Running Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_running_pods/README.md):  Get a list of running pods from given namespace and EKS cluster name\n\n* [ Run Kubectl commands on EKS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_run_kubectl_cmd/README.md): This action runs a kubectl command on an AWS EKS Cluster\n\n* [Get AWS EMR Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_emr_get_instances/README.md): Get a list of EC2 
Instances for an EMR cluster. Filtered by node type (MASTER|CORE|TASK)\n\n* [Run Command via AWS CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_cli_command/README.md): Execute command using AWS CLI\n\n* [Run Command via SSM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_command_ssm/README.md): Execute command on EC2 instance(s) using SSM\n\n* [AWS Filter All Manual Database Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_all_manual_database_snapshots/README.md): Use this Action to filter all manual AWS database snapshots\n\n* [Filter AWS Unattached EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_unattached_volumes/README.md): Filter AWS Unattached EBS Volume\n\n* [Filter AWS EC2 instance by VPC Ids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_vpc/README.md): Use this Action to Filter AWS EC2 Instance by VPC Ids\n\n* [Filter All AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_instances/README.md): Filter All AWS EC2 Instances\n\n* [Filter AWS EC2 Instances Without Termination and Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_instances_without_termination_and_lifetime_tag/README.md): Filter AWS EC2 Instances Without Termination and Lifetime Tag and check if they are valid\n\n* [Get Unhealthy instances from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unhealthy_instances_from_asg/README.md): Get Unhealthy instances from Auto Scaling Group\n\n* [Filter AWS Unused Keypairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_keypairs/README.md): Filter AWS Unused Keypairs\n\n* [AWS Find Redshift Cluster without 
Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/README.md): Use This Action to find AWS Redshift clusters for which pause/resume is not enabled\n\n* [AWS Get All Load Balancers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_load_balancers/README.md): AWS Get All Load Balancers\n\n* [Get AWS AutoScaling Group Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_auto_scaling_instances/README.md): Use This Action to Get AWS AutoScaling Group Instances\n\n* [Get AWS Bucket Size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_bucket_size/README.md): Get an AWS Bucket Size\n\n* [Get AWS EBS Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ebs/README.md): Get AWS CloudWatch Statistics for EBS volumes\n\n* [Get AWS EC2 Metrics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2/README.md): Get AWS CloudWatch Metrics for EC2 instances. 
These could be CPU, Network, Disk based measurements\n\n* [Get AWS EC2 CPU Utilization Statistics from Cloudwatch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_ec2_cpuutil/README.md): Get AWS CloudWatch Statistics for cpu utilization for EC2 instances\n\n* [Get AWS CloudWatch Metrics for AWS/ApplicationELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_applicationelb/README.md): Get AWS CloudWatch Metrics for AWS/ApplicationELB\n\n* [Get AWS CloudWatch Metrics for AWS/ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_classic_elb/README.md): Get AWS CloudWatch Metrics for Classic Loadbalancer\n\n* [Get AWS CloudWatch Metrics for AWS/DynamoDB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_dynamodb/README.md): Get AWS CloudWatch Metrics for AWS DynamoDB\n\n* [Get AWS CloudWatch Metrics for AWS/AutoScaling](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_ec2autoscaling/README.md): Get AWS CloudWatch Metrics for AWS EC2 AutoScaling groups\n\n* [Get AWS CloudWatch Metrics for AWS/GatewayELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_gatewayelb/README.md): Get AWS CloudWatch Metrics for AWS/GatewayELB\n\n* [Get AWS CloudWatch Metrics for AWS/Lambda](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_lambda/README.md): Get AWS CloudWatch Metrics for AWS/Lambda\n\n* [Get AWS CloudWatch Metrics for AWS/NetworkELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_network_elb/README.md): Get AWS CloudWatch Metrics for Network Loadbalancer\n\n* [Get AWS CloudWatch Metrics for 
AWS/RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_rds/README.md): Get AWS CloudWatch Metrics for AWS/RDS\n\n* [Get AWS CloudWatch Metrics for AWS/Redshift](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_redshift/README.md): Get AWS CloudWatch Metrics for AWS/Redshift\n\n* [Get AWS CloudWatch Metrics for AWS/SQS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_metrics_sqs/README.md): Get AWS CloudWatch Metrics for AWS/SQS\n\n* [Get AWS CloudWatch Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cloudwatch_statistics/README.md): Get AWS CloudWatch Statistics\n\n* [Get EBS Volumes By Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_by_type/README.md): Get EBS Volumes By Type\n\n* [Get EC2 CPU Consumption For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_cpu_consumption/README.md): Get EC2 CPU Consumption For All Instances\n\n* [Get EC2 Data Traffic In and Out For All Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_data_traffic/README.md): Get EC2 Data Traffic In and Out For All Instances\n\n* [Get Age of all EC2 Instances in Days](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ec2_instance_age/README.md): Get Age of all EC2 Instances in Days\n\n* [Get AWS ECS Service Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_services_status/README.md): Get the Status of an AWS ECS Service\n\n* [AWS List IAM users without password policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_iam_users_without_password_policies/README.md): Get a list of all IAM users that have no 
password policy attached to them.\n\n* [Get AWS Instance Details with Matching Private DNS Name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_detail_with_private_dns_name/README.md): Use this action to get details of an AWS EC2 Instance that matches a Private DNS Name\n\n* [Get AWS Instances Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instance_details/README.md): Get AWS Instances Details\n\n* [List All AWS EC2 Instances Under the ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_instances/README.md): Get a list of all AWS EC2 Instances from a given ELB\n\n* [AWS Get Internet Gateway by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_internet_gateway_by_vpc/README.md): AWS Get Internet Gateway by VPC ID\n\n* [AWS Get Long Running ElastiCache clusters Without Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_elasticcache_clusters_without_reserved_nodes/README.md): This action gets information about long running ElastiCache clusters and their status, and checks if they have any reserved nodes associated with them.\n\n* [AWS Get NAT Gateway Info by VPC ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nat_gateway_by_vpc/README.md): This action is used to get the details of NAT gateways configured for a VPC.\n\n* [AWS Get Private Address from NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_private_address_from_nat_gateways/README.md): This action is used to get private addresses from NAT gateways.\n\n* [Get AWS EC2 Instances with a public IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_public_ec2_instances/README.md): Lists all EC2 instances with a public IP\n\n* [AWS Get Redshift Query 
Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_query_details/README.md): Given a QueryId, this Action will give you the status of the Query, along with other data like the number of lines/\n\n* [AWS Get Redshift Result](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_redshift_result/README.md): Given a QueryId, Get the Query Result, and format into a List\n\n* [AWS Get EC2 Instances About To Retired](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_reserved_instances_about_to_retired/README.md): Get AWS EC2 Instances that are about to be retired\n\n* [AWS Get Resources Missing Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_missing_tag/README.md): Gets a list of all AWS resources that are missing the tag in the input parameters.\n\n* [AWS Get Resources With Expiration Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_expiration_tag/README.md): AWS Get all Resources with an expiration tag\n\n* [AWS Get Resources With Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_tag/README.md): For a given tag and region, get every AWS resource with that tag.\n\n* [Get AWS S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_s3_buckets/README.md): Get AWS S3 Buckets\n\n* [Get Schedule To Retire AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_schedule_to_retire_instances/README.md): Get Schedule To Retire AWS EC2 Instance\n\n* [AWS Get Service Quota for a Specific ServiceName](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quota_details/README.md): Given an AWS Region, Service Code and Quota Code, this Action will output the quota information for the 
specified service.\n\n* [AWS Get Quotas for a Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quotas/README.md): Given inputs of the AWS Region, and the Service_Code for a service, this Action will output all of the Service Quotas and limits.\n\n* [Get Stopped Instance Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_stopped_instance_volumes/README.md): This action helps to list the volumes that are attached to stopped instances.\n\n* [Get STS Caller Identity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_sts_caller_identity/README.md): Get STS Caller Identity\n\n* [AWS Get Tags of All Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_tags_of_all_resources/README.md): AWS Get Tags of All Resources\n\n* [Get UnHealthy EC2 Instances for Classic ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances/README.md): Get UnHealthy EC2 Instances for Classic ELB\n\n* [Get Unhealthy instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances_from_elb/README.md): Get Unhealthy instances from Elastic Load Balancer\n\n* [Launch AWS EC2 Instance From an AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_launch_instance_from_ami/README.md): Use this Action to launch an AWS EC2 instance from an AMI\n\n* [AWS List All IAM Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_iam_users/README.md): List all AWS IAM Users\n\n* [AWS List All Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_regions/README.md): List all available AWS Regions\n\n* [AWS List Application LoadBalancers 
ARNs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_application_loadbalancers/README.md): AWS List Application LoadBalancers ARNs\n\n* [AWS List Attached User Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_attached_user_policies/README.md): AWS List Attached User Policies\n\n* [AWS List ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_clusters_with_low_utilization/README.md): This action searches for clusters that have low CPU utilization.\n\n* [AWS List Expiring Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_access_keys/README.md): List Expiring IAM User Access Keys\n\n* [AWS List Unattached Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unattached_elastic_ips/README.md): This action lists Elastic IP addresses and checks whether each is associated with an instance or network interface.\n\n* [AWS List Unhealthy Instances in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unhealthy_instances_in_target_group/README.md): List Unhealthy Instances in a target group\n\n* [AWS List Instances behind a Load Balancer.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_loadbalancer_list_instances/README.md): List AWS Instances behind a Load Balancer\n\n* [Make AWS Bucket Public](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_make_bucket_public/README.md): Make an AWS Bucket Public!\n\n* [AWS Modify EBS Volume to GP3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_ebs_volume_to_gp3/README.md): AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume 
type.\n\n* [AWS Modify ALB Listeners HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_listener_for_http_redirection/README.md): AWS Modify ALB Listeners HTTP Redirection\n\n* [AWS Modify Publicly Accessible RDS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_public_db_snapshots/README.md): AWS Modify Publicly Accessible RDS Snapshots\n\n* [Get AWS Postgresql Max Configured Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_get_configured_max_connections/README.md): Get AWS Postgresql Max Configured Connections\n\n* [Plot AWS PostgreSQL Active Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_plot_active_connections/README.md): Plot AWS PostgreSQL Active Connections\n\n* [Apply CORS Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_cors/README.md): Apply CORS Policy for S3 Bucket\n\n* [Apply AWS New Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_policy/README.md): Apply a New AWS Policy for S3 Bucket\n\n* [Read AWS S3 Object](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_read_object/README.md): Read an AWS S3 Object\n\n* [Register AWS Instances with a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_register_instances/README.md): Register AWS Instances with a Load Balancer\n\n* [AWS Release Elastic IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_release_elastic_ip/README.md): AWS Release Elastic IP for both VPC and Standard\n\n* [Renew Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_renew_expiring_acm_certificates/README.md): Renew 
Expiring ACM Certificates\n\n* [AWS_Request_Service_Quota_Increase](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_request_service_quota_increase/README.md): Given an AWS Region, Service Code, quota code and a new value for the quota, this Action sends a request to AWS for a new value. Your Connector must have servicequotas:RequestServiceQuotaIncrease enabled for this to work.\n\n* [Restart AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_restart_ec2_instances/README.md): Restart AWS EC2 Instances\n\n* [AWS Revoke Policy from IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_revoke_policy_from_iam_user/README.md): AWS Revoke Policy from IAM User\n\n* [Start AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_run_instances/README.md): Start AWS EC2 Instances\n\n* [AWS Schedule Redshift Cluster Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_schedule_pause_resume_enabled/README.md): AWS Schedule Redshift Cluster Pause Resume Enabled\n\n* [AWS Service Quota Limits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits/README.md): Input a List of Service Quotas, and get back which of your instances are above the warning percentage of the quota\n\n* [AWS VPC service quota limit](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits_vpc/README.md): This Action queries all VPC Storage quotas, and returns all usage over warning_percentage.\n\n* [Stop AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_stop_instances/README.md): Stop an AWS Instance\n\n* [Tag AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_tag_ec2_instances/README.md): Tag AWS 
Instances\n\n* [AWS List Instances in a ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_instances/README.md): List AWS Instances in an ELBv2 Target Group\n\n* [AWS List Unhealthy Instances in a ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_unhealthy_instances/README.md): List Unhealthy AWS Instances in an ELBv2 Target Group\n\n* [AWS Register/Unregister Instances from a Target Group.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_register_unregister_instances/README.md): Register/Unregister AWS Instances from a Target Group\n\n* [Terminate AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_terminate_ec2_instances/README.md): This Action will Terminate AWS EC2 Instances\n\n* [AWS Update Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_access_key/README.md): Update status of the Access Key\n\n* [Upload file to S3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_upload_file_to_s3/README.md): Upload a local file to S3\n\n* [AWS_VPC_service_quota_warning](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_vpc_service_quota_warning/README.md): Given an AWS Region and a warning percentage, this Action queries all VPC quota limits, and returns any Quotas that are over the alert value.\n\n* [Get Status for given DAG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_check_dag_status/README.md): Get Status for given DAG\n\n* [Get Airflow handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_get_handle/README.md): Get Airflow handle\n\n* [List DAG runs for given 
DagID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_list_DAG_runs/README.md): List DAG runs for given DagID\n\n* [Airflow trigger DAG run](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_trigger_dag_run/README.md): Airflow trigger DAG run\n\n* [Get Azure Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Azure/legos/azure_get_handle/README.md): Get Azure Handle\n\n* [Datadog delete incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_delete_incident/README.md): Delete an incident given its id\n\n* [Datadog get event](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_event/README.md): Get an event given its id\n\n* [Get Datadog Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_handle/README.md): Get Datadog Handle\n\n* [Datadog get incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_incident/README.md): Get an incident given its id\n\n* [Datadog get metric metadata](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_metric_metadata/README.md): Get the metadata of a metric.\n\n* [Datadog get monitor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitor/README.md): Get details about a monitor\n\n* [Datadog get monitorID given the name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitorid/README.md): Get monitorID given the name\n\n* [Datadog list active metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_active_metrics/README.md): Get the list of actively reporting metrics from a given time until now.\n\n* [Datadog list all 
monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_all_monitors/README.md): List all monitors\n\n* [Datadog list metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_metrics/README.md): Lists metrics from the last 24 hours in Datadog.\n\n* [Datadog mute/unmute monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_mute_or_unmute_alerts/README.md): Mute/unmute monitors\n\n* [Datadog query metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_query_metrics/README.md): Query timeseries points for a metric.\n\n* [Schedule downtime](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_schedule_downtime/README.md): Schedule downtime\n\n* [Datadog search monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_search_monitors/README.md): Search monitors in Datadog based on filters\n\n* [Elasticsearch Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_check_health_status/README.md): Elasticsearch Check Health Status\n\n* [Get large Elasticsearch Index size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_check_large_index_size/README.md): This action checks the sizes of all indices in the Elasticsearch cluster and compares them to a given threshold.\n\n* [Check Elasticsearch cluster disk size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_compare_cluster_disk_size_to_threshold/README.md): This action compares the disk usage percentage of the Elasticsearch cluster to a given threshold.\n\n* [Elasticsearch Delete Unassigned 
Shards](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_delete_unassigned_shards/README.md): Elasticsearch Delete Corrupted/Lost Shards\n\n* [Elasticsearch Disable Shard Allocation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_disable_shard_allocation/README.md): Elasticsearch Disable Shard Allocation for any indices\n\n* [Elasticsearch Enable Shard Allocation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_enable_shard_allocation/README.md): Elasticsearch Enable Shard Allocation for any shards for any indices\n\n* [Elasticsearch Cluster Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_cluster_statistics/README.md): Elasticsearch Cluster Statistics fetches total index size, disk size, and memory utilization and information about the current nodes and shards that form the cluster\n\n* [Get Elasticsearch Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_handle/README.md): Get Elasticsearch Handle\n\n* [Get Elasticsearch index level health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_index_health/README.md): This action checks the health of a given Elasticsearch index or all indices if no specific index is provided.\n\n* [Elasticsearch List Allocations](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_list_allocations/README.md): Elasticsearch List Allocations in a Cluster\n\n* [Elasticsearch List Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_list_nodes/README.md): Elasticsearch List Nodes in a Cluster\n\n* [Elasticsearch 
search](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_search_query/README.md): Elasticsearch Search\n\n* [Add lifecycle policy to GCP storage bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_lifecycle_policy_to_bucket/README.md): The action adds a lifecycle policy to a Google Cloud Platform (GCP) storage bucket.\n\n* [Create GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_bucket/README.md): Create a new GCP bucket in the given location\n\n* [Create a GCP disk snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_disk_snapshot/README.md): Create a GCP disk snapshot.\n\n* [Create GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_filestore_instance/README.md): Create a new GCP Filestore Instance in the given location\n\n* [Create GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_gke_cluster/README.md): Create GKE Cluster\n\n* [GCP Create Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_service_account/README.md): GCP Create Service Account\n\n* [Delete GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_bucket/README.md): Delete a GCP bucket\n\n* [Delete GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_filestore_instance/README.md): Delete a GCP Filestore Instance in the given location\n\n* [Delete an Object from GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_object_from_bucket/README.md): Delete an Object/Blob from a GCP Bucket\n\n* [GCP Delete Service 
Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_service_account/README.md): GCP Delete Service Account\n\n* [GCP Describe a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_describe_gke_cluster/README.md): GCP Describe a GKE cluster\n\n* [Fetch Objects from GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_fetch_objects_from_bucket/README.md): List all Objects in a GCP bucket\n\n* [Get GCP storage buckets without lifecycle policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_buckets_without_lifecycle_policies/README.md): The action retrieves a list of Google Cloud Platform (GCP) storage buckets that do not have any lifecycle policies applied.\n\n* [Get details of GCP forwarding rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_forwarding_rules_details/README.md): Get details of forwarding rules associated with a backend service.\n\n* [Get GCP Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_handle/README.md): Get GCP Handle\n\n* [Get List of GCP compute instance without label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_instances_without_label/README.md): Get List of GCP compute instance without label\n\n* [Get unused GCP backend services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_unused_backend_services/README.md): Get unused backend service for an application load balancer that has no instances in its target group.\n\n* [List all GCP Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_buckets/README.md): List all GCP buckets\n\n* [Get GCP compute 
instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances/README.md): Get GCP compute instances\n\n* [Get List of GCP compute instance by label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_label/README.md): Get List of GCP compute instance by label\n\n* [Get list of compute instances by VPC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_vpc/README.md): Get a list of compute instances by VPC\n\n* [GCP List GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_gke_cluster/README.md): GCP List GKE Cluster\n\n* [GCP List Nodes in GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_nodes_in_gke_cluster/README.md): GCP List Nodes of GKE Cluster\n\n* [List all Public GCP Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_public_buckets/README.md): List all publicly available GCP buckets\n\n* [List all GCP VMs and if Publicly Accessible](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_vms_access/README.md): Lists all GCP VMs, and identifies those that are publicly accessible.\n\n* [Remove role from user](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_user_role/README.md): GCP lego for removing a role from a user (default: 'viewer')\n\n* [GCP Resize a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_resize_gke_cluster/README.md): GCP Resize a GKE cluster by modifying nodes\n\n* [GCP Restart compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restart_compute_instances/README.md): GCP Restart compute instance\n\n* [Restore GCP disk from a snapshot 
](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restore_disk_from_snapshot/README.md): Restore a GCP disk from a compute instance snapshot.\n\n* [Save CSV to Google Sheets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_save_csv_to_google_sheets_v1/README.md): Saves your CSV (see notes) into a prepared Google Sheet.\n\n* [GCP Stop compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_stop_compute_instances/README.md): GCP Stop compute instance\n\n* [Upload an Object to GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_upload_file_to_bucket/README.md): Upload an Object/Blob in a GCP bucket\n\n* [Github Assign Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_assign_issue/README.md): Assign a github issue to a user\n\n* [Github Check if Pull Request is merged](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_check_if_pull_request_is_merged/README.md): Check if a Github Pull Request is merged\n\n* [Github Close Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_close_pull_request/README.md): Close pull request based on pull request number\n\n* [Github Count Stars](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_count_stars/README.md): Get count of stars for a repository\n\n* [Github Create Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_issue/README.md): Create a new Github Issue for a repository\n\n* [Github Create Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_team/README.md): Create a new Github Team\n\n* [Github Delete 
Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_delete_branch/README.md): Delete a github branch\n\n* [Github Get Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_branch/README.md): Get Github branch for a user in a repository\n\n* [Get Github Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_handle/README.md): Get Github Handle\n\n* [Github Get Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_issue/README.md): Get Github Issue from a repository\n\n* [Github Get Open Branches](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_open_branches/README.md): Get first 100 open branches for a given user in a given repo.\n\n* [Github Get Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_pull_request/README.md): Get Github Pull Request for a user in a repository\n\n* [Github Get Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_team/README.md): Github Get Team\n\n* [Github Get User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_user/README.md): Get Github User details\n\n* [Github Invite User to Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_invite_user_to_org/README.md): Invite a Github User to an Organization\n\n* [Github Comment on an Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_issue_comment/README.md): Add a comment to the selected GitHub Issue\n\n* [Github List Open Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_open_issues/README.md): List open Issues in a Github Repository\n\n* [Github List Organization 
Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_org_members/README.md): List Github Organization Members\n\n* [Github List PR Commits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_commits/README.md): Github List all Pull Request Commits\n\n* [Github List Pull Request Reviewers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_reviewers/README.md): List PR reviewers for a PR\n\n* [Github List Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_requests/README.md): List pull requests for a user in a repository\n\n* [Github List Stale Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_issues/README.md): List Stale Github Issues that have crossed a certain age limit.\n\n* [Github List Stale Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_pull_requests/README.md): Check for any Pull requests over a certain age. 
\n\n* [Github List Stargazers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stargazers/README.md): List of Github users that have starred (essentially bookmarked) a repository\n\n* [Github List Team Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_members/README.md): List Github Team Members for a given Team\n\n* [Github List Team Repositories](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_repos/README.md): Github List Team Repositories\n\n* [Github List Teams in Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_teams_in_org/README.md): List teams in an organization in GitHub\n\n* [Github List Webhooks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_webhooks/README.md): List webhooks for a repository\n\n* [Github Merge Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_merge_pull_request/README.md): Github Merge Pull Request\n\n* [Github Remove Member from Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_remove_member_from_org/README.md): Remove a member from a Github Organization\n\n* [Get Grafana Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_get_handle/README.md): Get Grafana Handle\n\n* [Grafana List Alerts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_list_alerts/README.md): List of Grafana alerts. 
Specifying the dashboard ID will show alerts in that dashboard\n\n* [Get Hadoop cluster apps](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_apps/README.md): Get Hadoop cluster apps\n\n* [Get Hadoop cluster appstatistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_appstatistics/README.md): Get Hadoop cluster appstatistics\n\n* [Get Hadoop cluster metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_metrics/README.md): Get Hadoop EMR cluster metrics\n\n* [Get Hadoop cluster nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_nodes/README.md): Get Hadoop cluster nodes\n\n* [Get Hadoop handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_handle/README.md): Get Hadoop handle\n\n* [Get Jenkins Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_handle/README.md): Get Jenkins Handle\n\n* [Get Jenkins Logs from a job](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_logs/README.md): Get Jenkins Logs from a Job\n\n* [Get Jenkins Plugin List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_plugins/README.md): Get Jenkins Plugin List\n\n* [Jira Add Comment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_add_comment/README.md): Add a Jira Comment\n\n* [Assign Jira Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_assign_issue/README.md): Assign a Jira Issue to a user\n\n* [Create a Jira Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_create_issue/README.md): Create a Jira Issue\n\n* [Get Jira SDK 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_handle/README.md): Get Jira SDK Handle\n\n* [Get Jira Issue Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_issue/README.md): Get Issue Info from Jira API: description, labels, attachments\n\n* [Get Jira Issue Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_issue_status/README.md): Get Issue Status from Jira API\n\n* [Change JIRA Issue Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_issue_change_status/README.md): Change JIRA Issue Status to given status\n\n* [Search for Jira issues matching JQL queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_search_issue/README.md): Use JQL to search all matching issues in Jira. Returns a list of the matching issue IDs/keys\n\n* [Kafka Check In-Sync Replicas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_in_sync_replicas/README.md): Checks the actual min-ISR count for each topic-partition against the configuration for that topic.\n\n* [Kafka Check Replicas Available](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_replicas_available/README.md): Checks if the number of replicas not available for communication is equal to zero.\n\n* [Kafka get cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_cluster_health/README.md): Fetches the health of the Kafka cluster including brokers, topics, and partitions.\n\n* [Kafka get count of committed messages](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_committed_messages_count/README.md): Fetches the count of committed messages (consumer offsets) for a specific consumer group and its topics.\n\n* [Get Kafka Producer 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_handle/README.md): Get Kafka Producer Handle\n\n* [Kafka get topic health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_topic_health/README.md): This action fetches the health and total number of messages for the specified topics.\n\n* [Kafka get topics with lag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_topics_with_lag/README.md): This action fetches the topics with lag in the Kafka cluster.\n\n* [Kafka Publish Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_publish_message/README.md): Publish Kafka Message\n\n* [Run a Kafka command using kafka CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_run_command/README.md): Run a Kafka command using the kafka CLI, e.g. `kafka-topics.sh --list --exclude-internal`\n\n* [Add Node in a Kubernetes Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_add_node_to_cluster/README.md): Add Node in a Kubernetes Cluster\n\n* [Change size of Kubernetes PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_change_pvc_size/README.md): Change size of Kubernetes PVC\n\n* [Check K8s services endpoint health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_service_status/README.md): This action checks the health status of the provided Kubernetes services.\n\n* [Check K8s worker CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_worker_cpu_utilization/README.md): Retrieves the CPU utilization for all worker nodes in the cluster and compares it to a given threshold.\n\n* [Delete a Kubernetes POD in a given 
Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_delete_pod/README.md): Delete a Kubernetes POD in a given Namespace\n\n* [Describe Kubernetes Node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_node/README.md): Describe a Kubernetes Node\n\n* [Describe a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_pod/README.md): Describe a Kubernetes POD in a given Namespace\n\n* [Execute a command on a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pod/README.md): Execute a command on a Kubernetes POD in a given Namespace\n\n* [Kubernetes Execute a command on a POD in a given namespace and filter](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pods_and_filter/README.md): Execute a command on a Kubernetes POD in a given namespace and filter output\n\n* [Execute local script on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_execute_local_script_on_a_pod/README.md): Execute local script on a pod in a namespace\n\n* [Gather Data for POD Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/README.md): Gather Data for POD Troubleshoot\n\n* [Gather Data for K8S Service Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_service_troubleshoot/README.md): Gather Data for K8S Service Troubleshoot\n\n* [Get All Evicted PODS From Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/README.md): This action gets all evicted PODS from a given namespace. 
If no namespace is given, it will get all the evicted pods from all namespaces.\n\n* [Get All Kubernetes PODS with state in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_pods/README.md): Get All Kubernetes PODS with state in a given Namespace\n\n* [Get K8s pods status and resource utilization info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_resources_utilization_info/README.md): This action gets the pod status and resource utilization of various Kubernetes resources like jobs, services, persistent volumes.\n\n* [Get candidate k8s nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_candidate_nodes_for_pods/README.md): Get candidate k8s nodes for given configuration\n\n* [Get K8S Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_cluster_health/README.md): Get K8S Cluster Health\n\n* [Get k8s kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_config_map_kube_system/README.md): Get k8s kube system config map\n\n* [Get Kubernetes Deployment For a Pod in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment/README.md): Get Kubernetes Deployment for a POD in a Namespace\n\n* [Get Deployment Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment_status/README.md): This action searches for failed deployments and returns them as a list.\n\n* [Get Kubernetes Error PODs from All Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_error_pods_from_all_jobs/README.md): Get Kubernetes Error PODs from All Jobs\n\n* [Get expiring K8s 
certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_expiring_certificates/README.md): Get the expiring certificates for a K8s cluster.\n\n* [Get Kubernetes Failed Deployments](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_failed_deployments/README.md): Get Kubernetes Failed Deployments\n\n* [Get frequently restarting K8s pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_frequently_restarting_pods/README.md): Get Kubernetes pods from all namespaces that are restarting too often.\n\n* [Get Kubernetes Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_handle/README.md): Get Kubernetes Handle\n\n* [Get All Kubernetes Healthy PODS in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_healthy_pods/README.md): Get All Kubernetes Healthy PODS in a given Namespace\n\n* [Get memory utilization for K8s services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_memory_utilization_of_services/README.md): This action executes the given kubectl commands to find the memory utilization of the specified services in a particular namespace and compares it with a given threshold.\n\n* [Get K8s node status and CPU utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_node_status_and_resource_utilization/README.md): This action gathers Kubernetes node status and resource utilization information.\n\n* [Get Kubernetes Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes/README.md): Get Kubernetes Nodes\n\n* [Get K8s nodes disk and memory pressure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_pressure/README.md): This action fetches 
the memory and disk pressure status of each node in the cluster.\n\n* [Get Kubernetes Nodes that have insufficient resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_with_insufficient_resources/README.md): Get Kubernetes Nodes that have insufficient resources\n\n* [Get K8s offline nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_offline_nodes/README.md): This action checks if any node in the Kubernetes cluster is offline.\n\n* [Get K8S OOMKilled Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_oomkilled_pods/README.md): Get K8S Pods which were OOMKilled, based on the containers' last states.\n\n* [Get K8s pending pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pending_pods/README.md): This action checks if any pod in the Kubernetes cluster is in 'Pending' status.\n\n* [Get Kubernetes POD Configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_config/README.md): Get Kubernetes POD Configuration\n\n* [Get Kubernetes Logs for a given POD in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs/README.md): Get Kubernetes Logs for a given POD in a Namespace\n\n* [Get Kubernetes Logs for a list of PODs & Filter in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs_and_filter/README.md): Get Kubernetes Logs for a list of PODs and Filter in a Namespace\n\n* [Get Kubernetes Status for a POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_status/README.md): Get Kubernetes Status for a POD in a given Namespace\n\n* [Get pods attached to Kubernetes 
PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_attached_to_pvc/README.md): Get pods attached to Kubernetes PVC\n\n* [Get all K8s Pods in CrashLoopBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/README.md): Get all K8s pods in CrashLoopBackOff State\n\n* [Get all K8s Pods in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/README.md): Get all K8s pods in ImagePullBackOff State\n\n* [Get Kubernetes PODs in not Running State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_not_running_state/README.md): Get Kubernetes PODs in not Running State\n\n* [Get all K8s Pods in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_terminating_state/README.md): Get all K8s pods in Terminating State\n\n* [Get Kubernetes PODS with high restart](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_with_high_restart/README.md): Get Kubernetes PODS with high restart\n\n* [Get K8S Service with no associated endpoints](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_service_with_no_associated_endpoints/README.md): Get K8S Service with no associated endpoints\n\n* [Get Kubernetes Services for a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_services/README.md): Get Kubernetes Services for a given Namespace\n\n* [Get Kubernetes Unbound PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_unbound_pvcs/README.md): Get Kubernetes Unbound PVCs\n\n* [Kubectl 
command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_command/README.md): Execute kubectl command.\n\n* [Kubectl set context entry in kubeconfig](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_set_context/README.md): Kubectl set context entry in kubeconfig\n\n* [Kubectl display merged kubeconfig settings](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_view/README.md): Kubectl display merged kubeconfig settings\n\n* [Kubectl delete a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_delete_pod/README.md): Kubectl delete a pod\n\n* [Kubectl describe a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_node/README.md): Kubectl describe a node\n\n* [Kubectl describe a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_pod/README.md): Kubectl describe a pod\n\n* [Kubectl drain a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_drain_node/README.md): Kubectl drain a node\n\n* [Execute command on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_exec_command/README.md): Execute command on a pod\n\n* [Kubectl get api resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_api_resources/README.md): Kubectl get api resources\n\n* [Kubectl get logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_logs/README.md): Kubectl get logs for a given pod\n\n* [Kubectl get services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_service_namespace/README.md): Kubectl get services in a 
given namespace\n\n* [Kubectl list pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_list_pods/README.md): Kubectl list pods in given namespace\n\n* [Kubectl update field](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_patch_pod/README.md): Kubectl update field of a resource using strategic merge patch\n\n* [Kubectl rollout deployment history](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_rollout_deployment/README.md): Kubectl rollout deployment history\n\n* [Kubectl scale deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_scale_deployment/README.md): Kubectl scale a given deployment\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_node/README.md): Kubectl show metrics for a given node\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_pod/README.md): Kubectl show metrics for a given pod\n\n* [List matching name pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_all_matching_pods/README.md): List all pods matching a particular name string. The matching string can be a regular expression too\n\n* [List pvcs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_pvcs/README.md): List pvcs by namespace. 
By default, it will list all pvcs in all namespaces.\n\n* [Remove POD from Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_remove_pod_from_deployment/README.md): Remove POD from Deployment\n\n* [Update Commands in a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_update_command_in_pod_spec/README.md): Update Commands in a Kubernetes POD in a given Namespace\n\n* [Get Mantishub handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mantishub/legos/mantishub_get_handle/README.md): Get Mantishub handle\n\n* [MongoDB add new field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_add_new_field_in_collections/README.md): MongoDB add new field in all collections\n\n* [MongoDB Aggregate Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_aggregate_command/README.md): MongoDB Aggregate Command\n\n* [MongoDB Atlas cluster cloud backup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_atlas_cluster_backup/README.md): Trigger on-demand Atlas cloud backup\n\n* [Get large MongoDB indices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_check_large_index_size/README.md): This action compares the size of each index with a given threshold and returns any indexes that exceed the threshold.\n\n* [Get MongoDB large databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_compare_disk_size_to_threshold/README.md): This action compares the total disk size used by MongoDB to a given threshold.\n\n* [MongoDB Count Documents](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_count_documents/README.md): MongoDB Count Documents\n\n* [MongoDB Create 
Collection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_collection/README.md): MongoDB Create Collection\n\n* [MongoDB Create Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_database/README.md): MongoDB Create Database\n\n* [Delete collection from MongoDB database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_collection/README.md): Delete collection from MongoDB database\n\n* [MongoDB Delete Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_database/README.md): MongoDB Delete Database\n\n* [MongoDB Delete Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_document/README.md): MongoDB Delete Document\n\n* [MongoDB Distinct Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_distinct_command/README.md): MongoDB Distinct Command\n\n* [MongoDB Find Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_document/README.md): MongoDB Find Document\n\n* [MongoDB Find One](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_one/README.md): MongoDB Find One returns a single entry that matches the query.\n\n* [Get MongoDB Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_handle/README.md): Get MongoDB Handle\n\n* [MongoDB get metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_metrics/README.md): This action retrieves various metrics such as index size, disk size per collection for all databases and collections.\n\n* [Get Mongo Server Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_server_status/README.md): Get Mongo Server Status and 
check for any abnormalities.\n\n* [MongoDB Insert Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_insert_document/README.md): MongoDB Insert Document\n\n* [MongoDB kill queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_kill_queries/README.md): MongoDB kill queries\n\n* [Get list of collections in MongoDB Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_collections/README.md): Get list of collections in MongoDB Database\n\n* [Get list of MongoDB Databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_databases/README.md): Get list of MongoDB Databases\n\n* [MongoDB list queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_queries/README.md): MongoDB list queries\n\n* [MongoDB Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_read_query/README.md): MongoDB Read Query\n\n* [MongoDB remove a field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_remove_field_in_collections/README.md): MongoDB remove a field in all collections\n\n* [MongoDB Rename Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_rename_database/README.md): MongoDB Rename Database\n\n* [MongoDB Update Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_update_document/README.md): MongoDB Update Document\n\n* [MongoDB Upsert Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_write_query/README.md): MongoDB Upsert Query\n\n* [Get MS-SQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_get_handle/README.md): Get MS-SQL Handle\n\n* [MS-SQL Read 
Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_read_query/README.md): MS-SQL Read Query\n\n* [MS-SQL Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_write_query/README.md): MS-SQL Write Query\n\n* [Get MySQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_handle/README.md): Get MySQL Handle\n\n* [MySQL Get Long Running Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_long_run_queries/README.md): MySQL Get Long Running Queries\n\n* [MySQL Kill Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_kill_query/README.md): MySQL Kill Query\n\n* [Run MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_read_query/README.md): Run MySQL Query\n\n* [Create a MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_write_query/README.md): Create a MySQL Query\n\n* [Netbox Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_get_handle/README.md): Get Netbox Handle\n\n* [Netbox List Devices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_list_devices/README.md): List all Netbox devices\n\n* [Nomad Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_get_handle/README.md): Get Nomad Handle\n\n* [Nomad List Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_list_jobs/README.md): List all Nomad jobs\n\n* [Get Opsgenie Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Opsgenie/legos/opsgenie_get_handle/README.md): Get Opsgenie Handle\n\n* [Create new maintenance 
window.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_create_new_maintenance_window/README.md): Create new maintenance window.\n\n* [Perform Pingdom single check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_do_single_check/README.md): Perform Pingdom Single Check\n\n* [Get Pingdom Analysis Results for a specified Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_analysis/README.md): Get Pingdom Analysis Results for a specified Check\n\n* [Get list of checkIDs given a hostname](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids/README.md): Get list of checkIDs given a hostname. If no hostname provided, it lists all checkIDs.\n\n* [Get list of checkIDs given a name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids_by_name/README.md): Get list of checkIDs given a name. If no name is given, it lists all checkIDs. 
If transaction is set to true, it returns transaction checkIDs\n\n* [Get Pingdom Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_handle/README.md): Get Pingdom Handle\n\n* [Pingdom Get Maintenance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_maintenance/README.md): Pingdom Get Maintenance\n\n* [Get Pingdom Results](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_results/README.md): Get Pingdom Results\n\n* [Get Pingdom TMS Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_tmscheck/README.md): Get Pingdom TMS Check\n\n* [Pingdom Lego to pause/unpause checkIDs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_pause_or_unpause_checkids/README.md): Pingdom Lego to pause/unpause checkIDs\n\n* [Perform Pingdom Traceroute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_traceroute/README.md): Perform Pingdom Traceroute\n\n* [PostgreSQL Calculate Bloat](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgres_calculate_bloat/README.md): This Lego calculates bloat for tables in Postgres\n\n* [Calling a PostgreSQL function](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_call_function/README.md): Calling a PostgreSQL function\n\n* [PostgreSQL Check Unused Indexes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_check_unused_indexes/README.md): Find unused Indexes in a database in PostgreSQL\n\n* [Create Tables in PostgreSQL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_create_table/README.md): Create Tables in PostgreSQL\n\n* [Delete PostgreSQL 
Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_delete_query/README.md): Delete PostgreSQL Query\n\n* [PostgreSQL Get Cache Hit Ratio](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_cache_hit_ratio/README.md): The result of the action will show the total number of blocks read from disk, the total number of blocks found in the buffer cache, and the cache hit ratio as a percentage. For example, if the cache hit ratio is 99%, it means that 99% of all data requests were served from the buffer cache, and only 1% required reading data from disk.\n\n* [Get PostgreSQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_handle/README.md): Get PostgreSQL Handle\n\n* [PostgreSQL Get Index Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_index_usage/README.md): The action result shows the data for table name, the percentage of times an index was used for that table, and the number of live rows in the table.\n\n* [PostgreSQL get service status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_server_status/README.md): This action checks the status of each database.\n\n* [Execute commands in a PostgreSQL transaction.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_handling_transaction/README.md): Given a set of PostgreSQL commands, this action runs them inside a transaction.\n\n* [Long Running PostgreSQL Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_long_running_queries/README.md): Long Running PostgreSQL Queries\n\n* [Read PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_read_query/README.md): Read PostgreSQL Query\n\n* [Show tables in 
PostgreSQL Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_show_tables/README.md): Show the tables existing in a PostgreSQL Database. We execute the following query to fetch this information: SELECT * FROM pg_catalog.pg_tables WHERE schemaname != 'pg_catalog' AND schemaname != 'information_schema';\n\n* [Call PostgreSQL Stored Procedure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_stored_procedures/README.md): Call PostgreSQL Stored Procedure\n\n* [Write PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_write_query/README.md): Write PostgreSQL Query\n\n* [Get Prometheus rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_alerts_list/README.md): Get Prometheus rules\n\n* [Get All Prometheus Metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_all_metrics/README.md): Get All Prometheus Metrics\n\n* [Get Prometheus handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_handle/README.md): Get Prometheus handle\n\n* [Get Prometheus Metric Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_metric_statistics/README.md): Get Prometheus Metric Statistics\n\n* [Delete All Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_all_keys/README.md): Delete All Redis keys\n\n* [Delete Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_keys/README.md): Delete Redis keys matching pattern\n\n* [Delete Redis Unused keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_stale_keys/README.md): Delete Redis Unused keys given a time threshold in 
seconds\n\n* [Get Redis cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_cluster_health/README.md): This action gets the Redis cluster health.\n\n* [Get Redis Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_handle/README.md): Get Redis Handle\n\n* [Get Redis keys count](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_keys_count/README.md): Get Redis keys count matching pattern (default: '*')\n\n* [Get Redis metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_metrics/README.md): This action fetches Redis metrics such as index size and memory utilization.\n\n* [List Redis Large keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_list_large_keys/README.md): Find Redis Large keys given a size threshold in bytes\n\n* [Get REST handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_get_handle/README.md): Get REST handle\n\n* [Call REST Methods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_methods/README.md): Call REST Methods.\n\n* [SSH Execute Remote Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_execute_remote_command/README.md): SSH Execute Remote Command\n\n* [SSH: Locate large files on host](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_find_large_files/README.md): This action scans the file system on a given host and returns a dict of large files. 
The command used to perform the scan is \"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\"\n\n* [Get SSH handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_get_handle/README.md): Get SSH handle\n\n* [SSH Restart Service Using sysctl](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_restart_service_using_sysctl/README.md): SSH Restart Service Using sysctl\n\n* [SCP: Remote file transfer over SSH](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_scp/README.md): Copy files from or to a remote host. Files are copied over SCP.\n\n* [Assign Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_assign_case/README.md): Assign a Salesforce case\n\n* [Change Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_case_change_status/README.md): Change Salesforce Case Status\n\n* [Create Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_create_case/README.md): Create a Salesforce case\n\n* [Delete Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_delete_case/README.md): Delete a Salesforce case\n\n* [Get Salesforce Case Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case/README.md): Get a Salesforce case info\n\n* [Get Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case_status/README.md): Get a Salesforce case status\n\n* [Get Salesforce handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_handle/README.md): Get Salesforce handle\n\n* [Search Salesforce 
Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_search_case/README.md): Search a Salesforce case\n\n* [Update Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_update_case/README.md): Update a Salesforce case\n\n* [Create Slack Channel and Invite Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_create_channel_invite_users/README.md): Create a Slack Channel with a given name, and invite a list of userIds to the channel.\n\n* [Get Slack SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_get_handle/README.md): Get Slack SDK Handle\n\n* [Slack Lookup User by Email](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_lookup_user_by_email/README.md): Given an email address, find the Slack user in the workspace.\n You can then extract their profile picture, or retrieve their user ID (which you can use to send messages) from the output.\n\n* [Post Slack Image](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_image/README.md): Post Slack Image\n\n* [Post Slack Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_message/README.md): Post Slack Message\n\n* [Slack Send DM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_send_DM/README.md): Given a list of Slack IDs, this Action will create a DM (one user) or group chat (multiple users), and send a message to the chat\n\n* [Snowflake Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_read_query/README.md): Snowflake Read Query\n\n* [Snowflake Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_write_query/README.md): Snowflake Write Query\n\n* [Get Splunk SDK 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Splunk/legos/splunk_get_handle/README.md): Get Splunk SDK Handle\n\n* [Capture a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_capture_charge/README.md): Capture the payment of an existing, uncaptured charge\n\n* [Close Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_close_dispute/README.md): Close Dispute\n\n* [Create a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_charge/README.md): Create a Charge\n\n* [Create a Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_refund/README.md): Create a Refund\n\n* [Get list of charges previously created](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_charges/README.md): Get list of charges previously created\n\n* [Get list of disputes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_disputes/README.md): Get list of disputes\n\n* [Get list of refunds](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_refunds/README.md): Get list of refunds for the given threshold.\n\n* [Get Stripe Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_handle/README.md): Get Stripe Handle\n\n* [Retrieve a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_charge/README.md): Retrieve a Charge\n\n* [Retrieve details of a dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_dispute/README.md): Retrieve details of a dispute\n\n* [Retrieve a refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_refund/README.md): Retrieve a 
refund\n\n* [Update a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_charge/README.md): Update a Charge\n\n* [Update Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_dispute/README.md): Update Dispute\n\n* [Update Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_refund/README.md): Updates the specified refund by setting the values of the parameters passed.\n\n* [Execute Terraform Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_exec_command/README.md): Execute Terraform Command\n\n* [Get terraform handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_get_handle/README.md): Get terraform handle\n\n* [Get Zabbix Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Zabbix/legos/zabbix_get_handle/README.md): Get Zabbix Handle\n\n* [Infra: Execute runbook](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/infra_execute_runbook/README.md): Infra: use this action to execute a particular runbook with given input parameters.\n\n* [Infra: Finish runbook execution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/infra_workflow_done/README.md): Infra: use this action to finish the execution of a runbook. 
Once this is set, no more tasks will be executed\n\n* [Infra: Append values for a key in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_append_keys/README.md): Infra: use this action to append values for a key in a state store provided by the workflow.\n\n* [Infra: Store keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_create_keys/README.md): Infra: use this action to persist keys in a state store provided by the workflow.\n\n* [Infra: Delete keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_delete_keys/README.md): Infra: use this action to delete keys from a state store provided by the workflow.\n\n* [Infra: Fetch keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_get_keys/README.md): Infra: use this action to retrieve keys in a state store provided by the workflow.\n\n* [Infra: Rename keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_rename_keys/README.md): Infra: use this action to rename keys in a state store provided by the workflow.\n\n* [Infra: Update keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_update_keys/README.md): Infra: use this action to update keys in a state store provided by the workflow.\n\n* [Opensearch Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/opensearch/legos/opensearch_get_handle/README.md): Opensearch Get Handle\n\n* [Opensearch search](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/opensearch/legos/opensearch_search/README.md): Opensearch Search\n\n"
  },
  {
    "path": "lists/action_EBS.md",
    "content": "* [AWS Delete EBS Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ebs_snapshot/README.md): Delete EBS Snapshot for an EC2 instance\n\n"
  },
  {
    "path": "lists/action_ECS.md",
    "content": "* [AWS ECS Instances without AutoScaling policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_instances_without_autoscaling/README.md): AWS ECS Instances without AutoScaling policy.\n\n* [AWS ECS Services without AutoScaling policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_services_without_autoscaling/README.md): AWS ECS Services without AutoScaling policy.\n\n"
  },
  {
    "path": "lists/action_ES.md",
    "content": "* [Elasticsearch Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_check_health_status/README.md): Elasticsearch Check Health Status\n\n* [Get large Elasticsearch Index size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_check_large_index_size/README.md): This action checks the sizes of all indices in the Elasticsearch cluster and compares them to a given threshold.\n\n* [Check Elasticsearch cluster disk size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_compare_cluster_disk_size_to_threshold/README.md): This action compares the disk usage percentage of the Elasticsearch cluster to a given threshold.\n\n* [Elasticsearch Delete Unassigned Shards](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_delete_unassigned_shards/README.md): Elasticsearch Delete Corrupted/Lost Shards\n\n* [Elasticsearch Disable Shard Allocation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_disable_shard_allocation/README.md): Elasticsearch Disable Shard Allocation for any indices\n\n* [Elasticsearch Enable Shard Allocation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_enable_shard_allocation/README.md): Elasticsearch Enable Shard Allocation for any shards for any indices\n\n* [Elasticsearch Cluster Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_cluster_statistics/README.md): Elasticsearch Cluster Statistics fetches total index size, disk size, and memory utilization and information about the current nodes and shards that form the cluster\n\n* [Get Elasticsearch 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_handle/README.md): Get Elasticsearch Handle\n\n* [Get Elasticsearch index level health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_index_health/README.md): This action checks the health of a given Elasticsearch index or all indices if no specific index is provided.\n\n* [Elasticsearch List Allocations](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_list_allocations/README.md): Elasticsearch List Allocations in a Cluster\n\n* [Elasticsearch List Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_list_nodes/README.md): Elasticsearch List Nodes in a Cluster\n\n* [Elasticsearch search](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_search_query/README.md): Elasticsearch Search\n\n"
  },
  {
    "path": "lists/action_GCP.md",
    "content": "* [Add lifecycle policy to GCP storage bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_lifecycle_policy_to_bucket/README.md): The action adds a lifecycle policy to a Google Cloud Platform (GCP) storage bucket.\n\n* [GCP Add Member to IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_member_to_iam_role/README.md): Adding member to the IAM role which already available\n\n* [GCP Add Role to Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_role_to_service_account/README.md): Adding role and member to the service account\n\n* [Create GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_bucket/README.md): Create a new GCP bucket in the given location\n\n* [Create a GCP disk snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_disk_snapshot/README.md): Create a GCP disk snapshot.\n\n* [Create GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_filestore_instance/README.md): Create a new GCP Filestore Instance in the given location\n\n* [Create GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_gke_cluster/README.md): Create GKE Cluster\n\n* [GCP Create Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_service_account/README.md): GCP Create Service Account\n\n* [Delete GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_bucket/README.md): Delete a GCP bucket\n\n* [Delete GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_filestore_instance/README.md): Delete a GCP Filestore Instance in the given location\n\n* [Delete an Object from GCP 
Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_object_from_bucket/README.md): Delete an Object/Blob from a GCP Bucket\n\n* [GCP Delete Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_service_account/README.md): GCP Delete Service Account\n\n* [GCP Describe a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_describe_gke_cluster/README.md): GCP Describe a GKE cluster\n\n* [Fetch Objects from GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_fetch_objects_from_bucket/README.md): List all Objects in a GCP bucket\n\n* [Get GCP storage buckets without lifecycle policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_buckets_without_lifecycle_policies/README.md): The action retrieves a list of Google Cloud Platform (GCP) storage buckets that do not have any lifecycle policies applied.\n\n* [Get details of GCP forwarding rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_forwarding_rules_details/README.md): Get details of forwarding rules associated with a backend service.\n\n* [Get GCP Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_handle/README.md): Get GCP Handle\n\n* [Get List of GCP compute instances without label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_instances_without_label/README.md): Get List of GCP compute instances without label\n\n* [Get unused GCP backend services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_unused_backend_services/README.md): Get unused backend service for an application load balancer that has no instances in its target group.\n\n* [List all GCP 
Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_buckets/README.md): List all GCP buckets\n\n* [Get GCP compute instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances/README.md): Get GCP compute instances\n\n* [Get List of GCP compute instances by label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_label/README.md): Get List of GCP compute instances by label\n\n* [Get list of compute instances by VPC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_vpc/README.md): Get list of compute instances by VPC\n\n* [GCP List GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_gke_cluster/README.md): GCP List GKE Cluster\n\n* [GCP List Nodes in GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_nodes_in_gke_cluster/README.md): GCP List Nodes of GKE Cluster\n\n* [List all Public GCP Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_public_buckets/README.md): List all publicly available GCP buckets\n\n* [List GCP Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_secrets/README.md): List of your GCP Secrets\n\n* [GCP List Service Accounts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_service_accounts/README.md): GCP List Service Accounts\n\n* [List all GCP VMs and if Publicly Accessible](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_vms_access/README.md): Lists all GCP VMs and identifies those that are public.\n\n* [GCP Remove Member from IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_member_from_iam_role/README.md): Remove member from 
the chosen IAM role.\n\n* [GCP Remove Role from Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_role_from_service_account/README.md): Remove role and member from the service account\n\n* [Remove role from user](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_user_role/README.md): GCP lego for removing a role from a user (default: 'viewer')\n\n* [GCP Resize a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_resize_gke_cluster/README.md): GCP Resize a GKE cluster by modifying nodes\n\n* [GCP Restart compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restart_compute_instances/README.md): GCP Restart compute instance\n\n* [Restore GCP disk from a snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restore_disk_from_snapshot/README.md): Restore a GCP disk from a compute instance snapshot.\n\n* [Save CSV to Google Sheets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_save_csv_to_google_sheets_v1/README.md): Saves your CSV (see notes) into a prepared Google Sheet.\n\n* [GCP Stop compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_stop_compute_instances/README.md): GCP Stop compute instance\n\n* [Upload an Object to GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_upload_file_to_bucket/README.md): Upload an Object/Blob in a GCP bucket\n\n"
  },
  {
    "path": "lists/action_GCP_BUCKET.md",
    "content": "* [Create GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_bucket/README.md): Create a new GCP bucket in the given location\n\n* [Delete GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_bucket/README.md): Delete a GCP bucket\n\n* [Delete an Object from GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_object_from_bucket/README.md): Delete an Object/Blob from a GCP Bucket\n\n* [Fetch Objects from GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_fetch_objects_from_bucket/README.md): List all Objects in a GCP bucket\n\n* [List all GCP Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_buckets/README.md): List all GCP buckets\n\n* [List all Public GCP Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_public_buckets/README.md): List all publicly available GCP buckets\n\n* [Upload an Object to GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_upload_file_to_bucket/README.md): Upload an Object/Blob in a GCP bucket\n\n"
  },
  {
    "path": "lists/action_GCP_FILE_STORE.md",
    "content": "* [Create GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_filestore_instance/README.md): Create a new GCP Filestore Instance in the given location\n\n* [Delete GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_filestore_instance/README.md): Delete a GCP Filestore Instance in the given location\n\n"
  },
  {
    "path": "lists/action_GCP_GKE.md",
    "content": "* [Create GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_gke_cluster/README.md): Create GKE Cluster\n\n* [GCP Describe a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_describe_gke_cluster/README.md): GCP Describe a GKE cluster\n\n* [GCP List GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_gke_cluster/README.md): GCP List GKE Cluster\n\n* [GCP List Nodes in GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_nodes_in_gke_cluster/README.md): GCP List Nodes of GKE Cluster\n\n* [GCP Resize a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_resize_gke_cluster/README.md): GCP Resize a GKE cluster by modifying nodes\n\n"
  },
  {
    "path": "lists/action_GCP_IAM.md",
    "content": "* [GCP Add Member to IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_member_to_iam_role/README.md): Adding member to the IAM role which already available\n\n* [GCP Add Role to Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_role_to_service_account/README.md): Adding role and member to the service account\n\n* [GCP Create Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_service_account/README.md): GCP Create Service Account\n\n* [GCP Delete Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_service_account/README.md): GCP Delete Service Account\n\n* [GCP Remove Member from IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_member_from_iam_role/README.md): Remove member from the chosen IAM role.\n\n* [GCP Remove Role from Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_role_from_service_account/README.md): Remove role and member from the service account\n\n* [Remove role from user](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_user_role/README.md): GCP lego for removing a role from a user (default: 'viewer')\n\n"
  },
  {
    "path": "lists/action_GCP_SECRET.md",
    "content": "* [List GCP Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_secrets/README.md): List of your GCP Secrets\n\n* [GCP List Service Accounts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_service_accounts/README.md): GCP List Service Accounts\n\n"
  },
  {
    "path": "lists/action_GCP_SHEETS.md",
    "content": "* [Save CSV to Google Sheets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_save_csv_to_google_sheets_v1/README.md): Saves your CSV (see notes) into a prepared Google Sheet.\n\n"
  },
  {
    "path": "lists/action_GCP_STORAGE.md",
    "content": "* [Add lifecycle policy to GCP storage bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_lifecycle_policy_to_bucket/README.md): The action adds a lifecycle policy to a Google Cloud Platform (GCP) storage bucket.\n\n* [Get GCP storage buckets without lifecycle policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_buckets_without_lifecycle_policies/README.md): The action retrieves a list of Google Cloud Platform (GCP) storage buckets that do not have any lifecycle policies applied.\n\n"
  },
  {
    "path": "lists/action_GCP_VM.md",
    "content": "* [Create a GCP disk snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_disk_snapshot/README.md): Create a GCP disk snapshot.\n\n* [Get details of GCP forwarding rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_forwarding_rules_details/README.md): Get details of forwarding rules associated with a backend service.\n\n* [Get GCP Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_handle/README.md): Get GCP Handle\n\n* [Get list of GCP compute instances without a label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_instances_without_label/README.md): Get list of GCP compute instances without a label\n\n* [Get unused GCP backend services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_unused_backend_services/README.md): Get unused backend services for an application load balancer that has no instances in its target group.\n\n* [Get GCP compute instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances/README.md): Get GCP compute instances\n\n* [Get list of GCP compute instances by label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_label/README.md): Get list of GCP compute instances by label\n\n* [Get list of compute instances by VPC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_vpc/README.md): Get list of compute instances by VPC\n\n* [GCP Restart compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restart_compute_instances/README.md): GCP Restart compute instance\n\n* [Restore GCP disk from a snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restore_disk_from_snapshot/README.md): Restore a GCP disk from a compute instance snapshot.\n\n* [GCP Stop compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_stop_compute_instances/README.md): GCP Stop compute instance\n\n"
  },
  {
    "path": "lists/action_GCP_VMS.md",
    "content": "* [List all GCP VMs and if Publicly Accessible](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_vms_access/README.md): Lists all GCP VMs and identifies those that are publicly accessible.\n\n"
  },
  {
    "path": "lists/action_GCP_VPC.md",
    "content": "* [Get list of compute instances by VPC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_vpc/README.md): Get list of compute instances by VPC\n\n"
  },
  {
    "path": "lists/action_GITHUB.md",
    "content": "* [Github Assign Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_assign_issue/README.md): Assign a Github issue to a user\n\n* [Github Check if Pull Request is merged](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_check_if_pull_request_is_merged/README.md): Check if a Github Pull Request is merged\n\n* [Github Close Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_close_pull_request/README.md): Close pull request based on pull request number\n\n* [Github Count Stars](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_count_stars/README.md): Get count of stars for a repository\n\n* [Github Create Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_issue/README.md): Create a new Github Issue for a repository\n\n* [Github Create Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_team/README.md): Create a new Github Team\n\n* [Github Delete Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_delete_branch/README.md): Delete a Github branch\n\n* [Github Get Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_branch/README.md): Get Github branch for a user in a repository\n\n* [Get Github Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_handle/README.md): Get Github Handle\n\n* [Github Get Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_issue/README.md): Get Github Issue from a repository\n\n* [Github Get Open Branches](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_open_branches/README.md): Get the first 100 open branches for a given user in a given repo.\n\n* [Github Get Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_pull_request/README.md): Get Github Pull Request for a user in a repository\n\n* [Github Get Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_team/README.md): Github Get Team\n\n* [Github Get User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_user/README.md): Get Github User details\n\n* [Github Invite User to Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_invite_user_to_org/README.md): Invite a Github User to an Organization\n\n* [Github Comment on an Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_issue_comment/README.md): Add a comment to the selected GitHub Issue\n\n* [Github List Open Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_open_issues/README.md): List open Issues in a Github Repository\n\n* [Github List Organization Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_org_members/README.md): List Github Organization Members\n\n* [Github List PR Commits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_commits/README.md): Github List all Pull Request Commits\n\n* [Github List Pull Request Reviewers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_reviewers/README.md): List PR reviewers for a PR\n\n* [Github List Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_requests/README.md): List pull requests for a user in a repository\n\n* [Github List Stale Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_issues/README.md): List Stale Github Issues that have crossed a certain age limit.\n\n* [Github List Stale Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_pull_requests/README.md): Check for any pull requests over a certain age.\n\n* [Github List Stargazers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stargazers/README.md): List of Github users that have starred (essentially bookmarked) a repository\n\n* [Github List Team Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_members/README.md): List Github Team Members for a given Team\n\n* [Github List Team Repositories](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_repos/README.md): Github List Team Repositories\n\n* [Github List Teams in Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_teams_in_org/README.md): List teams in an organization in GitHub\n\n* [Github List Webhooks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_webhooks/README.md): List webhooks for a repository\n\n* [Github Merge Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_merge_pull_request/README.md): Github Merge Pull Request\n\n* [Github Remove Member from Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_remove_member_from_org/README.md): Remove a member from a Github Organization\n\n* [Get Grafana Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_get_handle/README.md): Get Grafana Handle\n\n"
  },
  {
    "path": "lists/action_GITHUB_ISSUE.md",
    "content": "* [Github Assign Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_assign_issue/README.md): Assign a Github issue to a user\n\n* [Github Create Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_issue/README.md): Create a new Github Issue for a repository\n\n* [Github Get Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_issue/README.md): Get Github Issue from a repository\n\n* [Github Comment on an Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_issue_comment/README.md): Add a comment to the selected GitHub Issue\n\n* [Github List Open Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_open_issues/README.md): List open Issues in a Github Repository\n\n* [Github List Stale Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_issues/README.md): List Stale Github Issues that have crossed a certain age limit.\n\n"
  },
  {
    "path": "lists/action_GITHUB_ORG.md",
    "content": "* [Github Invite User to Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_invite_user_to_org/README.md): Invite a Github User to an Organization\n\n* [Github List Organization Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_org_members/README.md): List Github Organization Members\n\n* [Github List Teams in Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_teams_in_org/README.md): List teams in an organization in GitHub\n\n* [Github Remove Member from Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_remove_member_from_org/README.md): Remove a member from a Github Organization\n\n* [Get Grafana Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_get_handle/README.md): Get Grafana Handle\n\n"
  },
  {
    "path": "lists/action_GITHUB_PR.md",
    "content": "* [Github Check if Pull Request is merged](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_check_if_pull_request_is_merged/README.md): Check if a Github Pull Request is merged\n\n* [Github Close Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_close_pull_request/README.md): Close pull request based on pull request number\n\n* [Github Get Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_pull_request/README.md): Get Github Pull Request for a user in a repository\n\n* [Github List PR Commits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_commits/README.md): Github List all Pull Request Commits\n\n* [Github List Pull Request Reviewers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_reviewers/README.md): List PR reviewers for a PR\n\n* [Github List Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_requests/README.md): List pull requests for a user in a repository\n\n* [Github List Stale Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_pull_requests/README.md): Check for any pull requests over a certain age.\n\n* [Github Merge Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_merge_pull_request/README.md): Github Merge Pull Request\n\n"
  },
  {
    "path": "lists/action_GITHUB_REPO.md",
    "content": "* [Github Count Stars](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_count_stars/README.md): Get count of stars for a repository\n\n* [Github Delete Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_delete_branch/README.md): Delete a Github branch\n\n* [Github Get Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_branch/README.md): Get Github branch for a user in a repository\n\n* [Get Github Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_handle/README.md): Get Github Handle\n\n* [Github Get Open Branches](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_open_branches/README.md): Get the first 100 open branches for a given user in a given repo.\n\n* [Github List Stargazers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stargazers/README.md): List of Github users that have starred (essentially bookmarked) a repository\n\n* [Github List Team Repositories](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_repos/README.md): Github List Team Repositories\n\n"
  },
  {
    "path": "lists/action_GITHUB_TEAM.md",
    "content": "* [Github Create Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_team/README.md): Create a new Github Team\n\n* [Github Get Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_team/README.md): Github Get Team\n\n* [Github List Team Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_members/README.md): List Github Team Members for a given Team\n\n* [Github List Teams in Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_teams_in_org/README.md): List teams in an organization in GitHub\n\n"
  },
  {
    "path": "lists/action_GITHUB_USER.md",
    "content": "* [Github Get User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_user/README.md): Get Github User details\n\n* [Github Invite User to Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_invite_user_to_org/README.md): Invite a Github User to an Organization\n\n* [Github List Team Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_members/README.md): List Github Team Members for a given Team\n\n* [Github Remove Member from Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_remove_member_from_org/README.md): Remove a member from a Github Organization\n\n* [Get Grafana Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_get_handle/README.md): Get Grafana Handle\n\n"
  },
  {
    "path": "lists/action_GRAFANA.md",
    "content": "* [Grafana List Alerts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_list_alerts/README.md): List of Grafana alerts. Specifying the dashboard ID will show alerts in that dashboard\n\n"
  },
  {
    "path": "lists/action_HADOOP.md",
    "content": "* [Get Hadoop cluster apps](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_apps/README.md): Get Hadoop cluster apps\n\n* [Get Hadoop cluster appstatistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_appstatistics/README.md): Get Hadoop cluster appstatistics\n\n* [Get Hadoop cluster metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_metrics/README.md): Get Hadoop EMR cluster metrics\n\n* [Get Hadoop cluster nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_nodes/README.md): Get Hadoop cluster nodes\n\n* [Get Hadoop handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_handle/README.md): Get Hadoop handle\n\n"
  },
  {
    "path": "lists/action_IAM.md",
    "content": "* [AWS Attach New Policy to User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_iam_policy/README.md): AWS Attach New Policy to User\n\n* [AWS Check if RDS instances are not M5 or T3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_rds_non_m5_t3_instances/README.md): AWS Check if RDS instances are not M5 or T3\n\n* [AWS Create IAM Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_IAMpolicy/README.md): Given an AWS policy (as a string), and the name for the policy, this will create an IAM policy.\n\n* [AWS Create Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_access_key/README.md): Create a new Access Key for the User\n\n* [Create New IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_iam_user/README.md): Create New IAM User\n\n* [Create Login profile for IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_user_login_profile/README.md): Create Login profile for IAM User\n\n* [AWS Delete Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_access_key/README.md): Delete an Access Key for a User\n\n* [Filter AWS EBS Volume with Low IOPS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_volumes_with_low_iops/README.md): IOPS (Input/Output Operations Per Second) is a metric used to measure the amount of input/output operations that an EBS volume can perform per second.\n\n* [AWS Find Low Connections RDS instances Per Day](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_low_connection_rds_instances/README.md): This action will find RDS DB instances with a number of connections below the specified minimum in the specified region.\n\n* [AWS Find EMR Clusters of Old Generation Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_old_gen_emr_clusters/README.md): This action lists EMR clusters running old-generation instances.\n\n* [Get AWS ALB Listeners Without HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alb_listeners_without_http_redirect/README.md): Get AWS ALB Listeners Without HTTP Redirection\n\n* [AWS Get EBS Volumes for Low Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volume_for_low_usage/README.md): This action lists low-usage EBS volumes in AWS that used less than 10% of capacity over the given threshold days.\n\n* [Get AWS EBS Volume Without GP3 Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_without_gp3_type/README.md): AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\n\n* [AWS Get Idle EMR Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_idle_emr_clusters/README.md): This action lists EMR clusters that have been idle for more than the specified time.\n\n* [AWS Get Publicly Accessible RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_instances/README.md): AWS Get Publicly Accessible RDS Instances\n\n* [AWS List Unused Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unused_secrets/README.md): This action lists all the unused secrets from AWS by comparing the last used date with the given threshold.\n\n"
  },
  {
    "path": "lists/action_INFRA.md",
    "content": "* [Infra: Execute runbook](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/infra_execute_runbook/README.md): Infra: use this action to execute particular runbook with given input parameters.\n\n* [Infra: Finish runbook execution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/infra_workflow_done/README.md): Infra: use this action to finish the execution of a runbook. Once this is set, no more tasks will be executed\n\n* [Infra: Append values for a key in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_append_keys/README.md): Infra: use this action to append values for a key in a state store provided by the workflow.\n\n* [Infra: Store keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_create_keys/README.md): Infra: use this action to persist keys in a state store provided by the workflow.\n\n* [Infra: Delete keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_delete_keys/README.md): Infra: use this action to delete keys from a state store provided by the workflow.\n\n* [Infra: Fetch keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_get_keys/README.md): Infra: use this action to retrieve keys in a state store provided by the workflow.\n\n* [Infra: Rename keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_rename_keys/README.md): Infra: use this action to rename keys in a state store provided by the workflow.\n\n* [Infra: Update keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_update_keys/README.md): Infra: use this action to update keys in a state store provided by the workflow.\n\n"
  },
  {
    "path": "lists/action_JENKINS.md",
    "content": "* [Get Jenkins Logs from a job](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_logs/README.md): Get Jenkins Logs from a Job\n\n* [Get Jenkins Plugin List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_plugins/README.md): Get Jenkins Plugin List\n\n"
  },
  {
    "path": "lists/action_JIRA.md",
    "content": "* [Jira Add Comment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_add_comment/README.md): Add a Jira Comment\n\n* [Assign Jira Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_assign_issue/README.md): Assign a Jira Issue to a user\n\n* [Create a Jira Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_create_issue/README.md): Create a Jira Issue\n\n* [Get Jira SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_handle/README.md): Get Jira SDK Handle\n\n* [Get Jira Issue Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_issue/README.md): Get Issue Info from Jira API: description, labels, attachments\n\n* [Get Jira Issue Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_issue_status/README.md): Get Issue Status from Jira API\n\n* [Change JIRA Issue Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_issue_change_status/README.md): Change JIRA Issue Status to given status\n\n* [Search for Jira issues matching JQL queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_search_issue/README.md): Use JQL to search all matching issues in Jira. Returns a list of the matching issue IDs/keys\n\n"
  },
  {
    "path": "lists/action_K8S.md",
    "content": "* [Add Node in a Kubernetes Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_add_node_to_cluster/README.md): Add Node in a Kubernetes Cluster\n\n* [Change size of Kubernetes PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_change_pvc_size/README.md): Change size of Kubernetes PVC\n\n* [Check K8s services endpoint health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_service_status/README.md): This action checks the health status of the provided Kubernetes services.\n\n* [Check K8s worker CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_worker_cpu_utilization/README.md): Retrieves the CPU utilization for all worker nodes in the cluster and compares it to a given threshold.\n\n* [Delete a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_delete_pod/README.md): Delete a Kubernetes POD in a given Namespace\n\n* [Describe Kubernetes Node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_node/README.md): Describe a Kubernetes Node\n\n* [Describe a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_pod/README.md): Describe a Kubernetes POD in a given Namespace\n\n* [Execute a command on a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pod/README.md): Execute a command on a Kubernetes POD in a given Namespace\n\n* [Kubernetes Execute a command on a POD in a given namespace and filter](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pods_and_filter/README.md): Execute a command on Kubernetes POD in a 
given namespace and filter output\n\n* [Execute local script on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_execute_local_script_on_a_pod/README.md): Execute local script on a pod in a namespace\n\n* [Gather Data for POD Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/README.md): Gather Data for POD Troubleshoot\n\n* [Get All Evicted PODS From Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/README.md): This action get all evicted PODS from given namespace. If namespace not given it will get all the pods from all namespaces.\n\n* [ Get All Kubernetes PODS with state in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_pods/README.md):  Get All Kubernetes PODS with state in a given Namespace\n\n* [Get K8s pods status and resource utilization info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_resources_utilization_info/README.md): This action gets the pod status and resource utilization of various Kubernetes resources like jobs, services, persistent volumes.\n\n* [Get candidate k8s nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_candidate_nodes_for_pods/README.md): Get candidate k8s nodes for given configuration\n\n* [Get K8S Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_cluster_health/README.md): Get K8S Cluster Health\n\n* [Get k8s kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_config_map_kube_system/README.md): Get k8s kube system config map\n\n* [Get Kubernetes Deployment For a Pod in a 
Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment/README.md): Get Kubernetes Deployment for a POD in a Namespace\n\n* [Get Deployment Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment_status/README.md): This action search for failed deployment status and returns list.\n\n* [Get Kubernetes Error PODs from All Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_error_pods_from_all_jobs/README.md): Get Kubernetes Error PODs from All Jobs\n\n* [Get expiring K8s certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_expiring_certificates/README.md): Get the expiring certificates for a K8s cluster.\n\n* [Get Kubernetes Failed Deployments](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_failed_deployments/README.md): Get Kubernetes Failed Deployments\n\n* [Get frequently restarting K8s pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_frequently_restarting_pods/README.md): Get Kubernetes pods from all namespaces that are restarting too often.\n\n* [Get Kubernetes Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_handle/README.md): Get Kubernetes Handle\n\n* [Get All Kubernetes Healthy PODS in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_healthy_pods/README.md): Get All Kubernetes Healthy PODS in a given Namespace\n\n* [Get memory utilization for K8s services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_memory_utilization_of_services/README.md): This action executes the given kubectl commands to find the memory utilization of the specified services in a particular namespace and compares it with 
a given threshold.\n\n* [Get K8s node status and CPU utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_node_status_and_resource_utilization/README.md): This action gathers Kubernetes node status and resource utilization information.\n\n* [Get Kubernetes Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes/README.md): Get Kubernetes Nodes\n\n* [Get K8s nodes disk and memory pressure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_pressure/README.md): This action fetches the memory and disk pressure status of each node in the cluster\n\n* [Get Kubernetes Nodes that have insufficient resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_with_insufficient_resources/README.md): Get Kubernetes Nodes that have insufficient resources\n\n* [Get K8s offline nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_offline_nodes/README.md): This action checks if any node in the Kubernetes cluster is offline.\n\n* [Get K8S OOMKilled Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_oomkilled_pods/README.md): Get K8S Pods which are OOMKilled from the containers' last states.\n\n* [Get K8s pending pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pending_pods/README.md): This action checks if any pod in the Kubernetes cluster is in 'Pending' status.\n\n* [Get Kubernetes POD Configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_config/README.md): Get Kubernetes POD Configuration\n\n* [Get Kubernetes Logs for a given POD in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs/README.md): Get Kubernetes Logs for 
a given POD in a Namespace\n\n* [Get Kubernetes Logs for a list of PODs & Filter in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs_and_filter/README.md): Get Kubernetes Logs for a list of PODs and Filter in a Namespace\n\n* [Get Kubernetes Status for a POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_status/README.md): Get Kubernetes Status for a POD in a given Namespace\n\n* [Get pods attached to Kubernetes PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_attached_to_pvc/README.md): Get pods attached to Kubernetes PVC\n\n* [Get all K8s Pods in CrashLoopBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/README.md): Get all K8s pods in CrashLoopBackOff State\n\n* [Get all K8s Pods in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/README.md): Get all K8s pods in ImagePullBackOff State\n\n* [Get Kubernetes PODs in not Running State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_not_running_state/README.md): Get Kubernetes PODs in not Running State\n\n* [Get all K8s Pods in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_terminating_state/README.md): Get all K8s pods in Terminating State\n\n* [Get Kubernetes PODS with high restart](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_with_high_restart/README.md): Get Kubernetes PODS with high restart\n\n* [Get K8S Service with no associated 
endpoints](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_service_with_no_associated_endpoints/README.md): Get K8S Service with no associated endpoints\n\n* [Get Kubernetes Services for a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_services/README.md): Get Kubernetes Services for a given Namespace\n\n* [Get Kubernetes Unbound PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_unbound_pvcs/README.md): Get Kubernetes Unbound PVCs\n\n* [Kubectl command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_command/README.md): Execute kubectl command.\n\n* [Kubectl set context entry in kubeconfig](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_set_context/README.md): Kubectl set context entry in kubeconfig\n\n* [Kubectl display merged kubeconfig settings](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_view/README.md): Kubectl display merged kubeconfig settings\n\n* [Kubectl delete a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_delete_pod/README.md): Kubectl delete a pod\n\n* [Kubectl describe a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_node/README.md): Kubectl describe a node\n\n* [Kubectl describe a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_pod/README.md): Kubectl describe a pod\n\n* [Kubectl drain a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_drain_node/README.md): Kubectl drain a node\n\n* [Execute command on a 
pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_exec_command/README.md): Execute command on a pod\n\n* [Kubectl get api resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_api_resources/README.md): Kubectl get api resources\n\n* [Kubectl get logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_logs/README.md): Kubectl get logs for a given pod\n\n* [Kubectl get services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_service_namespace/README.md): Kubectl get services in a given namespace\n\n* [Kubectl list pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_list_pods/README.md): Kubectl list pods in given namespace\n\n* [Kubectl update field](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_patch_pod/README.md): Kubectl update field of a resource using strategic merge patch\n\n* [Kubectl rollout deployment history](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_rollout_deployment/README.md): Kubectl rollout deployment history\n\n* [Kubectl scale deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_scale_deployment/README.md): Kubectl scale a given deployment\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_node/README.md): Kubectl show metrics for a given node\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_pod/README.md): Kubectl show metrics for a given pod\n\n* [List matching name 
pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_all_matching_pods/README.md): List all pods matching a particular name string. The matching string can be a regular expression too\n\n* [List pvcs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_pvcs/README.md): List pvcs by namespace. By default, it will list all pvcs in all namespaces.\n\n* [Remove POD from Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_remove_pod_from_deployment/README.md): Remove POD from Deployment\n\n* [Update Commands in a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_update_command_in_pod_spec/README.md): Update Commands in a Kubernetes POD in a given Namespace\n\n"
  },
  {
    "path": "lists/action_K8S_CLUSTER.md",
    "content": "* [Add Node in a Kubernetes Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_add_node_to_cluster/README.md): Add Node in a Kubernetes Cluster\n\n* [Get K8S Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_cluster_health/README.md): Get K8S Cluster Health\n\n"
  },
  {
    "path": "lists/action_K8S_KUBECTL.md",
    "content": "* [Execute local script on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_execute_local_script_on_a_pod/README.md): Execute local script on a pod in a namespace\n\n* [Gather Data for POD Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/README.md): Gather Data for POD Troubleshoot\n\n* [Kubectl command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_command/README.md): Execute kubectl command.\n\n* [Kubectl set context entry in kubeconfig](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_set_context/README.md): Kubectl set context entry in kubeconfig\n\n* [Kubectl display merged kubeconfig settings](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_view/README.md): Kubectl display merged kubeconfig settings\n\n* [Kubectl delete a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_delete_pod/README.md): Kubectl delete a pod\n\n* [Kubectl describe a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_node/README.md): Kubectl describe a node\n\n* [Kubectl describe a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_pod/README.md): Kubectl describe a pod\n\n* [Kubectl drain a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_drain_node/README.md): Kubectl drain a node\n\n* [Execute command on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_exec_command/README.md): Execute command on a pod\n\n* [Kubectl get api 
resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_api_resources/README.md): Kubectl get api resources\n\n* [Kubectl get logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_logs/README.md): Kubectl get logs for a given pod\n\n* [Kubectl get services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_service_namespace/README.md): Kubectl get services in a given namespace\n\n* [Kubectl list pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_list_pods/README.md): Kubectl list pods in given namespace\n\n* [Kubectl update field](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_patch_pod/README.md): Kubectl update field of a resource using strategic merge patch\n\n* [Kubectl rollout deployment history](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_rollout_deployment/README.md): Kubectl rollout deployment history\n\n* [Kubectl scale deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_scale_deployment/README.md): Kubectl scale a given deployment\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_node/README.md): Kubectl show metrics for a given node\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_pod/README.md): Kubectl show metrics for a given pod\n\n* [List matching name pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_all_matching_pods/README.md): List all pods matching a particular name string. 
The matching string can be a regular expression too\n\n* [List pvcs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_pvcs/README.md): List pvcs by namespace. By default, it will list all pvcs in all namespaces.\n\n* [Remove POD from Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_remove_pod_from_deployment/README.md): Remove POD from Deployment\n\n* [Update Commands in a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_update_command_in_pod_spec/README.md): Update Commands in a Kubernetes POD in a given Namespace\n\n"
  },
  {
    "path": "lists/action_K8S_NAMESPACE.md",
    "content": "* [Kubectl get services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_service_namespace/README.md): Kubectl get services in a given namespace\n\n"
  },
  {
    "path": "lists/action_K8S_NODE.md",
    "content": "* [Add Node in a Kubernetes Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_add_node_to_cluster/README.md): Add Node in a Kubernetes Cluster\n\n* [Check K8s worker CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_worker_cpu_utilization/README.md): Retrieves the CPU utilization for all worker nodes in the cluster and compares it to a given threshold.\n\n* [Describe Kubernetes Node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_node/README.md): Describe a Kubernetes Node\n\n* [Get candidate k8s nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_candidate_nodes_for_pods/README.md): Get candidate k8s nodes for given configuration\n\n* [Get K8s node status and CPU utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_node_status_and_resource_utilization/README.md): This action gathers Kubernetes node status and resource utilization information.\n\n* [Get Kubernetes Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes/README.md): Get Kubernetes Nodes\n\n* [Get K8s nodes disk and memory pressure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_pressure/README.md): This action fetches the memory and disk pressure status of each node in the cluster\n\n* [Get K8s offline nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_offline_nodes/README.md): This action checks if any node in the Kubernetes cluster is offline.\n\n* [Kubectl describe a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_node/README.md): Kubectl describe a node\n\n* [Kubectl drain a 
node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_drain_node/README.md): Kubectl drain a node\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_node/README.md): Kubectl show metrics for a given node\n\n"
  },
  {
    "path": "lists/action_K8S_POD.md",
    "content": "* [Delete a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_delete_pod/README.md): Delete a Kubernetes POD in a given Namespace\n\n* [Describe a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_pod/README.md): Describe a Kubernetes POD in a given Namespace\n\n* [Execute a command on a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pod/README.md): Execute a command on a Kubernetes POD in a given Namespace\n\n* [Kubernetes Execute a command on a POD in a given namespace and filter](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pods_and_filter/README.md): Execute a command on Kubernetes POD in a given namespace and filter output\n\n* [Execute local script on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_execute_local_script_on_a_pod/README.md): Execute local script on a pod in a namespace\n\n* [Gather Data for POD Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/README.md): Gather Data for POD Troubleshoot\n\n* [Get All Evicted PODS From Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/README.md): This action gets all evicted PODs from the given namespace. 
If no namespace is given, it will get pods from all namespaces.\n\n* [Get All Kubernetes PODS with state in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_pods/README.md): Get All Kubernetes PODS with state in a given Namespace\n\n* [Get K8s pods status and resource utilization info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_resources_utilization_info/README.md): This action gets the pod status and resource utilization of various Kubernetes resources like jobs, services, and persistent volumes.\n\n* [Get candidate k8s nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_candidate_nodes_for_pods/README.md): Get candidate k8s nodes for given configuration\n\n* [Get K8S Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_cluster_health/README.md): Get K8S Cluster Health\n\n* [Get Kubernetes Error PODs from All Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_error_pods_from_all_jobs/README.md): Get Kubernetes Error PODs from All Jobs\n\n* [Get All Kubernetes Healthy PODS in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_healthy_pods/README.md): Get All Kubernetes Healthy PODS in a given Namespace\n\n* [Get memory utilization for K8s services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_memory_utilization_of_services/README.md): This action executes the given kubectl commands to find the memory utilization of the specified services in a particular namespace and compares it with a given threshold.\n\n* [Get K8S OOMKilled Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_oomkilled_pods/README.md): Get K8S Pods 
which are OOMKilled from the containers' last states.\n\n* [Get K8s pending pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pending_pods/README.md): This action checks if any pod in the Kubernetes cluster is in 'Pending' status.\n\n* [Get Kubernetes POD Configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_config/README.md): Get Kubernetes POD Configuration\n\n* [Get Kubernetes Logs for a given POD in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs/README.md): Get Kubernetes Logs for a given POD in a Namespace\n\n* [Get Kubernetes Logs for a list of PODs & Filter in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs_and_filter/README.md): Get Kubernetes Logs for a list of PODs and Filter in a Namespace\n\n* [Get Kubernetes Status for a POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_status/README.md): Get Kubernetes Status for a POD in a given Namespace\n\n* [Get pods attached to Kubernetes PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_attached_to_pvc/README.md): Get pods attached to Kubernetes PVC\n\n* [Get all K8s Pods in CrashLoopBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/README.md): Get all K8s pods in CrashLoopBackOff State\n\n* [Get all K8s Pods in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/README.md): Get all K8s pods in ImagePullBackOff State\n\n* [Get all K8s Pods in Terminating 
State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_terminating_state/README.md): Get all K8s pods in Terminating State\n\n* [Kubectl delete a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_delete_pod/README.md): Kubectl delete a pod\n\n* [Kubectl describe a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_pod/README.md): Kubectl describe a pod\n\n* [Kubectl list pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_list_pods/README.md): Kubectl list pods in given namespace\n\n* [Kubectl update field](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_patch_pod/README.md): Kubectl update field of a resource using strategic merge patch\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_pod/README.md): Kubectl show metrics for a given pod\n\n* [List matching name pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_all_matching_pods/README.md): List all pods matching a particular name string. The matching string can be a regular expression too\n\n* [Remove POD from Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_remove_pod_from_deployment/README.md): Remove POD from Deployment\n\n* [Update Commands in a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_update_command_in_pod_spec/README.md): Update Commands in a Kubernetes POD in a given Namespace\n\n"
  },
  {
    "path": "lists/action_K8S_PVC.md",
    "content": "* [Get pods attached to Kubernetes PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_attached_to_pvc/README.md): Get pods attached to Kubernetes PVC\n\n* [List pvcs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_pvcs/README.md): List pvcs by namespace. By default, it will list all pvcs in all namespaces.\n\n"
  },
  {
    "path": "lists/action_KAFKA.md",
    "content": "* [Kafka Check In-Sync Replicas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_in_sync_replicas/README.md): Checks number of actual min-isr for each topic-partition with configuration for that topic.\n\n* [Kafka Check Replicas Available](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_replicas_available/README.md): Checks if the number of replicas not available for communication is equal to zero.\n\n* [Kafka get cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_cluster_health/README.md): Fetches the health of the Kafka cluster including brokers, topics, and partitions.\n\n* [Kafka get count of committed messages](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_committed_messages_count/README.md): Fetches the count of committed messages (consumer offsets) for a specific consumer group and its topics.\n\n* [Get Kafka Producer Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_handle/README.md): Get Kafka Producer Handle\n\n* [Kafka get topic health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_topic_health/README.md): This action fetches the health and total number of messages for the specified topics.\n\n* [Kafka get topics with lag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_topics_with_lag/README.md): This action fetches the topics with lag in the Kafka cluster.\n\n* [Kafka Publish Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_publish_message/README.md): Publish Kafka Message\n\n* [Run a Kafka command using kafka CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_run_command/README.md): Run a Kafka command using kafka CLI. 
E.g. kafka-topics.sh --list --exclude-internal\n\n"
  },
  {
    "path": "lists/action_MANTISHUB.md",
    "content": "* [Get Mantishub handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mantishub/legos/mantishub_get_handle/README.md): Get Mantishub handle\n\n"
  },
  {
    "path": "lists/action_MONGODB.md",
    "content": "* [MongoDB add new field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_add_new_field_in_collections/README.md): MongoDB add new field in all collections\n\n* [MongoDB Aggregate Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_aggregate_command/README.md): MongoDB Aggregate Command\n\n* [MongoDB Atlas cluster cloud backup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_atlas_cluster_backup/README.md): Trigger on-demand Atlas cloud backup\n\n* [Get large MongoDB indices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_check_large_index_size/README.md): This action compares the size of each index with a given threshold and returns any indexes that exceed the threshold.\n\n* [Get MongoDB large databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_compare_disk_size_to_threshold/README.md): This action compares the total disk size used by MongoDB to a given threshold.\n\n* [MongoDB Count Documents](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_count_documents/README.md): MongoDB Count Documents\n\n* [MongoDB Create Collection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_collection/README.md): MongoDB Create Collection\n\n* [MongoDB Create Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_database/README.md): MongoDB Create Database\n\n* [Delete collection from MongoDB database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_collection/README.md): Delete collection from MongoDB database\n\n* [MongoDB Delete Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_database/README.md): 
MongoDB Delete Database\n\n* [MongoDB Delete Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_document/README.md): MongoDB Delete Document\n\n* [MongoDB Distinct Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_distinct_command/README.md): MongoDB Distinct Command\n\n* [MongoDB Find Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_document/README.md): MongoDB Find Document\n\n* [MongoDB Find One](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_one/README.md): MongoDB Find One returns a single entry that matches the query.\n\n* [Get MongoDB Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_handle/README.md): Get MongoDB Handle\n\n* [MongoDB get metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_metrics/README.md): This action retrieves various metrics such as index size, disk size per collection for all databases and collections.\n\n* [Get Mongo Server Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_server_status/README.md): Get Mongo Server Status and check for any abnormalities.\n\n* [MongoDB Insert Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_insert_document/README.md): MongoDB Insert Document\n\n* [MongoDB kill queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_kill_queries/README.md): MongoDB kill queries\n\n* [Get list of collections in MongoDB Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_collections/README.md): Get list of collections in MongoDB Database\n\n* [Get list of MongoDB 
Databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_databases/README.md): Get list of MongoDB Databases\n\n* [MongoDB list queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_queries/README.md): MongoDB list queries\n\n* [MongoDB Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_read_query/README.md): MongoDB Read Query\n\n* [MongoDB remove a field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_remove_field_in_collections/README.md): MongoDB remove a field in all collections\n\n* [MongoDB Rename Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_rename_database/README.md): MongoDB Rename Database\n\n* [MongoDB Update Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_update_document/README.md): MongoDB Update Document\n\n* [MongoDB Upsert Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_write_query/README.md): MongoDB Upsert Query\n\n* [Get MS-SQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_get_handle/README.md): Get MS-SQL Handle\n\n"
  },
  {
    "path": "lists/action_MONGODB_CLUSTER.md",
    "content": "* [MongoDB Atlas cluster cloud backup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_atlas_cluster_backup/README.md): Trigger on-demand Atlas cloud backup\n\n"
  },
  {
    "path": "lists/action_MONGODB_COLLECTION.md",
    "content": "* [MongoDB add new field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_add_new_field_in_collections/README.md): MongoDB add new field in all collections\n\n* [MongoDB Count Documents](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_count_documents/README.md): MongoDB Count Documents\n\n* [MongoDB Create Collection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_collection/README.md): MongoDB Create Collection\n\n* [Delete collection from MongoDB database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_collection/README.md): Delete collection from MongoDB database\n\n* [MongoDB Delete Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_document/README.md): MongoDB Delete Document\n\n* [MongoDB Find Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_document/README.md): MongoDB Find Document\n\n* [MongoDB Find One](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_one/README.md): MongoDB Find One returns a single entry that matches the query.\n\n* [Get MongoDB Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_handle/README.md): Get MongoDB Handle\n\n* [MongoDB Insert Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_insert_document/README.md): MongoDB Insert Document\n\n* [Get list of collections in MongoDB Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_collections/README.md): Get list of collections in MongoDB Database\n\n* [MongoDB remove a field in all 
collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_remove_field_in_collections/README.md): MongoDB remove a field in all collections\n\n* [MongoDB Update Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_update_document/README.md): MongoDB Update Document\n\n"
  },
  {
    "path": "lists/action_MONGODB_DOCUMENT.md",
    "content": "* [MongoDB Count Documents](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_count_documents/README.md): MongoDB Count Documents\n\n* [MongoDB Delete Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_document/README.md): MongoDB Delete Document\n\n* [MongoDB Find Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_document/README.md): MongoDB Find Document\n\n* [MongoDB Find One](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_one/README.md): MongoDB Find One returns a single entry that matches the query.\n\n* [Get MongoDB Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_handle/README.md): Get MongoDB Handle\n\n* [MongoDB Insert Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_insert_document/README.md): MongoDB Insert Document\n\n* [MongoDB Update Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_update_document/README.md): MongoDB Update Document\n\n"
  },
  {
    "path": "lists/action_MONGODB_QUERY.md",
    "content": "* [MongoDB kill queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_kill_queries/README.md): MongoDB kill queries\n\n* [MongoDB list queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_queries/README.md): MongoDB list queries\n\n* [MongoDB Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_read_query/README.md): MongoDB Read Query\n\n* [MongoDB Upsert Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_write_query/README.md): MongoDB Upsert Query\n\n* [Get MS-SQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_get_handle/README.md): Get MS-SQL Handle\n\n"
  },
  {
    "path": "lists/action_MSSQL.md",
    "content": "* [MS-SQL Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_read_query/README.md): MS-SQL Read Query\n\n* [MS-SQL Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_write_query/README.md): MS-SQL Write Query\n\n* [Get MySQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_handle/README.md): Get MySQL Handle\n\n"
  },
  {
    "path": "lists/action_MSSQL_QUERY.md",
    "content": "* [MS-SQL Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_read_query/README.md): MS-SQL Read Query\n\n* [MS-SQL Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_write_query/README.md): MS-SQL Write Query\n\n* [Get MySQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_handle/README.md): Get MySQL Handle\n\n"
  },
  {
    "path": "lists/action_MYSQL.md",
    "content": "* [MySQL Get Long Running Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_long_run_queries/README.md): MySQL Get Long Running Queries\n\n* [MySQL Kill Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_kill_query/README.md): MySQL Kill Query\n\n* [Run MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_read_query/README.md): Run MySQL Query\n\n* [Create a MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_write_query/README.md): Create a MySQL Query\n\n* [Netbox Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_get_handle/README.md): Get Netbox Handle\n\n"
  },
  {
    "path": "lists/action_MYSQL_QUERY.md",
    "content": "* [MySQL Get Long Running Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_long_run_queries/README.md): MySQL Get Long Running Queries\n\n* [MySQL Kill Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_kill_query/README.md): MySQL Kill Query\n\n* [Run MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_read_query/README.md): Run MySQL Query\n\n* [Create a MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_write_query/README.md): Create a MySQL Query\n\n* [Netbox Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_get_handle/README.md): Get Netbox Handle\n\n"
  },
  {
    "path": "lists/action_NETBOX.md",
    "content": "* [Netbox List Devices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_list_devices/README.md): List all Netbox devices\n\n* [Nomad Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_get_handle/README.md): Get Nomad Handle\n\n"
  },
  {
    "path": "lists/action_NOMAD.md",
    "content": "* [Nomad List Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_list_jobs/README.md): List all Nomad jobs\n\n* [Get Opsgenie Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Opsgenie/legos/opsgenie_get_handle/README.md): Get Opsgenie Handle\n\n"
  },
  {
    "path": "lists/action_OPENSEARCH.md",
    "content": "* [Opensearch Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/opensearch/legos/opensearch_get_handle/README.md): Opensearch Get Handle\n\n* [Opensearch search](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/opensearch/legos/opensearch_search/README.md): Opensearch Search\n\n"
  },
  {
    "path": "lists/action_PINGDOM.md",
    "content": "* [Create new maintenance window.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_create_new_maintenance_window/README.md): Create new maintenance window.\n\n* [Perform Pingdom single check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_do_single_check/README.md): Perform Pingdom Single Check\n\n* [Get Pingdom Analysis Results for a specified Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_analysis/README.md): Get Pingdom Analysis Results for a specified Check\n\n* [Get list of checkIDs given a hostname](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids/README.md): Get list of checkIDs given a hostname. If no hostname is provided, it lists all checkIDs.\n\n* [Get list of checkIDs given a name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids_by_name/README.md): Get list of checkIDs given a name. If no name is given, it lists all checkIDs. 
If transaction is set to true, it returns transaction checkIDs\n\n* [Get Pingdom Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_handle/README.md): Get Pingdom Handle\n\n* [Pingdom Get Maintenance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_maintenance/README.md): Pingdom Get Maintenance\n\n* [Get Pingdom Results](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_results/README.md): Get Pingdom Results\n\n* [Get Pingdom TMS Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_tmscheck/README.md): Get Pingdom TMS Check\n\n* [Pingdom lego to pause/unpause checkids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_pause_or_unpause_checkids/README.md): Pingdom lego to pause/unpause checkids\n\n* [Perform Pingdom Traceroute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_traceroute/README.md): Perform Pingdom Traceroute\n\n"
  },
  {
    "path": "lists/action_POSTGRESQL.md",
    "content": "* [PostgreSQL Calculate Bloat](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgres_calculate_bloat/README.md): This Lego calculates bloat for tables in Postgres\n\n* [Calling a PostgreSQL function](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_call_function/README.md): Calling a PostgreSQL function\n\n* [PostgreSQL Check Unused Indexes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_check_unused_indexes/README.md): Find unused indexes in a database in PostgreSQL\n\n* [Create Tables in PostgreSQL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_create_table/README.md): Create Tables in PostgreSQL\n\n* [Delete PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_delete_query/README.md): Delete PostgreSQL Query\n\n* [PostgreSQL Get Cache Hit Ratio](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_cache_hit_ratio/README.md): The result of the action will show the total number of blocks read from disk, the total number of blocks found in the buffer cache, and the cache hit ratio as a percentage. 
For example, if the cache hit ratio is 99%, it means that 99% of all data requests were served from the buffer cache, and only 1% required reading data from disk.\n\n* [Get PostgreSQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_handle/README.md): Get PostgreSQL Handle\n\n* [PostgreSQL Get Index Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_index_usage/README.md): The action result shows the table name, the percentage of times an index was used for that table, and the number of live rows in the table.\n\n* [PostgreSQL get service status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_server_status/README.md): This action checks the status of each database.\n\n* [Execute commands in a PostgreSQL transaction.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_handling_transaction/README.md): Given a set of PostgreSQL commands, this action runs them inside a transaction.\n\n* [Long Running PostgreSQL Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_long_running_queries/README.md): Long Running PostgreSQL Queries\n\n* [Read PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_read_query/README.md): Read PostgreSQL Query\n\n* [Show tables in PostgreSQL Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_show_tables/README.md): Show the tables existing in a PostgreSQL Database. 
We execute the following query to fetch this information SELECT * FROM pg_catalog.pg_tables WHERE schemaname != 'pg_catalog' AND schemaname != 'information_schema';\n\n* [Call PostgreSQL Stored Procedure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_stored_procedures/README.md): Call PostgreSQL Stored Procedure\n\n* [Write PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_write_query/README.md): Write PostgreSQL Query\n\n"
  },
  {
    "path": "lists/action_POSTGRESQL_QUERY.md",
    "content": "* [Calling a PostgreSQL function](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_call_function/README.md): Calling a PostgreSQL function\n\n* [Delete PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_delete_query/README.md): Delete PostgreSQL Query\n\n* [Long Running PostgreSQL Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_long_running_queries/README.md): Long Running PostgreSQL Queries\n\n* [Read PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_read_query/README.md): Read PostgreSQL Query\n\n* [Call PostgreSQL Stored Procedure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_stored_procedures/README.md): Call PostgreSQL Stored Procedure\n\n* [Write PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_write_query/README.md): Write PostgreSQL Query\n\n"
  },
  {
    "path": "lists/action_POSTGRESQL_TABLE.md",
    "content": "* [Create Tables in PostgreSQL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_create_table/README.md): Create Tables in PostgreSQL\n\n* [Show tables in PostgreSQL Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_show_tables/README.md): Show the tables existing in a PostgreSQL Database. We execute the following query to fetch this information: SELECT * FROM pg_catalog.pg_tables WHERE schemaname != 'pg_catalog' AND schemaname != 'information_schema';\n\n"
  },
  {
    "path": "lists/action_PROMETHEUS.md",
    "content": "* [Get Prometheus rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_alerts_list/README.md): Get Prometheus rules\n\n* [Get All Prometheus Metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_all_metrics/README.md): Get All Prometheus Metrics\n\n* [Get Prometheus handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_handle/README.md): Get Prometheus handle\n\n* [Get Prometheus Metric Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_metric_statistics/README.md): Get Prometheus Metric Statistics\n\n"
  },
  {
    "path": "lists/action_REDIS.md",
    "content": "* [Delete All Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_all_keys/README.md): Delete All Redis keys\n\n* [Delete Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_keys/README.md): Delete Redis keys matching pattern\n\n* [Delete Redis Unused keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_stale_keys/README.md): Delete Redis Unused keys given a time threshold in seconds\n\n* [Get Redis cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_cluster_health/README.md): This action gets the Redis cluster health.\n\n* [Get Redis Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_handle/README.md): Get Redis Handle\n\n* [Get Redis keys count](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_keys_count/README.md): Get Redis keys count matching pattern (default: '*')\n\n* [Get Redis metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_metrics/README.md): This action fetches Redis metrics such as index size and memory utilization.\n\n* [List Redis Large keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_list_large_keys/README.md): Find Redis Large keys given a size threshold in bytes\n\n* [Get REST handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_get_handle/README.md): Get REST handle\n\n"
  },
  {
    "path": "lists/action_REST.md",
    "content": "* [Call REST Methods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_methods/README.md): Call REST Methods.\n\n"
  },
  {
    "path": "lists/action_SALESFORCE.md",
    "content": "* [Assign Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_assign_case/README.md): Assign a Salesforce case\n\n* [Change Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_case_change_status/README.md): Change Salesforce Case Status\n\n* [Create Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_create_case/README.md): Create a Salesforce case\n\n* [Delete Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_delete_case/README.md): Delete a Salesforce case\n\n* [Get Salesforce Case Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case/README.md): Get a Salesforce case info\n\n* [Get Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case_status/README.md): Get a Salesforce case status\n\n* [Get Salesforce handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_handle/README.md): Get Salesforce handle\n\n* [Search Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_search_case/README.md): Search a Salesforce case\n\n* [Update Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_update_case/README.md): Update a Salesforce case\n\n* [Create Slack Channel and Invite Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_create_channel_invite_users/README.md): Create a Slack Channel with given name, and invite a list of userIds to the channel.\n\n* [Get Slack SDK 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_get_handle/README.md): Get Slack SDK Handle\n\n* [Slack Lookup User by Email](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_lookup_user_by_email/README.md): Given an email address, find the Slack user in the workspace.\n You can then extract their profile picture, or retrieve their user ID (which you can use to send messages) from the output.\n\n"
  },
  {
    "path": "lists/action_SECOPS.md",
    "content": "* [Apply AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_apply_default_encryption_for_s3_buckets/README.md): Apply AWS Default Encryption for S3 Bucket\n\n* [AWS Attach New Policy to User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_iam_policy/README.md): AWS Attach New Policy to User\n\n* [AWS Change ACL Permission of public S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_change_acl_permissions_of_buckets/README.md): AWS Change ACL Permission of public S3 Bucket\n\n* [AWS Check if RDS instances are not M5 or T3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_rds_non_m5_t3_instances/README.md): AWS Check if RDS instances are not M5 or T3\n\n* [Check SSL Certificate Expiry](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_check_ssl_certificate_expiry/README.md): Check ACM SSL Certificate expiry date\n\n* [AWS Create IAM Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_IAMpolicy/README.md): Given an AWS policy (as a string), and the name for the policy, this will create an IAM policy.\n\n* [AWS Create Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_access_key/README.md): Create a new Access Key for the User\n\n* [Create New IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_iam_user/README.md): Create New IAM User\n\n* [AWS Redshift Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_redshift_query/README.md): Make a SQL Query to the given AWS Redshift database\n\n* [Create Login profile for IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_user_login_profile/README.md): Create Login 
profile for IAM User\n\n* [AWS Delete Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_access_key/README.md): Delete an Access Key for a User\n\n* [Delete AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_s3_bucket_encryption/README.md): Delete AWS Default Encryption for S3 Bucket\n\n* [Filter AWS EBS Volume with Low IOPS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_volumes_with_low_iops/README.md): IOPS (Input/Output Operations Per Second) is a metric used to measure the amount of input/output operations that an EBS volume can perform per second.\n\n* [Get AWS public S3 Buckets using ACL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_public_s3_buckets_by_acl/README.md): Get AWS public S3 Buckets using ACL\n\n* [Filter AWS Target groups by tag name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_target_groups_by_tags/README.md): Filter AWS Target groups which have the provided tag attached to them. 
It also returns the value of that tag for each target group\n\n* [Filter AWS Unencrypted S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unencrypted_s3_buckets/README.md): Filter AWS Unencrypted S3 Buckets\n\n* [AWS Filter Unused Log Stream](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_log_streams/README.md): This action lists all log streams, across all log groups, that have been unused for longer than the given threshold.\n\n* [AWS Find Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_nat_gateway/README.md): This action gets all of the NAT gateways that have zero traffic.\n\n* [AWS Find Low Connections RDS instances Per Day](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_low_connection_rds_instances/README.md): This action will find RDS DB instances with a number of connections below the specified minimum in the specified region.\n\n* [AWS Find EMR Clusters of Old Generation Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_old_gen_emr_clusters/README.md): This action lists EMR clusters of old-generation instances.\n\n* [Get AWS CloudWatch Alarms List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alarms_list/README.md): Get AWS CloudWatch Alarms List\n\n* [Get AWS ALB Listeners Without HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_alb_listeners_without_http_redirect/README.md): Get AWS ALB Listeners Without HTTP Redirection\n\n* [Get All AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_ec2_instances/README.md): Use This Action to Get All AWS EC2 Instances\n\n* [AWS Get All Service Names 
v3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_service_names/README.md): Get a list of all service names in a region\n\n* [AWS Get EBS Volumes for Low Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volume_for_low_usage/README.md): This action lists low-use AWS volumes that used <10% capacity over the given threshold days.\n\n* [Get AWS EBS Volume Without GP3 Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_without_gp3_type/README.md): AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\n\n* [AWS ECS Instances without AutoScaling policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_instances_without_autoscaling/README.md): AWS ECS Instances without AutoScaling policy.\n\n* [AWS ECS Services without AutoScaling policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ecs_services_without_autoscaling/README.md): AWS ECS Services without AutoScaling policy.\n\n* [AWS Get Idle EMR Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_idle_emr_clusters/README.md): This action lists EMR clusters that have been idle for more than the specified time.\n\n* [Get all Targets for Network Load Balancer (NLB)](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlb_targets/README.md): Use this action to get all targets for Network Load Balancer (NLB)\n\n* [AWS Get Network Load Balancer (NLB) without Targets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_nlbs_without_targets/README.md): Use this action to get AWS Network Load Balancer (NLB) without Targets\n\n* [AWS Get Publicly Accessible RDS 
Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_instances/README.md): AWS Get Publicly Accessible RDS Instances\n\n* [AWS Get Publicly Accessible DB Snapshots in RDS](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_publicly_accessible_db_snapshots/README.md): AWS Get Publicly Accessible DB Snapshots in RDS\n\n* [Get secrets from Secrets Manager](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secret_from_secretmanager/README.md): Get secrets from AWS Secrets Manager\n\n* [AWS Get Secrets Manager Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secret/README.md): Get string (of JSON) containing Secret details\n\n* [AWS Get Secrets Manager SecretARN](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secretARN/README.md): Given a Secret Name - this Action returns the Secret ARN\n\n* [Get AWS Security Group Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_security_group_details/README.md): Get details about a security group, given its ID.\n\n* [AWS Get IAM Users with Old Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_users_with_old_access_keys/README.md): This Lego collects the access keys that have never been used, or the access keys that have been used but are older than the threshold.\n\n* [AWS List Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_access_keys/README.md): List all Access Keys for the User\n\n* [List Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_acm_certificates/README.md): List All Expiring ACM Certificates\n\n* [AWS List Unused 
Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unused_secrets/README.md): This action lists all the unused secrets from AWS by comparing the last used date with the given threshold.\n\n* [AWS List IAM Users With Old Passwords](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_users_with_old_passwords/README.md): This Lego fetches all the IAM users' login profiles and, where a login profile is available, lists users whose last password change is older than the given threshold.\n\n* [GCP Add Member to IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_member_to_iam_role/README.md): Add a member to an existing IAM role\n\n* [GCP Add Role to Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_role_to_service_account/README.md): Add a role and member to the service account\n\n* [List GCP Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_secrets/README.md): List of your GCP Secrets\n\n* [GCP List Service Accounts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_service_accounts/README.md): GCP List Service Accounts\n\n* [GCP Remove Member from IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_member_from_iam_role/README.md): Remove member from the chosen IAM role.\n\n* [GCP Remove Role from Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_role_from_service_account/README.md): Remove role and member from the service account\n\n"
  },
  {
    "path": "lists/action_SLACK.md",
    "content": "* [Post Slack Image](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_image/README.md): Post Slack Image\n\n* [Post Slack Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_message/README.md): Post Slack Message\n\n* [Slack Send DM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_send_DM/README.md): Given a list of Slack IDs, this Action will create a DM (one user) or group chat (multiple users), and send a message to the chat\n\n"
  },
  {
    "path": "lists/action_SNOWFLAKE.md",
    "content": "* [Snowflake Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_read_query/README.md): Snowflake Read Query\n\n* [Snowflake Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_write_query/README.md): Snowflake Write Query\n\n"
  },
  {
    "path": "lists/action_SPLUNK.md",
    "content": "* [Get Splunk SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Splunk/legos/splunk_get_handle/README.md): Get Splunk SDK Handle\n\n"
  },
  {
    "path": "lists/action_SRE.md",
    "content": "* [Apply AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_apply_default_encryption_for_s3_buckets/README.md): Apply AWS Default Encryption for S3 Bucket\n\n* [Attach an EBS volume to an AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_ebs_to_instances/README.md): Attach an EBS volume to an AWS EC2 Instance\n\n* [AWS Attach New Policy to User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_attach_iam_policy/README.md): AWS Attach New Policy to User\n\n* [Attach a webhook endpoint to AWS Cloudwatch alarm](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_cloudwatch_attach_webhook_notification_to_alarm/README.md): Attach a webhook endpoint to one of the SNS topics attached to the AWS Cloudwatch alarm.\n\n* [AWS Create IAM Policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_IAMpolicy/README.md): Given an AWS policy (as a string), and the name for the policy, this will create an IAM policy.\n\n* [AWS Create Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_access_key/README.md): Create a new Access Key for the User\n\n* [Create AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_bucket/README.md): Create a new AWS S3 Bucket\n\n* [Create New IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_iam_user/README.md): Create New IAM User\n\n* [AWS Redshift Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_redshift_query/README.md): Make a SQL Query to the given AWS Redshift database\n\n* [Create Login profile for IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_user_login_profile/README.md): 
Create Login profile for IAM User\n\n* [AWS Create Snapshot For Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_create_volumes_snapshot/README.md): Create a snapshot for EBS volume of the EC2 Instance for backing up the data stored in EBS\n\n* [AWS Delete Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_access_key/README.md): Delete an Access Key for a User\n\n* [Delete AWS Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_bucket/README.md): Delete an AWS S3 Bucket\n\n* [AWS Delete Classic Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_classic_load_balancer/README.md): Delete Classic Elastic Load Balancers\n\n* [AWS Delete EBS Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ebs_snapshot/README.md): Delete EBS Snapshot for an EC2 instance\n\n* [AWS Delete ECS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_ecs_cluster/README.md): Delete AWS ECS Cluster\n\n* [AWS Delete Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_load_balancer/README.md): AWS Delete Load Balancer\n\n* [AWS Delete Log Stream](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_log_stream/README.md): AWS Delete Log Stream\n\n* [AWS Delete NAT Gateway](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_nat_gateway/README.md): AWS Delete NAT Gateway\n\n* [AWS Delete RDS Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_rds_instance/README.md): Delete AWS RDS Instance\n\n* [AWS Delete Redshift Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_redshift_cluster/README.md): Delete AWS 
Redshift Cluster\n\n* [AWS Delete Route 53 HealthCheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_route53_health_check/README.md): AWS Delete Route 53 HealthCheck\n\n* [Delete AWS Default Encryption for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_s3_bucket_encryption/README.md): Delete AWS Default Encryption for S3 Bucket\n\n* [AWS Delete Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_secret/README.md): AWS Delete Secret\n\n* [Delete AWS EBS Volume by Volume ID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_delete_volume_by_id/README.md): Delete AWS Volume by Volume ID\n\n* [ Deregisters AWS Instances from a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_deregister_instances/README.md):  Deregisters AWS Instances from a Load Balancer\n\n* [AWS Describe Cloudtrails ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_describe_cloudtrail/README.md): Given an AWS Region, this Action returns a Dict with all of the Cloudtrail logs being recorded\n\n* [ Detach an AWS Instance from an Elastic Block Store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_ebs_to_instances/README.md):  Detach an AWS Instance from an Elastic Block Store.\n\n* [AWS Detach Instances From AutoScaling Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_detach_instances_from_autoscaling_group/README.md): Use this Action to detach instances from an AWS AutoScaling Group\n\n* [EBS Modify Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ebs_modify_volume/README.md): Modify/Resize volume for Elastic Block Storage (EBS).\n\n* [AWS ECS Describe Task 
Definition.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_describe_task_definition/README.md): Describe AWS ECS Task Definition.\n\n* [ECS detect failed deployment ](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_detect_failed_deployment/README.md): List of stopped tasks, associated with a deployment, along with their stopped reason\n\n* [Restart AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_service_restart/README.md): Restart an AWS ECS Service\n\n* [Update AWS ECS Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_ecs_update_service/README.md): Update AWS ECS Service\n\n* [ Copy EKS Pod logs to bucket.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_copy_pod_logs_to_bucket/README.md):  Copy given EKS pod logs to given S3 Bucket.\n\n* [ Delete EKS POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_delete_pod/README.md):  Delete an EKS POD in a given Namespace\n\n* [List of EKS dead pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_dead_pods/README.md): Get list of all dead pods in a given EKS cluster\n\n* [List of EKS Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_namespaces/README.md): Get list of all Namespaces in a given EKS cluster\n\n* [List of EKS pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_all_pods/README.md): Get list of all pods in a given EKS cluster\n\n* [ List of EKS deployment for given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_deployments_name/README.md):  Get list of EKS deployment names for given Namespace\n\n* [Get CPU and memory utilization of 
node.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_node_cpu_memory/README.md):  Get CPU and memory utilization of given node.\n\n* [ Get EKS Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_nodes/README.md):  Get EKS Nodes\n\n* [ List of EKS pods not in RUNNING State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_not_running_pods/README.md):  Get list of all pods in a given EKS cluster that are not running.\n\n* [Get pod CPU and Memory usage from given namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_cpu_memory/README.md): Get all pod CPU and Memory usage from given namespace\n\n* [ EKS Get pod status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_pod_status/README.md):  Get a Status of given POD in a given Namespace and EKS cluster name\n\n* [ EKS Get Running Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_get_running_pods/README.md):  Get a list of running pods from given namespace and EKS cluster name\n\n* [ Run Kubectl commands on EKS Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_eks_run_kubectl_cmd/README.md): This action runs a kubectl command on an AWS EKS Cluster\n\n* [Get AWS EMR Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_emr_get_instances/README.md): Get a list of EC2 Instances for an EMR cluster. 
Filtered by node type (MASTER|CORE|TASK)\n\n* [Run Command via AWS CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_cli_command/README.md): Execute command using AWS CLI\n\n* [ Run Command via SSM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_execute_command_ssm/README.md):  Execute command on EC2 instance(s) using SSM\n\n* [AWS Filter All Manual Database Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_all_manual_database_snapshots/README.md): Use this Action to filter all AWS manual database snapshots\n\n* [Filter AWS Unattached EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ebs_unattached_volumes/README.md): Filter AWS Unattached EBS Volume\n\n* [Filter AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_tags/README.md): Filter AWS EC2 Instance\n\n* [Filter AWS EC2 instance by VPC Ids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_by_vpc/README.md): Use this Action to Filter AWS EC2 Instances by VPC Ids\n\n* [Filter All AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_instances/README.md): Filter All AWS EC2 Instance\n\n* [Filter AWS EC2 Instances Without Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_ec2_without_lifetime_tag/README.md): Filter AWS EC2 Instances Without Lifetime Tag\n\n* [AWS Filter Large EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_large_ec2_instances/README.md): This Action filters all instances whose instanceType contains Large or xLarge, and that DO NOT have the largetag key/value.\n\n* [AWS Find Long Running EC2 
Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_long_running_instances/README.md): This action lists all instances that are older than the threshold\n\n* [AWS Filter Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_old_ebs_snapshots/README.md): This action lists details of all snapshots that are older than the threshold\n\n* [Get Unhealthy instances from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unhealthy_instances_from_asg/README.md): Get Unhealthy instances from Auto Scaling Group\n\n* [Filter AWS Unused Keypairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_filter_unused_keypairs/README.md): Filter AWS Unused Keypairs\n\n* [AWS Find Idle Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_idle_instances/README.md): Find Idle EC2 instances\n\n* [AWS Filter Lambdas with Long Runtime](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_long_running_lambdas/README.md): This action retrieves a list of all Lambda functions and searches for log events for each function for a given runtime (duration).\n\n* [AWS Find RDS Instances with low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_rds_instances_with_low_cpu_utilization/README.md): This lego finds RDS instances that are not utilizing their CPU resources to their full potential.\n\n* [AWS Find Redshift Cluster without Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_find_redshift_cluster_without_pause_resume_enabled/README.md): Use this Action to find Redshift clusters for which Pause/Resume is not enabled\n\n* [AWS Get All Load 
Balancers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_all_load_balancers/README.md): AWS Get All Load Balancers\n\n* [AWS Get Costs For All Services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_all_services/README.md): Get Costs for all AWS services in a given time period.\n\n* [AWS Get Costs For Data Transfer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_cost_for_data_transfer/README.md): Get daily cost for Data Transfer in AWS\n\n* [AWS Get Daily Total Spend](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_daily_total_spend/README.md): AWS get daily total spend from Cost Explorer\n\n* [Get EBS Volumes By Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ebs_volumes_by_type/README.md): Get EBS Volumes By Type\n\n* [AWS List IAM users without password policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_iam_users_without_password_policies/README.md): Get a list of all IAM users that have no password policy attached to them.\n\n* [Get AWS Lambdas With High Error Rate](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_lambdas_with_high_error_rate/README.md): Get AWS Lambda Functions that exceed a given threshold error rate.\n\n* [AWS Get Long Running RDS Instances Without Reserved Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_rds_instances_without_reserved_instances/README.md): This action gets information about long running instances and their status, and checks if they have any reserved nodes associated with them.\n\n* [AWS Get Long Running Redshift Clusters Without Reserved 
Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_long_running_redshift_clusters_without_reserved_nodes/README.md): This action gets information about running clusters and their status, and checks if they have any reserved nodes associated with them.\n\n* [AWS Get Older Generation RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_older_generation_rds_instances/README.md): AWS Get Older Generation RDS Instances action retrieves information about RDS instances using older generation instance types.\n\n* [AWS Get Private Address from NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_private_address_from_nat_gateways/README.md): This action is used to get private addresses from NAT gateways.\n\n* [AWS Get EC2 Instances About To Be Retired](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_reserved_instances_about_to_retired/README.md): AWS Get EC2 Instances About To Be Retired\n\n* [AWS Get Resources Missing Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_missing_tag/README.md): Gets a list of all AWS resources that are missing the tag in the input parameters.\n\n* [AWS Get Resources With Expiration Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_expiration_tag/README.md): AWS Get all Resources with an expiration tag\n\n* [AWS Get Resources With Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_resources_with_tag/README.md): For a given tag and region, get every AWS resource with that tag.\n\n* [Get Schedule To Retire AWS EC2 Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_schedule_to_retire_instances/README.md): Get Schedule To Retire AWS EC2 Instance\n\n* [ Get secrets from 
secretsmanager](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secret_from_secretmanager/README.md):  Get secrets from AWS secretsmanager\n\n* [AWS Get Secrets Manager Secret](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secret/README.md): Get string (of JSON) containing Secret details\n\n* [AWS Get Secrets Manager SecretARN](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_secrets_manager_secretARN/README.md): Given a Secret Name - this Action returns the Secret ARN\n\n* [Get AWS Security Group Details](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_security_group_details/README.md): Get details about a security group, given its ID.\n\n* [AWS Get Service Quota for a Specific ServiceName](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quota_details/README.md): Given an AWS Region, Service Code and Quota Code, this Action will output the quota information for the specified service.\n\n* [AWS Get Quotas for a Service](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_service_quotas/README.md): Given inputs of the AWS Region, and the Service_Code for a service, this Action will output all of the Service Quotas and limits.\n\n* [Get Stopped Instance Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_stopped_instance_volumes/README.md): This action helps to list the volumes that are attached to stopped instances.\n\n* [Get STS Caller Identity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_sts_caller_identity/README.md): Get STS Caller Identity\n\n* [AWS Get Tags of All Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_tags_of_all_resources/README.md): AWS Get Tags of All Resources\n\n* [Get 
Timed Out AWS Lambdas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_timed_out_lambdas/README.md): Get AWS Lambda functions that have exceeded the maximum amount of time in seconds that a Lambda function can run.\n\n* [AWS Get TTL For Route53 Records](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_for_route53_records/README.md): Get TTL for Route53 records for a hosted zone.\n\n* [AWS: Check for short Route 53 TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_ttl_under_given_hours/README.md): AWS: Check for short Route 53 TTL\n\n* [Get UnHealthy EC2 Instances for Classic ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances/README.md): Get UnHealthy EC2 Instances for Classic ELB\n\n* [Get Unhealthy instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unhealthy_instances_from_elb/README.md): Get Unhealthy instances from Elastic Load Balancer\n\n* [AWS get Unused Route53 Health Checks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_unused_route53_health_checks/README.md): AWS get Unused Route53 Health Checks\n\n* [AWS Get IAM Users with Old Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_users_with_old_access_keys/README.md): This Lego collects the access keys that have never been used or the access keys that have been used but are older than the threshold.\n\n* [Launch AWS EC2 Instance From an AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_launch_instance_from_ami/README.md): Use this Action to launch an AWS EC2 instance from an AMI\n\n* [AWS List Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_access_keys/README.md): List all Access Keys for the User\n\n* 
[AWS List All IAM Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_iam_users/README.md): List all AWS IAM Users\n\n* [AWS List All Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_all_regions/README.md): List all available AWS Regions\n\n* [AWS List Application LoadBalancers ARNs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_application_loadbalancers/README.md): AWS List Application LoadBalancers ARNs\n\n* [AWS List Attached User Policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_attached_user_policies/README.md): AWS List Attached User Policies\n\n* [AWS List ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_clusters_with_low_utilization/README.md): This action searches for clusters that have low CPU utilization.\n\n* [AWS List Expiring Access Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_access_keys/README.md): List Expiring IAM User Access Keys\n\n* [List Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_expiring_acm_certificates/README.md): List All Expiring ACM Certificates\n\n* [AWS List Hosted Zones](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_hosted_zones/README.md): List all AWS Hosted zones\n\n* [AWS List Unattached Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unattached_elastic_ips/README.md): This action lists Elastic IP addresses and checks whether each is associated with an instance or network interface.\n\n* [AWS List Unhealthy Instances in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unhealthy_instances_in_target_group/README.md): 
List Unhealthy Instances in a target group\n\n* [AWS List IAM Users With Old Passwords](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_users_with_old_passwords/README.md): This Lego gets all the IAM users' login profiles and, where a login profile is available, lists the users whose last password change is older than the given threshold.\n\n* [AWS List Instances behind a Load Balancer.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_loadbalancer_list_instances/README.md): List AWS Instances behind a Load Balancer\n\n* [Make AWS Bucket Public](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_make_bucket_public/README.md): Make an AWS Bucket Public!\n\n* [AWS Modify EBS Volume to GP3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_ebs_volume_to_gp3/README.md): AWS recently introduced the General Purpose SSD (gp3) volume type, which is designed to provide higher IOPS performance at a lower cost than the gp2 volume type.\n\n* [AWS Modify ALB Listeners HTTP Redirection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_listener_for_http_redirection/README.md): AWS Modify ALB Listeners HTTP Redirection\n\n* [AWS Modify Publicly Accessible RDS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_modify_public_db_snapshots/README.md): AWS Modify Publicly Accessible RDS Snapshots\n\n* [Get AWS Postgresql Max Configured Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_get_configured_max_connections/README.md): Get AWS Postgresql Max Configured Connections\n\n* [Plot AWS PostgreSQL Active Connections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_postgresql_plot_active_connections/README.md): Plot AWS PostgreSQL Active 
Connections\n\n* [AWS Purchase ElastiCache Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_elasticcache_reserved_node/README.md): This action purchases a reserved cache node offering.\n\n* [AWS Purchase RDS Reserved Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_rds_reserved_instance/README.md): This action purchases a reserved DB instance offering.\n\n* [AWS Purchase Redshift Reserved Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_purchase_redshift_reserved_node/README.md): This action purchases reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. You can purchase one or more of the offerings.\n\n* [ Apply CORS Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_cors/README.md):  Apply CORS Policy for S3 Bucket\n\n* [Apply AWS New Policy for S3 Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_put_bucket_policy/README.md): Apply a New AWS Policy for S3 Bucket\n\n* [Read AWS S3 Object](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_read_object/README.md): Read an AWS S3 Object\n\n* [ Register AWS Instances with a Load Balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_register_instances/README.md):  Register AWS Instances with a Load Balancer\n\n* [AWS Release Elastic IP](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_release_elastic_ip/README.md): AWS Release Elastic IP for both VPC and Standard\n\n* [Renew Expiring ACM Certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_renew_expiring_acm_certificates/README.md): Renew Expiring ACM Certificates\n\n* 
[AWS_Request_Service_Quota_Increase](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_request_service_quota_increase/README.md): Given an AWS Region, Service Code, quota code and a new value for the quota, this Action sends a request to AWS for a new value. Your Connector must have servicequotas:RequestServiceQuotaIncrease enabled for this to work.\n\n* [Restart AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_restart_ec2_instances/README.md): Restart AWS EC2 Instances\n\n* [AWS Revoke Policy from IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_revoke_policy_from_iam_user/README.md): AWS Revoke Policy from IAM User\n\n* [Start AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_run_instances/README.md): Start AWS EC2 Instances\n\n* [AWS Schedule Redshift Cluster Pause Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_schedule_pause_resume_enabled/README.md): AWS Schedule Redshift Cluster Pause Resume Enabled\n\n* [AWS Service Quota Limits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits/README.md): Input a List of Service Quotas, and get back which of your instances are above the warning percentage of the quota\n\n* [AWS VPC service quota limit](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_service_quota_limits_vpc/README.md): This Action queries all VPC Storage quotas, and returns all usage over warning_percentage.\n\n* [Stop AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_stop_instances/README.md): Stop an AWS Instance\n\n* [Tag AWS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_tag_ec2_instances/README.md): Tag AWS Instances\n\n* [AWS List Instances in an 
ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_instances/README.md): List AWS Instances in an ELBv2 Target Group\n\n* [ AWS List Unhealthy Instances in an ELBV2 Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_list_unhealthy_instances/README.md):  List AWS Unhealthy Instances in an ELBv2 Target Group\n\n* [AWS Register/Unregister Instances from a Target Group.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_target_group_register_unregister_instances/README.md): Register/Unregister AWS Instances from a Target Group\n\n* [Terminate AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_terminate_ec2_instances/README.md): This Action will Terminate AWS EC2 Instances\n\n* [AWS Update Access Key](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_access_key/README.md): Update status of the Access Key\n\n* [AWS Update TTL for Route53 Record](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_update_ttl_for_route53_records/README.md): Update TTL for an existing record in a hosted zone.\n\n* [Upload file to S3](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_upload_file_to_s3/README.md): Upload a local file to S3\n\n* [AWS_VPC_service_quota_warning](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_vpc_service_quota_warning/README.md): Given an AWS Region and a warning percentage, this Action queries all VPC quota limits, and returns any Quotas that are over the alert value.\n\n* [Get Status for given DAG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_check_dag_status/README.md): Get Status for given DAG\n\n* [Get Airflow 
handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_get_handle/README.md): Get Airflow handle\n\n* [List DAG runs for given DagID](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_list_DAG_runs/README.md): List DAG runs for given DagID\n\n* [Airflow trigger DAG run](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Airflow/legos/airflow_trigger_dag_run/README.md): Airflow trigger DAG run\n\n* [Get Azure Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Azure/legos/azure_get_handle/README.md): Get Azure Handle\n\n* [Datadog delete incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_delete_incident/README.md): Delete an incident given its id\n\n* [Datadog get event](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_event/README.md): Get an event given its id\n\n* [Get Datadog Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_handle/README.md): Get Datadog Handle\n\n* [Datadog get incident](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_incident/README.md): Get an incident given its id\n\n* [Datadog get metric metadata](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_metric_metadata/README.md): Get the metadata of a metric.\n\n* [Datadog get monitor](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitor/README.md): Get details about a monitor\n\n* [Datadog get monitorID given the name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_get_monitorid/README.md): Get monitorID given the name\n\n* [Datadog list active 
metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_active_metrics/README.md): Get the list of actively reporting metrics from a given time until now.\n\n* [Datadog list all monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_all_monitors/README.md): List all monitors\n\n* [Datadog list metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_list_metrics/README.md): Lists metrics from the last 24 hours in Datadog.\n\n* [Datadog mute/unmute monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_mute_or_unmute_alerts/README.md): Mute/unmute monitors\n\n* [Datadog query metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_query_metrics/README.md): Query timeseries points for a metric.\n\n* [Schedule downtime](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_schedule_downtime/README.md): Schedule downtime\n\n* [Datadog search monitors](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Datadog/legos/datadog_search_monitors/README.md): Search monitors in Datadog based on filters\n\n* [Elasticsearch Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_check_health_status/README.md): Elasticsearch Check Health Status\n\n* [Get large Elasticsearch Index size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_check_large_index_size/README.md): This action checks the sizes of all indices in the Elasticsearch cluster and compares them to a given threshold.\n\n* [Check Elasticsearch cluster disk size](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_compare_cluster_disk_size_to_threshold/README.md): This action 
compares the disk usage percentage of the Elasticsearch cluster to a given threshold.\n\n* [Elasticsearch Delete Unassigned Shards](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_delete_unassigned_shards/README.md): Elasticsearch Delete Corrupted/Lost Shards\n\n* [Elasticsearch Disable Shard Allocation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_disable_shard_allocation/README.md): Elasticsearch Disable Shard Allocation for any indices\n\n* [Elasticsearch Enable Shard Allocation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_enable_shard_allocation/README.md): Elasticsearch Enable Shard Allocation for any shards for any indices\n\n* [Elasticsearch Cluster Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_cluster_statistics/README.md): Elasticsearch Cluster Statistics fetches total index size, disk size, and memory utilization and information about the current nodes and shards that form the cluster\n\n* [Get Elasticsearch Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_handle/README.md): Get Elasticsearch Handle\n\n* [Get Elasticsearch index level health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_get_index_health/README.md): This action checks the health of a given Elasticsearch index or all indices if no specific index is provided.\n\n* [Elasticsearch List Allocations](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_list_allocations/README.md): Elasticsearch List Allocations in a Cluster\n\n* [Elasticsearch List Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_list_nodes/README.md): Elasticsearch 
List Nodes in a Cluster\n\n* [Elasticsearch search](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/legos/elasticsearch_search_query/README.md): Elasticsearch Search\n\n* [Add lifecycle policy to GCP storage bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_lifecycle_policy_to_bucket/README.md): The action adds a lifecycle policy to a Google Cloud Platform (GCP) storage bucket.\n\n* [GCP Add Member to IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_member_to_iam_role/README.md): Add a member to an existing IAM role\n\n* [GCP Add Role to Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_add_role_to_service_account/README.md): Add a role and member to a service account\n\n* [Create GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_bucket/README.md): Create a new GCP bucket in the given location\n\n* [Create a GCP disk snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_disk_snapshot/README.md): Create a GCP disk snapshot.\n\n* [Create GCP Filestore Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_filestore_instance/README.md): Create a new GCP Filestore Instance in the given location\n\n* [Create GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_gke_cluster/README.md): Create GKE Cluster\n\n* [GCP Create Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_create_service_account/README.md): GCP Create Service Account\n\n* [Delete GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_bucket/README.md): Delete a GCP bucket\n\n* [Delete GCP Filestore 
Instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_filestore_instance/README.md): Delete a GCP Filestore Instance in the given location\n\n* [Delete an Object from GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_object_from_bucket/README.md): Delete an Object/Blob from a GCP Bucket\n\n* [GCP Delete Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_delete_service_account/README.md): GCP Delete Service Account\n\n* [GCP Describe a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_describe_gke_cluster/README.md): GCP Describe a GKE cluster\n\n* [Fetch Objects from GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_fetch_objects_from_bucket/README.md): List all Objects in a GCP bucket\n\n* [Get GCP storage buckets without lifecycle policies](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_buckets_without_lifecycle_policies/README.md): The action retrieves a list of Google Cloud Platform (GCP) storage buckets that do not have any lifecycle policies applied.\n\n* [Get details of GCP forwarding rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_forwarding_rules_details/README.md): Get details of forwarding rules associated with a backend service.\n\n* [Get GCP Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_handle/README.md): Get GCP Handle\n\n* [Get List of GCP compute instances without label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_instances_without_label/README.md): Get List of GCP compute instances without label\n\n* [Get unused GCP backend 
services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_get_unused_backend_services/README.md): Get unused backend services for an application load balancer that has no instances in its target group.\n\n* [List all GCP Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_buckets/README.md): List all GCP buckets\n\n* [Get GCP compute instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances/README.md): Get GCP compute instances\n\n* [Get List of GCP compute instances by label](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_label/README.md): Get List of GCP compute instances by label\n\n* [Get list of compute instances by VPC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_compute_instances_by_vpc/README.md): Get list of compute instances by VPC\n\n* [GCP List GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_gke_cluster/README.md): GCP List GKE Cluster\n\n* [GCP List Nodes in GKE Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_nodes_in_gke_cluster/README.md): GCP List Nodes of GKE Cluster\n\n* [List all Public GCP Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_public_buckets/README.md): List all publicly available GCP buckets\n\n* [List GCP Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_secrets/README.md): List of your GCP Secrets\n\n* [GCP List Service Accounts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_service_accounts/README.md): GCP List Service Accounts\n\n* [List all GCP VMs and if Publicly 
Accessible](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_list_vms_access/README.md): Lists all GCP VMs, and identifies those that are publicly accessible.\n\n* [GCP Remove Member from IAM Role](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_member_from_iam_role/README.md): Remove member from the chosen IAM role.\n\n* [GCP Remove Role from Service Account](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_role_from_service_account/README.md): Remove role and member from the service account\n\n* [Remove role from user](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_remove_user_role/README.md): GCP lego for removing a role from a user (default: 'viewer')\n\n* [GCP Resize a GKE cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_resize_gke_cluster/README.md): GCP Resize a GKE cluster by modifying nodes\n\n* [GCP Restart compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restart_compute_instances/README.md): GCP Restart compute instance\n\n* [Restore GCP disk from a snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_restore_disk_from_snapshot/README.md): Restore a GCP disk from a compute instance snapshot.\n\n* [Save CSV to Google Sheets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_save_csv_to_google_sheets_v1/README.md): Saves your CSV (see notes) into a prepared Google Sheet.\n\n* [GCP Stop compute instance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_stop_compute_instances/README.md): GCP Stop compute instance\n\n* [Upload an Object to GCP Bucket](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/GCP/legos/gcp_upload_file_to_bucket/README.md): Upload an Object/Blob to a GCP bucket\n\n* [Github Assign 
Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_assign_issue/README.md): Assign a github issue to a user\n\n* [Github Check if Pull Request is merged](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_check_if_pull_request_is_merged/README.md): Check if a Github Pull Request is merged\n\n* [Github Close Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_close_pull_request/README.md): Close pull request based on pull request number\n\n* [Github Count Stars](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_count_stars/README.md): Get count of stars for a repository\n\n* [Github Create Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_issue/README.md): Create a new Github Issue for a repository\n\n* [Github Create Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_create_team/README.md): Create a new Github Team\n\n* [Github Delete Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_delete_branch/README.md): Delete a github branch\n\n* [Github Get Branch](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_branch/README.md): Get Github branch for a user in a repository\n\n* [Get Github Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_handle/README.md): Get Github Handle\n\n* [Github Get Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_issue/README.md): Get Github Issue from a repository\n\n* [Github Get Open Branches](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_open_branches/README.md): Get first 100 open branches for a given user in a given repo.\n\n* [Github Get Pull 
Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_pull_request/README.md): Get Github Pull Request for a user in a repository\n\n* [Github Get Team](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_team/README.md): Github Get Team\n\n* [Github Get User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_get_user/README.md): Get Github User details\n\n* [Github Invite User to Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_invite_user_to_org/README.md): Invite a Github User to an Organization\n\n* [Github Comment on an Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_issue_comment/README.md): Add a comment to the selected GitHub Issue\n\n* [Github List Open Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_open_issues/README.md): List open Issues in a Github Repository\n\n* [Github List Organization Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_org_members/README.md): List Github Organization Members\n\n* [Github List PR Commits](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_commits/README.md): Github List all Pull Request Commits\n\n* [Github List Pull Request Reviewers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_request_reviewers/README.md): List PR reviewers for a PR\n\n* [Github List Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_pull_requests/README.md): List pull requests for a user in a repository\n\n* [Github List Stale Issues](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_issues/README.md): List Stale 
Github Issues that have crossed a certain age limit.\n\n* [Github List Stale Pull Requests](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stale_pull_requests/README.md): Check for any Pull Requests over a certain age.\n\n* [Github List Stargazers](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_stargazers/README.md): List of Github users that have starred (essentially bookmarked) a repository\n\n* [Github List Team Members](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_members/README.md): List Github Team Members for a given Team\n\n* [Github List Team Repositories](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_team_repos/README.md): Github List Team Repositories\n\n* [Github List Teams in Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_teams_in_org/README.md): List teams in an organization in GitHub\n\n* [Github List Webhooks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_list_webhooks/README.md): List webhooks for a repository\n\n* [Github Merge Pull Request](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_merge_pull_request/README.md): Github Merge Pull Request\n\n* [Github Remove Member from Organization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Github/legos/github_remove_member_from_org/README.md): Remove a member from a Github Organization\n\n* [Get Grafana Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_get_handle/README.md): Get Grafana Handle\n\n* [Grafana List Alerts](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Grafana/legos/grafana_list_alerts/README.md): List of Grafana alerts. 
Specifying the dashboard ID will show alerts in that dashboard\n\n* [Get Hadoop cluster apps](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_apps/README.md): Get Hadoop cluster apps\n\n* [Get Hadoop cluster appstatistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_appstatistics/README.md): Get Hadoop cluster appstatistics\n\n* [Get Hadoop cluster metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_metrics/README.md): Get Hadoop EMR cluster metrics\n\n* [Get Hadoop cluster nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_cluster_nodes/README.md): Get Hadoop cluster nodes\n\n* [Get Hadoop handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Hadoop/legos/hadoop_get_handle/README.md): Get Hadoop handle\n\n* [Get Jenkins Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_handle/README.md): Get Jenkins Handle\n\n* [Get Jenkins Logs from a job](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_logs/README.md): Get Jenkins Logs from a Job\n\n* [Get Jenkins Plugin List](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/legos/jenkins_get_plugins/README.md): Get Jenkins Plugin List\n\n* [Jira Add Comment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_add_comment/README.md): Add a Jira Comment\n\n* [Assign Jira Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_assign_issue/README.md): Assign a Jira Issue to a user\n\n* [Create a Jira Issue](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_create_issue/README.md): Create a Jira Issue\n\n* [Get Jira SDK 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_handle/README.md): Get Jira SDK Handle\n\n* [Get Jira Issue Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_issue/README.md): Get Issue Info from Jira API: description, labels, attachments\n\n* [Get Jira Issue Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_get_issue_status/README.md): Get Issue Status from Jira API\n\n* [Change JIRA Issue Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_issue_change_status/README.md): Change JIRA Issue Status to given status\n\n* [Search for Jira issues matching JQL queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/legos/jira_search_issue/README.md): Use JQL to search all matching issues in Jira. Returns a List of the matching issue IDs/keys\n\n* [Kafka Check In-Sync Replicas](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_in_sync_replicas/README.md): Checks the actual number of in-sync replicas for each topic-partition against the min-ISR configuration for that topic.\n\n* [Kafka Check Replicas Available](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_check_replicas_available/README.md): Checks if the number of replicas not available for communication is equal to zero.\n\n* [Kafka get cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_cluster_health/README.md): Fetches the health of the Kafka cluster including brokers, topics, and partitions.\n\n* [Kafka get count of committed messages](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_committed_messages_count/README.md): Fetches the count of committed messages (consumer offsets) for a specific consumer group and its topics.\n\n* [Get Kafka Producer 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_handle/README.md): Get Kafka Producer Handle\n\n* [Kafka get topic health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_topic_health/README.md): This action fetches the health and total number of messages for the specified topics.\n\n* [Kafka get topics with lag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_get_topics_with_lag/README.md): This action fetches the topics with lag in the Kafka cluster.\n\n* [Kafka Publish Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_publish_message/README.md): Publish Kafka Message\n\n* [Run a Kafka command using kafka CLI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kafka/legos/kafka_run_command/README.md): Run a Kafka command using the Kafka CLI, e.g. `kafka-topics.sh --list --exclude-internal`\n\n* [Add Node in a Kubernetes Cluster](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_add_node_to_cluster/README.md): Add Node in a Kubernetes Cluster\n\n* [Change size of Kubernetes PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_change_pvc_size/README.md): Change size of Kubernetes PVC\n\n* [Check K8s services endpoint health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_service_status/README.md): This action checks the health status of the provided Kubernetes services.\n\n* [Check K8s worker CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_check_worker_cpu_utilization/README.md): Retrieves the CPU utilization for all worker nodes in the cluster and compares it to a given threshold.\n\n* [Delete a Kubernetes POD in a given 
Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_delete_pod/README.md): Delete a Kubernetes POD in a given Namespace\n\n* [Describe Kubernetes Node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_node/README.md): Describe a Kubernetes Node\n\n* [Describe a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_describe_pod/README.md): Describe a Kubernetes POD in a given Namespace\n\n* [Execute a command on a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pod/README.md): Execute a command on a Kubernetes POD in a given Namespace\n\n* [Kubernetes Execute a command on a POD in a given namespace and filter](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_exec_command_on_pods_and_filter/README.md): Execute a command on a Kubernetes POD in a given namespace and filter the output\n\n* [Execute local script on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_execute_local_script_on_a_pod/README.md): Execute a local script on a pod in a namespace\n\n* [Gather Data for POD Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_pod_troubleshoot/README.md): Gather Data for POD Troubleshoot\n\n* [Gather Data for K8S Service Troubleshoot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_gather_data_for_service_troubleshoot/README.md): Gather Data for K8S Service Troubleshoot\n\n* [Get All Evicted PODS From Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_evicted_pods_from_namespace/README.md): This action gets all evicted PODs from a given namespace. 
If no namespace is given, it will get all the pods from all namespaces.\n\n* [Get All Kubernetes PODS with state in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_pods/README.md): Get All Kubernetes PODS with state in a given Namespace\n\n* [Get K8s pods status and resource utilization info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_all_resources_utilization_info/README.md): This action gets the pod status and resource utilization of various Kubernetes resources like jobs, services, persistent volumes.\n\n* [Get candidate k8s nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_candidate_nodes_for_pods/README.md): Get candidate k8s nodes for given configuration\n\n* [Get K8S Cluster Health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_cluster_health/README.md): Get K8S Cluster Health\n\n* [Get k8s kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_config_map_kube_system/README.md): Get k8s kube system config map\n\n* [Get Kubernetes Deployment For a Pod in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment/README.md): Get Kubernetes Deployment for a POD in a Namespace\n\n* [Get Deployment Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_deployment_status/README.md): This action searches for failed deployments and returns them as a list.\n\n* [Get Kubernetes Error PODs from All Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_error_pods_from_all_jobs/README.md): Get Kubernetes Error PODs from All Jobs\n\n* [Get expiring K8s 
certificates](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_expiring_certificates/README.md): Get the expiring certificates for a K8s cluster.\n\n* [Get Kubernetes Failed Deployments](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_failed_deployments/README.md): Get Kubernetes Failed Deployments\n\n* [Get frequently restarting K8s pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_frequently_restarting_pods/README.md): Get Kubernetes pods from all namespaces that are restarting too often.\n\n* [Get Kubernetes Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_handle/README.md): Get Kubernetes Handle\n\n* [Get All Kubernetes Healthy PODS in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_healthy_pods/README.md): Get All Kubernetes Healthy PODS in a given Namespace\n\n* [Get memory utilization for K8s services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_memory_utilization_of_services/README.md): This action executes the given kubectl commands to find the memory utilization of the specified services in a particular namespace and compares it with a given threshold.\n\n* [Get K8s node status and CPU utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_node_status_and_resource_utilization/README.md): This action gathers Kubernetes node status and resource utilization information.\n\n* [Get Kubernetes Nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes/README.md): Get Kubernetes Nodes\n\n* [Get K8s nodes disk and memory pressure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_pressure/README.md): This action fetches 
the memory and disk pressure status of each node in the cluster.\n\n* [Get Kubernetes Nodes that have insufficient resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_nodes_with_insufficient_resources/README.md): Get Kubernetes Nodes that have insufficient resources\n\n* [Get K8s offline nodes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_offline_nodes/README.md): This action checks if any node in the Kubernetes cluster is offline.\n\n* [Get K8S OOMKilled Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_oomkilled_pods/README.md): Get K8S Pods which were OOMKilled, based on the containers' last states.\n\n* [Get K8s pending pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pending_pods/README.md): This action checks if any pod in the Kubernetes cluster is in 'Pending' status.\n\n* [Get Kubernetes POD Configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_config/README.md): Get Kubernetes POD Configuration\n\n* [Get Kubernetes Logs for a given POD in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs/README.md): Get Kubernetes Logs for a given POD in a Namespace\n\n* [Get Kubernetes Logs for a list of PODs & Filter in a Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_logs_and_filter/README.md): Get Kubernetes Logs for a list of PODs and Filter in a Namespace\n\n* [Get Kubernetes Status for a POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pod_status/README.md): Get Kubernetes Status for a POD in a given Namespace\n\n* [Get pods attached to Kubernetes 
PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_attached_to_pvc/README.md): Get pods attached to Kubernetes PVC\n\n* [Get all K8s Pods in CrashLoopBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/README.md): Get all K8s pods in CrashLoopBackOff State\n\n* [Get all K8s Pods in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/README.md): Get all K8s pods in ImagePullBackOff State\n\n* [Get Kubernetes PODs in not Running State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_not_running_state/README.md): Get Kubernetes PODs in not Running State\n\n* [Get all K8s Pods in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_terminating_state/README.md): Get all K8s pods in Terminating State\n\n* [Get Kubernetes PODS with high restart](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_with_high_restart/README.md): Get Kubernetes PODS with high restart\n\n* [Get K8S Service with no associated endpoints](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_service_with_no_associated_endpoints/README.md): Get K8S Service with no associated endpoints\n\n* [Get Kubernetes Services for a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_services/README.md): Get Kubernetes Services for a given Namespace\n\n* [Get Kubernetes Unbound PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_unbound_pvcs/README.md): Get Kubernetes Unbound PVCs\n\n* [Kubectl 
command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_command/README.md): Execute kubectl command.\n\n* [Kubectl set context entry in kubeconfig](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_set_context/README.md): Kubectl set context entry in kubeconfig\n\n* [Kubectl display merged kubeconfig settings](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_config_view/README.md): Kubectl display merged kubeconfig settings\n\n* [Kubectl delete a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_delete_pod/README.md): Kubectl delete a pod\n\n* [Kubectl describe a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_node/README.md): Kubectl describe a node\n\n* [Kubectl describe a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_describe_pod/README.md): Kubectl describe a pod\n\n* [Kubectl drain a node](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_drain_node/README.md): Kubectl drain a node\n\n* [Execute command on a pod](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_exec_command/README.md): Execute command on a pod\n\n* [Kubectl get api resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_api_resources/README.md): Kubectl get api resources\n\n* [Kubectl get logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_logs/README.md): Kubectl get logs for a given pod\n\n* [Kubectl get services](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_get_service_namespace/README.md): Kubectl get services in a 
given namespace\n\n* [Kubectl list pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_list_pods/README.md): Kubectl list pods in given namespace\n\n* [Kubectl update field](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_patch_pod/README.md): Kubectl update field of a resource using strategic merge patch\n\n* [Kubectl rollout deployment history](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_rollout_deployment/README.md): Kubectl rollout deployment history\n\n* [Kubectl scale deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_scale_deployment/README.md): Kubectl scale a given deployment\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_node/README.md): Kubectl show metrics for a given node\n\n* [Kubectl show metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_kubectl_show_metrics_pod/README.md): Kubectl show metrics for a given pod\n\n* [List matching name pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_all_matching_pods/README.md): List all pods matching a particular name string. The matching string can be a regular expression too\n\n* [List pvcs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_list_pvcs/README.md): List pvcs by namespace. 
By default, it will list all pvcs in all namespaces.\n\n* [Remove POD from Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_remove_pod_from_deployment/README.md): Remove POD from Deployment\n\n* [Update Commands in a Kubernetes POD in a given Namespace](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_update_command_in_pod_spec/README.md): Update Commands in a Kubernetes POD in a given Namespace\n\n* [Get Mantishub handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mantishub/legos/mantishub_get_handle/README.md): Get Mantishub handle\n\n* [MongoDB add new field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_add_new_field_in_collections/README.md): MongoDB add new field in all collections\n\n* [MongoDB Aggregate Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_aggregate_command/README.md): MongoDB Aggregate Command\n\n* [MongoDB Atlas cluster cloud backup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_atlas_cluster_backup/README.md): Trigger on-demand Atlas cloud backup\n\n* [Get large MongoDB indices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_check_large_index_size/README.md): This action compares the size of each index with a given threshold and returns any indexes that exceed the threshold.\n\n* [Get MongoDB large databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_compare_disk_size_to_threshold/README.md): This action compares the total disk size used by MongoDB to a given threshold.\n\n* [MongoDB Count Documents](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_count_documents/README.md): MongoDB Count Documents\n\n* [MongoDB Create 
Collection](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_collection/README.md): MongoDB Create Collection\n\n* [MongoDB Create Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_create_database/README.md): MongoDB Create Database\n\n* [Delete collection from MongoDB database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_collection/README.md): Delete collection from MongoDB database\n\n* [MongoDB Delete Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_database/README.md): MongoDB Delete Database\n\n* [MongoDB Delete Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_delete_document/README.md): MongoDB Delete Document\n\n* [MongoDB Distinct Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_distinct_command/README.md): MongoDB Distinct Command\n\n* [MongoDB Find Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_document/README.md): MongoDB Find Document\n\n* [MongoDB Find One](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_find_one/README.md): MongoDB Find One returns a single entry that matches the query.\n\n* [Get MongoDB Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_handle/README.md): Get MongoDB Handle\n\n* [MongoDB get metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_metrics/README.md): This action retrieves various metrics such as index size, disk size per collection for all databases and collections.\n\n* [Get Mongo Server Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_get_server_status/README.md): Get Mongo Server Status and 
check for any abnormalities.\n\n* [MongoDB Insert Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_insert_document/README.md): MongoDB Insert Document\n\n* [MongoDB kill queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_kill_queries/README.md): MongoDB kill queries\n\n* [Get list of collections in MongoDB Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_collections/README.md): Get list of collections in MongoDB Database\n\n* [Get list of MongoDB Databases](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_databases/README.md): Get list of MongoDB Databases\n\n* [MongoDB list queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_list_queries/README.md): MongoDB list queries\n\n* [MongoDB Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_read_query/README.md): MongoDB Read Query\n\n* [MongoDB remove a field in all collections](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_remove_field_in_collections/README.md): MongoDB remove a field in all collections\n\n* [MongoDB Rename Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_rename_database/README.md): MongoDB Rename Database\n\n* [MongoDB Update Document](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_update_document/README.md): MongoDB Update Document\n\n* [MongoDB Upsert Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Mongo/legos/mongodb_write_query/README.md): MongoDB Upsert Query\n\n* [Get MS-SQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_get_handle/README.md): Get MS-SQL Handle\n\n* [MS-SQL Read 
Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_read_query/README.md): MS-SQL Read Query\n\n* [MS-SQL Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MsSQL/legos/mssql_write_query/README.md): MS-SQL Write Query\n\n* [Get MySQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_handle/README.md): Get MySQL Handle\n\n* [MySQL Get Long Running Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_get_long_run_queries/README.md): MySQL Get Long Running Queries\n\n* [MySQL Kill Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_kill_query/README.md): MySQL Kill Query\n\n* [Run MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_read_query/README.md): Run MySQL Query\n\n* [Create a MySQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/MySQL/legos/mysql_write_query/README.md): Create a MySQL Query\n\n* [Netbox Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_get_handle/README.md): Get Netbox Handle\n\n* [Netbox List Devices](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Netbox/legos/netbox_list_devices/README.md): List all Netbox devices\n\n* [Nomad Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_get_handle/README.md): Get Nomad Handle\n\n* [Nomad List Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Nomad/legos/nomad_list_jobs/README.md): List all Nomad jobs\n\n* [Get Opsgenie Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Opsgenie/legos/opsgenie_get_handle/README.md): Get Opsgenie Handle\n\n* [Create new maintenance 
window.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_create_new_maintenance_window/README.md): Create new maintenance window.\n\n* [Perform Pingdom single check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_do_single_check/README.md): Perform Pingdom Single Check\n\n* [Get Pingdom Analysis Results for a specified Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_analysis/README.md): Get Pingdom Analysis Results for a specified Check\n\n* [Get list of checkIDs given a hostname](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids/README.md): Get list of checkIDs given a hostname. If no hostname provided, it lists all checkIDs.\n\n* [Get list of checkIDs given a name](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_checkids_by_name/README.md): Get list of checkIDS given a name. If name is not given, it gives all checkIDs. 
If transaction is set to true, it returns transaction checkIDs\n\n* [Get Pingdom Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_handle/README.md): Get Pingdom Handle\n\n* [Pingdom Get Maintenance](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_maintenance/README.md): Pingdom Get Maintenance\n\n* [Get Pingdom Results](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_results/README.md): Get Pingdom Results\n\n* [Get Pingdom TMS Check](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_get_tmscheck/README.md): Get Pingdom TMS Check\n\n* [Pingdom lego to pause/unpause checkids](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_pause_or_unpause_checkids/README.md): Pingdom lego to pause/unpause checkids\n\n* [Perform Pingdom Traceroute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Pingdom/legos/pingdom_traceroute/README.md): Perform Pingdom Traceroute\n\n* [PostgreSQL Calculate Bloat](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgres_calculate_bloat/README.md): This Lego calculates bloat for tables in Postgres\n\n* [Calling a PostgreSQL function](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_call_function/README.md): Calling a PostgreSQL function\n\n* [PostgreSQL Check Unused Indexes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_check_unused_indexes/README.md): Find unused Indexes in a database in PostgreSQL\n\n* [Create Tables in PostgreSQL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_create_table/README.md): Create Tables in PostgreSQL\n\n* [Delete PostgreSQL 
Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_delete_query/README.md): Delete PostgreSQL Query\n\n* [PostgreSQL Get Cache Hit Ratio](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_cache_hit_ratio/README.md): The result of the action will show the total number of blocks read from disk, the total number of blocks found in the buffer cache, and the cache hit ratio as a percentage. For example, if the cache hit ratio is 99%, it means that 99% of all data requests were served from the buffer cache, and only 1% required reading data from disk.\n\n* [Get PostgreSQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_handle/README.md): Get PostgreSQL Handle\n\n* [PostgreSQL Get Index Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_index_usage/README.md): The action result shows the data for table name, the percentage of times an index was used for that table, and the number of live rows in the table.\n\n* [PostgreSQL get service status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_server_status/README.md): This action checks the status of each database.\n\n* [Execute commands in a PostgreSQL transaction.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_handling_transaction/README.md): Given a set of PostgreSQL commands, this action runs them inside a transaction.\n\n* [Long Running PostgreSQL Queries](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_long_running_queries/README.md): Long Running PostgreSQL Queries\n\n* [Read PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_read_query/README.md): Read PostgreSQL Query\n\n* [Show tables in 
PostgreSQL Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_show_tables/README.md): Show the tables existing in a PostgreSQL Database. We execute the following query to fetch this information SELECT * FROM pg_catalog.pg_tables WHERE schemaname != 'pg_catalog' AND schemaname != 'information_schema';\n\n* [Call PostgreSQL Stored Procedure](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_stored_procedures/README.md): Call PostgreSQL Stored Procedure\n\n* [Write PostgreSQL Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_write_query/README.md): Write PostgreSQL Query\n\n* [Get Prometheus rules](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_alerts_list/README.md): Get Prometheus rules\n\n* [Get All Prometheus Metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_all_metrics/README.md): Get All Prometheus Metrics\n\n* [Get Prometheus handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_handle/README.md): Get Prometheus handle\n\n* [Get Prometheus Metric Statistics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_metric_statistics/README.md): Get Prometheus Metric Statistics\n\n* [Delete All Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_all_keys/README.md): Delete All Redis keys\n\n* [Delete Redis Keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_keys/README.md): Delete Redis keys matching pattern\n\n* [Delete Redis Unused keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_delete_stale_keys/README.md): Delete Redis Unused keys given a time threshold in 
seconds\n\n* [Get Redis cluster health](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_cluster_health/README.md): This action gets the Redis cluster health.\n\n* [Get Redis Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_handle/README.md): Get Redis Handle\n\n* [Get Redis keys count](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_keys_count/README.md): Get Redis keys count matching pattern (default: '*')\n\n* [Get Redis metrics](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_get_metrics/README.md): This action fetches Redis metrics like index size and memory utilization.\n\n* [List Redis Large keys](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Redis/legos/redis_list_large_keys/README.md): Find Redis Large keys given a size threshold in bytes\n\n* [Get REST handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_get_handle/README.md): Get REST handle\n\n* [Call REST Methods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Rest/legos/rest_methods/README.md): Call REST Methods.\n\n* [SSH Execute Remote Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_execute_remote_command/README.md): SSH Execute Remote Command\n\n* [SSH: Locate large files on host](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_find_large_files/README.md): This action scans the file system on a given host and returns a dict of large files. 
The command used to perform the scan is \"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\"\n\n* [Get SSH handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_get_handle/README.md): Get SSH handle\n\n* [SSH Restart Service Using sysctl](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_restart_service_using_sysctl/README.md): SSH Restart Service Using sysctl\n\n* [SCP: Remote file transfer over SSH](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_scp/README.md): Copy files from or to remote host. Files are copied over SCP. \n\n* [Assign Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_assign_case/README.md): Assign a Salesforce case\n\n* [Change Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_case_change_status/README.md): Change Salesforce Case Status\n\n* [Create Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_create_case/README.md): Create a Salesforce case\n\n* [Delete Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_delete_case/README.md): Delete a Salesforce case\n\n* [Get Salesforce Case Info](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case/README.md): Get a Salesforce case info\n\n* [Get Salesforce Case Status](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_case_status/README.md): Get a Salesforce case status\n\n* [Get Salesforce handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_get_handle/README.md): Get Salesforce handle\n\n* [Search Salesforce 
Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_search_case/README.md): Search a Salesforce case\n\n* [Update Salesforce Case](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SalesForce/legos/salesforce_update_case/README.md): Update a Salesforce case\n\n* [Create Slack Channel and Invite Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_create_channel_invite_users/README.md): Create a Slack Channel with given name, and invite a list of userIds to the channel.\n\n* [Get Slack SDK Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_get_handle/README.md): Get Slack SDK Handle\n\n* [Slack Lookup User by Email](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_lookup_user_by_email/README.md): Given an email address, find the Slack user in the workspace.\n You can then extract their profile picture, or retrieve their user ID (which you can use to send messages) from the output.\n\n* [Post Slack Image](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_image/README.md): Post Slack Image\n\n* [Post Slack Message](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_post_message/README.md): Post Slack Message\n\n* [Slack Send DM](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Slack/legos/slack_send_DM/README.md): Given a list of Slack IDs, this Action will create a DM (one user) or group chat (multiple users), and send a message to the chat\n\n* [Snowflake Read Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_read_query/README.md): Snowflake Read Query\n\n* [Snowflake Write Query](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Snowflake/legos/snowflake_write_query/README.md): Snowflake Write Query\n\n* [Get Splunk SDK 
Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Splunk/legos/splunk_get_handle/README.md): Get Splunk SDK Handle\n\n* [ Capture a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_capture_charge/README.md):  Capture the payment of an existing, uncaptured, charge\n\n* [Close Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_close_dispute/README.md): Close Dispute\n\n* [Create a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_charge/README.md): Create a Charge\n\n* [Create a Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_refund/README.md): Create a Refund\n\n* [Get list of charges previously created](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_charges/README.md): Get list of charges previously created\n\n* [Get list of disputes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_disputes/README.md): Get list of disputes\n\n* [Get list of refunds](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_refunds/README.md):  Get list of refunds for the given threshold.\n\n* [Get Stripe Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_handle/README.md): Get Stripe Handle\n\n* [Retrieve a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_charge/README.md):  Retrieve a Charge\n\n* [Retrieve details of a dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_dispute/README.md): Retrieve details of a dispute\n\n* [Retrieve a refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_refund/README.md): Retrieve a 
refund\n\n* [Update a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_charge/README.md): Update a Charge\n\n* [Update Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_dispute/README.md): Update Dispute\n\n* [Update Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_refund/README.md): Updates the specified refund by setting the values of the parameters passed.\n\n* [Execute Terraform Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_exec_command/README.md): Execute Terraform Command\n\n* [Get terraform handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_get_handle/README.md): Get terraform handle\n\n* [Get Zabbix Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Zabbix/legos/zabbix_get_handle/README.md): Get Zabbix Handle\n\n* [Infra: Execute runbook](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/infra_execute_runbook/README.md): Infra: use this action to execute particular runbook with given input parameters.\n\n* [Infra: Finish runbook execution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/infra_workflow_done/README.md): Infra: use this action to finish the execution of a runbook. 
Once this is set, no more tasks will be executed\n\n* [Infra: Append values for a key in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_append_keys/README.md): Infra: use this action to append values for a key in a state store provided by the workflow.\n\n* [Infra: Store keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_create_keys/README.md): Infra: use this action to persist keys in a state store provided by the workflow.\n\n* [Infra: Delete keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_delete_keys/README.md): Infra: use this action to delete keys from a state store provided by the workflow.\n\n* [Infra: Fetch keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_get_keys/README.md): Infra: use this action to retrieve keys in a state store provided by the workflow.\n\n* [Infra: Rename keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_rename_keys/README.md): Infra: use this action to rename keys in a state store provided by the workflow.\n\n* [Infra: Update keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/infra/legos/workflow_ss_update_keys/README.md): Infra: use this action to update keys in a state store provided by the workflow.\n\n* [Opensearch Get Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/opensearch/legos/opensearch_get_handle/README.md): Opensearch Get Handle\n\n* [Opensearch search](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/opensearch/legos/opensearch_search/README.md): Opensearch Search\n\n"
  },
  {
    "path": "lists/action_SSH.md",
    "content": "* [SSH Execute Remote Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_execute_remote_command/README.md): SSH Execute Remote Command\n\n* [SSH: Locate large files on host](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_find_large_files/README.md): This action scans the file system on a given host and returns a dict of large files. The command used to perform the scan is \"find inspect_folder -type f -exec du -sk '{}' + | sort -rh | head -n count\"\n\n* [Get SSH handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_get_handle/README.md): Get SSH handle\n\n* [SSH Restart Service Using sysctl](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_restart_service_using_sysctl/README.md): SSH Restart Service Using sysctl\n\n* [SCP: Remote file transfer over SSH](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/SSH/legos/ssh_scp/README.md): Copy files from or to remote host. Files are copied over SCP. \n\n"
  },
  {
    "path": "lists/action_STRIPE.md",
    "content": "* [Capture a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_capture_charge/README.md): Capture the payment of an existing, uncaptured charge\n\n* [Close Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_close_dispute/README.md): Close Dispute\n\n* [Create a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_charge/README.md): Create a Charge\n\n* [Create a Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_refund/README.md): Create a Refund\n\n* [Get list of charges previously created](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_charges/README.md): Get list of charges previously created\n\n* [Get list of disputes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_disputes/README.md): Get list of disputes\n\n* [Get list of refunds](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_refunds/README.md): Get list of refunds for the given threshold.\n\n* [Get Stripe Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_handle/README.md): Get Stripe Handle\n\n* [Retrieve a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_charge/README.md): Retrieve a Charge\n\n* [Retrieve details of a dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_dispute/README.md): Retrieve details of a dispute\n\n* [Retrieve a refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_refund/README.md): Retrieve a refund\n\n* [Update a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_charge/README.md): Update a Charge\n\n* [Update Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_dispute/README.md): Update Dispute\n\n* [Update Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_refund/README.md): Updates the specified refund by setting the values of the parameters passed.\n\n"
  },
  {
    "path": "lists/action_STRIPE_CHARGE.md",
    "content": "* [Capture a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_capture_charge/README.md): Capture the payment of an existing, uncaptured charge\n\n* [Create a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_charge/README.md): Create a Charge\n\n* [Get list of charges previously created](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_charges/README.md): Get list of charges previously created\n\n* [Retrieve a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_charge/README.md): Retrieve a Charge\n\n* [Update a Charge](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_charge/README.md): Update a Charge\n\n"
  },
  {
    "path": "lists/action_STRIPE_DISPUTE.md",
    "content": "* [Close Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_close_dispute/README.md): Close Dispute\n\n* [Get list of disputes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_disputes/README.md): Get list of disputes\n\n* [Retrieve details of a dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_dispute/README.md): Retrieve details of a dispute\n\n* [Update Dispute](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_dispute/README.md): Update Dispute\n\n"
  },
  {
    "path": "lists/action_STRIPE_REFUND.md",
    "content": "* [Create a Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_create_refund/README.md): Create a Refund\n\n* [Get list of refunds](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_all_refunds/README.md): Get list of refunds for the given threshold.\n\n* [Get Stripe Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_get_handle/README.md): Get Stripe Handle\n\n* [Retrieve a refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_retrieve_refund/README.md): Retrieve a refund\n\n* [Update Refund](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Stripe/legos/stripe_update_refund/README.md): Updates the specified refund by setting the values of the parameters passed.\n\n"
  },
  {
    "path": "lists/action_TERRAFORM.md",
    "content": "* [Execute Terraform Command](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_exec_command/README.md): Execute Terraform Command\n\n* [Get terraform handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Terraform/legos/terraform_get_handle/README.md): Get terraform handle\n\n"
  },
  {
    "path": "lists/action_TROUBLESHOOTING.md",
    "content": "* [AWS Get Private Address from NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_get_private_address_from_nat_gateways/README.md): This action is used to get private address from NAT gateways.\n\n* [AWS List Unhealthy Instances in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos/aws_list_unhealthy_instances_in_target_group/README.md): List Unhealthy Instances in a target group\n\n* [Get Kubernetes Error PODs from All Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_error_pods_from_all_jobs/README.md): Get Kubernetes Error PODs from All Jobs\n\n* [Get K8S OOMKilled Pods](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_oomkilled_pods/README.md): Get K8S Pods which are OOMKilled from the container last states.\n\n* [Get all K8s Pods in CrashLoopBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_crashloopbackoff_state/README.md): Get all K8s pods in CrashLoopBackOff State\n\n* [Get all K8s Pods in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/legos/k8s_get_pods_in_imagepullbackoff_state/README.md): Get all K8s pods in ImagePullBackOff State\n\n* [PostgreSQL Calculate Bloat](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgres_calculate_bloat/README.md): This Lego calculates bloat for tables in Postgres\n\n* [PostgreSQL Get Cache Hit Ratio](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_cache_hit_ratio/README.md): The result of the action will show the total number of blocks read from disk, the total number of blocks found in the buffer cache, and the cache hit ratio as a percentage. 
For example, if the cache hit ratio is 99%, it means that 99% of all data requests were served from the buffer cache, and only 1% required reading data from disk.\n\n* [Get PostgreSQL Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_handle/README.md): Get PostgreSQL Handle\n\n* [PostgreSQL Get Index Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/legos/postgresql_get_index_usage/README.md): The action result shows the data for table name, the percentage of times an index was used for that table, and the number of live rows in the table.\n\n"
  },
  {
    "path": "lists/action_ZABBIX.md",
    "content": "* [Get Zabbix Handle](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Zabbix/legos/zabbix_get_handle/README.md): Get Zabbix Handle\n\n"
  },
  {
    "path": "lists/runbook_CLOUDOPS.md",
    "content": "* AWS [AWS Update Resources about to expire](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Tag_Across_Selected_AWS_Resources.ipynb): This finds resources that have an expiration tag that is about to expire. It can either send a Slack message in 'auto' mode, or be used to manually remediate the issue interactively.\n* AWS [AWS Bulk Update Resource Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Bulk_Update_Resource_Tag.ipynb): This runbook will find all AWS Resources tagged with a given key:value tag. It will then update the tag's value to a new value. This can be used to bulk update the owner of resources, or for any other reason you might need to change the tag value for many AWS resources.\n* AWS [Create IAM User with policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Create_New_IAM_User_With_Policy.ipynb): Create a new IAM user with a security Policy. Sends confirmation to Slack.\n* AWS [Delete IAM profile](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_IAM_User.ipynb): This runbook is the inverse of Create IAM user with profile - it removes the profile, the login, and then the IAM user itself.\n* AWS [Delete Unattached AWS EBS Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unattached_EBS_Volume.ipynb): This runbook can be used to delete all unattached EBS Volumes within an AWS region. You can delete an Amazon EBS volume that you no longer need. After deletion, its data is gone and the volume can't be attached to any instance. So before deletion, you can store a snapshot of the volume, which you can use to re-create the volume later.\n* AWS [AWS Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Detach_ec2_Instance_from_ASG.ipynb): This runbook can be used to detach an instance from an Auto Scaling Group. 
You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move an instance out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\n* AWS [AWS EC2 Disk Cleanup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_EC2_Disk_Cleanup.ipynb): This runbook locates large files in an EC2 instance and backs them up into a given S3 bucket. Afterwards, it deletes the files backed up and sends a message on a specified Slack channel. It uses SSH and Linux commands to perform the functions it needs.\n* AWS [AWS Ensure Redshift Clusters have Paused Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Ensure_Redshift_Clusters_have_Paused_Resume_Enabled.ipynb): This runbook finds Redshift clusters that don't have pause resume enabled and schedules the pause resume for the cluster.\n* AWS [AWS Get unhealthy EC2 instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Elb_Unhealthy_Instances.ipynb): This runbook can be used to list unhealthy EC2 instances from an ELB. Sometimes it is difficult to determine why Amazon EC2 Auto Scaling didn't terminate an unhealthy instance from the Activity History alone. You can find further details about an unhealthy instance's state, and how to terminate that instance, by checking a few extra things.\n* AWS [AWS Redshift Get Daily Costs from AWS Products](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Redshift_Daily_Product_Costs.ipynb): This runbook can be used to create charts and alerts around your AWS product usage. 
It requires a Cost and Usage report to be live in Redshift.\n* AWS [AWS Redshift Get Daily Costs from EC2 Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Redshift_EC2_Daily_Costs.ipynb): This runbook can be used to create charts and alerts around AWS EC2 usage. It requires a Cost and Usage report to be live in Redshift.\n* AWS [AWS Lowering CloudTrail Costs by Removing Redundant Trails](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Lowering_AWS_CloudTrail_Costs_by_Removing_Redundant_Trails.ipynb): The AWS CloudTrail service allows developers to enable policies managing compliance, governance, and auditing of their AWS account. In addition, AWS CloudTrail offers logging, monitoring, and storage of any activity around actions related to your AWS structures. The service activates from the moment you set up your AWS account, and while it provides real-time activity visibility, it also means higher AWS costs. This runbook finds redundant trails in AWS.\n* AWS [List unused Amazon EC2 key pairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Notify_About_Unused_Keypairs.ipynb): This runbook finds all EC2 key pairs that are not used by an EC2 instance and notifies a Slack channel about them. 
Optionally it can delete the key pairs based on user configuration.\n* AWS [Publicly Accessible Amazon RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Publicly_Accessible_Amazon_RDS_Instances.ipynb): This runbook can be used to find the publicly accessible RDS instances for the given AWS region.\n* AWS [Purchase Reserved Nodes For Long Running AWS ElastiCache Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Cache_Nodes_For_Long_Running_ElastiCache_Clusters.ipynb): Ensuring that long-running AWS ElastiCache clusters have Reserved Nodes purchased for them is an effective cost optimization strategy for AWS users. By committing to a specific capacity of ElastiCache nodes for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for ElastiCache clusters that are expected to run for an extended period and have predictable usage patterns. This runbook helps us optimize costs by ensuring that Reserved Nodes are purchased for these ElastiCache clusters.\n* AWS [Purchase Reserved Instances For Long Running AWS RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Instances_For_Long_Running_RDS_Instances.ipynb): Ensuring that long-running AWS RDS instances have Reserved Instances purchased for them is an important cost optimization strategy for AWS users. By committing to a specific capacity of RDS instances for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for RDS instances that are expected to run for an extended period and have predictable usage patterns. 
This runbook helps us to optimize costs by ensuring that Reserved Instances are purchased for these RDS instances.\n* AWS [Purchase Reserved Nodes For Long Running AWS Redshift Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Nodes_For_Long_Running_Redshift_Clusters.ipynb): Ensuring that long-running AWS Redshift Clusters have Reserved Nodes purchased for them is a critical cost optimization strategy. By committing to a specific capacity of Redshift nodes for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for Redshift Clusters that are expected to run for an extended period and have predictable usage patterns. This runbook helps us to ensure that Reserved Nodes are purchased for these clusters so that users can effectively plan ahead, reduce their AWS bill, and optimize their costs over time.\n* AWS [Remediate unencrypted S3 buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Remediate_unencrypted_S3_buckets.ipynb): This runbook can be used to filter all the S3 buckets which are unencrypted and apply encryption on them.\n* AWS [Renew AWS SSL Certificates that are close to expiration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Renew_SSL_Certificate.ipynb): This runbook can be used to list all AWS SSL (ACM) Certificates that need to be renewed within a given threshold number of days. Optionally it can renew the certificate using the AWS ACM service.\n* AWS [AWS Restart unhealthy services in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Restart_Unhealthy_Services_Target_Group.ipynb): This runbook restarts unhealthy services in a target group. 
The restart command is provided via a tag attached to the instance.\n* AWS [Restrict S3 Buckets with READ/WRITE Permissions to all Authenticated Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Restrict_S3_Buckets_with_READ_WRITE_Permissions.ipynb): This runbook lists all the S3 buckets, filters the buckets that have public READ/WRITE ACL permissions, and changes those ACL permissions to private in the given region.\n* AWS [Secure Publicly accessible Amazon RDS Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Secure_Publicly_accessible_Amazon_RDS_Snapshot.ipynb): This runbook lists all the manual database snapshots in the given region, finds the publicly accessible DB snapshots in RDS, and modifies them to private.\n* AWS [Stop Idle EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Stop_Idle_EC2_Instances.ipynb): This runbook can be used to stop all EC2 Instances that are idle, using a given CPU threshold and duration.\n* AWS [Stop all Untagged AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Stop_Untagged_EC2_Instances.ipynb): This runbook can be used to stop all EC2 Instances that are untagged.\n* AWS [Terminate EC2 Instances Without Valid Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Terminate_EC2_Instances_Without_Valid_Lifetime_Tag.ipynb): This runbook can be used to list all the EC2 instances which don't have a lifetime tag and then terminate them.\n* AWS [AWS Update RDS Instances from Old to New Generation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_RDS_Instances_from_Old_to_New_Generation.ipynb): This runbook can be used to find the old generation RDS instances for the given AWS region and modify them to the given instance class.\n* AWS [AWS Redshift Update 
Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Redshift_Database.ipynb): This runbook can be used to update a Redshift database from a SQL file stored in S3.\n* AWS [AWS Update Resource Tags](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Resource_Tags.ipynb): This runbook can be used to update an existing tag on any resource in an AWS Region.\n* AWS [AWS Add Tags Across Selected AWS Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Resources_About_To_Expire.ipynb): This finds resources missing a tag, and allows you to choose which resources should have a specific tag/value pair added.\n* AWS [Encrypt unencrypted S3 buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_encrypt_unencrypted_S3_buckets.ipynb): This runbook can be used to filter all the S3 buckets which are unencrypted and apply encryption on them.\n* AWS [Configure URL endpoint on an AWS CloudWatch alarm](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Configure_url_endpoint_on_a_cloudwatch_alarm.ipynb): Configures the URL endpoint on the SNS topic associated with a CloudWatch alarm. This allows external functions to be invoked within unSkript in response to an alert getting generated. Alarms can be attached to the handlers to perform data enrichment or remediation.\n* AWS [Copy AMI to All Given AWS Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Copy_ami_to_all_given_AWS_regions.ipynb): This runbook can be used to copy an AMI from one region to multiple AWS regions using unSkript legos with AWS CLI commands. We can get all the available regions by using AWS CLI commands.\n* AWS [Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detach_Instance_from_ASG.ipynb): This runbook can be used to detach an instance from an Auto Scaling Group. 
You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move an instance out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\n* AWS [Detect ECS failed deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detect_ECS_failed_deployment.ipynb): This runbook checks if there is a failed deployment in progress for a service in an ECS cluster. If it finds one, it sends the list of stopped tasks associated with this deployment and their stopped reason to Slack.\n* AWS [Enforce Mandatory Tags Across All AWS Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Enforce_Mandatory_Tags_Across_All_AWS_Resources.ipynb): This runbook can be used to enforce mandatory tags across all AWS Resources. It gets all the untagged resources of the given region, discovers tag keys of the given region, and attaches mandatory tags to all the untagged resources.\n* AWS [Handle AWS EC2 Instance Scheduled to retire](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Find_EC2_Instances_Scheduled_to_retire.ipynb): To avoid unexpected interruptions, it's a good practice to check to see if there are any EC2 instances scheduled to retire. This runbook can be used to list the EC2 instances that are scheduled to retire. To handle the instance retirement, the user can stop and restart it before the retirement date. That action moves the instance over to a more stable host.\n* AWS [Create an IAM user using Principle of Least Privilege](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/IAM_security_least_privilege.ipynb): Extract usage details from CloudTrail of an existing user. 
Apply the usage to a new IAM Policy, and connect it to a new IAM profile.\n* AWS [Monitor AWS DynamoDB provision capacity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Monitor_AWS_DynamoDB_provision_capacity.ipynb): This runbook can be used to collect data from CloudWatch related to AWS DynamoDB provisioned capacity.\n* AWS [Resize EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_EBS_Volume.ipynb): This runbook resizes the EBS volume to a specified amount. This runbook can be attached to disk-usage-related CloudWatch alarms to do the appropriate resizing. It also extends the filesystem to use the new volume size.\n* AWS [Resize list of PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_List_Of_Pvcs.ipynb): This runbook can be used to resize a list of PVCs in a namespace. By default, all PVCs in the namespace are resized.\n* AWS [Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_PVC.ipynb): This runbook resizes the PVC to the input size.\n* AWS [Restart AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Restart_AWS_EC2_Instances_By_Tag.ipynb): This runbook can be used to restart AWS EC2 Instances.\n* AWS [Launch AWS EC2 from AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Run_EC2_from_AMI.ipynb): This runbook can be used to launch an AWS EC2 instance from an AMI in the given region.\n* AWS [Troubleshooting Your EC2 Configuration in a Private Subnet](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Troubleshooting_Your_EC2_Configuration_in_Private_Subnet.ipynb): This runbook can be used to troubleshoot EC2 instance configuration in a private subnet by capturing the VPC ID for a given instance ID. 
It uses the VPC ID to get Internet Gateway details and then tries to SSH and connect to the internet.\n* Jenkins [Fetch Jenkins Build Logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/Fetch_Jenkins_Build_Logs.ipynb): This runbook fetches the logs for a given Jenkins job and posts them to a Slack channel.\n* Jira [Jira Visualize Issue Time to Resolution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/jira_visualize_time_to_resolution.ipynb): Using the Panel library, visualize the time it takes for issues to close over a specific timeframe.\n* Kubernetes [k8s: Delete Evicted Pods From All Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Delete_Evicted_Pods_From_Namespaces.ipynb): This runbook shows and deletes the evicted pods for a given namespace. If the user provides the namespace input, then it only collects pods for the given namespace; otherwise, it will select all pods from all the namespaces.\n* Kubernetes [k8s: Get kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Get_Kube_System_Config_Map.ipynb): This runbook fetches the kube-system config map for a k8s cluster and publishes the information on a Slack channel.\n* Kubernetes [IP Exhaustion Mitigation: Failing K8s Pod Deletion from Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Delete_Pods_From_Failing_Jobs.ipynb): Preventing IP exhaustion is critical in Kubernetes environments, and a key strategy is deleting failing pods from jobs. Failing pods can consume valuable IP resources, leading to scarcity and inefficiency. By proactively identifying and removing malfunctioning pods, administrators can promptly free up IP addresses, optimizing resource utilization. This approach ensures that IP allocation remains efficient, enabling the cluster to accommodate new pods without experiencing IP exhaustion. 
This runbook helps us to identify failing pods within jobs, thereby maximizing IP availability for other pods and services.\n* Kubernetes [k8s: Get candidate nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Get_Candidate_Nodes_Given_Config.ipynb): This runbook gets the matching nodes for a given configuration (storage, cpu, memory, pod_limit) from a k8s cluster.\n* Kubernetes [Kubernetes Log Healthcheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Log_Healthcheck.ipynb): This runbook checks the logs of every pod in a namespace for warning messages.\n* Kubernetes [k8s: Pod Stuck in CrashLoopBackoff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_CrashLoopBack_State.ipynb): This runbook checks if any Pod(s) are in CrashLoopBackoff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* Kubernetes [k8s: Pod Stuck in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State.ipynb): This runbook checks if any Pod(s) are in ImagePullBackOff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* Kubernetes [k8s: Pod Stuck in ImagePullBackOff State using genAI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State_with_genai.ipynb): This runbook checks if any Pod(s) are in ImagePullBackOff state in a given k8s namespace, using genAI. If it finds any, it tries to find out why the Pod(s) are in that state.\n* Kubernetes [k8s: Pod Stuck in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_Terminating_State.ipynb): This runbook checks if any Pods are in Terminating state in a given k8s namespace. 
If it finds any, it tries to recover them by resetting the finalizers of the pod.\n* Kubernetes [k8s: Resize List of PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_List_of_PVCs.ipynb): This runbook resizes a list of Kubernetes PVCs.\n* Kubernetes [k8s: Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_PVC.ipynb): This runbook resizes a Kubernetes PVC.\n* Kubernetes [Rollback Kubernetes Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Rollback_k8s_Deployment_and_Update_Jira.ipynb): This runbook can be used to roll back a Kubernetes Deployment.\n* Postgresql [Display long running queries in a PostgreSQL database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Postgresql_Display_Long_Running.ipynb): This runbook collects the long-running queries from a database and sends a message to the specified Slack channel. Poorly optimized queries and excessive connections can cause problems in PostgreSQL, impacting upstream services.\n"
  },
  {
    "path": "lists/runbook_COST_OPT.md",
    "content": "* AWS [Add Lifecycle Policy to S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Lifecycle_Policy_To_S3_Buckets.ipynb): Attaching lifecycle policies to AWS S3 buckets enables us to automate the management of the object lifecycle in your storage buckets. By configuring lifecycle policies, you can define rules that determine the actions to be taken on objects based on their age or other criteria. This includes transitioning objects to different storage classes, such as moving infrequently accessed data to lower-cost storage tiers or archiving it to Glacier, as well as setting expiration dates for objects. By attaching lifecycle policies to your S3 buckets, you can optimize storage costs by automatically moving data to the most cost-effective storage tier based on its lifecycle. Additionally, it allows you to efficiently manage data retention and comply with regulatory requirements or business policies regarding data expiration. This runbook helps us find all the buckets without any lifecycle policy and attach one to them.\n* AWS [AWS Update Resources about to expire](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Tag_Across_Selected_AWS_Resources.ipynb): This finds resources that have an expiration tag that is about to expire. It can either send a Slack message in 'auto' mode, or be used to manually remediate the issue interactively.\n* AWS [AWS Bulk Update Resource Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Bulk_Update_Resource_Tag.ipynb): This runbook will find all AWS Resources tagged with a given key:value tag. It will then update the tag's value to a new value. 
This can be used to bulk update the owner of resources, or for any other reason you might need to change the tag value for many AWS resources.\n* AWS [Change AWS EBS Volume To GP3 Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Change_EBS_Volume_To_GP3_Type.ipynb): This runbook can be used to change the type of an EBS volume to GP3 (General Purpose 3). A GP3 type volume has a number of advantages over its predecessors. gp3 volumes are ideal for a wide variety of applications that require high performance at low cost.\n* AWS [Change AWS Route53 TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Change_Route53_TTL.ipynb): For a record in a hosted zone, a lower TTL means that more queries arrive at the name servers because the cached values expire sooner. If you configure a higher TTL for your records, then the intermediate resolvers cache the records for a longer time. As a result, fewer queries are received by the name servers. This configuration reduces the charges corresponding to the DNS queries answered. However, a higher TTL slows the propagation of record changes because the previous values are cached for longer periods. This runbook can be used to configure a higher TTL value.\n* AWS [Delete EBS Volume Attached to Stopped Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_EBS_Volumes_Attached_To_Stopped_Instances.ipynb): EBS (Elastic Block Storage) volumes are attached to EC2 Instances as storage devices. Unused (unattached) EBS Volumes can keep accruing costs even when their associated EC2 instances are no longer running. These volumes need to be deleted if the instances they are attached to are no longer required. 
This runbook helps us find such volumes and delete them.\n* AWS [Delete EBS Volume With Low Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_EBS_Volumes_With_Low_Usage.ipynb): This runbook can help us identify low usage Amazon Elastic Block Store (EBS) volumes and delete these volumes in order to lower the cost of your AWS bill. This is calculated using the VolumeUsage metric. It measures the percentage of the total storage space that is currently being used by an EBS volume. This metric is reported as a percentage value between 0 and 100.\n* AWS [Delete ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_ECS_Clusters_with_Low_CPU_Utilization.ipynb): ECS is a managed service that allows users to run Docker containers on AWS, making it easier to manage and scale containerized applications. However, running ECS clusters with low CPU utilization can result in wasted resources and unnecessary costs. AWS charges for the resources allocated to a cluster, regardless of whether they are fully utilized or not. By deleting clusters that are not being fully utilized, you can reduce the number of resources being allocated and lower the overall cost of running ECS. Furthermore, deleting unused or low-utilization clusters can also improve overall system performance by freeing up resources for other applications that require more processing power. This runbook helps us to identify such clusters and delete them.\n* AWS [Delete AWS ELBs With No Targets Or Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_ELBs_With_No_Targets_Or_Instances.ipynb): ELBs are used to distribute incoming traffic across multiple targets or instances, but if those targets or instances are no longer in use, then the ELBs may be unnecessary and can be deleted to save costs. 
Deleting ELBs with no targets or instances is a simple but effective way to optimize costs in your AWS environment. By identifying and removing these unused ELBs, you can reduce the number of resources you are paying for and avoid unnecessary charges. This runbook helps you identify all types of ELBs (Network, Application, and Classic) that don't have any target groups or instances attached to them.\n* AWS [Delete Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Old_EBS_Snapshots.ipynb): Amazon Elastic Block Store (EBS) snapshots are created incrementally: an initial snapshot will include all the data on the disk, and subsequent snapshots will only store the blocks on the volume that have changed since the prior snapshot. Unchanged data is not stored, but referenced using the previous snapshot. This runbook helps us to find old EBS snapshots and thereby lower storage costs.\n* AWS [Delete RDS Instances with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_RDS_Instances_with_Low_CPU_Utilization.ipynb): Deleting RDS instances with low CPU utilization is a cost optimization strategy that involves identifying RDS instances with consistently low CPU usage and deleting them to save costs. This approach helps to eliminate unnecessary costs associated with running idle database instances that are not being fully utilized. This runbook helps us to find and delete such instances.\n* AWS [Delete Redshift Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Redshift_Clusters_with_Low_CPU_Utilization.ipynb): Redshift clusters are the basic units of compute and storage in Amazon Redshift, and they can be configured to meet specific performance and cost requirements. In order to optimize the cost and performance of Redshift clusters, it is important to regularly monitor their CPU utilization. 
If a cluster is consistently showing low CPU utilization over an extended period of time, it may be a good idea to delete the cluster to save costs. This runbook helps us find such clusters and delete them.\n* AWS [Delete Unused AWS Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_AWS_Secrets.ipynb): This runbook can be used to delete unused secrets in AWS.\n* AWS [Delete Unused AWS Log Streams](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_Log_Streams.ipynb): CloudWatch will retain empty log streams after the data retention time period. Those log streams should be deleted in order to save costs. This runbook can find unused log streams over a threshold number of days and help you delete them.\n* AWS [Delete Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_NAT_Gateways.ipynb): This runbook searches for all unused NAT gateways across all regions and deletes those gateways.\n* AWS [Delete Unused Route53 HealthChecks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_Route53_Healthchecks.ipynb): When we associate health checks with an endpoint, Amazon Route53 sends health check requests to the endpoint IP address. These health checks validate that the endpoint IP addresses are operating as intended. There may be multiple reasons that health checks are lying unused, for example: a health check was mistakenly configured against your application by another customer, a health check was configured from your account for testing purposes but wasn't deleted when testing was complete, a health check was based on domain names and hence requests were sent due to DNS caching, the Elastic Load Balancing service updated its public IP addresses due to scaling and the IP addresses were reassigned to your load balancer, and many more. 
This runbook finds such health checks and deletes them to save AWS costs.\n* AWS [AWS Redshift Get Daily Costs from AWS Products](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Redshift_Daily_Product_Costs.ipynb): This runbook can be used to create charts and alerts around your AWS product usage. It requires a Cost and Usage Report to be live in Redshift.\n* AWS [AWS Redshift Get Daily Costs from EC2 Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Redshift_EC2_Daily_Costs.ipynb): This runbook can be used to create charts and alerts around AWS EC2 usage. It requires a Cost and Usage Report to be live in Redshift.\n* AWS [AWS Lowering CloudTrail Costs by Removing Redundant Trails](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Lowering_AWS_CloudTrail_Costs_by_Removing_Redundant_Trails.ipynb): The AWS CloudTrail service allows developers to enable policies managing compliance, governance, and auditing of their AWS account. In addition, AWS CloudTrail offers logging, monitoring, and storage of any activity around actions related to your AWS structures. The service activates from the moment you set up your AWS account, and while it provides real-time activity visibility, it also means higher AWS costs. This runbook finds redundant trails in AWS and helps you remove them.\n* AWS [Purchase Reserved Nodes For Long Running AWS ElastiCache Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Cache_Nodes_For_Long_Running_ElastiCache_Clusters.ipynb): Ensuring that long-running AWS ElastiCache clusters have Reserved Nodes purchased for them is an effective cost optimization strategy for AWS users. By committing to a specific capacity of ElastiCache nodes for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. 
This approach can help optimize AWS costs for ElastiCache clusters that are expected to run for an extended period and have predictable usage patterns. This runbook helps us optimize costs by ensuring that Reserved Nodes are purchased for these ElastiCache clusters.\n* AWS [Purchase Reserved Instances For Long Running AWS RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Instances_For_Long_Running_RDS_Instances.ipynb): Ensuring that long-running AWS RDS instances have Reserved Instances purchased for them is an important cost optimization strategy for AWS users. By committing to a specific capacity of RDS instances for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for RDS instances that are expected to run for an extended period and have predictable usage patterns. This runbook helps us to optimize costs by ensuring that Reserved Instances are purchased for these RDS instances.\n* AWS [Purchase Reserved Nodes For Long Running AWS Redshift Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Nodes_For_Long_Running_Redshift_Clusters.ipynb): Ensuring that long-running AWS Redshift Clusters have Reserved Nodes purchased for them is a critical cost optimization strategy. By committing to a specific capacity of Redshift nodes for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for Redshift Clusters that are expected to run for an extended period and have predictable usage patterns. 
This runbook helps us to ensure that Reserved Nodes are purchased for these clusters so that users can effectively plan ahead, reduce their AWS bill, and optimize their costs over time.\n* AWS [Release Unattached AWS Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Release_Unattached_Elastic_IPs.ipynb): A disassociated Elastic IP address remains allocated to your account until you explicitly release it. AWS imposes a small hourly charge for Elastic IP addresses that are not associated with a running instance. This runbook can be used to release those unattached AWS Elastic IP addresses.\n* AWS [Stop Idle EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Stop_Idle_EC2_Instances.ipynb): This runbook can be used to stop all EC2 Instances that are idle, using a given CPU threshold and duration.\n* AWS [Stop all Untagged AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Stop_Untagged_EC2_Instances.ipynb): This runbook can be used to stop all EC2 Instances that are untagged.\n* AWS [Terminate EC2 Instances Without Valid Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Terminate_EC2_Instances_Without_Valid_Lifetime_Tag.ipynb): This runbook can be used to list all the EC2 instances which don't have a lifetime tag and then terminate them.\n* AWS [AWS Redshift Update Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Redshift_Database.ipynb): This runbook can be used to update a Redshift database from a SQL file stored in S3.\n* AWS [AWS Update Resource Tags](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Resource_Tags.ipynb): This runbook can be used to update an existing tag on any resource in an AWS Region.\n
* AWS [AWS Add Tags Across Selected AWS Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Resources_About_To_Expire.ipynb): This finds resources missing a tag, and allows you to choose which resources should receive a specific tag/value pair.\n* AWS [Delete Unused AWS NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_Unused_AWS_NAT_Gateways.ipynb): This runbook can be used to identify and remove any unused NAT Gateways. This allows us to adhere to best practices and avoid unnecessary costs. NAT gateways are used to connect a private instance with outside networks. When a NAT gateway is provisioned, AWS charges you based on the number of hours it was available and the data (GB) it processes.\n"
  },
  {
    "path": "lists/runbook_DEVOPS.md",
    "content": "* AWS [Add Lifecycle Policy to S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Lifecycle_Policy_To_S3_Buckets.ipynb): Attaching lifecycle policies to AWS S3 buckets enables us to automate the management of object lifecycle in your storage buckets. By configuring lifecycle policies, you can define rules that determine the actions to be taken on objects based on their age or other criteria. This includes transitioning objects to different storage classes, such as moving infrequently accessed data to lower-cost storage tiers or archiving them to Glacier, as well as setting expiration dates for objects. By attaching lifecycle policies to your S3 buckets, you can optimize storage costs by automatically moving data to the most cost-effective storage tier based on its lifecycle. Additionally, it allows you to efficiently manage data retention and comply with regulatory requirements or business policies regarding data expiration. This runbook helps us find all the buckets without any lifecycle policy and attach one to them.\n* AWS [AWS Bulk Update Resource Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Bulk_Update_Resource_Tag.ipynb): This runbook will find all AWS Resources tagged with a given key:value tag.  It will then update the tag's value to a new value. This can be used to bulk update the owner of resources, or any other reason you might need to change the tag value for many AWS resources.\n* AWS [Change AWS EBS Volume To GP3 Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Change_EBS_Volume_To_GP3_Type.ipynb): This runbook can be used to change the type of an EBS volume to GP3(General Purpose 3). GP3 type volume has a number of advantages over it's predecessors. 
gp3 volumes are ideal for a wide variety of applications that require high performance at low cost.\n* AWS [Change AWS Route53 TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Change_Route53_TTL.ipynb): For a record in a hosted zone, a lower TTL means that more queries arrive at the name servers because the cached values expire sooner. If you configure a higher TTL for your records, then the intermediate resolvers cache the records for a longer time. As a result, there are fewer queries received by the name servers. This configuration reduces the charges corresponding to the DNS queries answered. However, a higher TTL slows the propagation of record changes because the previous values are cached for longer periods. This runbook can be used to configure a higher TTL value.\n* AWS [Create IAM User with policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Create_New_IAM_User_With_Policy.ipynb): Creates a new IAM user with a security policy. Sends confirmation to Slack.\n* AWS [Delete EBS Volume Attached to Stopped Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_EBS_Volumes_Attached_To_Stopped_Instances.ipynb): EBS (Elastic Block Storage) volumes are attached to EC2 Instances as storage devices. Unused (unattached) EBS volumes can keep accruing costs even when their associated EC2 instances are no longer running. These volumes need to be deleted if the instances they are attached to are no longer required. This runbook helps us find such volumes and delete them.\n* AWS [Delete EBS Volume With Low Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_EBS_Volumes_With_Low_Usage.ipynb): This runbook can help us identify low usage Amazon Elastic Block Store (EBS) volumes and delete these volumes in order to lower the cost of your AWS bill. This is calculated using the VolumeUsage metric. 
It measures the percentage of the total storage space that is currently being used by an EBS volume. This metric is reported as a percentage value between 0 and 100.\n* AWS [Delete ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_ECS_Clusters_with_Low_CPU_Utilization.ipynb): ECS is a managed service that allows users to run Docker containers on AWS, making it easier to manage and scale containerized applications. However, running ECS clusters with low CPU utilization can result in wasted resources and unnecessary costs. AWS charges for the resources allocated to a cluster, regardless of whether they are fully utilized or not. By deleting clusters that are not being fully utilized, you can reduce the number of resources being allocated and lower the overall cost of running ECS. Furthermore, deleting unused or low-utilization clusters can also improve overall system performance by freeing up resources for other applications that require more processing power. This runbook helps us to identify such clusters and delete them.\n* AWS [Delete AWS ELBs With No Targets Or Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_ELBs_With_No_Targets_Or_Instances.ipynb): ELBs are used to distribute incoming traffic across multiple targets or instances, but if those targets or instances are no longer in use, then the ELBs may be unnecessary and can be deleted to save costs. Deleting ELBs with no targets or instances is a simple but effective way to optimize costs in your AWS environment. By identifying and removing these unused ELBs, you can reduce the number of resources you are paying for and avoid unnecessary charges. 
This runbook helps you identify all types of ELBs (Network, Application, and Classic) that don't have any target groups or instances attached to them.\n* AWS [Delete IAM profile](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_IAM_User.ipynb): This runbook is the inverse of Create IAM user with profile - it removes the profile, the login, and then the IAM user itself.\n* AWS [Delete Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Old_EBS_Snapshots.ipynb): Amazon Elastic Block Store (EBS) snapshots are created incrementally: an initial snapshot will include all the data on the disk, and subsequent snapshots will only store the blocks on the volume that have changed since the prior snapshot. Unchanged data is not stored, but referenced using the previous snapshot. This runbook helps us to find old EBS snapshots and thereby lower storage costs.\n* AWS [Delete RDS Instances with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_RDS_Instances_with_Low_CPU_Utilization.ipynb): Deleting RDS instances with low CPU utilization is a cost optimization strategy that involves identifying RDS instances with consistently low CPU usage and deleting them to save costs. This approach helps to eliminate unnecessary costs associated with running idle database instances that are not being fully utilized. This runbook helps us to find and delete such instances.\n* AWS [Delete Redshift Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Redshift_Clusters_with_Low_CPU_Utilization.ipynb): Redshift clusters are the basic units of compute and storage in Amazon Redshift, and they can be configured to meet specific performance and cost requirements. In order to optimize the cost and performance of Redshift clusters, it is important to regularly monitor their CPU utilization. 
If a cluster is consistently showing low CPU utilization over an extended period of time, it may be a good idea to delete the cluster to save costs. This runbook helps us find such clusters and delete them.\n* AWS [Delete Unattached AWS EBS Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unattached_EBS_Volume.ipynb): This runbook can be used to delete all unattached EBS Volumes within an AWS region. You can delete an Amazon EBS volume that you no longer need. After deletion, its data is gone and the volume can't be attached to any instance. So before deletion, you can store a snapshot of the volume, which you can use to re-create the volume later.\n* AWS [Delete Unused AWS Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_AWS_Secrets.ipynb): This runbook can be used to delete unused secrets in AWS.\n* AWS [Delete Unused AWS Log Streams](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_Log_Streams.ipynb): CloudWatch will retain empty log streams after the data retention time period. Those log streams should be deleted in order to save costs. This runbook can find unused log streams over a threshold number of days and help you delete them.\n* AWS [Delete Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_NAT_Gateways.ipynb): This runbook searches for all unused NAT gateways across all regions and deletes those gateways.\n* AWS [Delete Unused Route53 HealthChecks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_Route53_Healthchecks.ipynb): When we associate health checks with an endpoint, Amazon Route53 sends health check requests to the endpoint IP address. These health checks validate that the endpoint IP addresses are operating as intended. 
There may be multiple reasons that health checks are lying unused, for example: a health check was mistakenly configured against your application by another customer, a health check was configured from your account for testing purposes but wasn't deleted when testing was complete, a health check was based on domain names and hence requests were sent due to DNS caching, the Elastic Load Balancing service updated its public IP addresses due to scaling and the IP addresses were reassigned to your load balancer, and many more. This runbook finds such health checks and deletes them to save AWS costs.\n* AWS [AWS Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Detach_ec2_Instance_from_ASG.ipynb): This runbook can be used to detach an instance from an Auto Scaling group. You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move an instance out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\n* AWS [AWS EC2 Disk Cleanup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_EC2_Disk_Cleanup.ipynb): This runbook locates large files in an EC2 instance and backs them up into a given S3 bucket. Afterwards, it deletes the files backed up and sends a message on a specified Slack channel. It uses SSH and Linux commands to perform the functions it needs.\n* AWS [Enforce HTTP Redirection across all AWS ALB instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Enforce_HTTP_Redirection_across_AWS_ALB.ipynb): This runbook can be used to enforce HTTP redirection across all AWS ALBs. Web encryption protocols like SSL and TLS have been around for nearly three decades. 
By securing web data in transit, these security measures ensure that third parties can’t simply intercept unencrypted data and cause harm. HTTPS uses the underlying SSL/TLS technology and is the standard way to communicate web data in an encrypted and authenticated manner instead of using the insecure HTTP protocol. In this runbook, we implement the industry best practice of redirecting all unencrypted HTTP data to the secure HTTPS protocol.\n* AWS [AWS Ensure Redshift Clusters have Paused Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Ensure_Redshift_Clusters_have_Paused_Resume_Enabled.ipynb): This runbook finds Redshift clusters that don't have pause resume enabled and schedules the pause resume for the cluster.\n* AWS [AWS Get unhealthy EC2 instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Elb_Unhealthy_Instances.ipynb): This runbook can be used to list unhealthy EC2 instances from an ELB. Sometimes it is difficult to determine why Amazon EC2 Auto Scaling didn't terminate an unhealthy instance from Activity History alone. You can find further details about an unhealthy instance's state, and how to terminate that instance, by checking a few extra things.\n* AWS [List unused Amazon EC2 key pairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Notify_About_Unused_Keypairs.ipynb): This runbook finds all EC2 key pairs that are not used by an EC2 instance and notifies a Slack channel about them. Optionally it can delete the key pairs based on user configuration.\n* AWS [Release Unattached AWS Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Release_Unattached_Elastic_IPs.ipynb): A disassociated Elastic IP address remains allocated to your account until you explicitly release it. AWS imposes a small hourly charge for Elastic IP addresses that are not associated with a running instance. 
This runbook can be used to release those unattached AWS Elastic IP addresses.\n* AWS [Renew AWS SSL Certificates that are close to expiration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Renew_SSL_Certificate.ipynb): This runbook can be used to list all AWS SSL (ACM) Certificates that need to be renewed within a given threshold number of days. Optionally it can renew the certificate using the AWS ACM service.\n* AWS [AWS Restart unhealthy services in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Restart_Unhealthy_Services_Target_Group.ipynb): This runbook restarts unhealthy services in a target group. The restart command is provided via a tag attached to the instance.\n* AWS [Terminate EC2 Instances Without Valid Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Terminate_EC2_Instances_Without_Valid_Lifetime_Tag.ipynb): This runbook can be used to list all the EC2 instances which don't have a lifetime tag and then terminate them.\n* AWS [Copy AMI to All Given AWS Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Copy_ami_to_all_given_AWS_regions.ipynb): This runbook can be used to copy an AMI from one region to multiple AWS regions using unSkript legos with AWS CLI commands. We can get all the available regions by using AWS CLI commands.\n* AWS [Delete Unused AWS NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_Unused_AWS_NAT_Gateways.ipynb): This runbook can be used to identify and remove any unused NAT Gateways. This allows us to adhere to best practices and avoid unnecessary costs. NAT gateways are used to connect a private instance with outside networks. 
When a NAT gateway is provisioned, AWS charges you based on the number of hours it was available and the data (GB) it processes.\n* AWS [Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detach_Instance_from_ASG.ipynb): This runbook can be used to detach an instance from an Auto Scaling group. You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move an instance out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\n* AWS [Detect ECS failed deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detect_ECS_failed_deployment.ipynb): This runbook checks if there is a failed deployment in progress for a service in an ECS cluster. If it finds one, it sends the list of stopped tasks associated with this deployment and their stopped reasons to Slack.\n* AWS [Enforce Mandatory Tags Across All AWS Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Enforce_Mandatory_Tags_Across_All_AWS_Resources.ipynb): This runbook can be used to enforce mandatory tags across all AWS resources. It gets all the untagged resources of the given region, discovers the tag keys of the given region, and attaches mandatory tags to all the untagged resources.\n* AWS [Handle AWS EC2 Instance Scheduled to retire](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Find_EC2_Instances_Scheduled_to_retire.ipynb): To avoid unexpected interruptions, it's a good practice to check to see if there are any EC2 instances scheduled to retire. This runbook can be used to list the EC2 instances that are scheduled to retire. To handle the instance retirement, the user can stop and restart it before the retirement date. 
That action moves the instance over to a more stable host.\n* AWS [Create an IAM user using Principle of Least Privilege](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/IAM_security_least_privilege.ipynb): Extract usage details from CloudTrail for an existing user. Apply the usage to a new IAM Policy, and connect it to a new IAM profile.\n* AWS [Monitor AWS DynamoDB provision capacity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Monitor_AWS_DynamoDB_provision_capacity.ipynb): This runbook can be used to collect the data from CloudWatch related to AWS DynamoDB provisioned capacity.\n* AWS [Resize EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_EBS_Volume.ipynb): This runbook resizes the EBS volume to a specified amount. This runbook can be attached to disk usage related CloudWatch alarms to do the appropriate resizing. It also extends the filesystem to use the new volume size.\n* AWS [Resize list of pvcs.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_List_Of_Pvcs.ipynb): This runbook can be used to resize a list of PVCs in a namespace. By default, it resizes all PVCs.\n* AWS [Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_PVC.ipynb): This runbook resizes the PVC to the input size.\n* AWS [Launch AWS EC2 from AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Run_EC2_from_AMI.ipynb): This lego can be used to launch an AWS EC2 instance from an AMI in the given region.\n* AWS [Troubleshooting Your EC2 Configuration in a Private Subnet](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Troubleshooting_Your_EC2_Configuration_in_Private_Subnet.ipynb): This runbook can be used to troubleshoot EC2 instance configuration in a private subnet by capturing the VPC ID for a given instance ID. 
It uses the VPC ID to get Internet Gateway details, then tries to SSH and connect to the internet.\n* Jenkins [Fetch Jenkins Build Logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/Fetch_Jenkins_Build_Logs.ipynb): This runbook fetches the logs for a given Jenkins job and posts them to a Slack channel.\n* Kubernetes [k8s: Delete Evicted Pods From All Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Delete_Evicted_Pods_From_Namespaces.ipynb): This runbook shows and deletes the evicted pods for a given namespace. If the user provides the namespace input, then it only collects pods for the given namespace; otherwise, it will select all pods from all the namespaces.\n* Kubernetes [k8s: Get kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Get_Kube_System_Config_Map.ipynb): This runbook fetches the kube-system config map for a k8s cluster and publishes the information on a Slack channel.\n* Kubernetes [IP Exhaustion Mitigation: Failing K8s Pod Deletion from Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Delete_Pods_From_Failing_Jobs.ipynb): Preventing IP exhaustion is critical in Kubernetes environments, and a key strategy is deleting failing pods from jobs. Failing pods can consume valuable IP resources, leading to scarcity and inefficiency. By proactively identifying and removing malfunctioning pods, administrators can promptly free up IP addresses, optimizing resource utilization. This approach ensures that IP allocation remains efficient, enabling the cluster to accommodate new pods without experiencing IP exhaustion. 
This runbook helps us to identify failing pods within jobs, thereby maximizing IP availability for other pods and services.\n* Kubernetes [k8s: Get candidate nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Get_Candidate_Nodes_Given_Config.ipynb): This runbook gets the matching nodes for a given configuration (storage, cpu, memory, pod_limit) from a k8s cluster.\n* Kubernetes [Kubernetes Log Healthcheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Log_Healthcheck.ipynb): This RunBook checks the logs of every pod in a namespace for warning messages.\n* Kubernetes [k8s: Pod Stuck in CrashLoopBackoff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_CrashLoopBack_State.ipynb): This runbook checks if any Pod(s) are in the CrashLoopBackOff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* Kubernetes [k8s: Pod Stuck in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State.ipynb): This runbook checks if any Pod(s) are in the ImagePullBackOff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* Kubernetes [k8s: Pod Stuck in ImagePullBackOff State using genAI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State_with_genai.ipynb): This runbook checks, using genAI, if any Pod(s) are in the ImagePullBackOff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* Kubernetes [k8s: Pod Stuck in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_Terminating_State.ipynb): This runbook checks if any Pods are in the Terminating state in a given k8s namespace. 
If it finds any, it tries to recover them by resetting the pods' finalizers.\n* Kubernetes [k8s: Resize List of PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_List_of_PVCs.ipynb): This runbook resizes a list of Kubernetes PVCs.\n* Kubernetes [k8s: Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_PVC.ipynb): This runbook resizes a Kubernetes PVC.\n* Kubernetes [Rollback Kubernetes Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Rollback_k8s_Deployment_and_Update_Jira.ipynb): This runbook can be used to roll back a Kubernetes Deployment.\n* Postgresql [Display long running queries in a PostgreSQL database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Postgresql_Display_Long_Running.ipynb): This runbook collects the long-running queries from a database and sends a message to the specified Slack channel. Poorly optimized queries and excessive connections can cause problems in PostgreSQL, impacting upstream services.\n"
  },
  {
    "path": "lists/runbook_IAM.md",
    "content": "* AWS [AWS Access Key Rotation for IAM users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Access_Key_Rotation.ipynb): This runbook can be used to configure AWS Access Key rotation. Changing access keys (which consist of an access key ID and a secret access key) on a regular schedule is a well-known security best practice because it shortens the period an access key is active and therefore reduces the business impact if they are compromised. Having an established process that is run regularly also ensures the operational steps around key rotation are verified, so changing a key is never a scary step.\n* AWS [AWS Add Mandatory tags to EC2](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Mandatory_tags_to_EC2.ipynb): This xRunBook is a set of example actions that could be used to establish mandatory tagging to EC2 instances.  First testing instances for compliance, and creating reports of instances that are missing the required tags. There is also and action to add tags to an instance - to help bring them into tag compliance.\n* AWS [Create a new AWS IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Add_new_IAM_user.ipynb): AWS has an inbuilt identity and access management system known as AWS IAM. IAM supports the concept of users, group, roles and privileges. IAM user is an identity that can be created and assigned some privileges. This runbook can be used to create an AWS IAM User\n* AWS [Update and Manage AWS User permission](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Update_and_Manage_AWS_User_Permission.ipynb): This runbook can be used Update and Manage AWS IAM User Permission\n"
  },
  {
    "path": "lists/runbook_SECOPS.md",
    "content": "* AWS [AWS Access Key Rotation for IAM users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Access_Key_Rotation.ipynb): This runbook can be used to configure AWS Access Key rotation. Changing access keys (which consist of an access key ID and a secret access key) on a regular schedule is a well-known security best practice because it shortens the period an access key is active and therefore reduces the business impact if they are compromised. Having an established process that is run regularly also ensures the operational steps around key rotation are verified, so changing a key is never a scary step.\n* AWS [AWS Add Mandatory tags to EC2](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Mandatory_tags_to_EC2.ipynb): This xRunBook is a set of example actions that could be used to establish mandatory tagging to EC2 instances.  First testing instances for compliance, and creating reports of instances that are missing the required tags. There is also and action to add tags to an instance - to help bring them into tag compliance.\n* AWS [Enforce HTTP Redirection across all AWS ALB instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Enforce_HTTP_Redirection_across_AWS_ALB.ipynb): This runbook can be used to enforce HTTP redirection across all AWS ALBs. Web encryption protocols like SSL and TLS have been around for nearly three decades. By securing web data in transit, these security measures ensure that third parties can’t simply intercept unencrypted data and cause harm. HTTPS uses the underlying SSL/TLS technology and is the standard way to communicate web data in an encrypted and authenticated manner instead of using insecure HTTP protocol. 
In this runbook, we implement the industry best practice of redirecting all unencrypted HTTP data to the secure HTTPS protocol.\n* AWS [Publicly Accessible Amazon RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Publicly_Accessible_Amazon_RDS_Instances.ipynb): This runbook can be used to find the publicly accessible RDS instances for the given AWS region.\n* AWS [Remediate unencrypted S3 buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Remediate_unencrypted_S3_buckets.ipynb): This runbook can be used to find all the unencrypted S3 buckets and apply encryption to them.\n* AWS [Renew AWS SSL Certificates that are close to expiration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Renew_SSL_Certificate.ipynb): This runbook can be used to list all AWS SSL (ACM) Certificates that need to be renewed within a given threshold number of days. Optionally it can renew the certificate using the AWS ACM service.\n* AWS [Restrict S3 Buckets with READ/WRITE Permissions to all Authenticated Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Restrict_S3_Buckets_with_READ_WRITE_Permissions.ipynb): This runbook lists all the S3 buckets, filters buckets that have public READ/WRITE ACL permissions, and changes those ACL permissions to private in the given region.\n* AWS [Secure Publicly accessible Amazon RDS Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Secure_Publicly_accessible_Amazon_RDS_Snapshot.ipynb): This lego can be used to list all the manual database snapshots in the given region. 
It gets the publicly accessible DB snapshots in RDS and modifies them to private.\n* AWS [AWS Update RDS Instances from Old to New Generation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_RDS_Instances_from_Old_to_New_Generation.ipynb): This runbook can be used to find the old generation RDS instances for the given AWS region and modify them to the given instance class.\n* AWS [Encrypt unencrypted S3 buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_encrypt_unencrypted_S3_buckets.ipynb): This runbook can be used to find all the unencrypted S3 buckets and apply encryption to them.\n* AWS [Create a new AWS IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Add_new_IAM_user.ipynb): AWS has an inbuilt identity and access management system known as AWS IAM. IAM supports the concept of users, groups, roles, and privileges. An IAM user is an identity that can be created and assigned some privileges. This runbook can be used to create an AWS IAM User.\n* AWS [Create an IAM user using Principle of Least Privilege](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/IAM_security_least_privilege.ipynb): Extracts usage details from CloudTrail for an existing user, applies the usage to a new IAM Policy, and connects it to a new IAM profile.\n"
  },
  {
    "path": "lists/runbook_SRE.md",
    "content": "* AWS [Add Lifecycle Policy to S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Lifecycle_Policy_To_S3_Buckets.ipynb): Attaching lifecycle policies to AWS S3 buckets enables us to automate the management of object lifecycle in your storage buckets. By configuring lifecycle policies, you can define rules that determine the actions to be taken on objects based on their age or other criteria. This includes transitioning objects to different storage classes, such as moving infrequently accessed data to lower-cost storage tiers or archiving them to Glacier, as well as setting expiration dates for objects. By attaching lifecycle policies to your S3 buckets, you can optimize storage costs by automatically moving data to the most cost-effective storage tier based on its lifecycle. Additionally, it allows you to efficiently manage data retention and comply with regulatory requirements or business policies regarding data expiration. This runbook helps us find all the buckets without any lifecycle policy and attach one to them.\n* AWS [Change AWS EBS Volume To GP3 Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Change_EBS_Volume_To_GP3_Type.ipynb): This runbook can be used to change the type of an EBS volume to GP3(General Purpose 3). GP3 type volume has a number of advantages over it's predecessors. gp3 volumes are ideal for a wide variety of applications that require high performance at low cost\n* AWS [Change AWS Route53 TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Change_Route53_TTL.ipynb): For a record in a hosted zone, lower TTL means that more queries arrive at the name servers because the cached values expire sooner. If you configure a higher TTL for your records, then the intermediate resolvers cache the records for longer time. As a result, there are fewer queries received by the name servers. 
This configuration reduces the charges corresponding to the DNS queries answered. However, a higher TTL slows the propagation of record changes because the previous values are cached for longer periods. This runbook can be used to configure a higher TTL value.\n* AWS [Create IAM User with policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Create_New_IAM_User_With_Policy.ipynb): Creates a new IAM user with a security policy. Sends confirmation to Slack.\n* AWS [Delete EBS Volume Attached to Stopped Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_EBS_Volumes_Attached_To_Stopped_Instances.ipynb): EBS (Elastic Block Storage) volumes are attached to EC2 Instances as storage devices. Unused (unattached) EBS volumes can keep accruing costs even when their associated EC2 instances are no longer running. These volumes need to be deleted if the instances they are attached to are no longer required. This runbook helps us find such volumes and delete them.\n* AWS [Delete EBS Volume With Low Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_EBS_Volumes_With_Low_Usage.ipynb): This runbook can help us identify low usage Amazon Elastic Block Store (EBS) volumes and delete these volumes in order to lower the cost of your AWS bill. This is calculated using the VolumeUsage metric. It measures the percentage of the total storage space that is currently being used by an EBS volume. This metric is reported as a percentage value between 0 and 100.\n* AWS [Delete ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_ECS_Clusters_with_Low_CPU_Utilization.ipynb): ECS clusters are a managed service that allows users to run Docker containers on AWS, making it easier to manage and scale containerized applications. 
However, running ECS clusters with low CPU utilization can result in wasted resources and unnecessary costs. AWS charges for the resources allocated to a cluster, regardless of whether they are fully utilized or not. By deleting clusters that are not being fully utilized, you can reduce the number of resources being allocated and lower the overall cost of running ECS. Furthermore, deleting unused or low-utilization clusters can also improve overall system performance by freeing up resources for other applications that require more processing power. This runbook helps us to identify such clusters and delete them.\n* AWS [Delete AWS ELBs With No Targets Or Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_ELBs_With_No_Targets_Or_Instances.ipynb): ELBs are used to distribute incoming traffic across multiple targets or instances, but if those targets or instances are no longer in use, then the ELBs may be unnecessary and can be deleted to save costs. Deleting ELBs with no targets or instances is a simple but effective way to optimize costs in your AWS environment. By identifying and removing these unused ELBs, you can reduce the number of resources you are paying for and avoid unnecessary charges. 
This runbook helps you identify all types of ELBs (Network, Application, and Classic) that don't have any target groups or instances attached to them.\n* AWS [Delete IAM profile](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_IAM_User.ipynb): This runbook is the inverse of Create IAM user with profile: it removes the profile, the login, and then the IAM user itself.\n* AWS [Delete Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Old_EBS_Snapshots.ipynb): Amazon Elastic Block Store (EBS) snapshots are created incrementally: an initial snapshot will include all the data on the disk, and subsequent snapshots will only store the blocks on the volume that have changed since the prior snapshot. Unchanged data is not stored, but referenced using the previous snapshot. This runbook helps us to find old EBS snapshots and thereby lower storage costs.\n* AWS [Delete RDS Instances with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_RDS_Instances_with_Low_CPU_Utilization.ipynb): Deleting RDS instances with low CPU utilization is a cost optimization strategy that involves identifying RDS instances with consistently low CPU usage and deleting them to save costs. This approach helps to eliminate unnecessary costs associated with running idle database instances that are not being fully utilized. This runbook helps us to find and delete such instances.\n* AWS [Delete Redshift Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Redshift_Clusters_with_Low_CPU_Utilization.ipynb): Redshift clusters are the basic units of compute and storage in Amazon Redshift, and they can be configured to meet specific performance and cost requirements. In order to optimize the cost and performance of Redshift clusters, it is important to regularly monitor their CPU utilization. 
If a cluster is consistently showing low CPU utilization over an extended period of time, it may be a good idea to delete the cluster to save costs. This runbook helps us find such clusters and delete them.\n* AWS [Delete Unattached AWS EBS Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unattached_EBS_Volume.ipynb): This runbook can be used to delete all unattached EBS volumes within an AWS region. You can delete an Amazon EBS volume that you no longer need. After deletion, its data is gone and the volume can't be attached to any instance. So before deletion, you can store a snapshot of the volume, which you can use to re-create the volume later.\n* AWS [Delete Unused AWS Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_AWS_Secrets.ipynb): This runbook can be used to delete unused secrets in AWS.\n* AWS [Delete Unused AWS Log Streams](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_Log_Streams.ipynb): CloudWatch will retain empty Log Streams after the data retention time period. Those log streams should be deleted in order to save costs. This runbook can find unused log streams over a threshold number of days and help you delete them.\n* AWS [Delete Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_NAT_Gateways.ipynb): This runbook searches for all unused NAT gateways across all regions and deletes them.\n* AWS [Delete Unused Route53 HealthChecks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_Route53_Healthchecks.ipynb): When we associate healthchecks with an endpoint, Amazon Route53 sends health check requests to the endpoint IP address. These health checks validate that the endpoint IP addresses are operating as intended. 
There may be multiple reasons that healthchecks are lying unused: for example, a health check was mistakenly configured against your application by another customer, a health check was configured from your account for testing purposes but wasn't deleted when testing was complete, a health check was based on domain names and hence requests were sent due to DNS caching, the Elastic Load Balancing service updated its public IP addresses due to scaling and the IP addresses were reassigned to your load balancer, and many more. This runbook finds such healthchecks and deletes them to save AWS costs.\n* AWS [AWS Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Detach_ec2_Instance_from_ASG.ipynb): This runbook can be used to detach an instance from an Auto Scaling Group. You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move an instance out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\n* AWS [AWS EC2 Disk Cleanup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_EC2_Disk_Cleanup.ipynb): This runbook locates large files in an EC2 instance and backs them up into a given S3 bucket. Afterwards, it deletes the backed-up files and sends a message on a specified Slack channel. 
It uses SSH and Linux commands to perform the functions it needs.\n* AWS [AWS Ensure Redshift Clusters have Paused Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Ensure_Redshift_Clusters_have_Paused_Resume_Enabled.ipynb): This runbook finds Redshift clusters that don't have pause resume enabled and schedules the pause resume for the cluster.\n* AWS [AWS Get unhealthy EC2 instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Elb_Unhealthy_Instances.ipynb): This runbook can be used to list unhealthy EC2 instances from an ELB. Sometimes it is difficult to determine why Amazon EC2 Auto Scaling didn't terminate an unhealthy instance from the Activity History alone. You can find further details about an unhealthy instance's state, and how to terminate that instance, by checking a few extra things.\n* AWS [List unused Amazon EC2 key pairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Notify_About_Unused_Keypairs.ipynb): This runbook finds all EC2 key pairs that are not used by an EC2 instance and notifies a Slack channel about them. Optionally it can delete the key pairs based on user configuration.\n* AWS [Release Unattached AWS Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Release_Unattached_Elastic_IPs.ipynb): A disassociated Elastic IP address remains allocated to your account until you explicitly release it. AWS imposes a small hourly charge for Elastic IP addresses that are not associated with a running instance. This runbook can be used to release those unattached AWS Elastic IP addresses.\n* AWS [AWS Restart unhealthy services in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Restart_Unhealthy_Services_Target_Group.ipynb): This runbook restarts unhealthy services in a target group. 
The restart command is provided via a tag attached to the instance.\n* AWS [Copy AMI to All Given AWS Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Copy_ami_to_all_given_AWS_regions.ipynb): This runbook can be used to copy an AMI from one region to multiple AWS regions using unSkript legos with AWS CLI commands. We can get all the available regions by using AWS CLI commands.\n* AWS [Delete Unused AWS NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_Unused_AWS_NAT_Gateways.ipynb): This runbook can be used to identify and remove any unused NAT Gateways. This allows us to adhere to best practices and avoid unnecessary costs. NAT gateways are used to connect a private instance with outside networks. When a NAT gateway is provisioned, AWS charges you based on the number of hours it was available and the data (GB) it processes.\n* AWS [Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detach_Instance_from_ASG.ipynb): This runbook can be used to detach an instance from an Auto Scaling Group. You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move an instance out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\n* AWS [Detect ECS failed deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detect_ECS_failed_deployment.ipynb): This runbook checks if there is a failed deployment in progress for a service in an ECS cluster. 
If it finds one, it sends the list of stopped tasks associated with this deployment and their stopped reasons to Slack.\n* AWS [Enforce Mandatory Tags Across All AWS Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Enforce_Mandatory_Tags_Across_All_AWS_Resources.ipynb): This runbook can be used to enforce mandatory tags across all AWS resources. It gets all the untagged resources of the given region, discovers the tag keys of the given region, and attaches mandatory tags to all the untagged resources.\n* AWS [Handle AWS EC2 Instance Scheduled to retire](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Find_EC2_Instances_Scheduled_to_retire.ipynb): To avoid unexpected interruptions, it's a good practice to check to see if there are any EC2 instances scheduled to retire. This runbook can be used to list the EC2 instances that are scheduled to retire. To handle the instance retirement, the user can stop and restart it before the retirement date. That action moves the instance over to a more stable host.\n* AWS [Monitor AWS DynamoDB provision capacity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Monitor_AWS_DynamoDB_provision_capacity.ipynb): This runbook can be used to collect CloudWatch data related to AWS DynamoDB provisioned capacity.\n* AWS [Resize EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_EBS_Volume.ipynb): This runbook resizes the EBS volume to a specified size. It can be attached to disk usage related CloudWatch alarms to do the appropriate resizing. It also extends the filesystem to use the new volume size.\n* AWS [Resize list of pvcs.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_List_Of_Pvcs.ipynb): This runbook can be used to resize a list of PVCs in a namespace. 
By default, it resizes all PVCs in the namespace.\n* AWS [Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_PVC.ipynb): This runbook resizes the PVC to the input size.\n* AWS [Restart AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Restart_AWS_EC2_Instances_By_Tag.ipynb): This runbook can be used to restart AWS EC2 instances.\n* AWS [Launch AWS EC2 from AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Run_EC2_from_AMI.ipynb): This lego can be used to launch an AWS EC2 instance from an AMI in the given region.\n* AWS [Troubleshooting Your EC2 Configuration in a Private Subnet](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Troubleshooting_Your_EC2_Configuration_in_Private_Subnet.ipynb): This runbook can be used to troubleshoot EC2 instance configuration in a private subnet by capturing the VPC ID for a given instance ID. It uses the VPC ID to get Internet Gateway details and then tries to SSH and connect to the internet.\n* Jenkins [Fetch Jenkins Build Logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/Fetch_Jenkins_Build_Logs.ipynb): This runbook fetches the logs for a given Jenkins job and posts them to a Slack channel.\n* Jira [Jira Visualize Issue Time to Resolution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/jira_visualize_time_to_resolution.ipynb): Using the Panel Library, visualize the time it takes for issues to close over a specific timeframe.\n* Kubernetes [k8s: Delete Evicted Pods From All Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Delete_Evicted_Pods_From_Namespaces.ipynb): This runbook shows and deletes the evicted pods for a given namespace. 
If the user provides the namespace input, then it only collects pods for the given namespace; otherwise, it will select all pods from all the namespaces.\n* Kubernetes [k8s: Get kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Get_Kube_System_Config_Map.ipynb): This runbook fetches the kube system config map for a k8s cluster and publishes the information on a Slack channel.\n* Kubernetes [IP Exhaustion Mitigation: Failing K8s Pod Deletion from Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Delete_Pods_From_Failing_Jobs.ipynb): Preventing IP exhaustion is critical in Kubernetes environments, and a key strategy is deleting failing pods from jobs. Failing pods can consume valuable IP resources, leading to scarcity and inefficiency. By proactively identifying and removing malfunctioning pods, administrators can promptly free up IP addresses, optimizing resource utilization. This approach ensures that IP allocation remains efficient, enabling the cluster to accommodate new pods without experiencing IP exhaustion. 
This runbook helps us to identify failing pods within jobs, thereby maximizing IP availability for other pods and services.\n* Kubernetes [k8s: Get candidate nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Get_Candidate_Nodes_Given_Config.ipynb): This runbook gets the matching nodes for a given configuration (storage, cpu, memory, pod_limit) from a k8s cluster.\n* Kubernetes [Kubernetes Log Healthcheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Log_Healthcheck.ipynb): This RunBook checks the logs of every pod in a namespace for warning messages.\n* Kubernetes [k8s: Pod Stuck in CrashLoopBackoff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_CrashLoopBack_State.ipynb): This runbook checks if any Pod(s) are in the CrashLoopBackOff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* Kubernetes [k8s: Pod Stuck in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State.ipynb): This runbook checks if any Pod(s) are in the ImagePullBackOff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* Kubernetes [k8s: Pod Stuck in ImagePullBackOff State using genAI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State_with_genai.ipynb): This runbook checks, using genAI, if any Pod(s) are in the ImagePullBackOff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* Kubernetes [k8s: Pod Stuck in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_Terminating_State.ipynb): This runbook checks if any Pods are in the Terminating state in a given k8s namespace. 
If it finds any, it tries to recover them by resetting the pods' finalizers.\n* Kubernetes [k8s: Resize List of PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_List_of_PVCs.ipynb): This runbook resizes a list of Kubernetes PVCs.\n* Kubernetes [k8s: Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_PVC.ipynb): This runbook resizes a Kubernetes PVC.\n* Kubernetes [Rollback Kubernetes Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Rollback_k8s_Deployment_and_Update_Jira.ipynb): This runbook can be used to roll back a Kubernetes Deployment.\n* Postgresql [Display long running queries in a PostgreSQL database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Postgresql_Display_Long_Running.ipynb): This runbook collects the long-running queries from a database and sends a message to the specified Slack channel. Poorly optimized queries and excessive connections can cause problems in PostgreSQL, impacting upstream services.\n"
  },
  {
    "path": "lists/runbook_TROUBLESHOOTING.md",
    "content": "* ElasticSearch [Elasticsearch Rolling restart](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/Elasticsearch_Rolling_Restart.ipynb): This runbook can be used to perform a rolling restart on Elasticsearch.\n"
  },
  {
    "path": "lists/xRunBook_list.md",
    "content": "\n# AWS\n* [AWS Access Key Rotation for IAM users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Access_Key_Rotation.ipynb): This runbook can be used to configure AWS Access Key rotation. Changing access keys (which consist of an access key ID and a secret access key) on a regular schedule is a well-known security best practice because it shortens the period an access key is active and therefore reduces the business impact if they are compromised. Having an established process that is run regularly also ensures the operational steps around key rotation are verified, so changing a key is never a scary step.\n* [Add Lifecycle Policy to S3 Buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Lifecycle_Policy_To_S3_Buckets.ipynb): Attaching lifecycle policies to AWS S3 buckets enables you to automate the management of the object lifecycle in your storage buckets. By configuring lifecycle policies, you can define rules that determine the actions to be taken on objects based on their age or other criteria. This includes transitioning objects to different storage classes, such as moving infrequently accessed data to lower-cost storage tiers or archiving them to Glacier, as well as setting expiration dates for objects. By attaching lifecycle policies to your S3 buckets, you can optimize storage costs by automatically moving data to the most cost-effective storage tier based on its lifecycle. Additionally, it allows you to efficiently manage data retention and comply with regulatory requirements or business policies regarding data expiration. This runbook helps us find all the buckets without any lifecycle policy and attach one to them.\n* [AWS Add Mandatory tags to EC2](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Mandatory_tags_to_EC2.ipynb): This xRunBook is a set of example actions that could be used to establish mandatory tagging for EC2 instances.  
It first tests instances for compliance and creates reports of instances that are missing the required tags. There is also an action to add tags to an instance - to help bring them into tag compliance.\n* [AWS Update Resources about to expire](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Tag_Across_Selected_AWS_Resources.ipynb): This finds resources that have an expiration tag that is about to expire.  It can either send a Slack message in 'auto' mode, or be used to manually remediate the issue interactively.\n* [AWS Bulk Update Resource Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Bulk_Update_Resource_Tag.ipynb): This runbook will find all AWS Resources tagged with a given key:value tag.  It will then update the tag's value to a new value. This can be used to bulk update the owner of resources, or any other reason you might need to change the tag value for many AWS resources.\n* [Change AWS EBS Volume To GP3 Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Change_EBS_Volume_To_GP3_Type.ipynb): This runbook can be used to change the type of an EBS volume to GP3 (General Purpose 3). The GP3 volume type has a number of advantages over its predecessors. GP3 volumes are ideal for a wide variety of applications that require high performance at low cost.\n* [Change AWS Route53 TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Change_Route53_TTL.ipynb): For a record in a hosted zone, lower TTL means that more queries arrive at the name servers because the cached values expire sooner. If you configure a higher TTL for your records, then the intermediate resolvers cache the records for a longer time. As a result, there are fewer queries received by the name servers. This configuration reduces the charges corresponding to the DNS queries answered. 
However, higher TTL slows the propagation of record changes because the previous values are cached for longer periods. This runbook can be used to configure a higher TTL value.\n* [Create IAM User with policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Create_New_IAM_User_With_Policy.ipynb): Creates a new IAM user with a security policy.  Sends confirmation to Slack.\n* [Delete EBS Volume Attached to Stopped Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_EBS_Volumes_Attached_To_Stopped_Instances.ipynb): EBS (Elastic Block Storage) volumes are attached to EC2 Instances as storage devices. Unused (Unattached) EBS Volumes can keep accruing costs even when their associated EC2 instances are no longer running. These volumes need to be deleted if the instances they are attached to are no longer required. This runbook helps us find such volumes and delete them.\n* [Delete EBS Volume With Low Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_EBS_Volumes_With_Low_Usage.ipynb): This runbook can help us identify low usage Amazon Elastic Block Store (EBS) volumes and delete these volumes in order to lower the cost of your AWS bill. This is calculated using the VolumeUsage metric. It measures the percentage of the total storage space that is currently being used by an EBS volume. This metric is reported as a percentage value between 0 and 100.\n* [Delete ECS Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_ECS_Clusters_with_Low_CPU_Utilization.ipynb): ECS clusters are a managed service that allows users to run Docker containers on AWS, making it easier to manage and scale containerized applications. However, running ECS clusters with low CPU utilization can result in wasted resources and unnecessary costs. 
AWS charges for the resources allocated to a cluster, regardless of whether they are fully utilized or not. By deleting clusters that are not being fully utilized, you can reduce the number of resources being allocated and lower the overall cost of running ECS. Furthermore, deleting unused or low-utilization clusters can also improve overall system performance by freeing up resources for other applications that require more processing power. This runbook helps us to identify such clusters and delete them.\n* [Delete AWS ELBs With No Targets Or Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_ELBs_With_No_Targets_Or_Instances.ipynb): ELBs are used to distribute incoming traffic across multiple targets or instances, but if those targets or instances are no longer in use, then the ELBs may be unnecessary and can be deleted to save costs. Deleting ELBs with no targets or instances is a simple but effective way to optimize costs in your AWS environment. By identifying and removing these unused ELBs, you can reduce the number of resources you are paying for and avoid unnecessary charges. This runbook helps you identify all types of ELBs (Network, Application, Classic) that don't have any target groups or instances attached to them.\n* [Delete IAM profile](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_IAM_User.ipynb): This runbook is the inverse of Create IAM user with profile - removes the profile, the login and then the IAM user itself.\n* [Delete Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Old_EBS_Snapshots.ipynb): Amazon Elastic Block Store (EBS) snapshots are created incrementally: an initial snapshot will include all the data on the disk, and subsequent snapshots will only store the blocks on the volume that have changed since the prior snapshot. Unchanged data is not stored, but referenced using the previous snapshot. 
This runbook helps us to find old EBS snapshots and thereby lower storage costs.\n* [Delete RDS Instances with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_RDS_Instances_with_Low_CPU_Utilization.ipynb): Deleting RDS instances with low CPU utilization is a cost optimization strategy that involves identifying RDS instances with consistently low CPU usage and deleting them to save costs. This approach helps to eliminate unnecessary costs associated with running idle database instances that are not being fully utilized. This runbook helps us to find and delete such instances.\n* [Delete Redshift Clusters with Low CPU Utilization](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Redshift_Clusters_with_Low_CPU_Utilization.ipynb): Redshift clusters are the basic units of compute and storage in Amazon Redshift, and they can be configured to meet specific performance and cost requirements. In order to optimize the cost and performance of Redshift clusters, it is important to regularly monitor their CPU utilization. If a cluster is consistently showing low CPU utilization over an extended period of time, it may be a good idea to delete the cluster to save costs. This runbook helps us find such clusters and delete them.\n* [Delete Unattached AWS EBS Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unattached_EBS_Volume.ipynb): This runbook can be used to delete all unattached EBS Volumes within an AWS region. You can delete an Amazon EBS volume that you no longer need. After deletion, its data is gone and the volume can't be attached to any instance. 
So before deletion, you can store a snapshot of the volume, which you can use to re-create the volume later.\n* [Delete Unused AWS Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_AWS_Secrets.ipynb): This runbook can be used to delete unused secrets in AWS.\n* [Delete Unused AWS Log Streams](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_Log_Streams.ipynb): CloudWatch will retain empty Log Streams after the data retention time period. Those log streams should be deleted in order to save costs. This runbook can find unused log streams over a threshold number of days and help you delete them.\n* [Delete Unused NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_NAT_Gateways.ipynb): This runbook searches for all unused NAT gateways across all regions and deletes them.\n* [Delete Unused Route53 HealthChecks](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Delete_Unused_Route53_Healthchecks.ipynb): When we associate health checks with an endpoint, Amazon Route53 sends health check requests to the endpoint IP address. These health checks validate that the endpoint IP addresses are operating as intended. There may be multiple reasons that health checks lie unused: for example, a health check was mistakenly configured against your application by another customer, a health check was configured from your account for testing purposes but wasn't deleted when testing was complete, a health check was based on domain names and hence requests were sent due to DNS caching, the Elastic Load Balancing service updated its public IP addresses due to scaling and the IP addresses were reassigned to your load balancer, and many more. 
This runbook finds such health checks and deletes them to save AWS costs.\n* [AWS Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Detach_ec2_Instance_from_ASG.ipynb): This runbook can be used to detach an instance from an Auto Scaling Group. You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move an instance out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\n* [AWS EC2 Disk Cleanup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_EC2_Disk_Cleanup.ipynb): This runbook locates large files in an EC2 instance and backs them up into a given S3 bucket. Afterwards, it deletes the backed-up files and sends a message to a specified Slack channel. It uses SSH and Linux commands to perform the functions it needs.\n* [Enforce HTTP Redirection across all AWS ALB instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Enforce_HTTP_Redirection_across_AWS_ALB.ipynb): This runbook can be used to enforce HTTP redirection across all AWS ALBs. Web encryption protocols like SSL and TLS have been around for nearly three decades. By securing web data in transit, these security measures ensure that third parties can’t simply intercept unencrypted data and cause harm. HTTPS uses the underlying SSL/TLS technology and is the standard way to communicate web data in an encrypted and authenticated manner instead of the insecure HTTP protocol. 
In this runbook, we implement the industry best practice of redirecting all unencrypted HTTP data to the secure HTTPS protocol.\n* [AWS Ensure Redshift Clusters have Paused Resume Enabled](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Ensure_Redshift_Clusters_have_Paused_Resume_Enabled.ipynb): This runbook finds Redshift clusters that don't have pause resume enabled and schedules the pause resume for the cluster.\n* [AWS Get unhealthy EC2 instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Elb_Unhealthy_Instances.ipynb): This runbook can be used to list unhealthy EC2 instances from an ELB. Sometimes it is difficult to determine why Amazon EC2 Auto Scaling didn't terminate an unhealthy instance from Activity History alone. You can find further details about an unhealthy instance's state, and how to terminate that instance, by checking a few extra things.\n* [AWS Redshift Get Daily Costs from AWS Products](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Redshift_Daily_Product_Costs.ipynb): This runbook can be used to create charts and alerts around your AWS product usage. It requires a Cost and Usage report to be live in Redshift.\n* [AWS Redshift Get Daily Costs from EC2 Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Get_Redshift_EC2_Daily_Costs.ipynb): This runbook can be used to create charts and alerts around AWS EC2 usage. It requires a Cost and Usage report to be live in Redshift.\n* [AWS Lowering CloudTrail Costs by Removing Redundant Trails](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Lowering_AWS_CloudTrail_Costs_by_Removing_Redundant_Trails.ipynb): The AWS CloudTrail service allows developers to enable policies managing compliance, governance, and auditing of their AWS account. 
In addition, AWS CloudTrail offers logging, monitoring, and storage of any activity around actions related to your AWS structures. The service activates from the moment you set up your AWS account and while it provides real-time activity visibility, it also means higher AWS costs. This runbook finds redundant trails in AWS.\n* [List unused Amazon EC2 key pairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Notify_About_Unused_Keypairs.ipynb): This runbook finds all EC2 key pairs that are not used by an EC2 instance and notifies a Slack channel about them. Optionally it can delete the key pairs based on user configuration.\n* [Publicly Accessible Amazon RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Publicly_Accessible_Amazon_RDS_Instances.ipynb): This runbook can be used to find the publicly accessible RDS instances for the given AWS region.\n* [Purchase Reserved Nodes For Long Running AWS ElastiCache Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Cache_Nodes_For_Long_Running_ElastiCache_Clusters.ipynb): Ensuring that long-running AWS ElastiCache clusters have Reserved Nodes purchased for them is an effective cost optimization strategy for AWS users. By committing to a specific capacity of ElastiCache nodes for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for ElastiCache clusters that are expected to run for an extended period and have predictable usage patterns. 
This runbook helps us optimize costs by ensuring that Reserved Nodes are purchased for these ElastiCache clusters.\n* [Purchase Reserved Instances For Long Running AWS RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Instances_For_Long_Running_RDS_Instances.ipynb): Ensuring that long-running AWS RDS instances have Reserved Instances purchased for them is an important cost optimization strategy for AWS users. By committing to a specific capacity of RDS instances for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for RDS instances that are expected to run for an extended period and have predictable usage patterns. This runbook helps us to optimize costs by ensuring that Reserved Instances are purchased for these RDS instances.\n* [Purchase Reserved Nodes For Long Running AWS Redshift Clusters](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Purchase_Reserved_Nodes_For_Long_Running_Redshift_Clusters.ipynb): Ensuring that long-running AWS Redshift Clusters have Reserved Nodes purchased for them is a critical cost optimization strategy. By committing to a specific capacity of Redshift nodes for a period of one or three years, users can take advantage of significant discounts compared to On-Demand pricing. This approach can help optimize AWS costs for Redshift Clusters that are expected to run for an extended period and have predictable usage patterns. 
This runbook helps us to ensure that Reserved Nodes are purchased for these clusters so that users can effectively plan ahead, reduce their AWS bill, and optimize their costs over time.\n* [Release Unattached AWS Elastic IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Release_Unattached_Elastic_IPs.ipynb): A disassociated Elastic IP address remains allocated to your account until you explicitly release it. AWS imposes a small hourly charge for Elastic IP addresses that are not associated with a running instance. This runbook can be used to release those unattached AWS Elastic IP addresses.\n* [Remediate unencrypted S3 buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Remediate_unencrypted_S3_buckets.ipynb): This runbook can be used to find all the S3 buckets which are unencrypted and apply encryption to them.\n* [Renew AWS SSL Certificates that are close to expiration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Renew_SSL_Certificate.ipynb): This runbook can be used to list all AWS SSL (ACM) Certificates that need to be renewed within a given threshold number of days. Optionally it can renew the certificate using the AWS ACM service.\n* [AWS Restart unhealthy services in a Target Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Restart_Unhealthy_Services_Target_Group.ipynb): This runbook restarts unhealthy services in a target group. 
The restart command is provided via a tag attached to the instance.\n* [Restrict S3 Buckets with READ/WRITE Permissions to all Authenticated Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Restrict_S3_Buckets_with_READ_WRITE_Permissions.ipynb): This runbook will list all the S3 buckets, filter buckets which have public READ/WRITE ACL permissions, and change those permissions to private in the given region.\n* [Secure Publicly accessible Amazon RDS Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Secure_Publicly_accessible_Amazon_RDS_Snapshot.ipynb): This lego can be used to list all the manual database snapshots in the given region, get the publicly accessible DB snapshots in RDS, and modify them to private.\n* [Stop Idle EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Stop_Idle_EC2_Instances.ipynb): This runbook can be used to stop all EC2 instances that are idle, using a given CPU threshold and duration.\n* [Stop all Untagged AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Stop_Untagged_EC2_Instances.ipynb): This runbook can be used to stop all EC2 instances that are untagged.\n* [Terminate EC2 Instances Without Valid Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Terminate_EC2_Instances_Without_Valid_Lifetime_Tag.ipynb): This runbook can be used to list all the EC2 instances which don't have a lifetime tag and then terminate them.\n* [AWS Update RDS Instances from Old to New Generation](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_RDS_Instances_from_Old_to_New_Generation.ipynb): This runbook can be used to find the old generation RDS instances for the given AWS region and modify them to the given instance class.\n* [AWS Redshift Update 
Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Redshift_Database.ipynb): This runbook can be used to update a Redshift database from a SQL file stored in S3.\n* [AWS Update Resource Tags](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Resource_Tags.ipynb): This runbook can be used to update an existing tag on any resource in an AWS Region.\n* [AWS Add Tags Across Selected AWS Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Update_Resources_About_To_Expire.ipynb): This finds resources missing a tag, and allows you to choose which resources should receive a specific tag/value pair.\n* [Encrypt unencrypted S3 buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_encrypt_unencrypted_S3_buckets.ipynb): This runbook can be used to find all the S3 buckets which are unencrypted and apply encryption to them.\n* [Create a new AWS IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Add_new_IAM_user.ipynb): AWS has an inbuilt identity and access management system known as AWS IAM. IAM supports the concept of users, groups, roles and privileges. An IAM user is an identity that can be created and assigned some privileges. This runbook can be used to create an AWS IAM user.\n* [Configure URL endpoint on a AWS CloudWatch alarm](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Configure_url_endpoint_on_a_cloudwatch_alarm.ipynb): Configures the URL endpoint to the SNS associated with a CloudWatch alarm. This allows external functions to be invoked within unSkript in response to an alert being generated. 
Alarms can be attached to the handlers to perform data enrichment or remediation.\n* [Copy AMI to All Given AWS Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Copy_ami_to_all_given_AWS_regions.ipynb): This runbook can be used to copy an AMI from one region to multiple AWS regions using unSkript legos with AWS CLI commands. We can get all the available regions by using AWS CLI commands.\n* [Delete Unused AWS NAT Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_Unused_AWS_NAT_Gateways.ipynb): This runbook can be used to identify and remove any unused NAT Gateways. This allows us to adhere to best practices and avoid unnecessary costs. NAT gateways are used to connect a private instance with outside networks. When a NAT gateway is provisioned, AWS charges you based on the number of hours it was available and the data (GB) it processes.\n* [Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detach_Instance_from_ASG.ipynb): This runbook can be used to detach an instance from an Auto Scaling Group. You can remove (detach) an instance that is in the InService state from an Auto Scaling group. After the instance is detached, you can manage it independently from the rest of the Auto Scaling group. By detaching an instance, you can move an instance out of one Auto Scaling group and attach it to a different group. For more information, see Attach EC2 instances to your Auto Scaling group.\n* [Detect ECS failed deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detect_ECS_failed_deployment.ipynb): This runbook checks if there is a failed deployment in progress for a service in an ECS cluster. 
If it finds one, it sends the list of stopped tasks associated with this deployment and their stopped reason to Slack.\n* [Enforce Mandatory Tags Across All AWS Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Enforce_Mandatory_Tags_Across_All_AWS_Resources.ipynb): This runbook can be used to Enforce Mandatory Tags Across All AWS Resources. We can get all the untagged resources of the given region, discover the tag keys of the given region, and attach mandatory tags to all the untagged resources.\n* [Handle AWS EC2 Instance Scheduled to retire](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Find_EC2_Instances_Scheduled_to_retire.ipynb): To avoid unexpected interruptions, it's a good practice to check to see if there are any EC2 instances scheduled to retire. This runbook can be used to list the EC2 instances that are scheduled to retire. To handle the instance retirement, the user can stop and restart it before the retirement date. That action moves the instance over to a more stable host.\n* [Create an IAM user using Principle of Least Privilege](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/IAM_security_least_privilege.ipynb): Extracts usage details from CloudTrail for an existing user, applies the usage to a new IAM Policy, and connects it to a new IAM profile.\n* [Monitor AWS DynamoDB provision capacity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Monitor_AWS_DynamoDB_provision_capacity.ipynb): This runbook can be used to collect the data from CloudWatch related to AWS DynamoDB for provision capacity.\n* [Resize EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_EBS_Volume.ipynb): This runbook resizes the EBS volume to a specified amount. This runbook can be attached to disk-usage-related CloudWatch alarms to do the appropriate resizing. 
It also extends the filesystem to use the new volume size.\n* [Resize list of pvcs.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_List_Of_Pvcs.ipynb): This runbook can be used to resize a list of PVCs in a namespace. By default, it resizes all PVCs.\n* [Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_PVC.ipynb): This runbook resizes the PVC to the input size.\n* [Restart AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Restart_AWS_EC2_Instances_By_Tag.ipynb): This runbook can be used to restart AWS EC2 instances.\n* [Launch AWS EC2 from AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Run_EC2_from_AMI.ipynb): This lego can be used to launch an AWS EC2 instance from an AMI in the given region.\n* [Troubleshooting Your EC2 Configuration in a Private Subnet](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Troubleshooting_Your_EC2_Configuration_in_Private_Subnet.ipynb): This runbook can be used to troubleshoot EC2 instance configuration in a private subnet by capturing the VPC ID for a given instance ID. 
It uses the VPC ID to get Internet Gateway details, then tries to SSH and connect to the internet.\n* [Update and Manage AWS User permission](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Update_and_Manage_AWS_User_Permission.ipynb): This runbook can be used to update and manage AWS IAM user permissions.\n\n# ElasticSearch\n* [Elasticsearch Rolling restart](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/Elasticsearch_Rolling_Restart.ipynb): This runbook can be used to perform a rolling restart on Elasticsearch.\n\n# Jenkins\n* [Fetch Jenkins Build Logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/Fetch_Jenkins_Build_Logs.ipynb): This runbook fetches the logs for a given Jenkins job and posts them to a Slack channel.\n\n# Jira\n* [Jira Visualize Issue Time to Resolution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/jira_visualize_time_to_resolution.ipynb): Using the Panel library, visualize the time it takes for issues to close over a specific timeframe.\n\n# Kubernetes\n* [k8s: Delete Evicted Pods From All Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Delete_Evicted_Pods_From_Namespaces.ipynb): This runbook shows and deletes the evicted pods for a given namespace. 
If the user provides the namespace input, then it only collects pods for the given namespace; otherwise, it will select all pods from all the namespaces.\n* [k8s: Get kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Get_Kube_System_Config_Map.ipynb): This runbook fetches the kube system config map for a k8s cluster and publishes the information on a Slack channel.\n* [IP Exhaustion Mitigation: Failing K8s Pod Deletion from Jobs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Delete_Pods_From_Failing_Jobs.ipynb): Preventing IP exhaustion is critical in Kubernetes environments, and a key strategy is deleting failing pods from jobs. Failing pods can consume valuable IP resources, leading to scarcity and inefficiency. By proactively identifying and removing malfunctioning pods, administrators can promptly free up IP addresses, optimizing resource utilization. This approach ensures that IP allocation remains efficient, enabling the cluster to accommodate new pods without experiencing IP exhaustion. This runbook helps us identify failing pods within jobs, thereby maximizing IP availability for other pods and services.\n* [k8s: Get candidate nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Get_Candidate_Nodes_Given_Config.ipynb): This runbook gets the matching nodes for a given configuration (storage, cpu, memory, pod_limit) from a k8s cluster.\n* [Kubernetes Log Healthcheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Log_Healthcheck.ipynb): This RunBook checks the logs of every pod in a namespace for warning messages.\n* [k8s: Pod Stuck in CrashLoopBackoff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_CrashLoopBack_State.ipynb): This runbook checks if any Pod(s) are in CrashLoopBackoff state in a given k8s namespace. 
If it finds any, it tries to find out why the Pod(s) are in that state.\n* [k8s: Pod Stuck in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State.ipynb): This runbook checks if any Pod(s) are in ImagePullBackOff state in a given k8s namespace. If it finds any, it tries to find out why the Pod(s) are in that state.\n* [k8s: Pod Stuck in ImagePullBackOff State using genAI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State_with_genai.ipynb): This runbook checks if any Pod(s) are in ImagePullBackOff state in a given k8s namespace, using genAI. If it finds any, it tries to find out why the Pod(s) are in that state.\n* [k8s: Pod Stuck in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_Terminating_State.ipynb): This runbook checks if any Pods are in a terminating state in a given k8s namespace. If it finds any, it tries to recover them by resetting the pods' finalizers.\n* [k8s: Resize List of PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_List_of_PVCs.ipynb): This runbook resizes a list of Kubernetes PVCs.\n* [k8s: Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_PVC.ipynb): This runbook resizes a Kubernetes PVC.\n* [Rollback Kubernetes Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Rollback_k8s_Deployment_and_Update_Jira.ipynb): This runbook can be used to roll back a Kubernetes Deployment.\n\n# Postgresql\n* [Display long running queries in a PostgreSQL database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Postgresql_Display_Long_Running.ipynb): This runbook collects the long-running queries from a database and sends a message to the specified Slack channel. 
Poorly optimized queries and excessive connections can cause problems in PostgreSQL, impacting upstream services.\n"
  },
  {
    "path": "opensearch/README.md",
    "content": "\n## Infra Actions\n* [Infra: Finish runbook execution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Display_Postgresql_Long_Running.ipynb): Infra: use this action to finish the execution of a runbook. Once this is set, no more tasks will be executed.\n* [Infra: Append values for a key in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Display_Postgresql_Long_Running.ipynb): Infra: use this action to append values for a key in a state store provided by the workflow.\n* [Infra: Store keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Display_Postgresql_Long_Running.ipynb): Infra: use this action to persist keys in a state store provided by the workflow.\n* [Infra: Delete keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Display_Postgresql_Long_Running.ipynb): Infra: use this action to delete keys from a state store provided by the workflow.\n* [Infra: Fetch keys from workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Display_Postgresql_Long_Running.ipynb): Infra: use this action to retrieve keys from a state store provided by the workflow.\n* [Infra: Rename keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Display_Postgresql_Long_Running.ipynb): Infra: use this action to rename keys in a state store provided by the workflow.\n* [Infra: Update keys in workflow state store](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Display_Postgresql_Long_Running.ipynb): Infra: use this action to update keys in a state store provided by the workflow.\n"
  },
  {
    "path": "opensearch/__init__.py",
    "content": ""
  },
  {
    "path": "opensearch/legos/__init__.py",
    "content": ""
  },
  {
    "path": "opensearch/legos/opensearch_get_handle/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Get Opensearch handle</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego gets the Opensearch handle.\r\n\r\n\r\n## Lego Details\r\n\r\n    opensearch_get_handle(handle: object)\r\n\r\n        handle: Object of type unSkript Opensearch Connector\r\n\r\n## Lego Input\r\nThis Lego takes one input: handle.\r\n\r\n## Lego Output\r\nThe output is the Opensearch handle object.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "opensearch/legos/opensearch_get_handle/__init__.py",
    "content": ""
  },
  {
    "path": "opensearch/legos/opensearch_get_handle/opensearch_get_handle.json",
    "content": "{\n\"action_title\": \"Opensearch Get Handle\",\n\"action_description\": \"Opensearch Get Handle\",\n\"action_type\": \"LEGO_TYPE_OPENSEARCH\",\n\"action_entry_function\": \"opensearch_get_handle\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_OPENSEARCH\"]\n}\n"
  },
  {
    "path": "opensearch/legos/opensearch_get_handle/opensearch_get_handle.py",
    "content": "##\n# Copyright (c) 2021 unSkript, Inc\n# All rights reserved.\n##\nfrom pydantic import BaseModel\n\n\nclass InputSchema(BaseModel):\n    pass\n\n\ndef opensearch_get_handle(handle):\n    \"\"\"\n    opensearch_get_handle returns the Opensearch handle.\n    :rtype: Opensearch handle.\n    \"\"\"\n    return handle\n"
  },
  {
    "path": "opensearch/legos/opensearch_search/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h2>Opensearch search</h2>\r\n\r\n<br>\r\n\r\n## Description\r\nThis Lego runs an Opensearch search for the provided query.\r\n\r\n\r\n## Lego Details\r\n\r\n    opensearch_search(handle: object, query: dict, index: str, size: int)\r\n\r\n        handle: Object of type unSkript Opensearch Connector\r\n        query: Opensearch Query DSL.\r\n        index: A comma-separated list of index names to search; use _all or empty string to perform the operation on all indices.\r\n        size: The number of hits to return.\r\n\r\n\r\n## Lego Input\r\nThis Lego takes four inputs: handle, query, index, and size.\r\n\r\n## Lego Output\r\nThe output is a Dict containing the search results.\r\n\r\n\r\n## See it in Action\r\n\r\nYou can see this Lego in action by following this link: [unSkript Live](https://us.app.unskript.io)"
  },
  {
    "path": "opensearch/legos/opensearch_search/__init__.py",
    "content": ""
  },
  {
    "path": "opensearch/legos/opensearch_search/opensearch_search.json",
    "content": "{\n\"action_title\": \"Opensearch search\",\n\"action_description\": \"Opensearch Search\",\n\"action_type\": \"LEGO_TYPE_OPENSEARCH\",\n\"action_entry_function\": \"opensearch_search\",\n\"action_needs_credential\": true,\n\"action_supports_poll\": true,\n\"action_output_type\": \"ACTION_OUTPUT_TYPE_DICT\",\n\"action_supports_iteration\": true,\n\"action_verbs\": [\"search\"],\n\"action_nouns\": [\n\"opensearch\"\n],\n\"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_OPENSEARCH\"]\n}\n"
  },
  {
    "path": "opensearch/legos/opensearch_search/opensearch_search.py",
    "content": "##\n##  Copyright (c) 2021 unSkript, Inc\n##  All rights reserved.\n##\nimport pprint\nfrom typing import Dict\nfrom pydantic import BaseModel, Field\n\npp = pprint.PrettyPrinter(indent=4)\n\n\nclass InputSchema(BaseModel):\n    query: dict = Field(\n        title='Query',\n        description='''\n        Opensearch Query DSL. For example, {\n            \"multi_match\": {\n              \"query\": \"alice\",\n              \"fields\": [\"title^2\", \"director\"]\n            }\n          }\n        '''\n    )\n    index: str = Field(\n        '',\n        title='Index',\n        description=('A comma-separated list of index names to search; '\n                     'use _all or empty string to perform the operation on all indices.')\n    )\n    size: int = Field(\n        100,\n        title='Number of hits to return.',\n        description='The number of hits to return.'\n    )\n\n\ndef opensearch_search_printer(output):\n    print('\\n\\n')\n    all_hits = output['hits']['hits']\n    print(f\"Got {output['hits']['total']['value']} Hits:\")\n\n    for doc in all_hits:\n        pp.pprint(f'DOC ID: {doc[\"_id\"]}')\n        pp.pprint(doc[\"_source\"])\n\n\ndef opensearch_search(handle, query: dict, index: str = '', size: int = 100) -> Dict:\n    \"\"\"opensearch_search Does an Opensearch search on the provided query.\n\n        :type query: dict\n        :param query: Opensearch Query DSL.\n\n        :type index: string\n        :param index: A comma-separated list of index names to search;\n        use _all or empty string to perform the operation on all indices.\n\n        :type size: int\n        :param size: The number of hits to return.\n\n        :rtype: All the results of the query.\n    \"\"\"\n    if index:\n        res = handle.search(body={\"query\": query}, index=index, size=size)\n    else:\n        res = handle.search(body={\"query\": query}, size=size)\n    return res\n"
  },
  {
    "path": "region_test.py",
    "content": "import inspect\nimport re\nimport os\nimport importlib.util\nfrom subprocess import run\n\ndef git_top_dir() -> str:\n    \"\"\"git_top_dir returns the output of git rev-parse --show-toplevel\n\n    :rtype: string, the output of the git rev-parse --show-toplevel command\n    \"\"\"\n    run_output = run([\"git\", \"rev-parse\", \"--show-toplevel\"], capture_output=True)\n    top_dir = run_output.stdout.strip()\n    top_dir = top_dir.decode('utf-8')\n    return top_dir\n\n# Get the top-level directory of the Git repository\nfolder_path = git_top_dir()\n\ndef check_method_signature(param):\n    \"\"\" Accepts a string representing the parameters.\n        Returns True if the parameter string either doesn't contain\n        the substring \"egion\" at all, or contains the parameter \"region\" exactly.\n        Otherwise it returns False.\n\n        :type param: string\n        :param param: the parameters being checked.\n    \"\"\"\n    if re.search(r\"egion\", param):\n        # checks that the match is the parameter \"region\" exactly\n        pattern = r\"(?<![^\\s(,])region(?=\\s|:|\\)|,)\"\n        return bool(re.findall(pattern, param+\")\"))\n    else:\n        return True\n\ndef check_module_methods(module):\n    \"\"\" Accepts a module spec and calls check_method_signature on each\n        function/method present in it.\n\n        :type module: ModuleSpec\n        :param module: The module being checked.\n    \"\"\"\n    has_region = True\n    module_act = importlib.util.module_from_spec(module)\n    module_source = inspect.getsource(module_act)\n    # finding all the methods in the file\n    method_matches = re.findall(r\"def (.*?)\\)\", module_source, flags=re.DOTALL)\n    for method_match in method_matches:\n        method_name = re.findall(r\"(\\w+)\\s*\\(\", method_match)\n        method_match_new = method_match.replace(method_name[0], \"\")\n        if not check_method_signature(method_match_new):\n            has_region = False\n    return has_region\n\n# Runs the checker on all the files in the repository\nif __name__ == '__main__':\n    current_file = os.path.abspath(__file__)\n    for root, dirs, files in os.walk(folder_path):\n        for file in files:\n            if not file.endswith('.py') or file.endswith('__init__.py'):\n                continue\n            file_path = os.path.join(root, file)\n            # Compare full paths so the checker skips this script itself\n            if os.path.abspath(file_path) == current_file:\n                continue\n            module_name = os.path.splitext(file)[0]\n            try:\n                module = importlib.util.spec_from_file_location(module_name, file_path)\n                if not check_module_methods(module):\n                    print(f\"Error in module {file_path}\")\n            except Exception as e:\n                print(f\"Error importing module {file_path}: {str(e)}\")\n"
  },
  {
    "path": "sanitize.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\nimport os\nimport json\nimport sys\nimport argparse\n\n\nimport nbformat\nimport requests\nfrom collections import defaultdict\nfrom urlextract import URLExtract\n\ndef extract_links_from_notebook(notebook_path, extractor):\n    with open(notebook_path) as f:\n        notebook = nbformat.read(f, as_version=4)\n\n    links = []\n    for cell in notebook.cells:\n        if cell.cell_type != \"markdown\":\n            continue\n\n        urls = extractor.find_urls(cell.source)\n        links.extend(urls)\n\n    return links\n\ndef validate_link(link):\n\n    if link.lower().find(\"unskript.com\") != -1 or link.lower().find(\"us.app.unskript.io\") != -1:\n        return True\n\n    try:\n        response = requests.get(link, timeout=3)\n        return response.status_code == 200\n    except requests.exceptions.RequestException:\n        return False\n\ndef check_notebooks(notebook_paths, extractor):\n    link_cache = {}\n    dead_link_report = defaultdict(list)\n\n    for notebook in notebook_paths:\n        links = extract_links_from_notebook(notebook, extractor)\n        for link in links:\n            if link not in link_cache:\n                link_cache[link] = validate_link(link)\n            if not link_cache[link]:\n                dead_link_report[notebook].append(link)\n\n    return dict(dead_link_report)\n\n\n## returns True if everything is ok\ndef check_sanity(ipynbFile: str = '') -> bool:\n\n    rc = True\n    with open(ipynbFile) as f:\n        nb = json.loads(f.read())\n\n    jsonFile = ipynbFile.replace(\"ipynb\", \"json\")\n    if os.path.exists(jsonFile) is False:\n        print(f\"Skipping sanity on file ({ipynbFile}) since {jsonFile} is missing\")\n        return True\n\n    with open(jsonFile) as jf:\n        jsonData = json.loads(jf.read())\n\n    if nb.get('metadata') is None:\n        print(\"Failed metadata check for notebook\")\n        rc = False\n\n    if nb.get('metadata').get('execution_data') is None:\n        print(\"Failed execution_data check for notebook\")\n        rc = False\n\n    exec_data = nb.get('metadata').get('execution_data')\n    if len(exec_data) > 2:\n        print(\"Failed execution_data keys check for notebook\")\n        rc = False\n\n    if exec_data.get('runbook_name') is None:\n        print(\"Failed runbook_name check for notebook\")\n        rc = False\n\n    ## runbook_name should be same as the name in JSON file\n    if exec_data.get('runbook_name') != jsonData.get('name'):\n        print(\"Failed runbook_name value check for notebook\")\n        rc = False\n\n    if nb.get('metadata').get('parameterSchema') is None:\n        print(\"Failed parameters value check for notebook\")\n        rc = False\n\n    cells = nb.get(\"cells\")\n    for cell in cells:\n        cell_name = cell.get('metadata').get('name')\n        if cell_name is None:\n            cell_name = cell.get('metadata').get('title')\n        if cell.get('cell_type') == 'markdown':\n            continue\n\n        if cell.get('metadata') is None:\n            print(\"Failed metadata check for cell\")\n            rc = False\n\n        if cell.get('metadata').get('tags') is None:\n            print(\"Failed metadata.tags check for cell\")\n            rc = False\n\n        if 'unSkript:nbParam' in cell.get('metadata').get('tags'):\n            print(\"Failed first cell check for cell\")\n            rc = False\n\n        if cell.get('metadata').get('credentialsJson') != {} and cell.get('metadata').get('credentialsJson') is not None:\n            print(f\"Failed credentialJson/md check for cell '{cell.get('metadata').get('title')}', found {cell.get('metadata').get('credentialsJson')}\")\n            rc = False\n\n        if cell.get('outputs') != []:\n            print(\"Failed outputs check for cell\")\n            rc = False\n\n        # Look for this pattern\n        # \"task.configure(credentialsJson='''{\\n\",\n        # \"    \\\"credential_name\\\": abc,\n        # \"    \\\"credential_type\\\": def\",\n        # \"    \\\"credential_id\\\": ghi\",\n        # \"}''')\\n\",\n\n        # This pattern is ok\n        # \"task.configure(credentialsJson='''{\\\"credential_type\\\": \\\"\" + md.action_type + \"\\\",}''')\"\n\n        if cell.get('metadata').get('legotype') is None:\n            if cell.get('metadata').get('actionNeedsCredential'):\n                print(f\"Failed actionNeedsCredential check for cell '{cell_name}'\")\n                rc = False\n            continue\n\n        action_type = cell.get('metadata').get('legotype').replace(\"LEGO\", \"CONNECTOR\")\n        skip_pattern = 'task.configure(credentialsJson='\n        ok_pattern = \"task.configure(credentialsJson='''{\\\\\\\"credential_type\\\\\\\": \\\\\\\"\" + action_type + \"\\\\\\\"}''')\"\n        for line in cell.get('source'):\n            if skip_pattern in line and ok_pattern not in line:\n                print(f\"Failed credentialsJson/code check for cell '{cell.get('metadata').get('name')}'\")\n                print(line)\n                rc = False\n\n    return rc\n\ndef replace_default_string_values_with_extra_quotes(inputDict: dict) -> dict:\n    for k, v in inputDict.get('properties').items():\n        if 'default' in v.keys():\n            if v.get('type') != 'string':\n                continue\n\n            defaultValue = v.get('default')\n            if defaultValue.startswith(\"\\\"\") and defaultValue.endswith(\"\\\"\"):\n                continue\n\n            if len(defaultValue) > 0:\n                newDefaultValue = \"\\\"\" + defaultValue + \"\\\"\"\n                v['default'] = newDefaultValue\n    return inputDict\n\n## returns True if everything is ok\ndef sanitize(ipynbFile: str = '') -> bool:\n    retVal = False\n    if not ipynbFile:\n        print(\"ERROR: IPYNB file is needed\")\n        return retVal\n\n    jsonFile = ipynbFile.replace(\"ipynb\", \"json\")\n    with open(jsonFile) as jf:\n        jsonData = json.loads(jf.read())\n\n    with open(ipynbFile) as f:\n        nb = json.loads(f.read())\n\n        execution_data = {\n            'runbook_name': jsonData.get('name'),\n            'parameters': nb.get('metadata').get('execution_data').get('parameters'),\n        }\n\n\n    new_cells = []\n    old_cells = nb.get(\"cells\")\n    for cell in old_cells:\n\n        cell_type = cell.get('cell_type')\n        if cell_type != 'code':\n            new_cells.append(cell)\n            continue\n\n        # Let's make sure Cell Metadata has tags, only then check if it matches the first cell\n        if cell.get('metadata').get('tags') is not None and 'unSkript:nbParam' in cell.get('metadata').get('tags'):\n            print(f\"SKIPPING FIRST CELL with {len(cell.get('source'))} lines\")\n            print(cell.get('source'))\n            continue\n\n        if cell.get('metadata').get('legotype') is None:\n            if cell.get('metadata').get('actionNeedsCredential'):\n                cell['metadata']['actionNeedsCredential'] = False\n                cell['metadata']['actionSupportsIteration'] = False\n                cell['metadata']['actionSupportsPoll'] = False\n                new_cells.append(cell)\n            else:\n                print(f\"Skipping cell without legotype {cell.get('metadata').get('name')}\")\n            continue\n\n        # Reset InputSchema\n        # if cell.get('metadata').get('inputschema') is not None:\n        #     cell['metadata']['inputschema'] = \\\n        #         [ replace_default_string_values_with_extra_quotes(cell.get('metadata').get('inputschema')[0]) ]\n\n        # Reset CredentialsJson\n        cell['metadata']['credentialsJson'] = {}\n        cell['metadata']['execution_data'] = {}\n        cell['metadata']['execution_count'] = {}\n\n        # Clean out output\n        cell['outputs'] = []\n\n        # Delete CredentialsJson from source\n        skip_pattern = \"task.configure(credentialsJson=\"\n        action_type = cell.get('metadata').get('legotype').replace(\"LEGO\", \"CONNECTOR\")\n        new_creds_line = \"task.configure(credentialsJson='''{\\\\\\\"credential_type\\\\\\\": \\\\\\\"\" + action_type + \"\\\\\\\"}''')\"\n\n        # source code can be a list or a string (delimited by \\n)\n        # we prefer the list version with \\n to make it readable in code reviews\n        if isinstance(cell.get('source'), str):\n            old_cell_source = cell.get('source').split(\"\\n\")\n        else:\n            old_cell_source = cell.get('source')\n\n        skip = False\n        cell_source = []\n        for line in old_cell_source:\n            if new_creds_line in line:\n                continue\n            elif skip_pattern in line and new_creds_line not in line:\n                skip = True\n            elif skip and line.strip() == \"}''')\":\n                skip = False\n            elif not skip:\n                cell_source.append(line)\n        cell['source'] = cell_source\n\n        new_cells.append(cell)\n\n    nb_new = nb.copy()\n    nb_new[\"cells\"] = new_cells\n    try:\n        # Reset Environment & Tenant Information\n        nb_new['metadata']['execution_data'] = execution_data\n        nb_new['metadata']['parameterValues'] = {}\n\n    except Exception as e:\n        raise e\n\n    with open(ipynbFile, 'w') as f:\n        json.dump(nb_new, f, indent=nb.get('indent', 4))\n        print(u'\\u2713', f\"Updated Notebook {ipynbFile}\")\n        retVal = True\n\n    return retVal\n\n\nif __name__ == '__main__':\n\n    # create argparse object\n    parser = argparse.ArgumentParser(description='Process input files')\n\n    # add argument for 'validate' mode\n    parser.add_argument('-v', '--validate', action='store_true',\n                        help='validate input files')\n\n    # add argument for 'fix' mode\n    parser.add_argument('-f', '--fix', action='store_true',\n                        help='fix input files')\n\n    # add positional argument for list of files\n    parser.add_argument('files', nargs='+',\n                        help='list of files to process')\n\n    # parse arguments\n    args = parser.parse_args()\n\n    # default mode is validate\n    validate_mode = True\n\n    # check which mode is selected\n    if args.validate:\n        print('Running in validation mode')\n        validate_mode = True\n    elif args.fix:\n        print('Running in fix mode')\n        validate_mode = False\n\n    # access the list of files\n    filelist = []\n    for f in args.files:\n        # To handle file delete case, check if the file exists.\n        if os.path.isfile(f) is False:\n            continue\n\n        if f.endswith('.ipynb') is False:\n            continue\n\n        filelist.append(f)\n\n    failedlist = []\n    print(f'Processing {len(filelist)} files')\n\n    for f in filelist:\n        print(f\"Processing {f}\")\n        if validate_mode is True:\n            rc = check_sanity(f)\n        else:\n            rc = sanitize(f)\n\n        if rc is False:\n            print(f\"Sanity failures found in {f}\")\n            failedlist.append(f)\n\n    if len(failedlist) > 0:\n        for f in failedlist:\n            print(f\"Failed sanity {f}\")\n\n        sys.exit(-1)\n\n\n    extractor = URLExtract()\n    dead_links = check_notebooks(filelist, extractor)\n    failedlist = []\n    for notebook, links in dead_links.items():\n        if len(links) == 0:\n            continue\n\n        # Track the notebook itself, not the leftover loop variable from the file loop above\n        failedlist.append(notebook)\n        print(f'Notebook {notebook} contains the following dead links:')\n        for link in links:\n            print(link)\n\n\n    if len(failedlist) > 0:\n        for f in failedlist:\n            print(f\"Failed sanity {f}\")\n\n        sys.exit(-1)\n\n    sys.exit(0)\n\n## handle uniform region, namespace"
  },
  {
    "path": "suites/AWS_RDS.json",
    "content": "{\n    \"suite_name\": \"AWS RDS Health\",\n    \"suite_check_uuids\":   \n            [\"f50fe12ce6fe9000257361f74d58e99fb1c08cf36ffbdc60f507f442324dd703\", \n            \"97bfc082be1cffdf5c795b3119bfa90b36946934b37cf213d762e0ee3ee881f8\", \n            \"8d01f8abc8274090c2325ef32905b2649a6af779ce86f78b9e9712ad1d482165\", \n            \"75df073e335235a086b5a58213acac248ce54dbbecb786b028726f987fcfe243\", \n            \"1e04772ba07b2ed694a0be4c8265b425d8a7de0a40484ff35b9b9c770e43bf4c\", \n            \"08da2db2f8fe2dbce378c314e54341b68ee2e9e99ae271f2acd044ef7e8bdee3\", \n            \"77d61931741da6d2be410571e205c93962815430843b1fbaf8e575e6384598ae\", \n            \"e665224418391a4deafae48140c5b83c8af7b881dd281acbd79ed9ceb52aad4f\"]\n    \n\n  }"
  },
  {
    "path": "suites/Cost_optimization.json",
    "content": "{\n  \"suite_name\": \"Cost optimization\",\n  \"suite_check_uuids\": [\n\t\"9a74af3d2bb5a9aac60e5d30fb89b3ebf6867ce4782fc629cd9842bd5156a327\", \n\t\"1e04772ba07b2ed694a0be4c8265b425d8a7de0a40484ff35b9b9c770e43bf4c\", \n\t\"2e187515d84ddfa1b319617e1aae8f6483eb9886582075f94a1065ea38e3b652\", \n\t\"33c1b0734f0feace2470b0fd2e59a77af348d1cc2c5c7b11029e0159554fa121\",\n\t\"53df09f034bd51da247c01b663d9e7c84d0ca615cfed4bfe2545547a5a4466be\",\n\t\"c25a662a49587285082c36455564eed5664cc852926fcc2cec374300492df09d\",\n\t\"0f0c137beaf6a9246508393d1e868cea529d30a88631cd0f321799acbfbd47bb\", \n\t\"c4fcaf0f517e1f7522cfa0f551857a760298211e4cb65a485df40e7770b8fbcd\",\n\t\"5371ce7d1e24b4f013413eca601869bd263f94e6b440f386135eaa1d7c474978\",\n\t\"88c492263d4fe922035cc9cbb7d8f10aad735156835926d4c609e4f5d4a09b7f\",\n\t\"e665224418391a4deafae48140c5b83c8af7b881dd281acbd79ed9ceb52aad4f\",\n\t\"97bfc082be1cffdf5c795b3119bfa90b36946934b37cf213d762e0ee3ee881f8\",\n\t\"7bde6d48cf5e9b2b984335fb1434716a3dba113da0762bc70f57f4246b91df07\"]\n\n}\n"
  },
  {
    "path": "suites/K8s_connectivity.json",
    "content": "{\n    \"suite_name\": \"K8s Connectivity health\",\n    \"suite_check_uuids\": [\"0e3f155377bafd8cf105db4ada0772c979dc14f95a8d76110b0ff2005a652b95\", \n    \"134d70d8685769e42fdf3b014e948b88e3d0efd0b9da0f5a2e60cf6f62069aad\", \n    \"9006093a3cd1f8d0b2fa5f4b7958469bddf8f788d089ef8480fbc7d3af189088\"]\n  }"
  },
  {
    "path": "suites/K8s_pods.json",
    "content": "{\n    \"suite_name\": \"K8s Pod Health\",\n    \"suite_check_uuids\": [\"d8047bf803242cfbfd1a19e28d64ae8d95168f8edb753ae4e1e7a7af1ffccf07\", \n                            \"683b7f1a1482a5bed32698689e2b47e13dcdb5e00d719316cc46ada5ead26758\",\n                             \"0ee6916ced53898c496c01c396ee6765611e023029080258463bd4331af54582\", \n                             \"feb60351fb3290f22855cc68f4741e24cd930debb326724e977ad9e450c49c74\", \n                             \"d7a1da167d056a912739fce8c4571c6863050f52d6e19495971277057e709857\", \n                             \"38e1b66dd63e5e211aca4c451e211fe24b00e1ec206e172e85bfe93427a795c2\", \n                             \"bf0dad12a041d356406d77f967c2ff2ed31e1bfd47088c0844b629d792fb28ca\", \n                             \"0e3f155377bafd8cf105db4ada0772c979dc14f95a8d76110b0ff2005a652b95\", \n                             \"f859a8bb5222b242b8366f5d0459b72309b6891d2dcac154cd273f4dbde1e5ac\"]\n  }"
  },
  {
    "path": "suites/K8s_runtime.json",
    "content": "{\n    \"suite_name\": \"K8s Runtime Health\",\n    \"suite_check_uuids\": [\"04477a3e600b67ac96bcc2430a7202c50babd0fee6dd78804a18551631a06287\", \n                    \"0ee6916ced53898c496c01c396ee6765611e023029080258463bd4331af54582\", \n                    \"d7a1da167d056a912739fce8c4571c6863050f52d6e19495971277057e709857\", \n                    \"feb60351fb3290f22855cc68f4741e24cd930debb326724e977ad9e450c49c74\", \n                    \"f859a8bb5222b242b8366f5d0459b72309b6891d2dcac154cd273f4dbde1e5ac\", \n                    \"588e9f61ddf3343d359958ae195a3d912a7fe4d4341d098c62c2ffa0c6a1814f\", \n                    \"38e1b66dd63e5e211aca4c451e211fe24b00e1ec206e172e85bfe93427a795c2\", \n                    \"d8047bf803242cfbfd1a19e28d64ae8d95168f8edb753ae4e1e7a7af1ffccf07\", \n                    \"134d70d8685769e42fdf3b014e948b88e3d0efd0b9da0f5a2e60cf6f62069aad\", \n                    \"9006093a3cd1f8d0b2fa5f4b7958469bddf8f788d089ef8480fbc7d3af189088\", \n                    \"bf0dad12a041d356406d77f967c2ff2ed31e1bfd47088c0844b629d792fb28ca\"]\n  }"
  },
  {
    "path": "suites/aws_ec2.json",
    "content": "{\n    \"suite_name\": \"AWS EC2 Health\",\n    \"suite_check_uuids\":   \n    [\"6cc8a1355937c21df3ace495375225012fa8915f4125ad143367e0feb34486c5\", \n    \"5371ce7d1e24b4f013413eca601869bd263f94e6b440f386135eaa1d7c474978\", \n    \"0ebc91f11a150d8933a8ebf4cf8824f0ca8cd9e64383b30dd9fad4e7b9b26ac9\", \n    \"2e187515d84ddfa1b319617e1aae8f6483eb9886582075f94a1065ea38e3b652\", \n    \"c25a662a49587285082c36455564eed5664cc852926fcc2cec374300492df09d\", \n    \"8d47a48733e9c721bf2cc896a10e2445bb49d5c70ca1b93eee38b28afe2bd157\"]\n    \n\n  }"
  },
  {
    "path": "suites/aws_lambbdas.json",
    "content": "{\n    \"suite_name\": \"AWS Lambda Health\",\n    \"suite_check_uuids\":   \n    [\"e0427cd28eee85da5bdda8b0b6e5294f9761f5229695d0b30c86062d906f946f\", \n    \"b67ac77869789a5c1a9fd6acf2786d5899e0df9c6cbee27601674a6956822ed3\", \n    \"7c711bc3e744eb16a7372e2b6d27706d5aa53303bf30e633fa58c9e610aa1b29\"]\n  }"
  },
  {
    "path": "suites/aws_loadbalancer.json",
    "content": "{\n    \"suite_name\": \"AWS Load Balancer NAT Gateway Health\",\n    \"suite_check_uuids\":   \n        [\"88c492263d4fe922035cc9cbb7d8f10aad735156835926d4c609e4f5d4a09b7f\", \n        \"0f0c137beaf6a9246508393d1e868cea529d30a88631cd0f321799acbfbd47bb\", \n        \"ed9c71d09866b0a019abe4f10951f32f9484504e0e274eb3d248e8bc321cb257\",\n        \"6d2964252c14fd1439bdefd224d147ac75fc7fe06036c6d0956081fa45505139\"]\n  }"
  },
  {
    "path": "templates/README.md",
    "content": ""
  },
  {
    "path": "templates/legos/README.md",
    "content": "[<img align=\"left\" src=\"https://unskript.com/assets/favicon.png\" width=\"100\" height=\"100\" style=\"padding-right: 5px\">](https://unskript.com/assets/favicon.png) \r\n<h1>Filter All AWS EC2 Instances </h1>\r\n\r\n## Description\r\n- Lego description here\r\n\r\n\r\n## Lego Details\r\n\r\n    lego_function_name(handle, param1:type1, param2:type2 ...)\r\n\r\n        handle: Object of type unSkript AWS Connector\r\n        param1:type1 short description about parameter param1\r\n        param2:type2 short description about parameter param2\r\n\r\n\r\nwhere type1, type2 are the [python types](https://docs.python.org/3/library/stdtypes.html)\r\n\r\n\r\n## Lego Input\r\n\r\nparam1: \r\nparam2:\r\n\r\n\r\n## Lego Output\r\nHere is a sample output.\r\n\r\n<img src=\"./1.png\">\r\n"
  },
  {
    "path": "templates/legos/__init__.py",
    "content": ""
  },
  {
    "path": "templates/legos/lego.json",
    "content": "{\r\n    \"action_title\": \"Your lego title\",\r\n    \"action_description\": \"Your lego description\",\r\n    \"action_entry_function\": \"name_of_the_function_implemented\",\r\n    \"action_supports_poll\": true,\r\n    \"action_supports_iteration\": true,\r\n    \"action_categories\": [ \"CATEGORY_TYPE_CLOUDOPS\", \"CATEGORY_TYPE_DEVOPS\", \"CATEGORY_TYPE_SRE\",\"CATEGORY_TYPE_TEMPLATE\"]\r\n}\r\n"
  },
  {
    "path": "templates/legos/lego.py",
    "content": ""
  },
  {
    "path": "templates/runbooks/StartHere.ipynb",
    "content": "{\n    \"cells\": [\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"6e397c81\",\n            \"metadata\": {\n                \"name\": \"Start Here\",\n                \"title\": \"Start Here\"\n            },\n            \"source\": [\n                \"Search the lego on the search bar on the right, drag and drop the lego to the cell.\"\n            ]\n        }\n    ],\n    \"metadata\": {\n        \"execution_data\": {\n            \"environment_id\": \"1499f27c-6406-4fbd-bd1b-c6f92800018f\",\n            \"environment_name\": \"Staging\",\n            \"execution_id\": \"\",\n            \"inputs_for_searched_lego\": \"\",\n            \"notebook_id\": \"d4159cb3-6c83-4ba5-a2f7-d23c0777076b.ipynb\",\n            \"parameters\": null,\n            \"runbook_name\": \"gcp\",\n            \"search_string\": \"\",\n            \"show_tool_tip\": true,\n            \"tenant_id\": \"982dba5f-d9df-48ae-a5bf-ec1fc94d4882\",\n            \"tenant_url\": \"https://tenant-staging.alpha.unskript.io\",\n            \"user_email_id\": \"support+staging@unskript.com\",\n            \"workflow_id\": \"f8ead207-81c0-414a-a15b-76fcdefafe8d\"\n        },\n        \"kernelspec\": {\n            \"display_name\": \"unSkript (Build: 618)\",\n            \"name\": \"python_kubernetes\"\n        },\n        \"language_info\": {\n            \"file_extension\": \".py\",\n            \"mimetype\": \"text/x-python\",\n            \"name\": \"python\",\n            \"pygments_lexer\": \"ipython3\"\n        },\n        \"parameterSchema\": {\n            \"properties\": {},\n            \"required\": [],\n            \"title\": \"Schema\",\n            \"type\": \"object\"\n        },\n        \"parameterValues\": null\n    },\n    \"nbformat\": 4,\n    \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "templates/runbooks/gcp.ipynb",
    "content": "{\n    \"cells\": [\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"6e397c81\",\n            \"metadata\": {\n                \"name\": \"Welcome\",\n                \"title\": \"Welcome\"\n            },\n            \"source\": [\n                \"Use the below lego to start building your functionality.\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": null,\n            \"id\": \"7214df50-b385-4093-8147-72fc30ebd671\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionNeedsCredential\": true,\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": false,\n                \"actionSupportsPoll\": false,\n                \"action_uuid\": \"aa0e997987a52d5a181f7d2352066443e675cd2c6893ffd8ae18c46dc2dcf8f1\",\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Get GCP Handle\",\n                \"id\": 92,\n                \"index\": 92,\n                \"inputschema\": [\n                    {\n                        \"properties\": {},\n                        \"title\": \"gcp_get_handle\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"jupyter\": {\n                    \"source_hidden\": true\n                },\n                \"legotype\": \"LEGO_TYPE_GCP\",\n                \"name\": \"Get GCP Handle\",\n                \"nouns\": [\n                    \"gcp\",\n                    \"handle\"\n                ],\n                \"orderProperties\": [],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"tags\": [\n                    \"gcp_get_handle\"\n                ],\n                
\"verbs\": [\n                    \"get\"\n                ]\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"##\\n\",\n                \"##  Copyright (c) 2021 unSkript, Inc\\n\",\n                \"##  All rights reserved.\\n\",\n                \"##\\n\",\n                \"from pydantic import BaseModel\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def gcp_get_handle(handle):\\n\",\n                \"    \\\"\\\"\\\"gcp_get_handle returns the GCP handle.\\n\",\n                \"\\n\",\n                \"       :rtype: GCP Handle.\\n\",\n                \"    \\\"\\\"\\\"\\n\",\n                \"    return handle\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"def unskript_default_printer(output):\\n\",\n                \"    if isinstance(output, (list, tuple)):\\n\",\n                \"        for item in output:\\n\",\n                \"            print(f'item: {item}')\\n\",\n                \"    elif isinstance(output, dict):\\n\",\n                \"        for item in output.items():\\n\",\n                \"            print(f'item: {item}')\\n\",\n                \"    else:\\n\",\n                \"        print(f'Output for {task.name}')\\n\",\n                \"        print(output)\\n\",\n                \"\\n\",\n                \"task = Task(Workflow())\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(gcp_get_handle, lego_printer=unskript_default_printer, hdl=hdl, args=args)\"\n            ]\n        }\n    ],\n    \"metadata\": {\n        \"execution_data\": {\n            \"environment_id\": \"1499f27c-6406-4fbd-bd1b-c6f92800018f\",\n            \"environment_name\": \"Staging\",\n            \"execution_id\": \"\",\n            
\"inputs_for_searched_lego\": \"\",\n            \"notebook_id\": \"d4159cb3-6c83-4ba5-a2f7-d23c0777076b.ipynb\",\n            \"parameters\": null,\n            \"runbook_name\": \"gcp\",\n            \"search_string\": \"\",\n            \"show_tool_tip\": false,\n            \"tenant_id\": \"982dba5f-d9df-48ae-a5bf-ec1fc94d4882\",\n            \"tenant_url\": \"https://tenant-staging.alpha.unskript.io\",\n            \"user_email_id\": \"support+staging@unskript.com\",\n            \"workflow_id\": \"f8ead207-81c0-414a-a15b-76fcdefafe8d\"\n        },\n        \"kernelspec\": {\n            \"display_name\": \"unSkript (Build: 618)\",\n            \"name\": \"python_kubernetes\"\n        },\n        \"language_info\": {\n            \"file_extension\": \".py\",\n            \"mimetype\": \"text/x-python\",\n            \"name\": \"python\",\n            \"pygments_lexer\": \"ipython3\"\n        },\n        \"parameterSchema\": {\n            \"properties\": {},\n            \"required\": [],\n            \"title\": \"Schema\",\n            \"type\": \"object\"\n        },\n        \"parameterValues\": null\n    },\n    \"nbformat\": 4,\n    \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "templates/runbooks/k8s.ipynb",
    "content": "{\n    \"cells\": [\n        {\n            \"cell_type\": \"markdown\",\n            \"id\": \"6e397c81\",\n            \"metadata\": {},\n            \"source\": [\n                \"Use the below lego to start building your functionality.\"\n            ]\n        },\n        {\n            \"cell_type\": \"code\",\n            \"execution_count\": null,\n            \"id\": \"9cb27a11-95cb-4b20-8d41-59903faa0e52\",\n            \"metadata\": {\n                \"accessType\": \"ACCESS_TYPE_UNSPECIFIED\",\n                \"actionBashCommand\": false,\n                \"actionNeedsCredential\": true,\n                \"actionRequiredLinesInCode\": [],\n                \"actionSupportsIteration\": true,\n                \"actionSupportsPoll\": true,\n                \"action_uuid\": \"ae0b25757f0c6c0ca4b3aaf6feea636e3f193dc354f74823a7becd7d675becdc\",\n                \"collapsed\": true,\n                \"createTime\": \"1970-01-01T00:00:00Z\",\n                \"currentVersion\": \"0.1.0\",\n                \"description\": \"Kubectl command in python syntax.\",\n                \"id\": 26,\n                \"index\": 26,\n                \"inputschema\": [\n                    {\n                        \"properties\": {\n                            \"kubectl_command\": {\n                                \"description\": \"kubectl command eg \\\"kubectl get pods --all-namespaces\\\"\",\n                                \"title\": \"Kubectl Command\",\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"required\": [\n                            \"kubectl_command\"\n                        ],\n                        \"title\": \"k8s_kubectl_command\",\n                        \"type\": \"object\"\n                    }\n                ],\n                \"jupyter\": {\n                    \"outputs_hidden\": true,\n                    \"source_hidden\": 
true\n                },\n                \"legotype\": \"LEGO_TYPE_K8S\",\n                \"name\": \"Kubectl in python syntax\",\n                \"nouns\": [\n                    \"command\"\n                ],\n                \"orderProperties\": [\n                    \"kubectl_command\"\n                ],\n                \"output\": {\n                    \"type\": \"\"\n                },\n                \"tags\": [\n                    \"k8s_kubectl_command\"\n                ],\n                \"verbs\": [\n                    \"execute\"\n                ]\n            },\n            \"outputs\": [],\n            \"source\": [\n                \"#\\n\",\n                \"# Copyright (c) 2021 unSkript.com\\n\",\n                \"# All rights reserved.\\n\",\n                \"#\\n\",\n                \"\\n\",\n                \"from pydantic import BaseModel, Field\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"from beartype import beartype\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command_printer(output):\\n\",\n                \"    if output is None:\\n\",\n                \"        return\\n\",\n                \"    print(output)\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"@beartype\\n\",\n                \"def k8s_kubectl_command(handle, kubectl_command: str) -> str:\\n\",\n                \"\\n\",\n                \"    result = handle.run_native_cmd(kubectl_command)\\n\",\n                \"    # Treat a missing result or any stderr output as a failure\\n\",\n                \"    if result is None or getattr(result, \\\"stderr\\\", None):\\n\",\n                \"        err = result.stderr if result is not None else \\\"no result\\\"\\n\",\n                \"        print(f\\\"Error while executing command ({kubectl_command}): {err}\\\")\\n\",\n                \"        return str()\\n\",\n                \"\\n\",\n                \"    return result.stdout\\n\",\n                \"\\n\",\n                \"\\n\",\n                \"task = 
Task(Workflow())\\n\",\n                \"(err, hdl, args) = task.validate(vars=vars())\\n\",\n                \"if err is None:\\n\",\n                \"    task.execute(k8s_kubectl_command, lego_printer=k8s_kubectl_command_printer, hdl=hdl, args=args)\"\n            ]\n        }\n    ],\n    \"metadata\": {\n        \"execution_data\": {\n            \"environment_id\": \"1499f27c-6406-4fbd-bd1b-c6f92800018f\",\n            \"environment_name\": \"Staging\",\n            \"execution_id\": \"\",\n            \"inputs_for_searched_lego\": \"\",\n            \"notebook_id\": \"3413c470-a729-4b66-aeac-a9b362e0da42.ipynb\",\n            \"parameters\": null,\n            \"runbook_name\": \"k8s\",\n            \"search_string\": \"\",\n            \"show_tool_tip\": false,\n            \"tenant_id\": \"982dba5f-d9df-48ae-a5bf-ec1fc94d4882\",\n            \"tenant_url\": \"https://tenant-staging.alpha.unskript.io\",\n            \"user_email_id\": \"support+staging@unskript.com\",\n            \"workflow_id\": \"87d520c9-1582-43c6-8c56-f90788ef6de6\"\n        },\n        \"kernelspec\": {\n            \"display_name\": \"Python 3.9.6 ('jupyter-elyra')\",\n            \"language\": \"python\",\n            \"name\": \"python3\"\n        },\n        \"language_info\": {\n            \"file_extension\": \".py\",\n            \"mimetype\": \"text/x-python\",\n            \"name\": \"python\",\n            \"pygments_lexer\": \"ipython3\",\n            \"version\": \"3.9.6\"\n        },\n        \"parameterSchema\": {\n            \"properties\": {},\n            \"required\": [],\n            \"title\": \"Schema\",\n            \"type\": \"object\"\n        },\n        \"parameterValues\": null,\n        \"vscode\": {\n            \"interpreter\": {\n                \"hash\": \"abbf80fbfe9c242090d0fbc1079a9f03583a8e7a3457324ed37aa21600e94bd8\"\n            }\n        }\n    },\n    \"nbformat\": 4,\n    \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "templates/scheduler.template",
    "content": "#!/bin/bash\n\n*/30 * * * * sudo -H -u root bash -c \"/usr/local/bin/unskript-ctl.sh -rc --type k8s, aws --report\"\n\n"
  },
  {
    "path": "tools/README.md",
    "content": "<center>\n  <a href=\"https://github.com/unskript/Awesome-CloudOps-Automation\">\n    <img src=\"https://unskript.com/assets/favicon.png\" alt=\"Logo\" width=\"80\" height=\"80\">\n  </a>\n  <h1> Tools Directory </h1>\n</center>\n\n\n# Static analysis on Runbooks\n\nThis tool can be used to run Static analysis on given runbook. \n\nHere are the options to run the tool. You can either run static analysis on all the cells in a given runbook or just custom cells in the given runbook. (A custom cell is a code cell that is not created using unSkript Actions)\n```\n➭ ./runbook_sa.sh -h\nusage: runbook-sa [-h] [-ra RA_RUNBOOKS] [-rc RC_RUNBOOKS]\n\nWelcome to Runbook Static Analysis Tool VERSION: 0.1.0\n\noptions:\n  -h, --help            show this help message and exit\n  -ra RA_RUNBOOKS, --run-on-all-cells RA_RUNBOOKS\n                        Run Static Analysis on cells in the notebook -ra Runbook1,Runbook2, etc..\n  -rc RC_RUNBOOKS, --run-on-custom-cells RC_RUNBOOKS\n                        Run Static Analysis only on cells in the notebook -rc Runbook1,Runbook2, etc..\n\nThis tool needs pyflakes and jupyter-lab to run\n```\n\nHere is a sample output\n\n```\n ./runbook_sa.sh -rc notebook.ipynb\nAnalyzing notebook.ipynb\n\n./custom_cell_contents_0.py:68:5 'json' imported but unused\n./custom_cell_contents_0.py:100:35 undefined name 'namespace'\n./custom_cell_contents_0.py:103:88 undefined name 'all_alpha_nodes'\n./custom_cell_contents_0.py:154:5 local variable 'kubectl_delete_pvc_command' is assigned to but never used\n./custom_cell_contents_0.py:179:5 local variable 'curl_cmds' is assigned to but never used\n./custom_cell_contents_0.py:180:54 undefined name 'max_uid'\n./custom_cell_contents_0.py:181:60 undefined name 'max_ts'\n./custom_cell_contents_0.py:182:55 undefined name 'max_nsid'\n./custom_cell_contents_0.py:185:36 undefined name 'zero_leader_node'\n```\n"
  },
  {
    "path": "tools/runbook-sa/runbook_sa.py",
    "content": "\"\"\"This file runs static analysis using pyflakes on runbooks\"\"\"\n#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\n\nimport sys\nimport os\nimport time\nimport subprocess\nimport argparse\n\nfrom nbformat import read\nfrom argparse import ArgumentParser\n\n\ndef save_custom_cell_contents_to_file(runbook, output_file=\"./custom_cell_contents.py\"):\n    with open(runbook, 'r') as f:\n        notebook_content = read(f, as_version=4)\n\n    custom_cells = []\n    globals_list = []\n    globals_inserted = False\n    for cell in notebook_content['cells'][1:]:\n        if cell.get('cell_type') == \"code\":\n            if cell.get('metadata').get('customAction') is True:\n                # Get only custom cells putput\n                custom_cells.append(cell)\n            if cell.get('metadata').get('outputParams') is not None:\n                output_name = cell.get('metadata').get('outputParams').get('output_name')\n                globals_list.append(output_name)\n        else:\n            # Cell is a Markdown cell\n            pass\n    \n    if len(custom_cells) > 0:\n        with open(output_file, 'w') as  f:\n            for idx,cell in enumerate(custom_cells):\n                if not cell.get('source'):\n                    # If we have empty custom cell\n                    continue \n                if not globals_inserted:\n                    for g_list in globals_list:\n                        f.write(f\"{g_list} = None\" + \" # noqa\" + '\\n')\n                    globals_inserted = True\n                f.write(f\"def custom_cell_{idx}(): \\n\")\n                for line in cell.get('source').split('\\n'):\n                    f.write(f\"    {line} \\n\")\n                f.write('\\n')\n            
f.write('\\n')\n        time.sleep(2)\n        if not os.path.exists(output_file) or os.path.getsize(output_file) == 0:\n            print(f\"ERROR: Error occurred during extraction of custom cell for {runbook}\")\n    else:\n        print(\"No Custom Cell Found in the Runbook\")\n\ndef run_pyflakes(script):\n    command = [f\"pyflakes {script}\"]\n    process = subprocess.run(command,\n                                stdout=subprocess.PIPE,\n                                stderr=subprocess.PIPE,\n                                shell=True)\n    # stdout is captured as bytes and is empty (not None) when pyflakes\n    # reports nothing, so fall back to stderr on empty output\n    if not process.stdout:\n        return process.stderr.decode('utf-8')\n    else:\n        return process.stdout.decode('utf-8')\n\ndef main(category: str = 'all', runbooks: list = None):\n    if not runbooks:\n        print(\"Need an Input to run the script\")\n        sys.exit(-1)\n\n    for idx, runbook in enumerate(runbooks):\n        if not os.path.exists(runbook):\n            print(f\"Unable to find {runbook}...\")\n            continue\n        print(f\"Analyzing {runbook}\")\n        if category == 'custom':\n            save_custom_cell_contents_to_file(runbook, f\"./custom_cell_contents_{idx}.py\")\n            try:\n                print(run_pyflakes(f'./custom_cell_contents_{idx}.py'))\n            except Exception as e:\n                raise e\n        elif category == 'all':\n            command = [f\"jupyter nbconvert --to script {runbook}\"]\n            _ = subprocess.run(command,\n                                stdout=subprocess.PIPE,\n                                stderr=subprocess.PIPE,\n                                shell=True)\n            pyfile = runbook.replace('.ipynb', '.py')\n            try:\n                print(run_pyflakes(pyfile))\n            except Exception as e:\n                raise e\n\n\nclass CommaSeparatedAction(argparse.Action):\n    def __call__(self, parser, namespace, values, option_string=None):\n        setattr(namespace, self.dest, 
values.split(','))\n\n\nif __name__ == '__main__':\n    parser = ArgumentParser(prog='runbook-sa')\n    version_number = \"0.1.0\"\n    description = \"\"\n    description = description + str(\"\\n\")\n    description = description + str(\"\\t  Welcome to Runbook Static Analysis Tool\") + '\\n'\n    description = description + str(f\"\\t\\t   VERSION: {version_number}\") + '\\n' \n    parser.description = description\n    parser.epilog = 'This tool needs pyflakes and jupyter-lab to run'\n\n    parser.add_argument('-ra', '--run-on-all-cells', \n                        dest='ra_runbooks',\n                        action=CommaSeparatedAction,\n                        help='Run Static Analysis on cells in the notebook -ra Runbook1,Runbook2, etc..')\n    parser.add_argument('-rc', '--run-on-custom-cells', \n                        dest='rc_runbooks',\n                        action=CommaSeparatedAction,\n                        help='Run Static Analysis only on cells in the notebook -rc Runbook1,Runbook2, etc..')\n    \n\n    args = parser.parse_args()\n    if len(sys.argv) == 1:\n        parser.print_help()\n        sys.exit(0)\n\n    if args.ra_runbooks:\n        main(category='all', runbooks=args.ra_runbooks)\n    elif args.rc_runbooks:\n        main(category='custom', runbooks=args.rc_runbooks)\n    else:\n        parser.print_help()\n        sys.exit(0)\n"
  },
  {
    "path": "tools/runbook-sa/runbook_sa.sh",
    "content": "#!/bin/bash\n\n# Run Static Analysis \n#     This script invokes the static analysis script\n#\n/usr/bin/env python ./runbook_sa.py \"$@\""
  },
  {
    "path": "unskript-ctl/DESIGN.md",
    "content": "<br />\n<div align=\"center\">\n    <a href=\"https://unskript.com/\">\n        <img src=\"https://storage.googleapis.com/unskript-website/assets/favicon.png\" alt=\"Logo\" width=\"80\" height=\"80\">\n    </a>\n    <p align=\"center\">\n</p>\n</div>\n\n# Design Consideration\n\nThis document describes the design consideration that was the basis for refactoring unskript-ctl \n\n* Reusable Classes \n* Mockable Classes \n* Easily replacable components\n\nThe refactored code reflects these three points. Below the UML representation and the Pictorial representation is provided to help understand the code and to help with maintaining the code. \n\n## UML representation\n```\n@startuml\nabstract class UnskriptFactory {\n    - _config = ConfigParserFactory()\n    - logger \n    --\n    - __init__()\n    - __new__()\n    - _configure_logger()\n    - update_credential_to_uglobal()\n    - _banner()\n    - _error()\n}\n\nabstract class ChecksFactory {\n    - __init__()\n    - run()\n}\n\nabstract class ScriptsFactory {\n    - __init__()\n    - run()\n}\n\nabstract class NotificationFactory {\n    - __init__()\n    - notify() \n}\n\nclass ConfigParserFactory {\n    - __init__()\n    - load_config_file()\n    - get_schedule()\n    - get_jobs()\n    - get_checks()\n    - get_notification()\n    - get_credentials()\n    - get_global()\n    - get_checks_params()\n    --\n    - _get()\n}\n\nabstract class DatabaseFactory {\n    - __init__()\n    - create()\n    - read()\n    - update()\n    - delete()\n}\n\nUnskriptFactory <-- ChecksFactory\nUnskriptFactory <-- ScriptsFactory\nUnskriptFactory <-- NotificationFactory\nUnskriptFactory <-- ConfigParserFactory\nUnskriptFactory <-- DatabaseFactory \n\nclass ZoDBInterface {\n    - __init__()\n    - create()\n    - read()\n    - update()\n    - delete()\n}\n\nclass SQLInterface {\n    - __init__()\n    - create()\n    - read()\n    - update()\n    - delete()\n}\n\nDatabaseFactory <-- ZoDBInterface\nDatabaseFactory <-- 
SQLInterface \n\nclass CodeSnippets {\n    - __init__()\n    - get_checks_by_uuid()\n    - get_checs_by_connector()\n    - get_all_check_names()\n    - get_check_by_name()\n    - get_action_name_from_id()\n    - get_connector_name_from_id()\n}\n\nZoDBInterface <-- CodeSnippets\n\nclass PSS {\n    - __init__()\n}\n\nZoDBInterface <-- PSS \n\nclass DBInterface {\n    - __init__()\n    --\n    - pss = PSS()\n    - cs = CodeSnippets()\n}\n\nUnskriptFactory <-- DBInterface\n\nPSS o-- DBInterface\nCodeSnippets o-- DBInterface\n\nclass SlackNotification {\n    - __init__()\n    - validate_data()\n    - notify()\n    --\n    - _generate_notification_message()\n}\n\nNotificationFactory <-- SlackNotification\n\nclass EmailNotification {\n    - __init__()\n    - notify()\n    - validate_data()\n    - create_tarball_archive()\n    - create_temp_files_of_failed_check_results()\n    - create_script_summary_message()\n    - create_email_attachment()\n    - create_checks_summary_message()\n    - create_email_header()\n    - prepare_combined_email()\n}\n\nNotificationFactory <-- EmailNotification\n\nclass SendgridNotification {\n    - __init__()\n    - notify()\n    - send_sendgrid_notification()\n    - sendgrid_add_email_attachment()\n}\n\nEmailNotification <-- SendgridNotification\n\nclass AWSEmailNotification {\n    - __init__()\n    - notify()\n    - prepare_to_send_awsses_notification()\n    - do_send_awsses_email()\n}\n\nEmailNotification <-- AWSEmailNotification\n\nclass SmtpNotification {\n    - __init__()\n    - notify()\n    - send_smtp_notification()\n}\n\nEmailNotification <-- SmtpNotification\n\nclass Notification {\n    - __init__()\n    - notify() \n    - _send_email()\n}\n\nSmtpNotification o-- Notification \nAWSEmailNotification o-- Notification\nSendgridNotification o-- Notification\n\nclass Checks {\n    - __init__()\n    - run()\n    - display_check_result()\n    - output_after_merging_checks()\n    - calculate_combined_check_status()\n    - 
_create_jit_script()\n    - get_code_cell_name_and_uuid()\n    - get_first_cell_content()\n    - get_last_cell_content()\n    - get_after_check_content()\n    - update_exec_id()\n    - insert_task_lines()\n    - replace_input_with_globals()\n    - create_checks_for_matrix_argument()\n}\n\nChecksFactory <-- Checks\n\nclass Script {\n    - __init__()\n    - run()\n}\n\nScriptsFactory <-- Script\n\nclass UnskriptCtl {\n    - __init__()\n    - create_creds()\n    - display_creds_ui()\n    - save_check_names()\n    - run_main()\n    - update_audit_trail()\n    - list_main()\n    - list_credentials()\n    - list_checks_by_connector()\n    - display_failed_checks()\n    - show_main()\n    - print_all_result_table()\n    - print_connector_result_table()\n    - print_execution_result_table()\n    - service_main() | TBD\n    - debug_main()\n    - start_debug()\n    - stop_debug()\n    - notify()\n    --\n    checks = Checks()\n    script = Script()\n    notification = Notification()\n}\n\nUnskriptFactory <-- UnskriptCtl\n\nChecks <-- UnskriptCtl\nScript <-- UnskriptCtl\nNotification <-- UnskriptCtl\n\nclass main {\n    - uc = UnskriptCtl()\n    - parser = ArgumentParser()\n}\n\nUnskriptCtl o-- main\n\n@enduml\n```\n\n## Pictorial representation \n\n![unSkript Ctl](docs/design.png \"unSkript Design\")\n\n"
  },
  {
    "path": "unskript-ctl/README.md",
    "content": "<br />\n<div align=\"center\">\n    <a href=\"https://unskript.com/\">\n        <img src=\"https://storage.googleapis.com/unskript-website/assets/favicon.png\" alt=\"Logo\" width=\"80\" height=\"80\">\n    </a>\n    <p align=\"center\">\n</p>\n</div>\n\n# unSkript CLI\n---\n\n\n\n## Introduction\nunskript-ctl is a command line tool which allows you to run checks against your resources, be it infrastructure or your own services. \n\nHere are the options that are supported by the uskript-ctl command\n```\nunskript-ctl.sh\nusage: unskript-ctl [-h] [--create-credential ...] {run,list,show,debug} ...\n\nWelcome to unSkript CLI Interface VERSION: 1.1.0 BUILD_NUMBER: 1.1.0\n\npositional arguments:\n  {run,list,show,debug}\n                        Available Commands\n    run                 Run Options\n    list                List Options\n    show                Show Options\n    debug               Debug Option\n\noptions:\n  -h, --help            show this help message and exit\n  --create-credential ...\n                        Create Credential [-creds-type creds_file_path]\n```\n\n## \n\n## Command Line Options\n### Run options\nUsing the **run** option, you can run check(s), scripts and runbooks.\nAlso, if you want to get the report of the run in an email or slack, you can\nuse the **--report** option.\n\n```\nusage: unskript-ctl run [-h] [--script SCRIPT] [--report] [--info] {check} ...\n\npositional arguments:\n  {check}\n    check          Run Check Option\n\noptions:\n  -h, --help       show this help message and exit\n  --script SCRIPT  Script name to run\n  --report         Report Results\n  --info           Run information gathering actions\n```\n\n```\nusage: unskript-ctl run check [-h] [--name NAME] [--type TYPE] [--all]\n\noptions:\n  -h, --help   show this help message and exit\n  --name NAME  Check name to run\n  --type TYPE  Type of Check to run\n  --all        Run all checks\n```\n\n### List options\n```\nusage: unskript-ctl list [-h] 
[--credential] {checks,failed-checks,info} ...\n\npositional arguments:\n  {checks,failed-checks,info}\n    checks              List Check Options\n    failed-checks       List Failed check options\n    info                List information gathering actions\n\noptions:\n  -h, --help            show this help message and exit\n  --credential          List All credentials\n```\n\n```\nusage: unskript-ctl list checks [-h] [--all]\n                                [--type {aws,gcp,k8s,elasticsearch,grafana,redis,jenkins,github,netbox,nomad,jira,kafka,mongodb,mysql,postgresql,rest,slack,ssh,vault,salesforce}]\n\noptions:\n  -h, --help            show this help message and exit\n  --all                 List All Checks\n  --type {aws,gcp,k8s,elasticsearch,grafana,redis,jenkins,github,netbox,nomad,jira,kafka,mongodb,mysql,postgresql,rest,slack,ssh,vault,salesforce}\n                        List All Checks of given connector type\n```\n\n```\nusage: unskript-ctl list failed-checks [-h] [--all]\n                                       [--type {aws,gcp,k8s,elasticsearch,grafana,redis,jenkins,github,netbox,nomad,jira,kafka,mongodb,mysql,postgresql,rest,slack,ssh,vault,salesforce}]\n\noptions:\n  -h, --help            show this help message and exit\n  --all                 Show All Failed Checks\n  --type {aws,gcp,k8s,elasticsearch,grafana,redis,jenkins,github,netbox,nomad,jira,kafka,mongodb,mysql,postgresql,rest,slack,ssh,vault,salesforce}\n                        List All Checks of given connector type\n```\n\n```\nusage: unskript-ctl list info [-h] [--all]\n                              [--type {aws,gcp,k8s,elasticsearch,grafana,redis,jenkins,github,netbox,nomad,jira,kafka,mongodb,mysql,postgresql,rest,slack,ssh,vault,salesforce}]\n\noptions:\n  -h, --help            show this help message and exit\n  --all                 List all info gathering actions\n  --type 
{aws,gcp,k8s,elasticsearch,grafana,redis,jenkins,github,netbox,nomad,jira,kafka,mongodb,mysql,postgresql,rest,slack,ssh,vault,salesforce}\n                        List info gathering actions for given connector type\n```\n\n\n### Show options\n```\nusage: unskript-ctl show [-h] {audit-trail,failed-logs} ...\n\npositional arguments:\n  {audit-trail,failed-logs}\n    audit-trail         Show Audit Trail option\n    failed-logs         Show Failed Logs option\n\noptions:\n  -h, --help            show this help message and exit\n```\n```\nusage: unskript-ctl show audit-trail [-h] [--all]\n                                     [--type {aws,gcp,k8s,elasticsearch,grafana,redis,jenkins,github,netbox,nomad,jira,kafka,mongodb,mysql,postgresql,rest,slack,ssh,vault,salesforce}]\n                                     [--execution_id EXECUTION_ID]\n\noptions:\n  -h, --help            show this help message and exit\n  --all                 List trail of all checks across all connectors\n  --type {aws,gcp,k8s,elasticsearch,grafana,redis,jenkins,github,netbox,nomad,jira,kafka,mongodb,mysql,postgresql,rest,slack,ssh,vault,salesforce}\n                        Show Audit trail for checks for given connector\n  --execution_id EXECUTION_ID\n                        Execution ID for which the audit trail should be shown\n```\n```\nusage: unskript-ctl show failed-logs [-h] [--execution_id EXECUTION_ID]\n\noptions:\n  -h, --help            show this help message and exit\n  --execution_id EXECUTION_ID\n                        Execution ID for which the logs should be fetched\n```\n\n### Debug options\n\nUsing the **debug** option, you can connect this pod to the upstream VPN\nserver so that you can access the pod from the unSkript control portal.\n```\nusage: unskript-ctl debug [-h] [--start ...] [--stop]\n\noptions:\n  -h, --help   show this help message and exit\n  --start ...  Start debug session. Example [--start --config /tmp/config.ovpn]\n  --stop       Stop debug session\n```\n"
  },
  {
    "path": "unskript-ctl/add_creds.py",
    "content": "\"\"\"This file implements a wrapper over creds-ui\"\"\"\n#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport os\nimport sys\nimport json\nfrom pathlib import Path\nimport subprocess\n\n#from creds_ui import main as ui\nfrom argparse import ArgumentParser, REMAINDER\n\n# CONSTANTS USED IN THIS FILE\nSTUB_FILE = \"stub_creds.json\"\n\n# Note: Any change in credential_schema should also be followed by\n# the corresponding change in creds-ui too.\ncredential_schemas = '''\n[\n    {\n      \"title\": \"AWSSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"authentication\": {\n          \"title\": \"Authentication\",\n          \"discriminator\": \"auth_type\",\n          \"anyOf\": [\n            {\n              \"$ref\": \"#/definitions/AccessKeySchema\"\n            }\n          ]\n        }\n      },\n      \"required\": [\n        \"authentication\"\n      ],\n      \"definitions\": {\n        \"AccessKeySchema\": {\n          \"title\": \"Access Key\",\n          \"type\": \"object\",\n          \"properties\": {\n            \"auth_type\": {\n              \"title\": \"Auth Type\",\n              \"enum\": [\n                \"Access Key\"\n              ],\n              \"type\": \"string\"\n            },\n            \"access_key\": {\n              \"title\": \"Access Key\",\n              \"description\": \"Access Key to use for authentication.\",\n              \"type\": \"string\"\n            },\n            \"secret_access_key\": {\n              \"title\": \"Secret Access Key\",\n              \"description\": \"Secret Access Key to use for authentication.\",\n              \"type\": \"string\",\n              \"writeOnly\": true,\n              \"format\": \"password\"\n         
   }\n          },\n          \"required\": [\n            \"auth_type\",\n            \"access_key\",\n            \"secret_access_key\"\n          ]\n        }\n      }\n    },\n    {\n      \"title\": \"GCPSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"credentials\": {\n          \"title\": \"Google Cloud Credentials JSON\",\n          \"description\": \"Contents of the Google Cloud Credentials JSON file.\",\n          \"type\": \"string\"\n        }\n      },\n      \"required\": [\n        \"credentials\"\n      ]\n    },\n    {\n      \"title\": \"ElasticSearchSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"host\": {\n          \"title\": \"Host Name\",\n          \"description\": \"Elasticsearch Node URL. For eg: https://localhost:9200\",\n          \"type\": \"string\"\n        },\n        \"username\": {\n          \"title\": \"Username\",\n          \"description\": \"Username for Basic Auth.\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"password\": {\n          \"title\": \"Password\",\n          \"description\": \"Password for Basic Auth.\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        },\n        \"api_key\": {\n          \"title\": \"API Key\",\n          \"description\": \"API Key based authentication.\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        }\n      },\n      \"required\": [\n        \"host\"\n      ]\n    },\n    {\n      \"title\": \"GrafanaSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"api_key\": {\n          \"title\": \"API Token\",\n          \"description\": \"API Token to authenticate to grafana.\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"username\": {\n          \"title\": \"Username\",\n          \"description\": \"Username of the grafana 
user.\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"password\": {\n          \"title\": \"Password\",\n          \"description\": \"Password to authenticate to grafana.\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        },\n        \"host\": {\n          \"title\": \"Hostname\",\n          \"description\": \"Hostname of the grafana.\",\n          \"type\": \"string\"\n        }\n      },\n      \"required\": [\n        \"host\"\n      ]\n    },\n    {\n        \"title\": \"RedisSchema\",\n        \"type\": \"object\",\n        \"properties\": {\n          \"db\": {\n            \"title\": \"Database\",\n            \"description\": \"ID of the database to connect to.\",\n            \"default\": 0,\n            \"type\": \"integer\"\n          },\n        \"host\": {\n          \"title\": \"Hostname\",\n          \"description\": \"Hostname of the redis server.\",\n          \"type\": \"string\"\n        },\n        \"username\": {\n          \"title\": \"Username\",\n          \"description\": \"Username to authenticate to redis.\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"password\": {\n          \"title\": \"Password\",\n          \"description\": \"Password to authenticate to redis.\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        },\n        \"port\": {\n          \"title\": \"Port\",\n          \"description\": \"Port on which redis server is listening.\",\n          \"default\": 6379,\n          \"type\": \"integer\"\n        },\n        \"use_ssl\": {\n          \"title\": \"Use SSL\",\n          \"description\": \"Use SSL for communicating to Redis host.\",\n          \"default\": false,\n          \"type\": \"boolean\"\n        }\n      },\n      \"required\": [\n        \"host\"\n      
]\n    },\n    {\n      \"title\": \"JenkinsSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"url\": {\n          \"title\": \"Jenkins url\",\n          \"description\": \"Full Jenkins URL.\",\n          \"type\": \"string\"\n        },\n        \"user_name\": {\n          \"title\": \"Username\",\n          \"description\": \"Username to authenticate with Jenkins.\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"password\": {\n          \"title\": \"Password\",\n          \"description\": \"Password or API Token to authenticate with Jenkins.\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        }\n      },\n      \"required\": [\n        \"url\"\n      ]\n    },\n    {\n      \"title\": \"GithubSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"token\": {\n          \"title\": \"Access token\",\n          \"description\": \"Github Personal Access Token.\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        },\n        \"hostname\": {\n          \"title\": \"Custom Hostname\",\n          \"description\": \"Custom hostname for Github Enterprise Version.\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        }\n      },\n      \"required\": [\n        \"token\"\n      ]\n    },\n    {\n      \"title\": \"NetboxSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"host\": {\n          \"title\": \"Netbox Host\",\n          \"description\": \"Address of Netbox host\",\n          \"type\": \"string\"\n        },\n        \"token\": {\n          \"title\": \"Token\",\n          \"description\": \"Token value to authenticate write requests.\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        },\n        
\"threading\": {\n          \"title\": \"Threading\",\n          \"description\": \"Enable for multithreaded calls like .filter() and .all() queries. To enable set to True \",\n          \"type\": \"boolean\"\n        }\n      },\n      \"required\": [\n        \"host\"\n      ]\n    },\n    {\n      \"title\": \"NomadSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"host\": {\n          \"title\": \"Nomad IP address\",\n          \"description\": \"IP address of Nomad host\",\n          \"type\": \"string\"\n        },\n        \"timeout\": {\n          \"title\": \"Timeout(seconds)\",\n          \"description\": \"Timeout in seconds to retry connection\",\n          \"default\": 5,\n          \"type\": \"integer\"\n        },\n        \"token\": {\n          \"title\": \"Token\",\n          \"description\": \"Token value to authenticate requests to the cluster when using namespace\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        },\n        \"verify_certs\": {\n          \"title\": \"Verify certs\",\n          \"description\": \"Verify server ssl certs. This can be set to true when working with private certs.\",\n          \"type\": \"boolean\"\n        },\n        \"secure\": {\n          \"title\": \"Secure\",\n          \"description\": \"HTTPS enabled?\",\n          \"type\": \"boolean\"\n        },\n        \"namespace\": {\n          \"title\": \"Namespace\",\n          \"description\": \"Name of Nomad Namespace. 
By default, the default namespace will be considered.\",\n          \"type\": \"string\"\n        }\n      },\n      \"required\": [\n        \"host\"\n      ]\n    },\n    {\n      \"title\": \"ChatGPTSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"organization\": {\n          \"title\": \"Organization ID\",\n          \"description\": \"Identifier for the organization which is sometimes used in API requests. Eg: org-s8OPLNKVjsDAjjdbfTuhqAc\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"api_token\": {\n          \"title\": \"API Token\",\n          \"description\": \"API Token value to authenticate requests.\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        }\n      },\n      \"required\": [\n        \"api_token\"\n      ]\n    },\n    {\n      \"title\": \"OpsgenieSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n          \"api_token\": {\n            \"title\": \"Api Token\",\n            \"description\": \"Api token to authenticate Opsgenie: GenieKey\",\n            \"type\": \"string\",\n            \"writeOnly\": true,\n            \"format\": \"password\"\n          }\n      },\n      \"required\": [\n          \"api_token\"\n      ]\n    },\n    {\n      \"title\": \"JiraSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"url\": {\n          \"title\": \"URL\",\n          \"description\": \"URL of jira server.\",\n          \"type\": \"string\"\n        },\n        \"email\": {\n          \"title\": \"Email\",\n          \"description\": \"Email to authenticate to jira.\",\n          \"type\": \"string\"\n        },\n        \"api_token\": {\n          \"title\": \"Api Token\",\n          \"description\": \"Api token to authenticate to jira.\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        }\n      },\n      \"required\": [\n   
     \"url\",\n        \"email\",\n        \"api_token\"\n      ]\n    },\n    {\n      \"title\": \"K8SSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"kubeconfig\": {\n          \"title\": \"Kubeconfig\",\n          \"description\": \"Contents of the kubeconfig file.\",\n          \"type\": \"string\"\n        }\n      },\n      \"required\": [\n        \"kubeconfig\"\n      ]\n    },\n    {\n      \"title\": \"KafkaSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"broker\": {\n          \"title\": \"Broker\",\n          \"description\": \"host[:port] that the producer should contact to bootstrap initial cluster metadata. Default port is 9092\",\n          \"type\": \"string\"\n        },\n        \"sasl_username\": {\n          \"title\": \"SASL Username\",\n          \"description\": \"Username for SASL PlainText Authentication.\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"sasl_password\": {\n          \"title\": \"SASL Password\",\n          \"description\": \"Password for SASL PlainText Authentication.\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        }\n      },\n      \"required\": [\n        \"broker\"\n      ]\n    },\n    {\n     \"title\": \"MongoDBSchema\",\n     \"type\": \"object\",\n     \"properties\": {\n      \"host\": {\n        \"title\": \"Host\",\n        \"description\": \"Full MongoDB URI, in addition to simple hostname. 
It also supports mongodb+srv:// URIs\",\n        \"type\": \"string\"\n      },\n      \"port\": {\n        \"title\": \"Port\",\n        \"description\": \"Port on which mongoDB server is listening.\",\n        \"default\": 27017,\n        \"type\": \"integer\"\n      },\n      \"authentication\": {\n        \"title\": \"Authentication\",\n        \"discriminator\": \"auth_type\",\n        \"anyOf\": [\n          {\n            \"$ref\": \"#/definitions/AtlasSchema\"\n          },\n          {\n            \"$ref\": \"#/definitions/AuthSchema\"\n          }\n        ]\n      }\n    },\n    \"required\": [\n      \"host\",\n      \"authentication\"\n    ],\n    \"definitions\": {\n      \"AtlasSchema\": {\n        \"title\": \"AtlasSchema\",\n        \"type\": \"object\",\n        \"properties\": {\n          \"auth_type\": {\n            \"title\": \"Auth Type\",\n            \"enum\": [\n              \"Atlas Administrative API using HTTP Digest Authentication\"\n            ],\n            \"type\": \"string\"\n          },\n          \"atlas_public_key\": {\n            \"title\": \"Atlas API Public Key\",\n            \"description\": \"The public key acts as the username when making API requests\",\n            \"default\": \"\",\n            \"type\": \"string\"\n          },\n          \"atlas_private_key\": {\n            \"title\": \"Atlas API Private Key\",\n            \"description\": \"The private key acts as the password when making API requests\",\n            \"default\": \"\",\n            \"type\": \"string\",\n            \"writeOnly\": true,\n            \"format\": \"password\"\n          }\n        },\n        \"required\": [\n          \"auth_type\"\n        ]\n      },\n      \"AuthSchema\": {\n        \"title\": \"AuthSchema\",\n        \"type\": \"object\",\n        \"properties\": {\n          \"auth_type\": {\n            \"title\": \"Auth Type\",\n            \"enum\": [\n              \"Basic Auth\"\n            ],\n            
\"type\": \"string\"\n          },\n          \"user_name\": {\n            \"title\": \"Username\",\n            \"description\": \"Username to authenticate with MongoDB.\",\n            \"default\": \"\",\n            \"type\": \"string\"\n          },\n          \"password\": {\n            \"title\": \"Password\",\n            \"description\": \"Password to authenticate with MongoDB.\",\n            \"default\": \"\",\n            \"type\": \"string\",\n            \"writeOnly\": true,\n            \"format\": \"password\"\n          }\n        },\n        \"required\": [\n          \"auth_type\"\n        ]\n      }\n     }\n    },\n    {\n      \"title\": \"MySQLSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"DBName\": {\n          \"title\": \"Database name\",\n          \"description\": \"Name of the database to connect to MySQL.\",\n          \"type\": \"string\"\n        },\n        \"User\": {\n          \"title\": \"Username\",\n          \"description\": \"Username to authenticate to MySQL.\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"Password\": {\n          \"title\": \"Password\",\n          \"description\": \"Password to authenticate to MySQL.\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        },\n        \"Host\": {\n          \"title\": \"Hostname\",\n          \"description\": \"Hostname of the MySQL server.\",\n          \"type\": \"string\"\n        },\n        \"Port\": {\n          \"title\": \"Port\",\n          \"description\": \"Port on which MySQL server is listening.\",\n          \"default\": 5432,\n          \"type\": \"integer\"\n        }\n      },\n      \"required\": [\n        \"DBName\",\n        \"Host\"\n      ]\n    },\n    {\n      \"title\": \"PostgreSQLSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"DBName\": {\n          \"title\": 
\"Database name\",\n          \"description\": \"Name of the database to connect to.\",\n          \"type\": \"string\"\n        },\n        \"User\": {\n          \"title\": \"Username\",\n          \"description\": \"Username to authenticate to postgres.\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"Password\": {\n          \"title\": \"Password\",\n          \"description\": \"Password to authenticate to postgres.\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        },\n        \"Host\": {\n          \"title\": \"Hostname\",\n          \"description\": \"Hostname of the postgres server.\",\n          \"type\": \"string\"\n        },\n        \"Port\": {\n          \"title\": \"Port\",\n          \"description\": \"Port on which postgres server is listening.\",\n          \"default\": 5432,\n          \"type\": \"integer\"\n        }\n      },\n      \"required\": [\n        \"DBName\",\n        \"Host\"\n      ]\n    },\n    {\n      \"title\": \"RESTSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"base_url\": {\n          \"title\": \"Base URL\",\n          \"description\": \"Base URL of REST server\",\n          \"type\": \"string\"\n        },\n        \"username\": {\n          \"title\": \"Username\",\n          \"description\": \"Username for Basic Authentication\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"password\": {\n          \"title\": \"Password\",\n          \"description\": \"Password for the Given User for Basic Auth\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n        },\n        \"headers\": {\n          \"title\": \"Headers\",\n          \"description\": \"A dictionary of http headers to be used to communicate with the host.Example: Authorization: bearer 
my_oauth_token_to_the_host .These headers will be included in all requests.\",\n          \"type\": \"object\"\n        }\n      },\n      \"required\": [\n        \"base_url\"\n      ]\n    },\n    {\n      \"title\": \"SlackSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"bot_user_oauth_token\": {\n          \"title\": \"OAuth Access Token\",\n          \"description\": \"OAuth Access Token of the Slack app.\",\n          \"type\": \"string\"\n        }\n      },\n      \"required\": [\n        \"bot_user_oauth_token\"\n      ]\n    },\n    {\n      \"title\": \"SSHSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n        \"port\": {\n          \"title\": \"Port\",\n          \"description\": \"SSH port to connect to.\",\n          \"default\": 22,\n          \"type\": \"integer\"\n        },\n        \"username\": {\n          \"title\": \"Username\",\n          \"description\": \"Username to use for authentication\",\n          \"default\": \"\",\n          \"type\": \"string\"\n        },\n        \"proxy_host\": {\n          \"title\": \"Proxy host\",\n          \"description\": \"SSH host to tunnel connection through so that SSH clients connect to host via client -> proxy_host -> host.\",\n          \"type\": \"string\"\n        },\n        \"proxy_user\": {\n          \"title\": \"Proxy user\",\n          \"description\": \"User to login to proxy_host as. Defaults to username.\",\n          \"type\": \"string\"\n        },\n        \"proxy_port\": {\n          \"title\": \"Proxy port\",\n          \"description\": \"SSH port to use to login to proxy host if set. 
Defaults to 22.\",\n          \"default\": 22,\n          \"type\": \"integer\"\n        },\n        \"authentication\": {\n          \"title\": \"Authentication\",\n          \"discriminator\": \"auth_type\",\n          \"anyOf\": [\n            {\n              \"$ref\": \"#/definitions/AuthSchema\"\n            },\n            {\n              \"$ref\": \"#/definitions/PrivateKeySchema\"\n            },\n            {\n              \"$ref\": \"#/definitions/VaultSchema\"\n            },\n            {\n              \"$ref\": \"#/definitions/KerberosSchema\"\n            }\n          ]\n        }\n      },\n      \"required\": [\n        \"authentication\"\n      ],\n      \"definitions\": {\n        \"AuthSchema\": {\n          \"title\": \"Basic Auth\",\n          \"type\": \"object\",\n          \"properties\": {\n            \"auth_type\": {\n              \"title\": \"Auth Type\",\n              \"enum\": [\n                \"Basic Auth\"\n              ],\n              \"type\": \"string\"\n            },\n            \"password\": {\n              \"title\": \"Password\",\n              \"description\": \"Password to use for password authentication.\",\n              \"default\": \"\",\n              \"type\": \"string\",\n              \"writeOnly\": true,\n              \"format\": \"password\"\n            },\n            \"proxy_password\": {\n              \"title\": \"Proxy user password\",\n              \"description\": \"Password to login to proxy_host with. 
Defaults to no password.\",\n              \"type\": \"string\"\n            }\n          },\n          \"required\": [\n            \"auth_type\"\n          ]\n        },\n        \"PrivateKeySchema\": {\n          \"title\": \"Pem File\",\n          \"type\": \"object\",\n          \"properties\": {\n            \"auth_type\": {\n              \"title\": \"Auth Type\",\n              \"enum\": [\n                \"API Token\"\n              ],\n              \"type\": \"string\"\n            },\n            \"private_key\": {\n              \"title\": \"Private Key File\",\n              \"description\": \"Contents of the Private Key File to use for authentication.\",\n              \"default\": \"\",\n              \"type\": \"string\"\n            },\n            \"proxy_private_key\": {\n              \"title\": \"Proxy Private Key File\",\n              \"description\": \"Private key file to be used for authentication with proxy_host.\",\n              \"type\": \"string\"\n            }\n          },\n          \"required\": [\n            \"auth_type\"\n          ]\n        },\n        \"VaultSchema\": {\n          \"title\": \"Vault\",\n          \"type\": \"object\",\n          \"properties\": {\n            \"auth_type\": {\n              \"title\": \"Auth Type\",\n              \"enum\": [\n                \"Vault\"\n              ],\n              \"type\": \"string\"\n            },\n            \"vault_url\": {\n              \"title\": \"Vault URL\",\n              \"description\": \"Vault URL eg: http://127.0.0.1:8200\",\n              \"type\": \"string\"\n            },\n            \"vault_secret_path\": {\n              \"title\": \"SSH Secret Path\",\n              \"description\": \"This is the path in the Vault Configuration tab of ssh secret. 
eg: ssh\",\n              \"type\": \"string\"\n            },\n            \"vault_role\": {\n              \"title\": \"Vault Role\",\n              \"description\": \"Vault role associated with the above ssh secret.\",\n              \"type\": \"string\"\n            }\n          },\n          \"required\": [\n            \"auth_type\",\n            \"vault_url\",\n            \"vault_secret_path\",\n            \"vault_role\"\n          ]\n        },\n        \"KerberosSchema\": {\n          \"title\": \"Kerberos\",\n          \"type\": \"object\",\n          \"properties\": {\n            \"auth_type\": {\n              \"title\": \"Auth Type\",\n              \"enum\": [\n                \"Kerberos\"\n              ],\n              \"type\": \"string\"\n            },\n            \"user_with_realm\": {\n              \"title\": \"Kerberos user@REALM\",\n              \"description\": \"Kerberos UserName like user@EXAMPLE.COM REALM is usually defined as UPPER-CASE\",\n              \"type\": \"string\"\n            },\n            \"kdc_server\": {\n              \"title\": \"KDC Server\",\n              \"description\": \"KDC Server Domain Name. like kdc.example.com\",\n              \"type\": \"string\"\n            },\n            \"admin_server\": {\n              \"title\": \"Admin Server\",\n              \"description\": \"Kerberos Admin Server. Normally same as KDC Server\",\n              \"default\": \"\",\n              \"type\": \"string\"\n            },\n            \"password\": {\n              \"title\": \"Password\",\n              \"description\": \"Password for the above Username\",\n              \"default\": \"\",\n              \"type\": \"string\",\n              \"writeOnly\": true,\n              \"format\": \"password\"\n            },\n            \"proxy_password\": {\n              \"title\": \"Proxy user password\",\n              \"description\": \"Password to login to proxy_host with. 
Default is no password.\",\n              \"default\": \"\",\n              \"type\": \"string\",\n              \"writeOnly\": true,\n              \"format\": \"password\"\n            }\n          },\n          \"required\": [\n            \"auth_type\",\n            \"user_with_realm\",\n            \"kdc_server\"\n          ]\n        }\n      }\n    },\n    {\n      \"title\": \"SalesforceSchema\",\n      \"type\": \"object\",\n      \"properties\": {\n      \"Username\": {\n        \"title\": \"Username\",\n        \"description\": \"Username to authenticate to Salesforce.\",\n        \"type\": \"string\"\n      },\n      \"Password\": {\n        \"title\": \"Password\",\n        \"description\": \"Password to authenticate to Salesforce.\",\n        \"type\": \"string\",\n        \"writeOnly\": true,\n        \"format\": \"password\"\n      },\n      \"Security_Token\": {\n        \"title\": \"Security token\",\n        \"description\": \"Token to authenticate to Salesforce.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"Username\",\n      \"Password\",\n      \"Security_Token\"\n    ]\n  },\n  {\n    \"title\": \"VaultSchema\",\n    \"type\": \"object\",\n    \"properties\": {\n      \"url\": {\n        \"title\": \"Vault URL\",\n        \"description\": \"URL for the Vault instance.\",\n        \"type\": \"string\"\n      },\n      \"token\": {\n        \"title\": \"Token\",\n          \"description\": \"Token value to authenticate requests to Vault.\",\n          \"default\": \"\",\n          \"type\": \"string\",\n          \"writeOnly\": true,\n          \"format\": \"password\"\n      },\n      \"verify_ssl\": {\n        \"default\": false,\n        \"description\": \"Flag to decide if SSL verification should be enforced for Vault connection.\",\n        \"title\": \"Verify SSL\",\n        \"type\": \"boolean\"\n      }\n    },\n    \"required\": [\n      \"url\"\n    ]\n  },\n  {\n    \"properties\": {\n      
\"server_url\": {\n        \"description\": \"Base URL of the Keycloak instance\",\n        \"title\": \"Keycloak Server URL\",\n        \"type\": \"string\"\n      },\n      \"realm\": {\n        \"description\": \"Name of the realm for authentication\",\n        \"title\": \"Keycloak Realm\",\n        \"type\": \"string\"\n      },\n      \"client_id\": {\n        \"anyOf\": [\n          {\n            \"type\": \"string\"\n          },\n          {\n            \"type\": \"null\"\n          }\n        ],\n        \"default\": null,\n        \"description\": \"Client ID for authentication\",\n        \"title\": \"Client ID\"\n      },\n      \"username\": {\n        \"anyOf\": [\n          {\n            \"type\": \"string\"\n          },\n          {\n            \"type\": \"null\"\n          }\n        ],\n        \"default\": null,\n        \"description\": \"Username for client-based authentication\",\n        \"title\": \"Username\"\n      },\n      \"password\": {\n        \"anyOf\": [\n          {\n            \"format\": \"password\",\n            \"type\": \"string\",\n            \"writeOnly\": true\n          },\n          {\n            \"type\": \"null\"\n          }\n        ],\n        \"default\": null,\n        \"description\": \"Password for client-based authentication\",\n        \"title\": \"Password\"\n      },\n      \"client_secret\": {\n        \"anyOf\": [\n          {\n            \"format\": \"password\",\n            \"type\": \"string\",\n            \"writeOnly\": true\n          },\n          {\n            \"type\": \"null\"\n          }\n        ],\n        \"default\": null,\n        \"description\": \"Client Secret for client-based authentication\",\n        \"title\": \"Client Secret\"\n      },\n      \"verify\": {\n        \"anyOf\": [\n          {\n            \"type\": \"boolean\"\n          },\n          {\n            \"type\": \"null\"\n          }\n        ],\n        \"default\": true,\n        \"description\": 
\"Boolean to decide if SSL certificate verification should be performed\",\n        \"title\": \"SSL Verification\"\n      }\n    },\n    \"required\": [\n      \"server_url\",\n      \"realm\"\n    ],\n    \"title\": \"KeycloakSchema\",\n    \"type\": \"object\"\n  }\n  ]\n'''\n\nAWESOME_DIRECTORY = \"Awesome-CloudOps-Automation\"\n\n\ndef getGitRoot():\n    return subprocess.Popen(['git', 'rev-parse', '--show-toplevel'], stdout=subprocess.PIPE).communicate()[0].rstrip().decode('utf-8')\n\ndef create_stub_cred_files(dirname: str):\n    \"\"\"create_stub_cred_files This function creates the stub files needed by creds-ui\"\"\"\n    if not os.path.exists(dirname):\n        path = Path(CREDS_DIR)\n        path.mkdir(parents=True)\n\n    # Lets read the Stubs Creds file and create placeholder files\n    NEW_STUB_FILE=STUB_FILE\n    if not os.path.exists(STUB_FILE):\n        # Most likely being run outside docker.\n        git_root_directory = getGitRoot()\n        NEW_STUB_FILE = os.path.join(git_root_directory, AWESOME_DIRECTORY, \"unskript-ctl\", STUB_FILE)\n\n    with open(NEW_STUB_FILE, 'r') as f:\n        stub_creds_json = json.load(f)\n\n    for cred in stub_creds_json:\n        f_name = os.path.join(dirname, cred.get('display_name'))\n        f_name = f_name + '.json'\n        # Lets check if file already exists, if it does not, then create it\n        if not os.path.exists(f_name):\n            with open(f_name, 'w') as f:\n                f.write(json.dumps(cred, indent=4))\n\nCREDS_DIR = os.environ.get('HOME') + \"/.local/share/jupyter/metadata/credential-save/\"\n#CREDS_DIR = os.environ.get('HOME') + \"/creds/\"\n\nclass CredentialsAdd():\n    def __init__(self):\n      create_stub_cred_files(CREDS_DIR)\n      try:\n        schema_json = json.loads(credential_schemas)\n      except Exception as e:\n        print(f\"Exception occured {e}\")\n        return\n      mainParser = ArgumentParser(prog='add_creds')\n      description = \"\"\n      description = 
description + str(\"\\n\")\n      description = description + str(\"\\t  Add credentials \\n\")\n      mainParser.description = description\n      mainParser.add_argument('-c', '--credential-type', choices=[\n         'aws',\n         'k8s',\n         'gcp',\n         'elasticsearch',\n         'redis',\n         'postgres',\n         'mongodb',\n         'kafka',\n         'rest',\n         'keycloak',\n         'vault'\n         ], help='Credential type')\n\n      args = mainParser.parse_args(sys.argv[1:3])\n      if len(sys.argv) == 1:\n          mainParser.print_help()\n          sys.exit(0)\n\n      getattr(self, args.credential_type)()\n\n    def write_creds_to_file(self, json_file_name, data):\n      creds_file = CREDS_DIR + json_file_name\n      if os.path.exists(creds_file) is False:\n          raise AssertionError(f\"credential file {json_file_name} missing\")\n\n      with open(creds_file, 'r', encoding=\"utf-8\") as f:\n          contents = json.loads(f.read())\n      if not contents:\n          raise AssertionError(f\"credential file {json_file_name} is invalid\")\n\n      contents['metadata']['connectorData'] = data\n\n      with open(creds_file, 'w', encoding=\"utf-8\") as f:\n          f.write(json.dumps(contents, indent=2))\n\n    def aws(self):\n      parser = ArgumentParser(description='Add AWS credential')\n      parser.add_argument('-a', '--access-key', required=True, help='AWS Access Key')\n      parser.add_argument('-s', '--secret-access-key', required=True, help='AWS Secret Access Key')\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) != 4:\n         parser.print_help()\n         sys.exit(0)\n\n      if args.access_key is None or args.secret_access_key is None:\n          raise AssertionError('Access Key or Secret Access Key missing')\n\n      d = {}\n      d['authentication'] = {}\n      d['authentication']['auth_type'] = \"Access Key\"\n      d['authentication']['access_key'] =  args.access_key\n      
d['authentication']['secret_access_key'] = args.secret_access_key\n      self.write_creds_to_file('awscreds.json', json.dumps(d))\n\n    def k8s(self):\n      parser = ArgumentParser(description='Add K8S credential')\n      parser.add_argument('-k', '--kubeconfig', required=True, help='Contents of the kubeconfig file')\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) != 2:\n         parser.print_help()\n         sys.exit(0)\n\n      d = {}\n      d['kubeconfig'] = args.kubeconfig\n      self.write_creds_to_file('k8screds.json', json.dumps(d))\n\n    def gcp(self):\n      parser = ArgumentParser(description='Add GCP credential')\n      parser.add_argument('-g', '--gcp-credentials', help='Contents of the GCP credentials json file')\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) != 2:\n         parser.print_help()\n         sys.exit(0)\n\n      d = {}\n      d['credentials'] = args.gcp_credentials\n      self.write_creds_to_file('gcpcreds.json', json.dumps(d))\n\n    def elasticsearch(self):\n      parser = ArgumentParser(description='Add Elasticsearch credential')\n      parser.add_argument('-s', '--host', required=True, help='''\n                          Elasticsearch Node URL. For eg: https://localhost:9200.\n                          NOTE: Please ensure that this is the Elasticsearch URL and NOT the Kibana URL.\n                          ''')\n      parser.add_argument('-a', '--api-key', help='API key')\n      parser.add_argument('--no-verify-certs', action='store_true', help='Do not verify server ssl certs. 
This can be set to true when working with private certs.')\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) < 2:\n         parser.print_help()\n         sys.exit(0)\n\n      d = {}\n      d['host'] = args.host\n      if args.api_key is not None:\n         d['api_key'] = args.api_key\n\n      if args.no_verify_certs is True:\n         d['verify_certs'] = False\n      else:\n         d['verify_certs'] = True\n\n      self.write_creds_to_file('escreds.json', json.dumps(d))\n\n    def redis(self):\n      parser = ArgumentParser(description='Add Redis credential')\n      parser.add_argument('-s', '--host', required=True, help='Hostname of the redis server')\n      parser.add_argument('-p', '--port', help='Port on which redis server is listening', type=int, default=6379)\n      parser.add_argument('-u', '--username', help='Username')\n      parser.add_argument('-pa', '--password', help='Password')\n      parser.add_argument('-db', '--database', help='ID of the database to connect to', type=int)\n      parser.add_argument('--use-ssl', action='store_true', help='Use SSL to connect to redis host')\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) < 2:\n         parser.print_help()\n         sys.exit(0)\n\n      d = {}\n      d['host'] = args.host\n      d['port'] = args.port\n      if args.username is not None:\n         d['username'] = args.username\n      if args.password is not None:\n         d['password'] = args.password\n      if args.database is not None:\n         d['db'] = args.database\n      d['use_ssl'] = args.use_ssl\n      self.write_creds_to_file('rediscreds.json', json.dumps(d))\n\n    def postgres(self):\n      parser = ArgumentParser(description='Add Postgres credential')\n      parser.add_argument('-s', '--host', required=True, help='Hostname of the Postgres server')\n      parser.add_argument('-p', '--port', help='Port on which Postgres server is listening', type=int, default=5432)\n      
parser.add_argument('-db', '--database-name', help='Name of the database to connect to', required=True)\n      parser.add_argument('-u', '--username', help='Username')\n      parser.add_argument('-pa', '--password', help='Password')\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) < 4:\n         parser.print_help()\n         sys.exit(0)\n\n      d = {}\n      d['Host'] = args.host\n      d['Port'] = args.port\n      d['DBName'] = args.database_name\n      if args.username is not None:\n         d['User'] = args.username\n      if args.password is not None:\n         d['Password'] = args.password\n      self.write_creds_to_file('postgrescreds.json', json.dumps(d))\n\n    def mongodb(self):\n      parser = ArgumentParser(description='Add MongoDB credential')\n      parser.add_argument('-s', '--host', required=True, help='Full MongoDB URI or simple hostname. mongodb+srv:// URIs are also supported')\n      parser.add_argument('-p', '--port', help='Port on which MongoDB server is listening', type=int, default=27017)\n      parser.add_argument('-u', '--username', help='Username')\n      parser.add_argument('-pa', '--password', help='Password')\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) < 2:\n         parser.print_help()\n         sys.exit(0)\n\n      d = {}\n      d['host'] = args.host\n      d['port'] = args.port\n      #TBD: Add support for atlas\n      d['authentication'] = {}\n      d['authentication']['auth_type'] = \"Basic Auth\"\n      if args.username is not None:\n        d['authentication']['user_name'] = args.username\n      if args.password is not None:\n        d['authentication']['password'] = args.password\n\n      self.write_creds_to_file('mongodbcreds.json', json.dumps(d))\n\n    def kafka(self):\n      parser = ArgumentParser(description='Add Kafka credential')\n      parser.add_argument('-b', '--broker', required=True, help='''\n                          host[:port] that the 
producer should contact to bootstrap initial cluster metadata. Default port is 9092.\n                          ''')\n      parser.add_argument('-u', '--sasl-username', help='Username for SASL PlainText Authentication.')\n      parser.add_argument('-p', '--sasl-password', help='Password for SASL PlainText Authentication.')\n      parser.add_argument('-z', '--zookeeper', help='Zookeeper connection string. This is needed to do health checks. Eg: host[:port]. The default port is 2182')\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) < 2:\n         parser.print_help()\n         sys.exit(0)\n\n      d = {}\n      d['broker'] = args.broker\n      if args.sasl_username is not None:\n         d['sasl_username'] = args.sasl_username\n\n      if args.sasl_password is not None:\n         d['sasl_password'] = args.sasl_password\n\n      if args.zookeeper is not None:\n         d['zookeeper'] = args.zookeeper\n\n      self.write_creds_to_file('kafkacreds.json', json.dumps(d))\n\n    def rest(self):\n      parser = ArgumentParser(description='Add REST credential')\n      parser.add_argument('-b', '--base-url', required=True, help='''\n                          Base URL of REST server\n                          ''')\n      parser.add_argument('-u', '--username', help='Username for Basic Authentication')\n      parser.add_argument('-p', '--password', help='Password for the Given User for Basic Auth')\n      parser.add_argument('-hdr', '--headers', type=json.loads, help='''\n                          A dictionary of http headers to be used to communicate with the host.\n                          Example: Authorization: bearer my_oauth_token_to_the_host.\n                          These headers will be included in all requests.\n                          ''')\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) < 2:\n         parser.print_help()\n         sys.exit(0)\n\n      d = {}\n      d['base_url'] = args.base_url\n      if 
args.username is not None:\n         d['username'] = args.username\n      if args.password is not None:\n         d['password'] = args.password\n      if args.headers is not None:\n         d['headers'] = args.headers\n\n      self.write_creds_to_file('restcreds.json', json.dumps(d))\n\n    def vault(self):\n      parser = ArgumentParser(description='Add Vault credential')\n      parser.add_argument('-u', '--url', required=True, help='URL for the Vault instance')\n      parser.add_argument('-t', '--token', help='Token value to authenticate requests to Vault.')\n      parser.add_argument('--verify_ssl', action='store_true', help='Flag to decide if SSL verification should be enforced for Vault connection. Default is False.')\n\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) < 2:\n         parser.print_help()\n         sys.exit(0)\n\n      d = {}\n      d['url'] = args.url\n      if args.token is not None:\n         d['token'] = args.token\n      d['verify_ssl'] = args.verify_ssl\n\n      self.write_creds_to_file('vaultcreds.json', json.dumps(d))\n\n    def keycloak(self):\n      parser = ArgumentParser(description='Add Keycloak credential')\n      parser.add_argument('-su', '--server-url', required=True, help='''\n                          Base URL of the Keycloak instance.\n                         ''')\n      parser.add_argument('-r', '--realm', required=True, help='Name of the realm for authentication')\n      parser.add_argument('-c', '--client-id', help='Client ID for authentication')\n      parser.add_argument('-u', '--username', help='Username for client-based authentication')\n      parser.add_argument('-p', '--password', help='Password for client-based authentication')\n      parser.add_argument('-cs', '--client-secret', help='Client secret for client-based authentication')\n      parser.add_argument('--no-verify-certs', action='store_true', help='Not verify server ssl certs. 
This can be set to true when working with private certs.')\n      args = parser.parse_args(sys.argv[3:])\n\n      if len(sys.argv[3:]) < 6:\n         parser.print_help()\n         sys.exit(0)\n\n      d = {}\n      d['server_url'] = args.server_url\n      d['realm'] = args.realm\n      d['client_id'] = args.client_id\n      d['username'] = args.username\n      d['password'] = args.password\n      d['client_secret'] = args.client_secret\n      if args.no_verify_certs is True:\n         d['verify'] = False\n      else:\n         d['verify'] = True\n\n      self.write_creds_to_file('keycloakcreds.json', json.dumps(d))\n\n\nif __name__ == '__main__':\n    CredentialsAdd()\n"
  },
  {
    "path": "unskript-ctl/bash_completion_unskript_ctl.bash",
    "content": "#!/bin/bash\n\n_unskript-client-completion() {\n    local cur prev opts\n    COMPREPLY=()\n    cur=\"${COMP_WORDS[COMP_CWORD]}\"\n    prev=\"${COMP_WORDS[COMP_CWORD-1]}\"\n    connector_list=(\"aws\" \"k8s\" \"postgres\" \"mongodb\" \"elasticsearch\" \"vault\" \"ssh\" \"keycloak\" \"github\" \"redis\")\n\n\n    # Find the absolute path of unskript-client.py\n    local unskript_client_script\n    unskript_client_script=\"$(which unskript_ctl_main.py)\"\n\n    if [ -n \"$unskript_client_script\" ]; then\n        # Check if the script exists and save check names\n        if [ ! -f \"/tmp/allopts.txt\" ]; then\n             /usr/bin/env python \"$unskript_client_script\" -h > /tmp/allopts.txt\n        fi\n        if [ ! -f \"/tmp/checknames.txt\" ]; then\n            /usr/bin/env python \"$unskript_client_script\" --save-check-names /tmp/checknames.txt\n\n        fi\n    fi\n    # Define options with each option on a separate line using newline characters\n    opts=\"run list show debug --create-credential\"\n\n    # Completion logic\n    case \"${prev}\" in\n        run)\n            # Provide completion suggestions for runbook filenames\n            COMPREPLY=( $(compgen -W \"--info --script  check\" -- \"${cur}\" -o nospace) )\n            return 0\n            ;;\n\n        list)\n            # Provide completion suggestions for running script\n            COMPREPLY=( $(compgen -W \"failed-checks checks --credential\" -- \"${cur}\" -o nospace) )\n            return 0\n            ;;\n\n        show)\n            case ${prev} in\n                audit-trail)\n                    COMPREPLY=( $(compgen -W \"--all --type --execution_id\" -- \"${cur}\" -o nospace) )\n                    ;;\n                failed-logs)\n                    COMPREPLY=( $(compgen -W \"--execution_id <EXECUTION_ID>\" -- \"${cur}\" -o nospace) )\n                    ;;\n                *)\n                    COMPREPLY=( $(compgen -W \"audit-trail failed-logs\" -- \"${cur}\" 
-o nospace) )\n                    ;;\n            esac \n            return 0\n            ;;\n        debug)\n            COMPREPLY=( $(compgen -W \"--start --stop\" -- \"${cur}\" -o nospace) )    \n            return 0\n            ;;\n\n        *)  # Default: Provide completion suggestions for global options             \n            if [[ (\" ${COMP_WORDS[*]} \" == *\"run check --name \"* )\\\n                 || (\" ${COMP_WORDS[*]} \" =~ *\"run [^[:space:]]+ check --name \"* ) \\\n                 || (\" ${COMP_WORDS[*]} \" =~ *\"run  [^[:space:]]+ check --name \"* ) \\\n                 || (\" ${COMP_WORDS[*]} \" == *\"check --name \"* ) ]];\n            then\n                cur=${cur#--check}\n                cur=${cur#--name}\n                opt2=\"$(grep -E \"^${cur}\" /tmp/checknames.txt)\"\n                COMPREPLY=( $(compgen -W \"${opt2}\" -o nospace) )\n                compopt -o nospace\n                return 0\n            fi\n            if [[ (\" ${COMP_WORDS[*]} \" =~ *\"check --name [^[:space:]]+ \"* )  \\\n                   || (\" ${COMP_WORDS[*]} \" == *\"check --all\"* )  \\\n                   || (\" ${COMP_WORDS[*]} \" == *\"check --type \\ [^[:space:]]+ \"* )  \\\n                   || (\" ${COMP_WORDS[*]} \" == *\"check --type \\ [^[:space:]]+ \"* ) ]];\n            then\n                COMPREPLY=( $(compgen -W \"--script SCRIPT_NAME\" -- \"${cur}\" -o nospace) )\n                return 0\n            fi\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"run check --type \"* )  \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"list checks --type \"* ) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"list failed-checks --type \"* ) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"show audit-trail --type \"* ) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"check --type \"* ) \\\n                  ]];\n\n            then\n                COMPREPLY=( $(compgen -W \"aws k8s postgres mongodb elasticsearch vault 
ssh keycloak redis\" -o nospace) )\n                return 0\n            fi\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"run check --all \"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"check --all \"* ) \\\n                  && (\" ${COMP_WORDS[*]} \" != *\"run check --all --report\"* ) ]];\n            then\n                COMPREPLY=( $(compgen -W \"--script SCRIPT_NAME --report\" -o nospace) )\n                return 0\n            fi\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"run [^[:space:]]+ check \"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"run [^[:space:]]+ check \"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"check \"*) ]];\n            then\n                COMPREPLY=( $(compgen -W \"--all --type --name\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n            \n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"list failed-checks --all\"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"list checks --all\"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"show audit-trail --all\"*) \\\n                  ]];\n            then\n                return 0\n            fi\n\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"list failed-checks \"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"list failed-checks \"*)]];\n            then\n                COMPREPLY=( $(compgen -W \"--all --type\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"list --checks \"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"list checks \"*)]];\n            then\n                COMPREPLY=( $(compgen -W \"--all --type\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n           \n            if [[ (\" ${COMP_WORDS[*]} \" == *\"show audit-trail --execution_id\"*) \\\n                  && (\" ${COMP_WORDS[*]} \" != *\"show audit-trail --execution_id EXECUTION_ID\"*) \\\n         
         ]];\n            then\n                COMPREPLY=( $(compgen -W \"EXECUTION_ID\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"show failed-logs\"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"show failed-logs \"*) ]];\n            then\n                COMPREPLY=( $(compgen -W \"--execution_id EXECUTION_ID\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"show audit-trail \"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"show audit-trail \"*)]];\n            then\n                COMPREPLY=( $(compgen -W \"--all --type --execution_id\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"run --script \"*) \\\n                  && (\" ${COMP_WORDS[*]} \" != *\"run --script SCRIPT_FILE\"*) \\\n                  && (\" ${COMP_WORDS[*]} \" != *\"run --script \\ [^[:space:]]+\"*) \\\n                  && (\" ${COMP_WORDS[*]} \" != *\"check \"*) \\\n                  ]];\n            then\n                COMPREPLY=( $(compgen -W \"SCRIPT_FILE\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n\n\n            IFS=' ' read -r -a words <<<\"${COMP_WORDS[*]}\"\n            if [[ \" ${COMP_WORDS[*]} \" == *\"run --script \"* && \" ${COMP_WORDS[*]} \" != *\"check \"* ]]; then\n                last_word=\"${words[${#words[@]}-1]}\"\n                if [[ \"${last_word}\" != \"\" && \"${last_word}\" =~ ^[^[:space:]]+$ ]]; then\n                    COMPREPLY=( $(compgen -W \"check\" -- \"${cur}\" -o nospace) )\n                else\n                    COMPREPLY=( $(compgen -W \"SCRIPT_FILE\" -- \"${cur}\" -o nospace) )\n                fi\n                return 0\n            fi\n\n            if [[ (\" ${COMP_WORDS[*]} \" =~ *\"run --script  [^[:space:]]+\"*) \\\n                  || (\" ${COMP_WORDS[*]} 
\" =~ *\"run --script  [^[:space:]]+\"*) \\\n                  ]];\n            then\n                COMPREPLY=( $(compgen -W \"check\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n\n            if [[ (\" ${COMP_WORDS[*]} \" =~ *\"run --script \\ [^[:space:]]+\"*) \\\n                  || (\" ${COMP_WORDS[*]} \" =~ *\"run --script  [^[:space:]]+\"*) \\\n                  ]];\n            then\n                COMPREPLY=( $(compgen -W \"check\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"run --script \\ [^[:space:]]+ check\"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"run --script \\ [^[:space:]]+ check\"*) \\\n                  ]];\n            then\n                COMPREPLY=( $(compgen -W \"--type --all --name\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n\n\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"run check --all --report\"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"run check --all --report \"*)]];\n            then\n                return 0\n            fi\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"debug --start\"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"debug --start \"*) \\\n                  && (\" ${COMP_WORDS[*]} \" != *\"debug --start --config OVPNFILE\"*) \\\n                  ]];\n            then\n                COMPREPLY=( $(compgen -W \"--config OVPNFILE\" -- \"${cur}\" -o nospace) ) \n                return 0\n            fi\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"debug --stop\"*) \\\n                  || (\" ${COMP_WORDS[*]} \" == *\"debug --stop \"*) \\\n                  ]];\n            then\n                return 0\n            fi\n\n            if [[  \" ${COMP_WORDS[*]} \" == *\"--create-credential\"* ]];\n            then\n                return 0\n            fi\n\n            if [[ (\" ${COMP_WORDS[*]} \" == *\"run\"* ) \\\n            
    && ( \"${COMP_WORDS[*]} \" != *\"check\"* ) \\\n                && (\" ${COMP_WORDS[*]} \" == *\"--script \\ [^[:space:]]+\"* )]];\n            then\n                if [[ \" ${COMP_WORDS[*]} \" == *\"check\"* ]];\n                then \n                    COMPREPLY=( $(compgen -W \"--all --type --name\" -- \"${cur}\" -o nospace) )\n                else \n                    COMPREPLY=( $(compgen -W \"check\" -- \"${cur}\" -o nospace) )\n                fi\n                return 0\n            fi\n\n            if [ \"${#COMP_WORDS[@]}\" != \"1\" ];\n            then \n                COMPREPLY=( $(compgen -W \"${opts}\" -- \"${cur}\" -o nospace) )\n                return 0\n            fi\n\n            return 0\n            ;;\n    esac\n}\n\n# Register the completion function for unskript-ctl.sh\ncomplete -F _unskript-client-completion unskript-ctl.sh\n"
  },
  {
    "path": "unskript-ctl/config/unskript_ctl_config.yaml",
"content": "# unSkript-ctl config file\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nversion: 1.0.0\n\n#\n# Global section\n#\n# Global config\n#\nglobal:\n   # if enable_runbooks is true, jupyterlab is launched so that one can open\n   # runbooks in jupyterlab.\n   enable_runbooks: true\n   # audit_period in days. Number of days' worth of audit data to keep.\n   # Any data older than this number of days will be deleted.\n   audit_period: 90\n   # per-check timeout: how much time each check is given\n   # to complete\n   execution_timeout: 200\n\n\n#\n# Checks section\n#\n# Check-specific configuration, e.g. arguments.\n#\nchecks:\n  # Arguments common to all checks, like region, namespace, etc.\n  arguments:\n    global:\n       region: us-west-2\n       #matrix:\n       #  namespace: [n1, n2]\n  # Assign priorities to checks.\n  priority:\n    # p0 is the top priority, followed by p1, p2.\n    # Each priority holds a list of check names.\n    # Checks default to p2.\n    p0: []\n    p1: []\n    # \n    # You can specify an execution timeout per check like this\n    # execution_timeout:\n    #   k8s_get_unbound_pvcs: 60\n    #   ...\n    # \n\n#\n# Info gathering action section\n#\n# Info action specific configuration. 
For eg, arguments.\n#\ninfo:\n  # Arguments common to all info gathering actions like namespace, etc.\n  arguments:\n    global:\n      region: us-west-2\n\n# Credential section\n#\n# uncomment the relevant sections below to enable respective credential\n#\ncredential:\n  # AWS connector details\n  aws:\n   - name: awscreds\n     enable: false\n     access-key: \"\"\n     secret-access-key: \"\"\n\n  # Kubernetes connector details\n  k8s:\n   - name: k8screds\n     enable: false\n     kubeconfig: \"\"\n\n  # GCP connector details\n  gcp:\n   - name: gcpcreds\n     enable: false\n     credential-json: \"\"\n\n  # Elasticsearch connector details\n  elasticsearch:\n   - name: escreds\n     enable: false\n     host: \"\"\n     api-key: \"\"\n     no-verify-ssl: \"\"\n\n  # Redis connector details\n  redis:\n   - name: rediscreds\n     enable: false\n     host: \"\"\n     port: \"\"\n     username: \"\"\n     password: \"\"\n     database: \"\"\n     use-ssl: \"\"\n\n  # Postgres connector details\n  postgres:\n   - name: postgrescreds\n     enable: false\n     host: \"\"\n     port: \"\"\n     username: \"\"\n     password: \"\"\n     database: \"\"\n\n  # Mongodb connector details\n  mongodb:\n   - name: mongodbcreds\n     enable: false\n     host: \"\"\n     port: \"\"\n     username: \"\"\n     password: \"\"\n\n  # Kafka connector details\n  kafka:\n   - name: kafkacreds\n     enable: false\n     broker: \"\"\n     username: \"\"\n     password: \"\"\n     zookeeper: \"\"\n\n  # Rest connector details\n  rest:\n   - name: restcreds\n     enable: false\n     base-url: \"\"\n     username: \"\"\n     password: \"\"\n     headers: \"\"\n\n  # Vault connector details\n  vault:\n   - name: vaultcreds\n     enable: false\n     url: \"\"\n     token: \"\"\n\n  # Keycloak connector details\n  keycloak:\n   - name: keycloakcreds\n     enable: false\n     server-url: \"\"\n     realm: \"\"\n     client-id: \"\"\n     username: \"\"\n     password: \"\"\n     client-secret: 
\"\"\n     no-verify-certs: \"\"\n\n#\n# Notification section\n#\n# uncomment the relevant sections below to enable either slack or email notification\nnotification:\n  # Slack Notification setting\n  Slack:\n    enable: false\n    web-hook-url: \"\"\n    channel-name: \"\"\n    verbose: false #Not yet supported\n  Email:\n    verbose: true #Not yet supported\n    enable: false\n    email_subject_line: \"\"\n    # Skip Generating Summary pdf\n    skip_generating_summary_report: false\n    # Specify if SMTP credentials vault path\n    vault:\n      enable: false\n      smtp_credential_path: \"v1/lb-secrets/smtp-server/credentials\"\n      # Auth Type: \"Basic Auth\" or OAuth2\n      auth_type: \"Basic Auth\"\n\n    # provider for the email. Possible values:\n    #    - SMTP - SMTP server\n    #    - SES -  AWS SES\n    #    - Sendgrid - Sendgrid\n    provider: \"\"\n    SMTP:\n      vault-secret-path: \"\"\n      smtp-host: \"\"\n      smtp-user: \"\"\n      smtp-password: \"\"\n      to-email: \"\"\n      from-email: \"\"\n    SES:\n      vault-secret-path: \"\"\n      access_key: \"\"\n      secret_access: \"\"\n      region: \"\"\n      to-email: \"\"\n      from-email: \"\"\n    Sendgrid:\n      vault-secret-path: \"\"\n      api_key: \"\"\n      to-email: \"\"\n      from-email: \"\"\n      \n\n#\n# Job section\n#\n# Job detail contains information about what all unskript-ctl can run.\njobs:\n  - name: \"\" # Unique name\n    # The results of the job to be notified or not.\n    notify: true\n    #notify_sink: foo\n    enable: false\n    # Specific checks to run\n    # Not supported: multiple checks, only single check support for now.\n    checks: []\n    # Specific info gathering actions to run\n    # can specify individual info gathering actions\n    info: []\n    # Specific suites to run\n    # Not supported\n    suites: []\n    # connector types whose checks need to be run\n    # Possible values:\n    #   - aws\n    #   - k8s\n    #   - gcp\n    #   - 
postgresql\n    #   - slack\n    #   - mongodb\n    #   - jenkins\n    #   - mysql\n    #   - jira\n    #   - rest\n    #   - elasticsearch\n    #   - kafka\n    #   - grafana\n    #   - ssh\n    #   - prometheus\n    #   - datadog\n    #   - stripe\n    #   - redis\n    #   - zabbix\n    #   - opensearch\n    #   - pingdom\n    #   - github\n    #   - terraform\n    #   - airflow\n    #   - hadoop\n    #   - mssql\n    #   - snowflake\n    #   - splunk\n    #   - salesforce\n    #   - azure\n    #   - nomad\n    #   - netbox\n    #   - opsgenie\n    connector_types: []\n    # Custom scripts to be run.\n    custom_scripts: []\n\n#\n# Scheduler section\n#\n# You can configure multiple schedules.\nscheduler:\n  - enable: false\n    # Cadence is specified in cron syntax. More information about the syntax can\n    # be found in https://crontab.guru\n    # minute  hour  day (of month)  month  day (of week)\n    #   *      *          *           *        *\n    # Example: \"*/30 * * * *\"   <= This will run every 30 Minutes\n    cadence: \"*/60 * * * *\"\n    # Name of the job to add to the schedule\n    job_name: \"\"\n\nremote_debugging:\n  enable: false\n  # ovpn file location\n  ovpn_file: \"\"\n  # Cadence at which tunnel needs to be brought up.\n  # Cadence is specified in cron syntax. More information about the syntax can\n  # be found in https://crontab.guru\n  # minute  hour  day (of month)  month  day (of week)\n  #   *      *          *           *        *\n  # Example: \"*/30 * * * *\"   <= This will run every 30 Minutes\n  #\n  tunnel_up_cadence: \"\"\n  # Cadence at which tunnel needs to be brought down.\n  # Cadence is specified in cron syntax. 
More information about the syntax can\n  # be found in https://crontab.guru\n  # minute  hour  day (of month)  month  day (of week)\n  #   *      *          *           *        *\n  # Example: \"*/30 * * * *\"   <= This will run every 30 Minutes\n  #\n  tunnel_down_cadence: \"\"\n  # Cadence at which proxy session logs needs to be uploaded to storage bucket.\n  # Cadence is specified in cron syntax. More information about the syntax can\n  # be found in https://crontab.guru\n  # minute  hour  day (of month)  month  day (of week)\n  #   *      *          *           *        *\n  # Example: \"*/30 * * * *\"   <= This will run every 30 Minutes\n  #\n  upload_log_files_cadence: \"\"\n"
  },
  {
    "path": "unskript-ctl/config_parser_test_matrix.md",
"content": "## Report Disabled\n\n*  unskript-ctl.sh run --info \n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run --info\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run check --name <NAME>\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n\n*  unskript-ctl.sh run check --type <TYPE>\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --type k8s \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run --script <SCRIPT>\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run --script \"/usr/local/bin/lb_pvc.sh\" \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run --script <SCRIPT> --info\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run --script \"/usr/local/bin/lb_pvc.sh\"  --info\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n\n*  unskript-ctl.sh run check --name <NAME> --info \n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes  --info\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python 
/usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run check --type <TYPE> --info \n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --type k8s  --info\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n\n```\n\n*  unskript-ctl.sh run check --name <NAME> --script <SCRIPT>\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes  --script \"/usr/local/bin/lb_pvc.sh\"  \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run check --type <TYPE> --script <SCRIPT>\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --type k8s  --script \"/usr/local/bin/lb_pvc.sh\"  \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n*  unskript-ctl.sh run check --name <NAME> check --type <TYPE> \n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes ; /usr/local/bin/unskript-ctl.sh run check --type k8s \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n\n*  unskript-ctl.sh run check --type <TYPE> check --name <NAME> --script <SCRIPT>\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes ; /usr/local/bin/unskript-ctl.sh run check --type k8s  --script \"/usr/local/bin/lb_pvc.sh\"  \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python 
/usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run check --type <TYPE> --script <SCRIPT> --info \n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --type k8s --info --script \"/usr/local/bin/lb_pvc.sh\" \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run check --name <NAME> --script <SCRIPT> --info \n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes --info --script \"/usr/local/bin/lb_pvc.sh\" \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n\n```\n\n*  unskript-ctl.sh run check --name <NAME> check --type <TYPE> --info  \n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes  --info; /usr/local/bin/unskript-ctl.sh run check --type k8s  --script \"/usr/local/bin/lb_pvc.sh\" \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run check --name <NAME> check --type <TYPE> --script <SCRIPT> --info \n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes  --info; /usr/local/bin/unskript-ctl.sh run check --type k8s --script \"/usr/local/bin/lb_pvc.sh\" \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n\n## Report Enabled\n\n*  unskript-ctl.sh run --info --report\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run --info 
--report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run check --name <NAME> --report\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes --report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n\n*  unskript-ctl.sh run check --type <TYPE> --report\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --type k8s --report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run --script <SCRIPT> --report\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run --script \"/usr/local/bin/lb_pvc.sh\" --report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run --script <SCRIPT> --info --report\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run --script \"/usr/local/bin/lb_pvc.sh\" --report --info\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n\n*  unskript-ctl.sh run check --name <NAME> --info --report\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes --report --info\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run check --type <TYPE> --info --report\n```\nSchedule: cadence 0 0 
* * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --type k8s --report --info\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n\n```\n\n*  unskript-ctl.sh run check --name <NAME> --script <SCRIPT> --report\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes --script \"/usr/local/bin/lb_pvc.sh\" --report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n\n```\n\n*  unskript-ctl.sh run check --type <TYPE> --script <SCRIPT> --report\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --type k8s  --script \"/usr/local/bin/lb_pvc.sh\" --report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n*  unskript-ctl.sh run check --name <NAME> check --type <TYPE> --report\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes --report; /usr/local/bin/unskript-ctl.sh run check --type k8s --report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n\n```\n\n\n*  unskript-ctl.sh run check --type <TYPE> check --name <NAME> --script <SCRIPT> --report\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes --report; /usr/local/bin/unskript-ctl.sh run check --type k8s  --script \"/usr/local/bin/lb_pvc.sh\" --report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  
unskript-ctl.sh run check --type <TYPE> --script <SCRIPT> --info  --report\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --type k8s --info --script \"/usr/local/bin/lb_pvc.sh\" --report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run check --name <NAME> --script <SCRIPT> --info --report\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes --info --script \"/usr/local/bin/lb_pvc.sh\" --report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n\n```\n\n*  unskript-ctl.sh run check --name <NAME> check --type <TYPE> --info  --report\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes --report --info; /usr/local/bin/unskript-ctl.sh run check --type k8s --report \nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n\n*  unskript-ctl.sh run check --name <NAME> check --type <TYPE> --script <SCRIPT> --info --report\n\n```\nSchedule: cadence 0 0 * * *, job name: lightbeam\nSchedule: Programming crontab 0 0 * * * /usr/local/bin/unskript-ctl.sh run check --name k8s_get_offline_nodes --report --info; /usr/local/bin/unskript-ctl.sh run check --type k8s  --script \"/usr/local/bin/lb_pvc.sh\" --report\nAdding audit log deletion cron job entry, 0 0 * * * /opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py\n```\n"
  },
  {
    "path": "unskript-ctl/creds_ui.py",
    "content": "\"\"\"This file implements Text User Interface for Credentials.\"\"\"\n#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport os\nimport json\nimport logging\nimport npyscreen\n\n\n# CONSTANTS USED IN THIS SCRIPT\nCONNECTOR_LIST = [\n    'AWS', \n    'GCP', \n    'Kubernetes', \n    'ElasticSearch', \n    'Grafana', \n    'Redis', \n    'Jenkins', \n    'Github', \n    'Netbox', \n    'Nomad', \n    'Jira', \n    'Kafka', \n    'MongoDB', \n    'MySQL', \n    'PostgreSQL', \n    'REST', \n    'Slack', \n    'SSH', \n    'Salesforce'\n]\n\n# Here we create a CONNECTOR GRID of (list_connector)/3 x 3 matrix\n# We use the CONNECTOR_LIST above as the list and append the credential\n# Text to each of the grid elements.\nCONNECTOR_GRID = {}\nnum_rows = int(len(CONNECTOR_LIST) / 3)\nnum_columns = 3\n_idx = 0\nfor row in range(num_rows):\n    CONNECTOR_GRID[row] = {}\n    for col in range(num_columns):\n        CONNECTOR_GRID[row][col] = CONNECTOR_LIST[_idx]\n        _idx += 1\n\n# This variable is used to hold the Credential directory\n# Where all the creds are saved\nif os.environ.get('CREDS_DIR') is not None:\n    CREDS_DIR = os.environ.get('CREDS_DIR')\nelse:\n    CREDS_DIR = os.environ.get('HOME') + \"/.local/share/jupyter/metadata/credential-save/\"\n\ndef read_existing_creds(creds_file: str) -> dict:\n    \"\"\"read_existing_creds This is a utility function that simply\n       reads the given credential file, extracts the connector data \n       off of it and returns it back as a dict.\n\n       :type creds_file: string\n       :param creds_file: Credential File Name with the full path\n\n       :rtype: dict. 
The connectorData of the given credential\n    \"\"\"\n    retval = {}\n    if os.path.exists(creds_file) is False:\n        return retval\n\n    try:\n        with open(creds_file, 'r', encoding='utf-8') as f:\n            contents = json.loads(f.read())\n        retval = contents.get('metadata')\n    except Exception as e:\n        raise e\n\n    if not retval or not retval.get('connectorData'):\n        return {}\n    return json.loads(retval.get('connectorData'))\n\n\n# This is a custom subclass of npyscreen.Textfield. The only change here is\n# that it type-checks the entered value and ensures the field accepts\n# integers only.\nclass IntegerTextfield(npyscreen.Textfield):\n    def when_value_edited(self):\n        try:\n            int(self.value)\n        except ValueError:\n            self.value = \"\"\n\n# This is a custom subclass of npyscreen.GridColTitles. The only difference\n# is that we handle the Enter key (key code 10): when it is pressed we look up\n# the selected connector and call the parent's change_forms() to switch to\n# that connector's screen.\nclass CredsGrid(npyscreen.GridColTitles):\n    def handle_input(self, key):\n        if key == 10:\n            selected_row = self.edit_cell[0]\n            selected_column = self.edit_cell[1]\n            try:\n                name = CONNECTOR_GRID[selected_row][selected_column]\n            except KeyError:\n                name = 'MAIN'\n            if name not in CONNECTOR_LIST:\n                name = 'MAIN'\n\n            self.parent.change_forms(name)\n        return super().handle_input(key)\n\n\n# This is the main class where we initialize all the other screens.\n# The first screen must be named MAIN as per the library requirement.\n# The subsequent screens are named after the entries in CONNECTOR_LIST\n# defined above. 
\n# This class implements onCleanExit(), on_cancel() and change_form() \n# methods\nclass CredsApp(npyscreen.NPSAppManaged):\n    def onStart(self):\n        # The first form need to be name MAIN as per the library requirement\n        self.ui = {}\n        self.ui['MAIN'] = self.addForm(\"MAIN\", MainScreen, name=\"Connectors\", color=\"IMPORTANT\", align=\"^\")\n        self.ui['AWS'] = self.addForm(\"AWS\", AWSCreds, name='AWS Connector', color=\"IMPORTANT\",)\n        self.ui['GCP'] = self.addForm(\"GCP\", GCPCreds, name='GCP Connector', color=\"IMPORTANT\",)\n        self.ui['Kubernetes'] = self.addForm(\"Kubernetes\", K8SCreds, name='Kubernetes Connector', color=\"IMPORTANT\",)\n        self.ui['ElasticSearch'] = self.addForm(\"ElasticSearch\", ElasticSearchCreds, name='ElasticSearch Connector', color=\"IMPORTANT\",)\n        self.ui['Grafana'] = self.addForm(\"Grafana\", GrafanaCreds, name='Grafana Connector', color=\"IMPORTANT\",)\n        self.ui['Redis'] = self.addForm(\"Redis\", RedisCreds, name='Redis Connector', color=\"IMPORTANT\",)\n        self.ui['Jenkins'] = self.addForm(\"Jenkins\", JenkinsCreds, name='Jenkins Connector', color=\"IMPORTANT\",)\n        self.ui['Github'] = self.addForm(\"Github\", GithubCreds, name='Github Connector', color=\"IMPORTANT\",)\n        self.ui['Netbox'] = self.addForm(\"Netbox\", NetboxCreds, name='Netbox Connector', color=\"IMPORTANT\",)\n        self.ui['Nomad'] = self.addForm(\"Nomad\", NomadCreds, name='Nomad Connector', color=\"IMPORTANT\",)\n        self.ui['Jira'] = self.addForm(\"Jira\", JiraCreds, name='Jira Connector', color=\"IMPORTANT\",)\n        self.ui['Kafka'] = self.addForm(\"Kafka\", KafkaCreds, name='Kafka Connector', color=\"IMPORTANT\",)\n        self.ui['MongoDB'] = self.addForm(\"MongoDB\", MongoCreds, name='MongoDB Connector', color=\"IMPORTANT\",)\n        self.ui['MySQL'] = self.addForm(\"MySQL\", MySQLCreds, name='MySQL Connector', color=\"IMPORTANT\",)\n        self.ui['PostgreSQL'] = 
self.addForm(\"PostgreSQL\", PostgresCreds, name='PostgreSQL Connector', color=\"IMPORTANT\",)\n        self.ui['REST'] = self.addForm(\"REST\", RestCreds, name='REST Connector', color=\"IMPORTANT\",)\n        self.ui['Slack'] = self.addForm(\"Slack\", SlackCreds, name='Slack Connector', color=\"IMPORTANT\",)\n        self.ui['SSH'] = self.addForm(\"SSH\", SSHCreds, name='SSH Connector', color=\"IMPORTANT\",)\n        self.ui['Salesforce'] = self.addForm(\"Salesforce\", SalesforceCreds, name='Salesforce Connector', color=\"IMPORTANT\",)\n    \n\n    def onCleanExit(self):\n        npyscreen.notify_wait(\"Syncing Data back to disk!\")\n\n    def on_cancel(self,t):\n        npyscreen.notify_wait(\"Bye!\")\n        self.switchForm(None)\n                \n    def change_form(self, name):\n        self.switchForm(name)\n        self.resetHistory()\n    \n    def set_schemas(self, schema_json):\n        if not schema_json:\n            return\n        try:\n            self.schema_json = schema_json\n        except Exception as e:\n            print(f\"Unable to store the Json Schema, please check Schema Json content: {e}\")\n            return \n        \n\n\n\n# This is a custom class that inherits from npyscreen.ActionForm. This is\n# more like an abstract class which is being inheritted by all the connector\n# class. 
This class defines the basic class structure and implements the\n# change_forms() method that is used by all the other subclasses below\n\nclass CredsForm(npyscreen.ActionForm):\n    def create(self):\n        self.add(npyscreen.TitleFixedText, name=\"Press Esc to go back to connectors page\", align=\"^\")\n        self.add_handlers({\"^T\": self.change_forms})\n        self.add_handlers({\"^Q\": self.custom_quit})\n\n    def custom_quit(self, a):\n        self.on_ok()\n\n    def on_cancel(self):\n        self.parentApp.resetHistory()\n        self.parentApp.change_form('MAIN')\n\n    def on_ok(self):\n        self.parentApp.switchForm(None)\n\n    def change_forms(self, *args, **keywords):\n        n = self.name.replace('Connector', '').strip()\n        name = 'MAIN'\n        if n in CONNECTOR_LIST:\n            idx = CONNECTOR_LIST.index(n) + 1\n            if idx >= len(CONNECTOR_LIST):\n                name = \"MAIN\"\n            else:\n                name = CONNECTOR_LIST[idx]\n        # CredsApp defines change_form() (singular); calling it switches screens\n        self.parentApp.change_form(name)\n\n# This class implements the GRID for the main screen. The GRID contains elements from the\n# CONNECTOR_GRID matrix. This class inherits from npyscreen.FormBaseNew and implements the\n# methods exit_application, on_cancel and change_forms. 
\nclass MainScreen(npyscreen.FormBaseNew):\n    def create(self):\n        self.show_cancel_button = False\n        self.add(npyscreen.TitleFixedText, name=\"* Select Connector to edit or ^Q to quit *\")\n        self.add(CredsGrid,\n                 values=[[CONNECTOR_GRID[row][col] for col in range(num_columns)] for row in range(num_rows)])\n        self.add_handlers({\"^Q\": self.parentApp.on_cancel})\n        # 27 is the Esc key code; curses reports key codes as integers\n        self.add_handlers({27: self.parentApp.on_cancel})\n\n        self.how_exited_handers[npyscreen.wgwidget.EXITED_ESCAPE] = self.exit_application\n\n    def exit_application(self):\n        self.parentApp.setNextForm(None)\n        self.editing = False\n\n    def on_cancel(self):\n        self.parentApp.resetHistory()\n        self.parentApp.on_cancel()\n\n    def change_forms(self, name):\n        self.parentApp.change_form(name)\n\n# Some design questions answered\n# 1. Why not read the JSON Schema and create the UI from it?\n# A. Every connector has a unique UI requirement, and saving each\n#    connector is also a little different. Even if a generalized implementation\n#    were attempted, we would need a special case for each connector,\n#    which negates using a generic implementation that builds the UI by reading the schema file\n#\n# 2. Why is file upload not implemented for the GCP and K8S connectors?\n# A. File upload on a terminal means the file should be present locally\n#    on the docker. Such credentials are stored on the user's laptop, not\n#    on the docker. The next best thing in terms of UI is to present\n#    a text field wherein the user can just copy-paste the configuration.\n#    Copy-pasting eliminates the need for the JSON/YAML file to be present\n#    on the docker at the time of creating the credential.\n#\n# 3. Why was npyscreen chosen?\n# A. ncurses is the lowest level that can be used for creating a simple UI;\n#    however, achieving a simple UI screen with buttons and some text labels\n#    can take a few tens of lines depending on the complexity. 
There are other\n#    packages like tkinter, pytermgui, etc... among them, npyscreen seemed\n#    like a nicer and well written package. About 1.5K stars on the Github\n#    at the time of writing. As can be seen the actual UI element code\n#    here is very minimum. \n\n# Following Classes define specific UI Element for each connector type.\n# Every class inherits from CredsForm. Every class implements\n# create() and on_ok() methods which are specific to the connector. \n# Any new connector that needs to be added should also implement a custom class\n\nclass AWSCreds(CredsForm):\n    def create(self):\n        super().create()\n        self.add(npyscreen.TitleFixedText, name=\"Auth Schema\", align=\"^\", color=\"IMPORTANT\", )\n        self.access = self.add(npyscreen.TitleText, name=\"Access Key\", align=\"^\",)\n        self.secret = self.add(npyscreen.TitlePassword, name=\"Secret Key\", align=\"^\")\n\n        c_data = read_existing_creds(CREDS_DIR + 'awscreds.json')\n        if c_data:\n            if c_data.get('authentication'):\n                if c_data.get('authentication').get('access_key'):\n                    self.access.value = c_data.get('authentication').get('access_key')\n                if c_data.get('authentication').get('secret_access_key'):\n                    self.secret.value = c_data.get('authentication').get('secret_access_key')\n\n    def on_ok(self):\n        if self.access.value and self.secret.value:\n            creds_file = CREDS_DIR + 'awscreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"AWS Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for AWS is Missing\")\n\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"AWS Credential File is Missing! 
Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for AWS is Missing\")\n            d = {}\n            d['authentication'] = {}\n            d['authentication']['auth_type'] = \"Access Key\"\n            d['authentication']['access_key'] = self.access.value\n            d['authentication']['secret_access_key'] = self.secret.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n\n        super().on_ok()\n\nclass GCPCreds(CredsForm):\n    def create(self):\n        super().create()\n        self.gcpjson = self.add(npyscreen.MultiLineEditableBoxed, values=[\"\"], name=\"~ Paste your Credential JSON Below ~\", align=\"^\", color=\"IMPORTANT\")\n\n        c_data = read_existing_creds(CREDS_DIR + 'gcpcreds.json')\n        if c_data:\n            # MultiLineEditableBoxed keeps its content in .values (a list of lines)\n            self.gcpjson.values = c_data if isinstance(c_data, list) else [json.dumps(c_data)]\n\n    def on_ok(self):\n        if self.gcpjson.values:\n            creds_file = CREDS_DIR + 'gcpcreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"GCP Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for GCP is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"GCP Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for GCP is Missing\")\n            contents['metadata']['connectorData'] = json.dumps(self.gcpjson.values)\n    \n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n\n        super().on_ok()\n\nclass ElasticSearchCreds(CredsForm):\n    def create(self):\n        super().create()\n        self.es_hostname = self.add(npyscreen.TitleText, name=\"Hostname\", align=\"^\", color=\"IMPORTANT\",)\n        self.es_username = self.add(npyscreen.TitleText, name=\"Username\", align=\"^\", color=\"IMPORTANT\",)\n        self.es_password = self.add(npyscreen.TitlePassword, name=\"Password\", align=\"^\", color=\"IMPORTANT\",)\n        self.es_apikey = self.add(npyscreen.TitlePassword, name=\"API Key\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'escreds.json')\n        if c_data:\n            if c_data.get('username'):\n                self.es_username.value = c_data.get('username')\n            if c_data.get('host'):\n                self.es_hostname.value = c_data.get('host')\n            if c_data.get('password'):\n                self.es_password.value = c_data.get('password')\n            if c_data.get('api_key'):\n                self.es_apikey.value = c_data.get('api_key')\n\n    def on_ok(self):\n        if self.es_hostname and self.es_username and self.es_apikey and self.es_password:\n            creds_file = CREDS_DIR + 'escreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"Elastic Search Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for ES is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"Elastic Search Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for ES is Missing\")\n            d = {}\n            d['username'] = self.es_username.value\n            d['password'] = self.es_password.value\n            d['host'] = self.es_hostname.value\n            d['api_key'] = self.es_apikey.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        \n        super().on_ok()\n             \n\nclass GrafanaCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._apikey = self.add(npyscreen.TitlePassword, name=\"API Key\", align=\"^\", color=\"IMPORTANT\",)\n        self._username = self.add(npyscreen.TitleText, name=\"Username\", align=\"^\", color=\"IMPORTANT\",)\n        self._password = self.add(npyscreen.TitlePassword, name=\"Password\", align=\"^\", color=\"IMPORTANT\",)\n        self._hostname = self.add(npyscreen.TitleText, name=\"Hostname\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'grafanacreds.json')\n        if c_data:\n            if c_data.get('api_key'):\n                self._apikey.value = c_data.get('api_key')\n            if c_data.get('username'):\n                self._username.value = c_data.get('username')\n            if c_data.get('password'):\n                self._password.value = c_data.get('password')\n            if c_data.get('host'):\n                self._hostname.value = c_data.get('host')\n\n    def on_ok(self):\n        if self._hostname 
and self._username and self._apikey and self._password:\n            creds_file = CREDS_DIR + 'grafanacreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"Grafana Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Grafana is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"Grafana Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Grafana is Missing\")\n            d = {}\n            d['api_key'] = self._apikey.value\n            d['username'] = self._username.value\n            d['password'] = self._password.value\n            d['host'] = self._hostname.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\nclass RedisCreds(CredsForm):\n    def create(self):\n        super().create()\n        self.add(npyscreen.TitleFixedText, name=\"DB\", align=\"^\", color=\"IMPORTANT\",)\n        self._db = self.add(IntegerTextfield, name=\"DB\", align=\"^\",  color=\"IMPORTANT\",)\n        self._hostname = self.add(npyscreen.TitleText, name=\"Hostname\", align=\"^\", color=\"IMPORTANT\",)\n        self._username = self.add(npyscreen.TitleText, name=\"Username\", align=\"^\", color=\"IMPORTANT\",)\n        self._password = self.add(npyscreen.TitlePassword, name=\"Password\", align=\"^\", color=\"IMPORTANT\",)\n        self.add(npyscreen.TitleFixedText, name=\"Port\", align=\"^\", color=\"IMPORTANT\")\n        self._port = self.add(IntegerTextfield, name=\"Port\", value=\"6379\", align=\"^\", color=\"IMPORTANT\",)\n        
self.add(npyscreen.TitleFixedText, name=\"Use SSL\", align=\"^\", color=\"IMPORTANT\",)\n        self._use_ssl = self.add(npyscreen.ComboBox, name=\"Use SSL\", values=[True, False], scroll_exit=True, color='IMPORTANT',)\n\n        c_data = read_existing_creds(CREDS_DIR + 'rediscreds.json')\n        if c_data:\n            if c_data.get('db'):\n                self._db.value = c_data.get('db')\n            if c_data.get('username'):\n                self._username.value = c_data.get('username')\n            if c_data.get('password'):\n                self._password.value = c_data.get('password')\n            if c_data.get('host'):\n                self._hostname.value = c_data.get('host')\n            if c_data.get('use_ssl'):\n                # ComboBox.value is an index into its values list\n                self._use_ssl.value = self._use_ssl.values.index(c_data.get('use_ssl'))\n            if c_data.get('port'):\n                self._port.value = c_data.get('port')\n\n    def on_ok(self):\n        if self._hostname.value and self._username.value and self._db.value and self._password.value:\n            creds_file = CREDS_DIR + 'rediscreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"Redis Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Redis is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"Redis Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Redis is Missing\")\n            d = {}\n            d['db'] = self._db.value\n            d['username'] = self._username.value\n            d['password'] = self._password.value\n            d['host'] = self._hostname.value\n            # Default to False when no SSL option was selected in the ComboBox\n            d['use_ssl'] = self._use_ssl.values[self._use_ssl.value] if self._use_ssl.value is not None else False\n            d['port'] = self._port.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\nclass JenkinsCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._url = self.add(npyscreen.TitleText, name=\"Jenkins URL\", align=\"^\", color=\"IMPORTANT\",)\n        self._username = self.add(npyscreen.TitleText, name=\"Username\", align=\"^\", color=\"IMPORTANT\",)\n        self._password = self.add(npyscreen.TitlePassword, name=\"Password\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'jenkinscreds.json')\n        if c_data:\n            if c_data.get('username'):\n                self._username.value = c_data.get('username')\n            if c_data.get('password'):\n                self._password.value = c_data.get('password')\n            if c_data.get('url'):\n                self._url.value = c_data.get('url')\n\n    def on_ok(self):\n        if self._url.value and self._username.value and self._password.value:\n            creds_file = CREDS_DIR + 'jenkinscreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"Jenkins Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Jenkins is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"Jenkins Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Jenkins is Missing\")\n            d = {}\n            d['username'] = self._username.value\n            d['password'] = self._password.value\n            d['url'] = self._url.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\nclass GithubCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._token = self.add(npyscreen.TitlePassword, name=\"Access Token\", align=\"^\", color=\"IMPORTANT\",)\n        self._hostname = self.add(npyscreen.TitleText, name=\"Custom Hostname\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'githubcreds.json')\n        if c_data:\n            if c_data.get('hostname'):\n                self._hostname.value = c_data.get('hostname')\n            if c_data.get('token'):\n                self._token.value = c_data.get('token')\n    \n    def on_ok(self):\n        if self._token and self._hostname:\n            creds_file = CREDS_DIR + 'githubcreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"Github Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Github is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"Github Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Github is Missing\")\n            d = {}\n            d['hostname'] = self._hostname.value\n            d['token'] = self._token.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\nclass NetboxCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._token = self.add(npyscreen.TitlePassword, name=\"Token\", align=\"^\", color=\"IMPORTANT\",)\n        self._host = self.add(npyscreen.TitleText, name=\"Hostname\", align=\"^\", color=\"IMPORTANT\",)\n        self._threading = self.add(npyscreen.TitleText, name=\"Threading\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'netboxcreds.json')\n        if c_data:\n            if c_data.get('token'):\n                self._token.value = c_data.get('token')\n            if c_data.get('host'):\n                self._host.value = c_data.get('host')\n            if c_data.get('threading'):\n                self._threading.value = c_data.get('threading')\n    \n    def on_ok(self):\n        if self._token and self._host and self._threading:\n            creds_file = CREDS_DIR + 'netboxcreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"Netbox Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Netbox is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"Netbox Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Netbox is Missing\")\n            d = {}\n            d['token'] = self._token.value\n            d['host'] = self._host.value\n            d['threading'] = self._threading.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\nclass NomadCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._timeout = self.add(npyscreen.TitleText, name=\"Timeout\", align=\"^\", color=\"IMPORTANT\",)\n        self._token = self.add(npyscreen.TitlePassword, name=\"Token\", align=\"^\", color=\"IMPORTANT\",)\n        self._host = self.add(npyscreen.TitleText, name=\"Host\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'nomadcreds.json')\n        if c_data:\n            if c_data.get('timeout'):\n                self._timeout.value = c_data.get('timeout')\n            if c_data.get('token'):\n                self._token.value = c_data.get('token')\n            if c_data.get('host'):\n                self._host.value = c_data.get('host')\n    \n    def on_ok(self):\n        # Check the widget values, not the widget objects (which are always truthy)\n        if self._token.value and self._host.value and self._timeout.value:\n            creds_file = CREDS_DIR + 'nomadcreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"Nomad Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Nomad is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"Nomad Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Nomad is Missing\")\n            d = {}\n            d['timeout'] = self._timeout.value\n            d['token'] = self._token.value\n            d['host'] = self._host.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\nclass JiraCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._url = self.add(npyscreen.TitleText, name=\"URL\", align=\"^\", color=\"IMPORTANT\",)\n        self._email = self.add(npyscreen.TitleText, name=\"Email\", align=\"^\", color=\"IMPORTANT\",)\n        self._api_token = self.add(npyscreen.TitlePassword, name=\"API Token\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'jiracreds.json')\n        if c_data:\n            if c_data.get('url'):\n                self._url.value = c_data.get('url')\n            if c_data.get('email'):\n                self._email.value = c_data.get('email')\n            if c_data.get('api_token'):\n                self._api_token.value = c_data.get('api_token')\n    \n    def on_ok(self):\n        if self._api_token.value and self._email.value and self._url.value:\n            creds_file = CREDS_DIR + 'jiracreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"Jira Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Jira is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"Jira Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Jira is Missing\")\n            d = {}\n            d['url'] = self._url.value\n            d['email'] = self._email.value\n            d['api_token'] = self._api_token.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\n\nclass K8SCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._kubeconfig = self.add(npyscreen.MultiLineEditableBoxed, values=[\"\"],name=\"~ Paste your Kube Configuration Below ~\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'k8screds.json')\n        if c_data:\n            self._kubeconfig.value = json.dumps(c_data)\n    \n    def on_ok(self):\n        if self._kubeconfig.values:\n            creds_file = CREDS_DIR + 'k8screds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"K8S Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for K8S is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"K8S Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for K8S is Missing\")\n            contents['metadata']['connectorData'] = json.dumps({\"kubeconfig\": self._kubeconfig.values})\n    \n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n\n        super().on_ok()\n\nclass KafkaCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._sasl_username = self.add(npyscreen.TitleText, name=\"SASL Username\", align=\"^\", color=\"IMPORTANT\",)\n        self._sasl_password = self.add(npyscreen.TitlePassword, name=\"SASL Password\", align=\"^\", color=\"IMPORTANT\",)\n        self._broker = self.add(npyscreen.TitleText, name=\"Broker\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'kafkacreds.json')\n        if c_data:\n            if c_data.get('sasl_username'):\n                self._sasl_username.value = c_data.get('sasl_username')\n            if c_data.get('sasl_password'):\n                self._sasl_password.value = c_data.get('sasl_password')\n            if c_data.get('broker'):\n                self._broker.value = c_data.get('broker')\n\n    \n    def on_ok(self):\n        if self._sasl_password.value and self._sasl_username.value and self._broker.value:\n            creds_file = CREDS_DIR + 'kafkacreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"Kafka Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Kafka is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"Kafka Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Kafka is Missing\")\n            d = {}\n            d['sasl_username'] = self._sasl_username.value\n            d['sasl_password'] = self._sasl_password.value\n            d['broker'] = self._broker.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\n\n# We need to create our own Radio button here, the reason\n# being that, depending on the option selected, the other elements\n# of the screen need to change. For example, in the case of\n# Atlas, the API key should be shown instead of username and password.\nclass MongoSelectOneField(npyscreen.TitleSelectOne):\n    def when_value_edited(self):\n        if not self.value:\n            return \n        \n        v = int(self.value[0])\n        if v == 0:\n            self.parent.display_atlas_ui()\n        elif v == 1:\n            self.parent.display_auth_ui()\n        else:\n            raise AssertionError(\"Option value not recognized\")\n\nclass MongoCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._host = self.add(npyscreen.TitleText, name=\"Host\", align=\"^\", color=\"IMPORTANT\",)\n        self.add(npyscreen.TitleFixedText, name=\"Port\")\n        self._port = self.add(IntegerTextfield, name=\"Port\", value=\"27017\", args=\"^\", color=\"IMPORTANT\",)\n        self._schema = self.add(MongoSelectOneField,\n                 values=[\"Atlas Schema\", \"Auth Schema\"],\n                 name=\"Pick One\",\n                 scroll_exit=True,\n                 max_height=4)\n        self._atlas_api_public_key = self.add(npyscreen.TitleText, name=\"Atlas API Public Key\", args=\"^\", color=\"IMPORTANT\",)\n        self._atlas_api_private_key = self.add(npyscreen.TitlePassword, name=\"Atlas API Private Key\", args=\"^\", color=\"IMPORTANT\",)\n        self._username = self.add(npyscreen.TitleText, name=\"Username\", args=\"^\", color=\"IMPORTANT\",)\n        self._password = self.add(npyscreen.TitlePassword, name=\"Password\", args=\"^\", color=\"IMPORTANT\",)\n        self._atlas_api_private_key.hidden = True \n        self._atlas_api_public_key.hidden = True \n        self._username.hidden = True \n        self._password.hidden = True \n\n    def display_atlas_ui(self):\n        self._atlas_api_private_key.hidden = False\n        self._atlas_api_public_key.hidden = False \n        self._username.hidden = True \n        self._password.hidden = True \n        self._atlas_api_private_key.display()\n        self._atlas_api_public_key.display()\n        self._username.display()\n        self._password.display()\n\n    def display_auth_ui(self):\n        self._atlas_api_private_key.hidden = True\n        self._atlas_api_public_key.hidden = True \n        self._username.hidden = False \n        self._password.hidden = False \n        self._atlas_api_private_key.display()\n        self._atlas_api_public_key.display()\n        self._username.display()\n        self._password.display()\n\n    def on_ok(self):\n        creds_file = CREDS_DIR + \"mongodbcreds.json\"\n        with open(creds_file, 'r', encoding=\"utf-8\") as f:\n            contents = json.loads(f.read())\n        d = {}\n        if int(self._schema.value[0]) == 0:\n            # Check the widget values, not the widget objects (which are always truthy)\n            if self._atlas_api_public_key.value and self._atlas_api_private_key.value:\n                d['port'] = self._port.value\n                d['host'] = self._host.value \n                d['authentication'] = {}\n                d['authentication']['auth_type'] = \"Atlas Administrative API using HTTP Digest Authentication\"\n                d['authentication']['atlas_public_key'] = self._atlas_api_public_key.value\n                d['authentication']['atlas_private_key'] = self._atlas_api_private_key.value\n\n        elif int(self._schema.value[0]) == 1:\n            if self._username.value and self._password.value:\n                d['port'] = self._port.value\n                d['host'] = self._host.value \n                d['authentication'] = {}\n                d['authentication']['auth_type'] = \"Basic Auth\"\n                d['authentication']['user_name'] = self._username.value \n                d['authentication']['password'] = self._password.value \n        else:\n            raise AssertionError(\"Unknown Option, not able to save credential\")\n        if d:\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n                \n        super().on_ok()\n\nclass MySQLCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._user = self.add(npyscreen.TitleText, name=\"User\", align=\"^\", color=\"IMPORTANT\",)\n        self._password = self.add(npyscreen.TitlePassword, name=\"Password\", align=\"^\", color=\"IMPORTANT\",)\n        self.add(npyscreen.TitleFixedText, name=\"Port\", align=\"^\", color=\"IMPORTANT\",)\n        self._port = self.add(IntegerTextfield, name=\"Port\", value=\"3306\", align=\"^\", color=\"IMPORTANT\",)\n        self._host = self.add(npyscreen.TitleText, name=\"Host\", align=\"^\", color=\"IMPORTANT\",)\n        self._dbname = self.add(npyscreen.TitleText, name=\"DB Name\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'mysqlcreds.json')\n        if c_data:\n            if c_data.get('User'):\n                self._user.value = c_data.get('User')\n            if c_data.get('Password'):\n                self._password.value = c_data.get('Password')\n            if c_data.get('Host'):\n                self._host.value = c_data.get('Host')\n            if c_data.get('DBName'):\n                self._dbname.value = c_data.get('DBName')\n\n    def on_ok(self):\n        if self._password.value and self._user.value and self._port.value 
and self._host.value and self._dbname.value:\n            creds_file = CREDS_DIR + 'mysqlcreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"MySQL Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for MySQL is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"MySQL Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for MySQL is Missing\")\n            d = {}\n            d['User'] = self._user.value\n            d['Password'] = self._password.value\n            d['Port'] = self._port.value \n            d['Host'] = self._host.value \n            d['DBName'] = self._dbname.value \n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\n\nclass PostgresCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._user = self.add(npyscreen.TitleText, name=\"User\", align=\"^\", color=\"IMPORTANT\",)\n        self._password = self.add(npyscreen.TitlePassword, name=\"Password\", align=\"^\", color=\"IMPORTANT\",)\n        self.add(npyscreen.TitleFixedText, name=\"Port\", align=\"^\", color=\"IMPORTANT\",)\n        self._port = self.add(IntegerTextfield, name=\"Port\", align=\"^\", value=\"5432\", color=\"IMPORTANT\",)\n        self._host = self.add(npyscreen.TitleText, name=\"Host\", align=\"^\", color=\"IMPORTANT\",)\n        self._dbname = self.add(npyscreen.TitleText, name=\"DB Name\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'postgrescreds.json')\n        if c_data:\n            
if c_data.get('User'):\n                self._user.value = c_data.get('User')\n            if c_data.get('Password'):\n                self._password.value = c_data.get('Password')\n            if c_data.get('Host'):\n                self._host.value = c_data.get('Host')\n            if c_data.get('DBName'):\n                self._dbname.value = c_data.get('DBName')\n\n    def on_ok(self):\n        if self._password.value and self._user.value and self._port.value and self._host.value and self._dbname.value:\n            creds_file = CREDS_DIR + 'postgrescreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"PostgreSQL Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for PostgreSQL is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"PostgreSQL Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for PostgreSQL is Missing\")\n            d = {}\n            d['User'] = self._user.value\n            d['Password'] = self._password.value\n            d['Port'] = self._port.value \n            d['Host'] = self._host.value \n            d['DBName'] = self._dbname.value \n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\nclass RestCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._url = self.add(npyscreen.TitleText, name=\"URL\", align=\"^\", color=\"IMPORTANT\",)\n        self._username = self.add(npyscreen.TitleText, name=\"Username\", align=\"^\", color=\"IMPORTANT\",)\n        self._password = self.add(npyscreen.TitlePassword, name=\"Password\", align=\"^\", color=\"IMPORTANT\",)\n        self.add(npyscreen.TitleFixedText, name=\"Additional Headers\", align=\"^\", color=\"IMPORTANT\",)\n        self._key = self.add(npyscreen.TitleText, name=\"Key\", align=\"^\", color=\"IMPORTANT\",)\n        self._value = self.add(npyscreen.TitleText, name=\"Value\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'restcreds.json')\n        if c_data:\n            if c_data.get('username'):\n                self._username.value = c_data.get('username')\n            if c_data.get('password'):\n                self._password.value = c_data.get('password')\n            if c_data.get('base_url'):\n                self._url.value = c_data.get('base_url')\n            if c_data.get('headers'):\n                #self._key.value = [x for x in c_data.get('headers').keys()][0]\n                #self._value.value = [x for x in c_data.get('headers').values()][0]\n                self._key.value = list(c_data.get('headers').keys())[0]\n                self._value.value 
= list(c_data.get('headers').values())[0]\n\n    def on_ok(self):\n        if self._password.value and self._username.value and self._url.value:\n            creds_file = CREDS_DIR + 'restcreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"REST Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for REST is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"REST Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for REST is Missing\")\n            d = {}\n            d['username'] = self._username.value\n            d['password'] = self._password.value\n            d['base_url'] = self._url.value \n            # Both the header key and its value must be present\n            if self._key.value and self._value.value:\n                d['headers'] = {}\n                d['headers'][self._key.value] = self._value.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\nclass SlackCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._oauth_token = self.add(npyscreen.TitlePassword, name=\"OAuth Token\", align=\"^\", color=\"IMPORTANT\",)\n        c_data = read_existing_creds(CREDS_DIR + 'slackcreds.json')\n        if c_data:\n            if c_data.get('bot_user_oauth_token'):\n                self._oauth_token.value = c_data.get('bot_user_oauth_token')\n\n    def on_ok(self):\n        if self._oauth_token.value:\n            creds_file = CREDS_DIR + 'slackcreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"SLACK Credential File is 
Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for SLACK is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"SLACK Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for SLACK is Missing\")\n            d = {}\n            d['bot_user_oauth_token'] = self._oauth_token.value\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\n# Similar to MongoDB creds, here we implement custom\n# Radio button class to change the UI for PEM and basic Auth \n# types.\n\nclass SSHSelectOneField(npyscreen.TitleSelectOne):\n    def when_value_edited(self):\n        if not self.value:\n            return\n        try:\n            v = int(self.value[0])\n        except ValueError:\n            v = 0\n        if v == 0:\n            self.parent.display_auth_ui()\n        elif v == 1:\n            self.parent.display_pem_ui()\n        else:\n            raise AssertionError(\"Option value not recognized\")\n        \nclass SSHCreds(CredsForm):\n    def create(self):\n        super().create()\n        self.add(npyscreen.TitleFixedText, name=\"Port\")\n        self._port = self.add(IntegerTextfield, name=\"Port\", value='22', args=\"^\", color=\"IMPORTANT\",)\n        self._username = self.add(npyscreen.TitleText, name=\"Username\", align=\"^\", color=\"IMPORTANT\",)\n        self._schema = self.add(SSHSelectOneField,\n                 values=[\"Basic Auth\", \"PEM File\"],\n                 name=\"Pick One\",\n                 scroll_exit=True,\n                 max_height=2)\n        self._pemfile = 
self.add(npyscreen.TitleText, name=\"~ Paste your PEM File Below ~\", align=\"^\", color=\"IMPORTANT\")\n        self._basic_auth = self.add(npyscreen.TitlePassword, name=\"Password\", args=\"^\", color=\"IMPORTANT\",)\n        self._pemfile.hidden = True\n        self._basic_auth.hidden = True \n\n    def display_auth_ui(self):\n        self._pemfile.hidden = True\n        self._basic_auth.hidden = False \n        self._pemfile.display()\n        self._basic_auth.display()\n    \n    def display_pem_ui(self):\n        self._pemfile.hidden = False \n        self._basic_auth.hidden = True \n        self._pemfile.display()\n        self._basic_auth.display()\n\n    def on_ok(self):\n        creds_file = CREDS_DIR + \"sshcreds.json\"\n        with open(creds_file, 'r', encoding=\"utf-8\") as f:\n            contents = json.loads(f.read())\n        d = {}\n        if int(self._schema.value[0]) == 0:\n            if self._username.value and self._port.value and self._basic_auth.value:\n                d['port'] = self._port.value\n                d['username'] = self._username.value \n                d['authentication'] = {}\n                d['authentication']['auth_type'] = \"Basic Auth\"\n                d['authentication']['password'] = self._basic_auth.value\n\n        elif int(self._schema.value[0]) == 1:\n            if self._username.value and self._pemfile.value and self._port.value:\n                d['port'] = self._port.value\n                d['username'] = self._username.value \n                d['authentication'] = {}\n                d['authentication']['auth_type'] = \"API Token\"\n                d['authentication']['private_key'] = json.dumps(self._pemfile.value) \n        else:\n            raise AssertionError(\"Unknown Option, not able to save credential\")\n        if d:\n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, 
indent=2))\n        super().on_ok()\n\n\nclass SalesforceCreds(CredsForm):\n    def create(self):\n        super().create()\n        self._username = self.add(npyscreen.TitleText, name=\"Username\", align=\"^\", color=\"IMPORTANT\",)\n        self._password = self.add(npyscreen.TitlePassword, name=\"Password\", align=\"^\", color=\"IMPORTANT\",)\n        self._security_token = self.add(npyscreen.TitlePassword, name=\"Security Token\", align=\"^\", color=\"IMPORTANT\",)\n\n        c_data = read_existing_creds(CREDS_DIR + 'salesforcecreds.json')\n        if c_data:\n            if c_data.get('Username'):\n                self._username.value = c_data.get('Username')\n            if c_data.get('Password'):\n                self._password.value = c_data.get('Password')\n            if c_data.get('Security_Token'):\n                self._security_token.value = c_data.get('Security_Token')\n\n    def on_ok(self):\n        if self._password.value and self._username.value and self._security_token.value:\n            creds_file = CREDS_DIR + 'salesforcecreds.json'\n            if os.path.exists(creds_file) is False:\n                npyscreen.notify(\"Salesforce Credential File is Missing! Cannot proceed further. Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Salesforce is Missing\")\n            with open(creds_file, 'r', encoding=\"utf-8\") as f:\n                contents = json.loads(f.read())\n            if not contents:\n                npyscreen.notify(\"Salesforce Credential File is Missing! Cannot proceed further. 
Contact support@unskript.com\")\n                raise AssertionError(\"Credential file for Salesforce is Missing\")\n            d = {}\n            d['Username'] = self._username.value\n            d['Password'] = self._password.value\n            d['Security_Token'] = self._security_token.value \n            contents['metadata']['connectorData'] = json.dumps(d)\n            with open(creds_file, 'w', encoding=\"utf-8\") as f:\n                f.write(json.dumps(contents, indent=2))\n        super().on_ok()\n\n# Don't implement credential classes below this line.\n# Let's wrap everything up into a single callable\n# function. We use this function when importing\n# from unskript-client.py. It can also\n# be used as a standalone application.\n\ndef main(schema_json: str = None, creds_dir: str = None):\n    global CREDS_DIR\n    creds_app = CredsApp()\n    if schema_json:\n        creds_app.set_schemas(schema_json=schema_json)\n    if creds_dir:\n        if creds_dir.endswith('/'):\n            CREDS_DIR = creds_dir \n        else:\n            CREDS_DIR = creds_dir + '/'\n    creds_app.run()\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "unskript-ctl/diagnostics.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2024 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\n\nimport argparse\nimport json\nimport os\nimport sys\nimport yaml\nfrom diagnostics_worker import *\n\n\nclass DiagnosticsScript:\n    def __init__(self, args):\n        self.args = args\n        self.calling_map = {\n            \"k8s\": \"k8s_diagnostics\",\n            \"mongodb\": \"mongodb_diagnostics\",\n            \"redis\": \"redis_diagnostics\",\n            \"postgresql\": \"postgresql_diagnostics\",\n            \"elasticsearch\": \"elasticsearch_diagnostics\",\n            \"keycloak\": \"keycloak_diagnostics\",\n            \"vault\": \"vault_diagnostics\"\n        }\n\n    def get_failed_objects(self):\n        failed_objects_file = self.args.failed_objects_file\n        try:\n            with open(failed_objects_file, 'r') as file:\n                data = json.load(file)\n        except FileNotFoundError:\n            print(f\"Error: File '{failed_objects_file}' not found.\")\n            return []\n        except json.JSONDecodeError as e:\n            print(f\"Error decoding JSON file '{failed_objects_file}': {e}\")\n            return []\n        except Exception as e:\n            print(f\"An unexpected error occurred while reading '{failed_objects_file}': {e}\")\n            return []\n\n        if not isinstance(data, list):\n            print(f\"Error: Invalid JSON data format in '{failed_objects_file}'.\")\n            return []\n\n        all_failed_entry_functions = []\n        for entry in data:\n            # Use .get() so a malformed entry without 'status' does not raise KeyError\n            if isinstance(entry, dict) and entry.get('status') == 2:\n                all_failed_entry_functions.append(entry.get('check_entry_function', ''))\n        # print(\"All failed entry functions\",all_failed_entry_functions)\n        return 
all_failed_entry_functions\n\n    def get_diagnostic_commands(self):\n        yaml_file = self.args.yaml_file\n        try:\n            with open(yaml_file, 'r') as file:\n                data = yaml.safe_load(file)\n        except FileNotFoundError:\n            print(f\"Error: File '{yaml_file}' not found.\")\n            return {}\n        except yaml.YAMLError as e:\n            print(f\"Error parsing YAML file '{yaml_file}': {e}\")\n            return {}\n\n        if not isinstance(data, dict):\n            print(f\"Error: Invalid YAML data format in '{yaml_file}'.\")\n            return {}\n\n        diagnostics_commands = {}\n        if 'checks' in data and isinstance(data['checks'], dict) and 'diagnostic_commands' in data['checks']:\n            for check_name, d_commands in data['checks']['diagnostic_commands'].items():\n                if isinstance(d_commands, list):\n                    diagnostics_commands[check_name] = d_commands\n                else:\n                    print(f\"Error: Invalid format for diagnostics commands under '{check_name}' in '{yaml_file}'.\")\n        else:\n            print(f\"Error: 'checks->diagnostic_commands' section not found in '{yaml_file}'.\")\n        # print(\"Diagnostic commands from yaml\", diagnostics_commands)\n        return diagnostics_commands\n\n    def get_diagnostic_commands_for_failed_checks(self):\n        diagnostics_commands = self.get_diagnostic_commands()\n        failed_checks = self.get_failed_objects()\n        diagnostic_commands_for_failed_checks = {}\n\n        for failed_check in failed_checks:\n            if failed_check in diagnostics_commands:\n                diagnostic_commands_for_failed_checks[failed_check] = diagnostics_commands[failed_check]\n\n        return diagnostic_commands_for_failed_checks\n\n    def execute_diagnostics(self, diag_commands):\n        diag_outputs = {}\n        for entry_function, commands in diag_commands.items():\n            for prefix, function_name in 
self.calling_map.items():\n                if entry_function.startswith(prefix):\n                    try:\n                        # Fetch the function from globals based on the name\n                        function = globals().get(function_name)\n                        if function:\n                            # Call the function with the commands\n                            diag_outputs[entry_function] = function(commands)\n                        else:\n                            raise ValueError(f\"Function '{function_name}' not found in the global namespace.\")\n                    except Exception as e:\n                        print(f\"Error occurred while processing '{entry_function}': {e}\")\n        return diag_outputs\n\n    def write_to_yaml_file(self, data, file_path):\n        with open(file_path, 'w') as file:\n            yaml.dump(data, file, default_flow_style=False)\n\n    def main(self):\n        if not os.path.exists(self.args.output_dir_path):\n            print(f\"ERROR: Output directory {self.args.output_dir_path} does not exist!\")\n            sys.exit(1)\n        \n        diag_commands = self.get_diagnostic_commands_for_failed_checks()\n\n        if not diag_commands:\n            print(\"Skipping Diagnostics: No diagnostic command found. 
You can define them in the YAML configuration file\")\n            return \n        print(\"\\nRunning Diagnostics...\")\n        diag_outputs = self.execute_diagnostics(diag_commands)\n\n        if diag_outputs:\n            diag_file = os.path.join(self.args.output_dir_path, 'diagnostics.yaml')\n            self.write_to_yaml_file(diag_outputs, diag_file)\n        else:\n            print(\"WARNING: Nothing to write, diagnostic outputs are empty!\")\n\n\ndef main(args):\n    parser = argparse.ArgumentParser(description=\"Diagnostic Script for unskript-ctl\")\n    parser.add_argument(\"--yaml-file\", '-y', help=\"Path to YAML file\", required=True)\n    parser.add_argument(\"--failed-objects-file\", '-f', help=\"Path to failed objects file\", required=True)\n    parser.add_argument(\"--output-dir-path\", '-o', help=\"Path to output directory\", required=True)\n    ap = parser.parse_args(args)\n\n    print(\"\\nFetching logs...\")\n    fetch_pod_logs_not_running(ap.output_dir_path)\n    fetch_pod_logs_high_restarts(ap.output_dir_path)\n\n    diagnostics_script = DiagnosticsScript(ap)\n    diagnostics_script.main()\n\nif __name__ == \"__main__\":\n    main(sys.argv[1:])"
  },
  {
    "path": "unskript-ctl/diagnostics_worker.py",
    "content": "##\n##  Copyright (c) 2024 unSkript, Inc\n##  All rights reserved.\n##\nimport os\nimport subprocess\nimport json\nfrom unskript_ctl_factory import UctlLogger, ConfigParserFactory\nfrom concurrent.futures import ThreadPoolExecutor\n\n\nlogger = UctlLogger('UnskriptDiagnostics')\n\ndef mongodb_diagnostics(commands:list):\n    \"\"\"\n    mongodb_diagnostics runs mongocli command with command as the parameter\n    \"\"\"\n    MONGODB_USERNAME = os.getenv('MONGODB_USERNAME')\n    MONGODB_PASSWORD = os.getenv('MONGODB_PASSWORD')\n    MONGODB_HOSTNAME = os.getenv('MONGODB_HOSTNAME', 'localhost')\n    MONGODB_PORT = int(os.getenv('MONGODB_PORT', 27017))\n\n    # Format the connection string for mongosh\n    connection_string = f\"mongodb://{MONGODB_USERNAME}:{MONGODB_PASSWORD}@{MONGODB_HOSTNAME}:{MONGODB_PORT}\"\n    command_outputs = []\n\n    for command in commands:\n        cmd = [\n            \"mongosh\",\n            connection_string,\n            \"--quiet\",\n            \"--eval\",\n            command\n        ]\n        try:\n            result = subprocess.run(cmd, capture_output=True, text=True)\n            if result.stderr:\n                command_outputs.append({command: f\"Error: {result.stderr.strip()}\"})\n            else:\n                output = result.stdout.splitlines()\n                command_outputs.append({command: output})\n        except Exception as e:\n            command_outputs.append({command: f\"Exception: {str(e)}\"})\n\n    # for result_dict in command_outputs:\n    #     for command, cmd_output in result_dict.items():\n    #         logger.debug(\"\\nMongodb Diagnostics\")\n    #         logger.debug(f\"Mongosh Command: {command}\\nOutput: {cmd_output}\\n\")\n    return command_outputs\n\ndef get_matrix_namespaces():\n    config_parser = ConfigParserFactory()\n    global_params = config_parser.get_checks_params()\n\n    if 'global' in global_params and 'matrix' in global_params['global']:\n        namespaces = 
global_params['global']['matrix'].get('namespace', [])\n        return namespaces\n    return []\n\ndef fetch_logs(namespace, pod, output_path):\n    logs_file_path = os.path.join(output_path, f'logs.txt')\n    separator = \"\\n\" + \"=\" * 40 + \"\\n\"\n    header = f\"Logs for Namespace: {namespace}, Pod: {pod}\\n\"\n    header_previous = f\"Previous Logs for Namespace: {namespace}, Pod: {pod}\\n\"\n\n    with open(logs_file_path, 'a') as log_file:\n        log_file.write(separator + header)\n        # Fetch current logs\n        proc = subprocess.Popen([\"kubectl\", \"logs\", \"--namespace\", namespace, \"--tail=100\", \"--all-containers\", pod],\n                                stdout=log_file, stderr=subprocess.PIPE, text=True)\n        stderr = proc.communicate()[1]\n        if proc.returncode != 0:\n            logger.debug(f\"Error fetching logs for {pod}: {stderr}\")\n\n        log_file.write(separator + header_previous)\n        # Fetch previous logs\n        proc = subprocess.Popen([\"kubectl\", \"logs\", \"--namespace\", namespace, \"--tail=100\", \"--all-containers\", pod, \"--previous\"],\n                                stdout=log_file, stderr=subprocess.PIPE, text=True)\n        stderr = proc.communicate()[1]\n        if proc.returncode != 0:\n            logger.debug(f\"Error fetching previous logs for {pod}: {stderr}\")\n\ndef fetch_pod_logs_for_namespace(namespace, output_path, condition='not_running'):\n    # logger.debug(f\"Starting log fetch for namespace: {namespace} with condition: {condition}\")\n    proc = subprocess.Popen([\"kubectl\", \"get\", \"pods\", \"-n\", namespace, \"-o\", \"json\"],\n                            stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)\n    stdout, stderr = proc.communicate()\n    if proc.returncode != 0:\n        logger.debug(f\"Error fetching pods in namespace {namespace}: {stderr}\")\n        return\n\n    try:\n        pods = json.loads(stdout)['items']\n        for pod in pods:\n            
if condition == 'not_running' and pod['status']['phase'] not in (\"Running\", \"Succeeded\"):\n                # logger.debug(f\"Fetching logs for not running/succeeded pod: {pod['metadata']['name']} in {namespace}\")\n                fetch_logs(namespace, pod['metadata']['name'], output_path)\n            elif condition == 'high_restarts':\n                for cs in pod['status'].get('containerStatuses', []):\n                    if cs['restartCount'] > 25:\n                        # logger.debug(f\"Fetching logs for pod with high restarts: {pod['metadata']['name']} in {namespace}\")\n                        fetch_logs(namespace, pod['metadata']['name'], output_path)\n    except json.JSONDecodeError:\n        logger.debug(f\"Failed to decode JSON response from kubectl get pods in namespace {namespace}: {stdout}\")\n\ndef fetch_pod_logs_not_running(output_path):\n    allowed_namespaces = get_matrix_namespaces()\n    with ThreadPoolExecutor(max_workers=5) as executor:\n        # logger.debug(\"Initiating ThreadPool to fetch logs for pods not running across namespaces\")\n        for namespace in allowed_namespaces:\n            executor.submit(fetch_pod_logs_for_namespace, namespace, output_path, 'not_running')\n\ndef fetch_pod_logs_high_restarts(output_path):\n    allowed_namespaces = get_matrix_namespaces()\n    with ThreadPoolExecutor(max_workers=5) as executor:\n        # logger.debug(\"Initiating ThreadPool to fetch logs for pods with high restarts across namespaces\")\n        for namespace in allowed_namespaces:\n            executor.submit(fetch_pod_logs_for_namespace, namespace, output_path, 'high_restarts')\n\n\ndef k8s_diagnostics(commands:list):\n    \"\"\"\n    k8s_diagnostics runs kubectl commands\n\n    \"\"\"\n    command_outputs = []\n\n    for command in commands:\n        cmd_list = command.split()\n        try:\n            result = subprocess.run(cmd_list, capture_output=True, text=True)\n            if result.stderr:\n    
            command_outputs.append({command: f\"Error: {result.stderr.strip()}\"})\n            else:\n                output = result.stdout.splitlines()\n                command_outputs.append({command: output})\n        except Exception as e:\n            command_outputs.append({command: f\"Exception: {str(e)}\"})\n\n    # for result_dict in command_outputs:\n    #     for command, cmd_output in result_dict.items():\n    #         logger.debug(\"\\n Kubernetes Diagnostics\")\n    #         logger.debug(f\"K8S Command: {command}\\nOutput: {cmd_output}\\n\")\n    return command_outputs\n\ndef redis_diagnostics(commands:list):\n    \"\"\"\n    redis_diagnostics runs redis-cli command with command as the parameter\n\n    \"\"\"\n    REDIS_HOSTNAME = os.getenv('REDIS_HOSTNAME', 'localhost')\n    REDIS_PORT = os.getenv('REDIS_PORT', '6379')\n    REDIS_USERNAME = os.getenv('REDIS_USERNAME')\n    REDIS_PASSWORD = os.getenv('REDIS_PASSWORD')\n\n    if REDIS_USERNAME and REDIS_PASSWORD:\n        redis_uri = f\"redis://{REDIS_USERNAME}:{REDIS_PASSWORD}@{REDIS_HOSTNAME}:{REDIS_PORT}\"\n    elif REDIS_PASSWORD:\n        redis_uri = f\"redis://:{REDIS_PASSWORD}@{REDIS_HOSTNAME}:{REDIS_PORT}\"\n    else:\n        redis_uri = f\"redis://{REDIS_HOSTNAME}:{REDIS_PORT}\"\n\n    command_outputs = []\n\n    for command in commands:\n        cmd = [\n            \"redis-cli\",\n            \"-u\", redis_uri,\n            command\n        ]\n        try:\n            result = subprocess.run(cmd, capture_output=True, text=True)\n            if result.stderr:\n                command_outputs.append({command: f\"Error: {result.stderr.strip()}\"})\n            else:\n                output = result.stdout.splitlines()\n                command_outputs.append({command: output})\n        except Exception as e:\n            command_outputs.append({command: f\"Exception: {str(e)}\"})\n    # for result_dict in command_outputs:\n    #     for command, cmd_output in result_dict.items():\n    #    
     logger.debug(\"\\nRedis Diagnostics\")\n    #         logger.debug(f\"Redis Command: {command}\\nOutput: {cmd_output}\\n\")\n    return command_outputs\n\ndef postgresql_diagnostics(commands:list):\n    \"\"\"\n    postgresql_diagnostics runs psql command with query as the parameter\n    \"\"\"\n    POSTGRES_USERNAME = os.getenv('POSTGRES_USERNAME')\n    POSTGRES_PASSWORD = os.getenv('POSTGRES_PASSWORD')\n    POSTGRES_HOSTNAME = os.getenv('POSTGRES_HOST', 'localhost')\n    POSTGRES_PORT = int(os.getenv('POSTGRES_PORT', 5432))\n    POSTGRES_DB_NAME =os.getenv('POSTGRES_DB_NAME',\"\")\n\n    connection_string = f\"postgresql://{POSTGRES_USERNAME}:{POSTGRES_PASSWORD}@{POSTGRES_HOSTNAME}:{POSTGRES_PORT}/{POSTGRES_DB_NAME}\"\n    command_outputs = []\n\n    for command in commands:\n        cmd = [\n            \"psql\",\n            connection_string,\n            \"-c\",\n            command\n        ]\n        try:\n            result = subprocess.run(cmd, capture_output=True, text=True)\n            if result.stderr:\n                command_outputs.append({command: f\"Error: {result.stderr.strip()}\"})\n            else:\n                output = result.stdout.splitlines()\n                command_outputs.append({command: output})\n        except Exception as e:\n            command_outputs.append({command: f\"Exception: {str(e)}\"})\n\n    # for result_dict in command_outputs:\n    #     for command, cmd_output in result_dict.items():\n    #         logger.debug(\"\\nPostgresql Diagnostics\")\n    #         logger.debug(f\"Postgres Command: {command}\\nOutput: {cmd_output}\\n\")\n    return command_outputs\n\ndef elasticsearch_diagnostics(commands: list) -> list:\n    \"\"\"\n    Runs Elasticsearch diagnostics commands via curl.\n\n    \"\"\"\n    ELASTICSEARCH_HOSTS = os.getenv('ELASTICSEARCH_HOSTS', 'http://localhost:9200')\n\n    command_outputs = []\n\n    for command in commands:\n        # Ensure the command does not end with a slash as it might cause 
issues with curl\n        if command.endswith('/'):\n            command = command[:-1]\n        cmd = f\"curl -sS -X GET {ELASTICSEARCH_HOSTS}/{command}\"\n        try:\n            result = subprocess.run(cmd, capture_output=True, text=True, shell=True)\n            if result.stderr:\n                command_outputs.append({command: f\"Error: {result.stderr.strip()}\"})\n            else:\n                output = result.stdout.splitlines()\n                command_outputs.append({command: output})\n        except Exception as e:\n            command_outputs.append({command: f\"Exception: {str(e)}\"})\n\n    # for result_dict in command_outputs:\n    #     for command, cmd_output in result_dict.items():\n    #         logger.debug(\"\\nElasticsearch Diagnostics\")\n    #         logger.debug(f\"Elasticsearch curl command: {command}\\nOutput: {cmd_output}\\n\")\n    return command_outputs\n\ndef keycloak_diagnostics(commands: list):\n    \"\"\"\n    Runs Keycloak diagnostics commands via curl.\n    \"\"\"\n    keycloak_url = os.getenv('KEYCLOAK_URL', 'http://localhost/auth/')\n    keycloak_realm = os.getenv('KEYCLOAK_REALM', 'master')\n    command_outputs = []\n    \n    openid_config_url = f\"{keycloak_url.rstrip('/')}/realms/{keycloak_realm}/\"\n\n    for command in commands:\n        cmd = f\"curl -k -s \\\"{openid_config_url}{command}\\\"\"\n    \n        try:\n            result = subprocess.run(cmd, capture_output=True, text=True, shell=True)\n            if result.stderr:\n                command_outputs.append({command: f\"Error: {result.stderr.strip()}\"})\n            else:\n                output = result.stdout.splitlines()\n                command_outputs.append({command: output})\n        except Exception as e:\n            command_outputs.append({command: f\"Exception: {str(e)}\"})\n\n    # for result_dict in command_outputs:\n    #     for command, cmd_output in result_dict.items():\n    #         logger.debug(\"\\nKeycloak Diagnostics\")\n    #    
     logger.debug(f\"Keycloak curl command: {command}\\nOutput: {cmd_output}\\n\")\n    return command_outputs\n\ndef vault_diagnostics(commands: list):\n    \"\"\"\n    vault_diagnostics runs Vault CLI commands with the command as the parameter.\n\n    \"\"\"\n    VAULT_ADDR = os.getenv('VAULT_ADDR', 'http://localhost:8200')\n    VAULT_TOKEN = os.getenv('VAULT_TOKEN')\n    \n    command_outputs = []\n\n    for command in commands:\n        command_parts = command.split()\n        \n        cmd = [\n            \"vault\",\n        ] + command_parts \n        \n        try:\n            env = os.environ.copy()\n            env['VAULT_ADDR'] = VAULT_ADDR\n            # Only pass the token when it is set; subprocess requires all env\n            # values to be strings, and VAULT_TOKEN may be unset (None)\n            if VAULT_TOKEN:\n                env['VAULT_TOKEN'] = VAULT_TOKEN\n            \n            result = subprocess.run(cmd, capture_output=True, text=True, env=env)\n            if result.stderr:\n                command_outputs.append({command: f\"Error: {result.stderr.strip()}\"})\n            else:\n                output = result.stdout.splitlines()\n                command_outputs.append({command: output})\n        except Exception as e:\n            command_outputs.append({command: f\"Exception: {str(e)}\"})\n\n    # for result_dict in command_outputs:\n    #     for command, cmd_output in result_dict.items():\n    #         logger.debug(\"\\nVault Diagnostics\")\n    #         logger.debug(f\"Vault Command: {command}\\nOutput: {cmd_output}\\n\")\n    return command_outputs\n
  },
  {
    "path": "unskript-ctl/docs/design.puml",
"content": "@startuml\nabstract class UnskriptFactory {\n    - _config = ConfigParserFactory()\n    - logger \n    --\n    - __init__()\n    - __new__()\n    - _configure_logger()\n    - update_credential_to_uglobal()\n    - _banner()\n    - _error()\n}\n\nabstract class ChecksFactory {\n    - __init__()\n    - run()\n}\n\nabstract class ScriptsFactory {\n    - __init__()\n    - run()\n}\n\nabstract class NotificationFactory {\n    - __init__()\n    - notify() \n}\n\nclass ConfigParserFactory {\n    - __init__()\n    - load_config_file()\n    - get_schedule()\n    - get_jobs()\n    - get_checks()\n    - get_notification()\n    - get_credentials()\n    - get_global()\n    - get_checks_params()\n    --\n    - _get()\n}\n\nabstract class DatabaseFactory {\n    - __init__()\n    - create()\n    - read()\n    - update()\n    - delete()\n}\n\nUnskriptFactory <-- ChecksFactory\nUnskriptFactory <-- ScriptsFactory\nUnskriptFactory <-- NotificationFactory\nUnskriptFactory <-- ConfigParserFactory\nUnskriptFactory <-- DatabaseFactory \n\nclass ZoDBInterface {\n    - __init__()\n    - create()\n    - read()\n    - update()\n    - delete()\n}\n\nclass SQLInterface {\n    - __init__()\n    - create()\n    - read()\n    - update()\n    - delete()\n}\n\nDatabaseFactory <-- ZoDBInterface\nDatabaseFactory <-- SQLInterface \n\nclass CodeSnippets {\n    - __init__()\n    - get_checks_by_uuid()\n    - get_checks_by_connector()\n    - get_all_check_names()\n    - get_check_by_name()\n    - get_action_name_from_id()\n    - get_connector_name_from_id()\n}\n\nZoDBInterface <-- CodeSnippets\n\nclass PSS {\n    - __init__()\n}\n\nZoDBInterface <-- PSS \n\nclass DBInterface {\n    - __init__()\n    --\n    - pss = PSS()\n    - cs = CodeSnippets()\n}\n\nUnskriptFactory <-- DBInterface\n\nPSS o-- DBInterface\nCodeSnippets o-- DBInterface\n\nclass SlackNotification {\n    - __init__()\n    - validate_data()\n    - notify()\n    --\n    - 
_generate_notification_message()\n}\n\nNotificationFactory <-- SlackNotification\n\nclass EmailNotification {\n    - __init__()\n    - notify()\n    - validate_data()\n    - create_tarball_archive()\n    - create_temp_files_of_failed_check_results()\n    - create_script_summary_message()\n    - create_email_attachment()\n    - create_checks_summary_message()\n    - create_email_header()\n    - prepare_combined_email()\n}\n\nNotificationFactory <-- EmailNotification\n\nclass SendgridNotification {\n    - __init__()\n    - notify()\n    - send_sendgrid_notification()\n    - sendgrid_add_email_attachment()\n}\n\nEmailNotification <-- SendgridNotification\n\nclass AWSEmailNotification {\n    - __init__()\n    - notify()\n    - prepare_to_send_awsses_notification()\n    - do_send_awsses_email()\n}\n\nEmailNotification <-- AWSEmailNotification\n\nclass SmtpNotification {\n    - __init__()\n    - notify()\n    - send_smtp_notification()\n}\n\nEmailNotification <-- SmtpNotification\n\nclass Notification {\n    - __init__()\n    - notify() \n    - _send_email()\n}\n\nSmtpNotification o-- Notification \nAWSEmailNotification o-- Notification\nSendgridNotification o-- Notification\n\nclass Checks {\n    - __init__()\n    - run()\n    - display_check_result()\n    - output_after_merging_checks()\n    - calculate_combined_check_status()\n    - _create_jit_script()\n    - get_code_cell_name_and_uuid()\n    - get_first_cell_content()\n    - get_last_cell_content()\n    - get_after_check_content()\n    - update_exec_id()\n    - insert_task_lines()\n    - replace_input_with_globals()\n    - create_checks_for_matrix_argument()\n}\n\nChecksFactory <-- Checks\n\nclass Script {\n    - __init__()\n    - run()\n}\n\nScriptsFactory <-- Script\n\nclass UnskriptCtl {\n    - __init__()\n    - create_creds()\n    - display_creds_ui()\n    - save_check_names()\n    - run_main()\n    - update_audit_trail()\n    - list_main()\n    - list_credentials()\n    - list_checks_by_connector()\n    - 
display_failed_checks()\n    - show_main()\n    - print_all_result_table()\n    - print_connector_result_table()\n    - print_execution_result_table()\n    - service_main() | TBD\n    - debug_main()\n    - start_debug()\n    - stop_debug()\n    - notify()\n    --\n    checks = Checks()\n    script = Script()\n    notification = Notification()\n}\n\nUnskriptFactory <-- UnskriptCtl\n\nChecks <-- UnskriptCtl\nScript <-- UnskriptCtl\nNotification <-- UnskriptCtl\n\nclass main {\n    - uc = UnskriptCtl()\n    - parser = ArgumentParser()\n}\n\nUnskriptCtl o-- main\n\n@enduml"
  },
  {
    "path": "unskript-ctl/stub_creds.json",
    "content": "[\n  {\n    \"display_name\": \"awscreds\",\n    \"metadata\": {\n      \"id\": \"e29374b4-1edd-4098-88e2-0da2a290dbb8\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_AWS\",\n      \"name\": \"awscreds\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_AWS\",\n    \"id\": \"ebca9178-aadc-478d-bc87-7c895874d3ab\"\n  },\n  {\n    \"display_name\": \"chatgptcreds\",\n    \"metadata\": {\n      \"name\": \"chatgptcreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_CHATGPT\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_CHATGPT\",\n    \"id\": \"7ab97aa6-3016-4818-afde-1a9eb3cb32cb\"\n  },\n  {\n    \"display_name\": \"escreds\",\n    \"metadata\": {\n      \"id\": \"025e0bcd-1ddf-426c-bf24-864dfe3adf9a\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_ELASTICSEARCH\",\n      \"name\": \"escreds\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_ELASTICSEARCH\",\n    \"id\": \"d287cf62-ad4e-40c1-a595-ee1b52454b1d\"\n  },\n  {\n    \"display_name\": \"gcpcreds\",\n    \"metadata\": {\n      \"id\": \"ab1d3ae1-70cc-4d8f-8fa1-3a1f6de63b9e\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_GCP\",\n      \"name\": \"gcpcreds\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_GCP\",\n    \"id\": \"794371b5-3c1c-4f3b-87e9-4aa0cdb43011\"\n  },\n  {\n    \"display_name\": \"githubcreds\",\n    \"metadata\": {\n      \"name\": \"githubcreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_GITHUB\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_GITHUB\",\n    \"id\": \"95f1aa88-c661-4525-b536-56d627395e92\"\n  },\n  {\n    \"display_name\": \"grafanacreds\",\n    \"metadata\": {\n      \"name\": \"grafanacreds\",\n      \"connectorData\": \"{}\",\n      \"type\": 
\"CONNECTOR_TYPE_GRAFANA\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_GRAFANA\",\n    \"id\": \"582673a2-e493-4c37-bda7-118aeeb715b5\"\n  },\n  {\n    \"display_name\": \"jenkinscreds\",\n    \"metadata\": {\n      \"name\": \"jenkinscreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_JENKINS\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_JENKINS\",\n    \"id\": \"6dc1dd1d-b616-4c89-a88f-0f23bd6ad408\"\n  },\n  {\n    \"display_name\": \"jiracreds\",\n    \"metadata\": {\n      \"name\": \"jiracreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_JIRA\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_JIRA\",\n    \"id\": \"486460a4-72b9-4759-8f8f-5a3ccb0096f9\"\n  },\n  {\n    \"display_name\": \"k8screds\",\n    \"metadata\": {\n      \"name\": \"k8screds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_K8S\",\n      \"env\": \"Global\",\n      \"service_id\": \"\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_K8S\",\n    \"id\": \"e5e22603-9586-4b8a-9635-4dcc3058c465\"\n  },\n  {\n    \"display_name\": \"kafkacreds\",\n    \"metadata\": {\n      \"name\": \"kafkacreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_KAFKA\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_KAFKA\",\n    \"id\": \"9c7e58e1-a7d6-4211-8b0c-ac5b5a6b951b\"\n  },\n  {\n    \"display_name\": \"mongodbcreds\",\n    \"metadata\": {\n      \"name\": \"mongodbcreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_MONGODB\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_MONGODB\",\n    \"id\": \"8af91554-f190-48f6-8407-43cf4d3a2eaa\"\n  },\n  {\n    \"display_name\": \"mysqlcreds\",\n    \"metadata\": {\n      \"name\": \"mysqlcreds\",\n      \"connectorData\": 
\"{}\",\n      \"type\": \"CONNECTOR_TYPE_MYSQL\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_MYSQL\",\n    \"id\": \"9751cb50-6660-4ad3-b2ae-14f894de638f\"\n  },\n  {\n    \"display_name\": \"netboxcreds\",\n    \"metadata\": {\n      \"name\": \"netboxcreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_NETBOX\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_NETBOX\",\n    \"id\": \"ca137fd8-5a1d-48d0-8d49-57576179e550\"\n  },\n  {\n    \"display_name\": \"nomadcreds\",\n    \"metadata\": {\n      \"name\": \"nomadcreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_NOMAD\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_NOMAD\",\n    \"id\": \"907fcd0c-df0e-44b6-bc98-e97fe898cfcf\"\n  },\n  {\n    \"display_name\": \"postgrescreds\",\n    \"metadata\": {\n      \"name\": \"postgrescreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_POSTGRESQL\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_POSTGRESQL\",\n    \"id\": \"8c537a0c-bfd8-4333-8cf1-9c933cdd2d7e\"\n  },\n  {\n    \"display_name\": \"rediscreds\",\n    \"metadata\": {\n      \"id\": \"383f0a57-cbc9-4dab-959f-d762a4d736c6\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_REDIS\",\n      \"name\": \"rediscreds\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_REDIS\",\n    \"id\": \"454026ef-d6bc-426b-b9f9-5d67c74c60bb\"\n  },\n  {\n    \"display_name\": \"restcreds\",\n    \"metadata\": {\n      \"name\": \"restcreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_REST\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_REST\",\n    \"id\": \"75dafeae-b46d-45dc-9614-a58ba94187be\"\n  },\n  {\n    \"display_name\": \"slackcreds\",\n    \"metadata\": {\n      \"name\": 
\"slackcreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_SLACK\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_SLACK\",\n    \"id\": \"3adb23af-4207-441c-ad3a-4737aa0e8635\"\n  },\n  {\n    \"display_name\": \"sshcreds\",\n    \"metadata\": {\n      \"name\": \"sshcreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_SSH\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_SSH\",\n    \"id\": \"d27208f2-ff71-4ff6-bedc-6403add8b184\"\n  },\n  {\n    \"display_name\": \"vaultcreds\",\n    \"metadata\": {\n      \"name\": \"vaultcreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_VAULT\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_VAULT\",\n    \"id\": \"75dafeae-b46d-45dc-9614-a58ba94187bb\"\n  },\n  {\n    \"display_name\": \"keycloakcreds\",\n    \"metadata\": {\n      \"name\": \"keycloakcreds\",\n      \"connectorData\": \"{}\",\n      \"type\": \"CONNECTOR_TYPE_KEYCLOAK\"\n    },\n    \"schema_name\": \"credential-save\",\n    \"type\": \"CONNECTOR_TYPE_KEYCLOAK\",\n    \"id\": \"75dafeae-b46d-45dc-9614-a58ba94187bf\"\n  }\n]"
  },
  {
    "path": "unskript-ctl/templates/check.py.template",
    "content": "from pydantic import BaseModel, Field\nfrom typing import Optional, Tuple\nimport pprint\n\n\nclass InputSchema(BaseModel):\n    '''\n    This pydantic (https://docs.pydantic.dev/latest) class defines the schema of the inputs to the check.\n    For eg, if region is an input to the check, you can define it like this:\n\n    region: Optional[str] = Field(\n        default=\"\",\n        title='Region',\n        description='AWS Region.')\n    '''\n    pass\n\ndef {{ check_function_name }}_printer(output:Tuple):\n    '''\n    This is the printer function, which prints the output of the check.\n    A basic print of the failed objects is provided here.\n    Feel free to modify it to print the output in whatever format you want, for eg. tabular\n    '''\n    if output[0] is True:\n        print(f'Check passed')\n        return\n\n    print(\"Failed objects\")\n    pprint.pprint(output[1])\n\ndef {{ check_function_name }}(handle) -> Tuple:\n    '''\n    This is where you define the logic of the check. Things to keep in mind:\n    * handle is a required input. It is an abstraction for the credential layer, handled by unskript.\n    * As you add more arguments to this function, please define the schema for\n    * those input in the InputSchema class as well.\n    * The output of a check is always a tuple. If the check passes, it returns\n    * (True, None). If it fails, it returns (False, list of failed objects)\n    '''\n    pass"
  },
  {
    "path": "unskript-ctl/templates/check_test.py.template",
    "content": "#!/usr/bin/env python\n\nimport json\nimport os\n\nfrom unskript import nbparams\nfrom unskript.fwk.workflow import Task, Workflow\nfrom unskript.secrets import ENV_MODE, ENV_MODE_LOCAL\nfrom {{ check_function_name }}.{{ check_function_name }} import InputSchema, {{ check_function_name }}, {{ check_function_name }}_printer\n\n\ndef test_{{ check_function_name }}():\n\n    env = {\"ENV_MODE\": \"ENV_MODE_LOCAL\"}\n    secret_store_cfg = {\"SECRET_STORE_TYPE\": \"SECRET_STORE_TYPE_LOCAL\"}\n    w = Workflow(env, secret_store_cfg, None)\n    # credentialsJson will be inserted by the jupyter extension on connector selection from the drop down.\n    credentialsJson = \"\"\"{\"credential_type\": \"CONNECTOR_TYPE_{{ check_type_upper_case }}\", \"credential_id\":\"{{ check_type }}creds\", \"credential_name\": \"{{ check_type }}creds\" }\"\"\"\n\n    inputParamsJson = \"{}\"\n    # Fill inputParamsJson depending upon the arguments you have for the check function.\n    # For eg:\n    # inputParamsJson = '''\n    #            {\n    #                \"lifetime_tag\": \"aws:autoscaling:groupName\",\n    #                \"region\": \"us-west-2\"\n    #            }\n    #            '''\n    t = Task(Workflow())\n    t.configure(inputParamsJson, credentialsJson)\n    (err, hdl, args) = t.validate(InputSchema, vars())\n    if err is None:\n        t.output = t.execute({{ check_function_name }}, hdl,\n                             args, {{ check_function_name }}_printer)\n    assert t.workflow.global_vars['unskript_task_error'] == None\n    assert t.output[0] != None\n    if t.output[0] is False:\n        assert(len(t.output[1])> 0)\n"
  },
  {
    "path": "unskript-ctl/templates/first_cell_content.j2",
    "content": "import json\nimport concurrent.futures\nimport threading\nimport functools\nimport polling2\nfrom polling2 import poll\nfrom unskript import nbparams\nfrom unskript.fwk.workflow import Task, Workflow\nfrom unskript.secrets import ENV_MODE, ENV_MODE_LOCAL\n\nenv = {\"ENV_MODE\": \"ENV_MODE_LOCAL\"}\nsecret_store_cfg = {\"SECRET_STORE_TYPE\": \"SECRET_STORE_TYPE_LOCAL\"}\n\nparamDict = {{ runbook_params }}\nparamsJson = json.dumps(paramDict)\nnbParamsObj = nbparams.NBParams(paramsJson)\n{{ runbook_variables }}"
  },
  {
    "path": "unskript-ctl/templates/last_cell_content.j2",
    "content": "from unskript.legos.utils import CheckOutput, CheckOutputStatus\n\nglobal w \nglobal _logger\n\nall_outputs = []\nother_outputs = []\nid_to_name = {}\nif _logger:\n   _logger.debug(f\"ERRORED CHECKS ARE: {w.errored_checks}\")\n   _logger.debug(f\"TIMED OUT CHECKS ARE: {w.timeout_checks}\")\n\nif hasattr(w, 'check_uuid_entry_function_map'):\n    for key,value in w.check_uuid_entry_function_map.items():\n        if value not in id_to_name:\n            id_to_name[value] = key\n\ntry:\n    if 'w' in globals():\n        if w.check_run:\n            for id,output in w.check_output.items():\n                output = json.loads(output)\n                output['id'] = id\n                #output['name'] = id_to_name.get(id) if id else str()\n                all_outputs.append(output)\n            # Lets check if we have errored_checks or timeout_checks\n            # exists, if yes then lets dump the output \n            if hasattr(w, 'check_uuid_entry_function_map'):\n                if hasattr(w, 'timeout_checks') and len(w.timeout_checks):\n                    for name, err_msg in w.timeout_checks.items():\n                        _id = w.check_uuid_entry_function_map.get(name)\n                        other_outputs.append({\n                            \"status\": 3,\n                            \"objects\": None,\n                            \"error\": err_msg,\n                            \"id\": str(_id)\n                            #\"name\": str(name)\n                        })\n                if hasattr(w, 'errored_checks') and len(w.errored_checks):\n                    for name, err_msg in w.errored_checks.items():\n                        _id = w.check_uuid_entry_function_map.get(name)\n                        other_outputs.append({\n                            \"status\": 3,\n                            \"objects\": None,\n                            \"error\": err_msg,\n                            \"id\": str(_id)\n                          
  #\"name\": str(name)\n                        })\n\n            if other_outputs:\n               for _other in other_outputs:\n                  for _output in all_outputs:\n                     # Let's eliminate duplicate entries in the output.\n                     # We could have double-accounted the failed and the error/timeout\n                     # cases.\n                     if _other.get('id') == _output.get('id'):\n                         _output.update(_other)\n                         if _logger:\n                             _logger.debug(f\"FOUND DUPLICATE FOR {_other.get('id')}\")\n\n            if _logger:\n                _logger.debug(f\"OTHER OUTPUTS: {other_outputs}\")\n            existing_ids = set(output.get('id') for output in all_outputs)\n            unique_other_outputs = [other_output for other_output in other_outputs if other_output.get('id') not in existing_ids]\n            if unique_other_outputs:\n                # Let's insert the unique other outputs at their respective places\n                if _logger:\n                    _logger.debug(f\"LENGTH OF ALL OUTPUT BEFORE INSERT IS: {len(all_outputs)}\")\n                for uo in unique_other_outputs:\n                    if uo.get('id') in w.check_uuids:\n                        insert_index = w.check_uuids.index(uo.get('id'))\n                        if _logger:\n                            _logger.debug(f\"INSERTING RESULT FOR {uo.get('id')} at {insert_index} position\")\n                        # Insert unconditionally: index 0 is a valid position\n                        all_outputs.insert(insert_index, uo)\n                    else:\n                        all_outputs.append(uo)\n\n            if not all_outputs:\n                all_outputs = other_outputs\n\n            _outputs_with_valid_names = []\n            for _output in all_outputs:\n                if id_to_name.get(_output.get('id')):\n                    _outputs_with_valid_names.append(_output)\n                    if _logger:\n                        _logger.debug(f\"All output has result for ID: {_output.get('id')} Name: {id_to_name.get(_output.get('id'))} Status: {_output.get('status')}\")\n            all_outputs = _outputs_with_valid_names\n            for _output in all_outputs:\n                print(json.dumps(_output))\n        else:\n            print(json.dumps(\"Not a check run\"))\n    else:\n        print(json.dumps(\"ERROR: Internal Error, Workflow is missing\"))\nexcept Exception as e:\n    print(f\"Internal error {e}\")"
  },
  {
    "path": "unskript-ctl/templates/template_info_lego.j2",
    "content": "if __name__ == \"__main__\":\n    action()"
  },
  {
    "path": "unskript-ctl/templates/template_script.j2",
    "content": "import signal \nimport sys\nimport os\nimport io\nimport threading\nimport functools\nimport polling2\nfrom polling2 import poll\n\n# Logger object\n_logger = None\n\n# Script to check mapping\n_script_to_check_mapping = {}\n\nclass TimeoutException(Exception):\n    pass\n\n\ndef timeout_handler(signum, frame):\n    raise TimeoutException(\"Checks timed out\")\n\ndef _run_function(fname):\n    global w\n    l_cell = False\n    if fname == \"last_cell\":\n        l_cell = True\n    output = None\n    success = False \n    output_buffer = io.StringIO()\n    sys.stdout = output_buffer\n    if l_cell is True:\n        last_cell()\n        output = output_buffer.getvalue()\n    else:\n        try:\n            fn = globals().get(fname)\n            # We use the _script_to_check_mapping runbook_variable to explicitly map \n            # check name to script name.\n            chk_name = ''\n            if _script_to_check_mapping:\n                chk_name = _script_to_check_mapping.get(fname)\n\n            if _logger:\n                _logger.debug(f\"Starting to execute check {fn} {chk_name}\")\n            \n            response = poll(globals().get(fname), \n                            step=1, \n                            timeout={{ execution_timeout }},\n                            max_tries=1,\n                            poll_forever=False,\n                            check_success= lambda v: v is not None)\n            return response\n        except polling2.TimeoutException as e:\n            # Polling timeout\n            if _logger:\n                _logger.debug(f\"Execution completed for {fn} <-> {chk_name}\")\n            \n            if not hasattr(w, 'timeout_checks'):\n                w.timeout_checks = {}\n            w.timeout_checks.update({chk_name: str(e).replace(fname,\"\")})\n            if _logger:\n                _logger.debug(f\"TIMEOUT CHECKS {w.timeout_checks}\")\n\n        except Exception as e:\n            # If one of 
the actions fails, dump the exception on the console and proceed further\n            print(str(e))\n            if _logger:\n                _logger.debug(str(e))\n\n            if not hasattr(w, 'errored_checks'):\n                w.errored_checks = {}\n            w.errored_checks.update({chk_name: str(e).replace(fname,\"\")})\n            if _logger:\n                _logger.debug(f\"ERRORED CHECKS {w.errored_checks}\")\n\n        finally:\n            # Always restore stdout, even when the try block returns early\n            sys.stdout = sys.__stdout__\n            if _logger:\n                _logger.debug(f\"Completed Execution of {fn} <-> {chk_name}\")\n    sys.stdout = sys.__stdout__\n\n    return output, success\n\ndef do_run_(logger = None, script_to_check_mapping = {}):\n    import sys\n    from tqdm import tqdm\n    global _logger\n    global _script_to_check_mapping\n    global w\n    all_outputs = []\n\n    output = None\n    if logger:\n        _logger = logger\n\n    if script_to_check_mapping:\n        _script_to_check_mapping = script_to_check_mapping\n\n    if _logger:\n        _logger.debug(\"Starting to execute {{ num_checks }} number of checks\")\n\n    {# check_i should always start with 1 #}\n    for i in tqdm(range(1, {{ num_checks + 1 }}), desc=\"Running\", leave=True, ncols=100):\n        fn = \"check_\" + str(i)\n        if hasattr(globals().get(fn), \"__call__\"):\n            result = _run_function(fn)\n            if _logger:\n                if result:\n                    if isinstance(result, tuple):\n                        if result[-1]:\n                            _logger.debug(f\"Check {fn} was successful\")\n                        else:\n                            _logger.debug(f\"Check {fn} failed\")\n\n        {# Get last_output and last_status #}\n        output, _ = _run_function('last_cell')\n        all_outputs.append(output)\n        {# if _logger:\n            _logger.debug(f\"ALL OUTPUTS {all_outputs} 
for {fn}\") #}\n\n    # Let's dump the output in the log file so we can refer to its status\n    # later on\n    if _logger:\n        if output:\n            _logger.debug(output)\n        else:\n            _logger.debug(\"No output for the checks run\")\n\n    return all_outputs\n\nif __name__ == \"__main__\":\n    logger = None\n    script_to_check_mapping = None\n    try:\n        logger = sys.argv[1]\n        script_to_check_mapping = sys.argv[2]\n    except Exception:\n        pass\n    do_run_(logger, script_to_check_mapping)"
  },
  {
    "path": "unskript-ctl/templates/timeout_handler.j2",
    "content": "import threading\nimport functools\n\n\ndef timeout(seconds=60, error_message=\"Function call timed out\"):\n    def decorator(func):\n        @functools.wraps(func)\n        def wrapper(*args, **kwargs):\n            # Container for storing function's result\n            result_container = [None]\n\n            # Define a target function for the thread that captures the return value\n            def target():\n                result_container[0] = func(*args, **kwargs)\n\n            # Start the thread\n            thread = threading.Thread(target=target)\n            thread.daemon = True\n            thread.start()\n            thread.join(seconds)\n\n            if thread.is_alive():\n                # If the thread is still alive after the timeout, raise a TimeoutError\n                raise TimeoutError(error_message)\n            else:\n                # Return the value stored in the container\n                return result_container[0]\n\n        return wrapper\n    return decorator"
  },
  {
    "path": "unskript-ctl/tests/test_database.py",
    "content": "import unittest\nimport os\nimport json\nimport shutil\nimport sqlite3\nimport sys\nimport logging\n\nsys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../')))\n\ntry:\n    from unskript_ctl_database import ZoDBInterface, SQLInterface\nexcept Exception as e: \n    print(f\"ERROR: {e}\")\n\ndef delete_db_dir(db_dir: str = './unskript/db'):\n    if os.path.exists(os.path.dirname(db_dir)):\n        shutil.rmtree(os.path.dirname(db_dir))\n\ndef create_db_dir(db_dir: str = './unskript/db'):\n    if not os.path.exists(os.path.dirname(db_dir)):\n        os.makedirs(db_dir)\n\n\nclass TestZoDBInterface(unittest.TestCase):\n    def setUp(self):\n        create_db_dir()\n        self.zodb = ZoDBInterface(db_dir='./unskript/db')\n    \n    def tearDown(self):\n        # delete_db_dir()\n        pass \n\n\n    # def test_create_database(self):\n    #     # Test database creation\n    #     db = self.zodb.create(db_dir='./unskript/db')\n    #     self.assertIsNotNone(db)\n    #     self.assertTrue(os.path.exists(os.path.join(self.zodb.db_dir, self.zodb.db_name)))\n\n    def test_read_from_nonexistent_collection(self):\n        # Test reading from a non-existent collection\n\n        data = self.zodb.read(collection_name='nonexistent_collection')\n        self.assertIsNone(data)\n\n    def test_update_collection(self):\n        # Test updating a collection\n        test_data = {'key1': 'value1', 'key2': 'value2'}\n        self.zodb.create(db_dir='./unskript/db')  \n        self.zodb.update(collection_name='audit_trail', data=test_data)\n        data = self.zodb.read(collection_name='audit_trail')\n        self.assertEqual(data, test_data)\n\n    def test_delete_database(self):\n        # Test database deletion\n        self.zodb.create(db_dir='./unskript/db')  \n        self.assertTrue(os.path.exists(os.path.join(self.zodb.db_dir, self.zodb.db_name)))\n        self.zodb.delete() \n        
self.assertFalse(os.path.exists(os.path.join(self.zodb.db_dir, self.zodb.db_name)))\n\n@unittest.skip(\"Skipping for now\")\nclass TestSQLInterface(unittest.TestCase):\n    def setUp(self):\n        # Initialize SQLInterface for testing\n        self.db_name = 'test_unskript_pss.db'\n        self.db_dir = './unskript/db'\n        self.table_name = 'AUDIT_TRAIL'\n        self.schema_file = 'unskript_db_schema.json'\n\n        # Use SQLInterface for setting up the test environment\n        self.sql_interface = SQLInterface(db_name=self.db_name, db_dir=self.db_dir, table_name=self.table_name)\n        self.sql_interface.create_table()\n\n    def tearDown(self):\n        # Clean up after tests\n        self.sql_interface.close_connection()\n        os.remove(os.path.join(self.db_dir, self.db_name))\n\n    def test_create_read(self):\n        # Test Create and Read operations\n        # Data for insertion\n        execution_data = {\n            \"execution_id\": \"123\",\n            \"time_stamp\": \"2023-12-19T08:00:00Z\",\n            \"connector\": \"k8s\",\n            \"runbook\": \"somerunbook.ipynb\",\n            \"summary\": \"Summary P/E/F\",\n            \"check_name\": \"ABC\",\n            \"failed_objects\": json.dumps([\"HELLO \\n\", \"WORLD\\n\"]),\n            \"status\": \"PASS\"\n        }\n\n        # Perform create operation\n        self.sql_interface.create(execution_data)\n\n        # Perform read operation\n        retrieved_data = self.sql_interface.read({\"execution_id\": \"123\"})\n        self.assertIsNotNone(retrieved_data)\n\n    def test_update_delete(self):\n        # Test Update and Delete operations\n        # Data for insertion\n        execution_data = {\n            \"execution_id\": \"123\",\n            \"time_stamp\": \"2023-12-19T08:00:00Z\",\n            \"connector\": \"k8s\",\n            \"runbook\": \"somerunbook.ipynb\",\n            \"summary\": \"Summary P/E/F\",\n            \"check_name\": \"ABC\",\n            
\"failed_objects\": json.dumps([\"HELLO \\n\", \"WORLD\\n\"]),\n            \"status\": \"PASS\"\n        }\n\n        # Perform create operation\n        self.sql_interface.create(execution_data)\n\n        # Perform update operation\n        new_data = {\n            \"execution_id\": \"123\",\n            \"time_stamp\": \"2023-12-20T08:00:00Z\",\n            \"connector\": \"k8s\",\n            \"runbook\": \"somerunbook.ipynb\",\n            \"summary\": \"Summary P/E/F\",\n            \"check_name\": \"ABC\",\n            \"failed_objects\": json.dumps([\"HELLO \\n\", \"WORLD\\n\"]),\n            \"status\": \"PASS\"\n        }\n        self.sql_interface.update(new_data=new_data, filters={\"execution_id\": \"123\"})\n\n        # Perform read operation after update\n        updated_data = self.sql_interface.read({\"execution_id\": \"123\"})\n        updated_data = updated_data[0]\n        self.assertEqual(updated_data[\"time_stamp\"], \"2023-12-20T08:00:00Z\")\n        # Add other assertions for the updated data\n\n        # Perform delete operation\n        self.sql_interface.delete(filters={\"execution_id\": \"123\"})\n\n        # Perform read operation after delete\n        deleted_data = self.sql_interface.read({\"execution_id\": \"123\"})\n        self.assertIsNone(deleted_data)\n\n\n\n\nif __name__ == '__main__':\n    unittest.main()\n\n"
  },
  {
    "path": "unskript-ctl/tests/test_errors.log",
    "content": ""
  },
  {
    "path": "unskript-ctl/tests/test_notification.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport os\nimport sys\nimport unittest\nfrom unittest.mock import patch, Mock, MagicMock\n\nsys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../')))\n\ntry:\n    from unskript_ctl_notification import SlackNotification, Notification, SmtpNotification\nexcept Exception as e:\n    print(f\"ERROR: {e}\")\n\nclass TestSlackNotification(unittest.TestCase):\n    @patch('unskript_ctl_notification.requests.post', notify=Mock(return_value=True))\n    def test_notify_success(self, mock_post):\n        mock_post.return_value.status_code = 200\n        mock_post.return_value.text = \"OK\"\n\n        # Mocking summary results\n        summary_results = [\n            {'result': [('Check1', 'PASS'), ('Check2', 'FAIL')]},\n            {'result': [('Check3', 'ERROR'), ('Check4', 'PASS')]}\n        ]\n\n        slack = SlackNotification()\n        result = slack.notify(mode='slack', summary_result_table=summary_results)\n\n        self.assertFalse(result)\n\n    @patch('unskript_ctl_notification.requests.post', notify=Mock(return_value=True))\n    def test_notify_failure(self, mock_post):\n        mock_post.return_value.status_code = 500\n        mock_post.return_value.text = \"Internal Server Error\"\n\n        # Mocking empty summary results\n        summary_results = []\n\n        slack = SlackNotification()\n        result = slack.notify(mode='slack', summary_result_table=summary_results)\n\n        self.assertFalse(result)\n\n    def test_generate_notification_message(self):\n        slack = SlackNotification()\n\n        summary_results = [\n            {'result': [('Check1', 'PASS'), ('Check2', 'FAIL')]},\n            {'result': [('Check3', 'ERROR'), 
('Check4', 'PASS')]}\n        ]\n\n        expected_message = (\n            ':wave: *unSkript Ctl Check Results* \\n'\n            ':hash: *Check1*  :white_check_mark: \\n'\n            ':hash: *Check2*  :x: \\n'\n            ':hash: *Check3*  :x: \\n'\n            ':hash: *Check4*  :white_check_mark: \\n'\n            ':trophy: *(Pass/Fail/Error)* <-> *(2/1/1)*\\n\\n'\n        )\n\n        message = slack._generate_notification_message(summary_results)\n        self.assertEqual(message, expected_message)\n\n\n# Import the Notification class and other necessary classes here\nclass TestNotification(unittest.TestCase):\n    def setUp(self):\n        # Initialize any necessary objects or configurations\n        pass\n\n    def tearDown(self):\n        # Clean up after each test case, if needed\n        pass\n\n    @patch('unskript_ctl_notification.SlackNotification.notify', notify=Mock(return_value=True))  \n    def test_slack_notification(self, mock_slack_notify):\n        # Mock the SlackNotification.notify method\n        mock_slack_notify.return_value = True  # Mock the return value\n        summary_result = [\n            {'result': [('Check1', 'PASS'), ('Check2', 'FAIL')]},\n            {'result': [('Check3', 'ERROR'), ('Check4', 'PASS')]}\n        ]\n\n        notification = Notification()\n        result = notification.notify(mode='slack', summary_result=summary_result)\n        self.assertTrue(result)  # Assert that the Slack notification was successful\n\n    @patch('unskript_ctl_notification.SmtpNotification.notify', notify=Mock(return_value=True))  \n    def test_email_notification(self, mock_smtp_notify):\n        # Mock the SmtpNotification.notify method\n        mock_smtp_notify.return_value = True  # Mock the return value\n        summary_result = [\n            {'result': [('Check1', 'PASS'), ('Check2', 'FAIL')]},\n            {'result': [('Check3', 'ERROR'), ('Check4', 'PASS')]}\n        ]\n\n        failed_objects = {\"result\": [{\"check1\": 
[\"object1\", \"object2\"]}]}  # Provide a sample of failed objects\n        to_email = 'test@example.com'  # Provide a sample recipient email\n        from_email = 'sender@example.com'  # Provide a sample sender email\n        subject = 'Test Subject'  # Provide a sample subject\n        notification = Notification()\n        result = notification.notify(mode='email', summary_result=summary_result,\n                                     to_email=to_email, from_email=from_email, subject=subject)\n\n        self.assertFalse(result)  \n\n    @patch.multiple('unskript_ctl_notification.SlackNotification', notify=Mock(return_value=True))\n    @patch.multiple('unskript_ctl_notification.SmtpNotification', notify=Mock(return_value=True))\n    def test_both_notification(self):\n        summary_result = [\n            {'result': [('Check1', 'PASS'), ('Check2', 'FAIL')]},\n            {'result': [('Check3', 'ERROR'), ('Check4', 'PASS')]}\n        ]\n        failed_objects = {\"result\": [{\"check1\": [\"object1\", \"object2\"]}]}  # Provide a sample of failed objects\n        to_email = 'test@example.com'  # Provide a sample recipient email\n        from_email = 'sender@example.com'  # Provide a sample sender email\n        subject = 'Test Subject'  # Provide a sample subject\n        notification = Notification()\n        result = notification.notify(mode='both', summary_result=summary_result,\n                                     to_email=to_email, from_email=from_email, subject=subject)\n        self.assertFalse(result)  \n\nif __name__ == '__main__':\n    unittest.main()"
  },
  {
    "path": "unskript-ctl/tests/test_unskript_factory.py",
    "content": "import sys\nimport os \nimport unittest\n\nsys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../')))\n\ntry:\n    from unskript_ctl_factory import ChecksFactory, ScriptsFactory, NotificationFactory, ConfigParserFactory\nexcept Exception as e: \n    print(f\"ERROR: {e}\")\n\nclass TestChecksFactory(unittest.TestCase):\n    def test_checks_factory_run(self):\n        factory1 = ChecksFactory()\n        factory2 = ChecksFactory()\n        # Test Singleton behavior \n        self.assertIs(factory1, factory2)\n        \n        factory1.run()\n\nclass TestScriptsFactory(unittest.TestCase):\n    def test_scripts_factory_run(self):\n        factory1 = ScriptsFactory()\n        factory2 = ScriptsFactory()\n        # Test Singleton behavior \n        self.assertIs(factory1, factory2)\n\n        factory1.run()\n\nclass TestReportsFactory(unittest.TestCase):\n    def test_reports_factory_run(self):\n        factory1 = NotificationFactory()\n        factory2 = NotificationFactory()\n        # Test Singleton behavior\n        self.assertIs(factory1, factory2)\n\n        factory1.notify()\n\nclass TestConfigParserFactory(unittest.TestCase):\n    def test_reports_factory_run(self):\n        factory1 = ConfigParserFactory()\n        factory2 = ConfigParserFactory()\n        # Test Singleton behavior\n        self.assertIs(factory1, factory2)\n\n        g = factory1.get_global()\n        assert isinstance(g, dict) is True\n        n = factory1.get_notification()\n        assert isinstance(n, dict) is True\n        cp = factory1.get_checks_params()\n        assert isinstance(cp, dict) is True\n        c = factory1.get_checks()\n        assert isinstance(c, list) is True\n        j = factory1.get_jobs()\n        assert isinstance(j, dict) is True\n        s = factory1.get_schedule()\n        assert isinstance(s, dict) is True\n\nif __name__ == '__main__':    \n    unittest.main()"
  },
  {
    "path": "unskript-ctl/unskript-add-check.py",
    "content": "#!/usr/bin/env python\n\"\"\"This file implements the add-check functionality.\"\"\"\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport os\nimport sys\nimport json\nfrom pathlib import Path\nfrom jinja2 import Environment, FileSystemLoader\n\n#from creds_ui import main as ui\nfrom argparse import ArgumentParser\n\nclass CreateCheck():\n    def __init__(self):\n        # Check that this is being run from the top-level directory that contains the actions directory.\n        current_directory = os.getcwd()\n        action_dir_path = current_directory + '/actions'\n        if os.path.exists(action_dir_path) is False:\n            print(\"Please run this from the top-level directory, where the actions directory exists\")\n            return\n\n        mainParser = ArgumentParser(prog='unskript-add-check')\n        mainParser.add_argument('-t', '--type', help='Type of check', choices=[\n         'AWS',\n         'K8S',\n         'GCP',\n         'POSTGRESQL',\n         'SLACK',\n         'MONGODB',\n         'JENKINS',\n         'MYSQL',\n         'JIRA',\n         'REST',\n         'ELASTICSEARCH',\n         'KAFKA',\n         'GRAFANA',\n         'SSH',\n         'PROMETHEUS',\n         'DATADOG',\n         'STRIPE',\n         'REDIS',\n         'ZABBIX',\n         'OPENSEARCH',\n         'PINGDOM',\n         'GITHUB',\n         'TERRAFORM',\n         'AIRFLOW',\n         'HADOOP',\n         'MSSQL',\n         'SNOWFLAKE',\n         'SPLUNK',\n         'SALESFORCE',\n         'AZURE',\n         'NOMAD',\n         'NETBOX',\n         'OPSGENIE',\n         'KEYCLOAK',\n         'VAULT'\n        ])\n        mainParser.add_argument('-n', '--name', help='Short name separated by underscore. 
For eg: aws_list_public_buckets')\n        mainParser.add_argument('-d', '--description', help='Detailed description about the check.')\n\n        args = mainParser.parse_args()\n        json_data = {}\n        json_data['action_title'] = args.name\n        json_data['action_description'] = args.description\n        json_data['action_type'] = \"LEGO_TYPE_\" + args.type\n        json_data['action_entry_function'] = args.name\n        json_data['action_needs_credential'] = True\n        json_data['action_output_type'] = \"ACTION_OUTPUT_TYPE_LIST\"\n        json_data['action_is_check'] = True\n        json_data['action_supports_iteration'] = True\n        json_data['action_supports_poll'] = True\n\n        custom_dir = action_dir_path + \"/\" + args.name\n        Path(custom_dir).mkdir(parents=True, exist_ok=True)\n        # Generate .json file\n        try:\n            with open(custom_dir + \"/\" + args.name + \".json\", \"w\") as f:\n                f.write(json.dumps(json_data, indent=2))\n        except Exception as e:\n            raise Exception(f\"Unable to create JSON File {e}\")\n\n        # Generate __init__.py file\n        try:\n            file = open( custom_dir + \"/\" + \"__init__.py\",\"w\")\n            file.close()\n        except Exception as e:\n            print(f\"Unable to create __init__.py File {e}\")\n\n        AWESOME_DIRECTORY = \"Awesome-CloudOps-Automation\"\n        environment = Environment(loader=FileSystemLoader(current_directory + \"/\" + AWESOME_DIRECTORY + \"/unskript-ctl/templates/\"))\n        template = environment.get_template(\"check.py.template\")\n\n        content = template.render({\"check_function_name\": args.name})\n        try:\n            with open(custom_dir + \"/\" + args.name + \".py\", \"w\") as f:\n                f.write(content)\n        except Exception as e:\n            raise Exception(f\"Unable to create .py File {e}\")\n\n        template = environment.get_template(\"check_test.py.template\")\n\n       
 content = template.render({\n            \"check_function_name\": args.name,\n            \"check_type_upper_case\": args.type.upper(),\n            \"check_type\": args.type.lower(),\n            })\n        try:\n            with open(custom_dir + \"/\" + \"test_\" + args.name + \".py\", \"w\") as f:\n                f.write(content)\n        except Exception as e:\n            raise Exception(f\"Unable to create test .py File {e}\")\n\n\nif __name__ == '__main__':\n    CreateCheck()\n"
  },
  {
    "path": "unskript-ctl/unskript-ctl.sh",
    "content": "#!/bin/bash\n\n# unSkript Control Script\n#     This script can be used to list all available runbooks\n#     and to run a runbook\n\ncd /usr/local/bin || exit 1\nif [ -f \"/opt/conda/bin/python\" ];\nthen\n    /opt/conda/bin/python ./unskript_ctl_main.py \"$@\"\nelif [ -f \"/opt/unskript/bin/python\" ];\nthen\n    /opt/unskript/bin/python ./unskript_ctl_main.py \"$@\"\nelse\n    /usr/bin/env python ./unskript_ctl_main.py \"$@\"\nfi\n"
  },
  {
    "path": "unskript-ctl/unskript_audit_cleanup.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\n\nimport os\nimport yaml\nimport shutil\nimport ZODB\nimport datetime\nimport ZODB.FileStorage\nfrom ZODB import DB\n\nfrom unskript_utils import *\n\n\n# We also need to remove old directories in the execution directory\n# That are older than the threshold date\ndef remove_old_directories():\n    # Read the Audit period from config file\n    audit_period = get_audit_period()\n    current_date = datetime.datetime.now()\n    threshold_date = current_date - datetime.timedelta(days=audit_period)\n    directories_deleted = False\n    for directory in os.listdir(UNSKRIPT_EXECUTION_DIR):\n        dir_path = os.path.join(UNSKRIPT_EXECUTION_DIR, directory)\n        if os.path.isdir(dir_path):\n            dir_ts = datetime.datetime.fromtimestamp(os.path.getmtime(dir_path))\n            if dir_ts < threshold_date:\n                try:\n                    # Use shutil.rmtree instead of os.rmdir to remove non-empty directories\n                    shutil.rmtree(dir_path)\n                    print(f\"Deleted {dir_path}\")\n                    directories_deleted = True\n                except Exception as e:\n                    print(f\"ERROR: Failed to delete {dir_path}: {e}\")\n                    # Continue with other directories rather than returning\n                    continue\n\n    if directories_deleted:\n        print(f\"Deleted directories older than {audit_period} days!\")\n    else:\n        print(f\"No directories are older than {audit_period}. 
Nothing to delete\")\n    return\n\n\ndef get_audit_period():\n    audit_period = 90\n    try:\n        if os.path.exists(GLOBAL_CTL_CONFIG) is True:\n            with open(GLOBAL_CTL_CONFIG, \"r\", encoding=\"utf-8\") as f:\n                data = yaml.safe_load(f.read())\n                if (\n                    data\n                    and data.get(\"global\")\n                    and data.get(\"global\").get(\"audit_period\")\n                ):\n                    audit_period = data.get(\"global\").get(\"audit_period\")\n    except Exception:\n        # Fall back to the default 90-day audit period.\n        pass\n    return audit_period\n\n\ndef clean_db() -> None:\n    \"\"\"clean_db Calls db.pack(...) to remove ZODB data older than audit_period\n    (default 90 days). It can be run as a Docker cron job to clean up old ZODB data.\n    \"\"\"\n    audit_period = get_audit_period()\n\n    try:\n        db = DB(PSS_DB_PATH)\n        db.pack(days=audit_period)\n        db.close()\n        print(\"Clean up successful\")\n    except Exception as e:\n        print(f\"ERROR: {e}\")\n\n\nif __name__ == \"__main__\":\n    remove_old_directories()\n    clean_db()\n"
  },
  {
    "path": "unskript-ctl/unskript_ctl_config_parser.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2024 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport logging\nimport subprocess\nimport os\nimport sys\nfrom envyaml import EnvYAML\nfrom unskript_utils import bcolors, UNSKRIPT_EXECUTION_DIR\n\n#logging.basicConfig(\n#    level=logging.DEBUG,\n#    format='%(asctime)s [%(levelname)s] - %(message)s',\n#    datefmt='%Y-%m-%d %H:%M:%S',\n#    filename=\"/tmp/\"\n#)\nUNSKRIPT_CTL_CONFIG_FILE=\"/etc/unskript/unskript_ctl_config.yaml\"\nUNSKRIPT_CTL_BINARY=\"/usr/local/bin/unskript-ctl.sh\"\n\n\n# Job config related\nJOB_CONFIG_CHECKS_KEY_NAME = \"checks\"\nJOB_CONFIG_INFO_KEY_NAME = \"info\"\nJOB_CONFIG_SUITES_KEY_NAME = \"suites\"\nJOB_CONFIG_CONNECTORS_KEY_NAME = \"connector_types\"\nJOB_CONFIG_CUSTOM_SCRIPTS_KEY_NAME = \"custom_scripts\"\nJOB_CONFIG_NOTIFY_KEY_NAME = \"notify\"\n\n# Credential section related\nCREDENTIAL_CONFIG_SKIP_VALUE_FOR_ARGUMENTS = [\"no-verify-certs\", \"no-verify-ssl\", \"use-ssl\"]\n\n# Global section related\nGLOBAL_CONFIG_AUDIT_PERIOD_KEY_NAME = \"audit_period\"\nGLOBAL_DEFAULT_AUDIT_PERIOD = 90\n\n# Checks section related\nCHECKS_ARGUMENTS_KEY_NAME = \"arguments\"\nCHECKS_GLOBAL_KEY_NAME = \"global\"\nCHECKS_MATRIX_KEY_NAME = \"matrix\"\n\n# Config top level keys\nCONFIG_GLOBAL = \"global\"\nCONFIG_CHECKS = \"checks\"\nCONFIG_CREDENTIAL = \"credential\"\nCONFIG_NOTIFICATION = \"notification\"\nCONFIG_JOBS = \"jobs\"\nCONFIG_SCHEDULER = \"scheduler\"\nCONFIG_REMOTE_DEBUGGING = \"remote_debugging\"\n\nclass Job():\n    def __init__(\n            self,\n            job_name: str,\n            checks: list[str],\n            info: list[str],\n            suites: list[str]=None,\n            connectors: list[str] = None,\n            custom_scripts: list[str] = None,\n            notify: bool = 
False):\n        self.job_name = job_name\n        self.checks = checks\n        self.info = info\n        self.suites = suites\n        self.connectors = connectors\n        self.custom_scripts = custom_scripts\n        self.notify = notify\n\n\n    def parse(self):\n        cmds = []\n        notify = '--report' if self.notify is True else ''\n        info = '--info' if self.info else ''\n        # Today, we dont support\n        # check --name <> check --type k8s --script\n        # So, if both check names and types are configured, we will split it\n        # into 2 commands.\n        # We will combine script with --types and make the --name as separate\n        # command.\n        combine_check_types_and_script = False\n        combine_check_names_and_script = False\n\n        if self.checks is not None and len(self.checks) != 0 and self.custom_scripts is not None and len(self.custom_scripts) != 0:\n            combine_check_names_and_script = True\n        if self.connectors is not None and len(self.connectors) != 0 and self.custom_scripts is not None and len(self.custom_scripts) != 0:\n            combine_check_names_and_script = False\n            combine_check_types_and_script = True\n\n        # full_command will contain the full command if both check and --script\n        # are specified.\n        full_command = None\n\n        if self.checks is not None and len(self.checks) != 0:\n            if combine_check_names_and_script:\n                full_command = f'{UNSKRIPT_CTL_BINARY} run check --name {self.checks[0]} {info}'\n            else:\n                command = f'{UNSKRIPT_CTL_BINARY} run check --name {self.checks[0]} {notify}'\n                if self.info:\n                    command += ' --info'\n                cmds.append(command)\n                print(f'Job: {self.job_name} contains check: {self.checks[0]}')\n\n        if self.connectors is not None and len(self.connectors) != 0:\n            connector_types_string = 
','.join(self.connectors)\n            print(f'Job: {self.job_name} contains connector types: {connector_types_string}')\n            if combine_check_types_and_script:\n                full_command = f'{UNSKRIPT_CTL_BINARY} run check --type {connector_types_string} {info}'\n            else:\n                command = f'{UNSKRIPT_CTL_BINARY} run check --type {connector_types_string} {notify}'\n                if self.info:\n                    command += ' --info'\n                cmds.append(command)\n\n        if self.custom_scripts is not None and len(self.custom_scripts) != 0:\n            filtered_scripts = self.custom_scripts\n            if filtered_scripts:\n                combined_script = ';'.join(filtered_scripts)\n                print(f'Job: {self.job_name} contains custom script: {combined_script}')\n                if combine_check_types_and_script or combine_check_names_and_script:\n                    if info not in full_command:\n                        full_command += f' --script \"{combined_script}\" {info} {notify}'\n                    else:\n                        full_command += f' --script \"{combined_script}\" {notify}'\n                else:\n                    command = f'{UNSKRIPT_CTL_BINARY} run --script \"{combined_script}\" {notify}'\n                    if self.info:\n                        command += ' --info'\n                    cmds.append(command)\n\n        if full_command is not None:\n            cmds.append(full_command)\n        else:\n            if info:\n                full_command = f'{UNSKRIPT_CTL_BINARY} run {info} {notify}'\n                if len(cmds) == 0:\n                    cmds.append(full_command)\n\n        # Ensure '--info' appears in at most one of the commands.\n        info_exists = False\n        for idx,c in enumerate(cmds):\n            if '--info' in c and not info_exists:\n                info_exists = True\n                continue\n            elif '--info' in c:\n                c = c.replace('--info', '')\n            cmds[idx] = c\n        self.cmds = cmds\n\n\nclass ConfigParser():\n    def __init__(self, config_file: str):\n        self.config_file = config_file\n        # Dictionary of jobs, with job name being the key.\n        self.jobs = {}\n        self.tunnel_up_cmd = None\n        self.tunnel_down_cmd = None\n        self.upload_logs_files_cmd = None\n\n    def parse_config_yaml(self) -> None:\n        \"\"\"parse_config_yaml: This function parses the config yaml file, converts the\n        content into a python dictionary, and stores it in self.parsed_config.\n        \"\"\"\n        retval = {}\n\n        if os.path.exists(self.config_file) is False:\n            print(f\"{bcolors.FAIL} {self.config_file} Not found!{bcolors.ENDC}\")\n            sys.exit(1)\n\n        # We use EnvYAML to parse the config file and give us the\n        # dictionary representation of the YAML file\n        try:\n            retval = EnvYAML(self.config_file, strict=False)\n            if not retval:\n                print(f\"{bcolors.WARNING} Parsing config file {self.config_file} failed{bcolors.ENDC}\")\n                sys.exit(1)\n        except Exception as e:\n            print(f\"{bcolors.FAIL} Parsing config file {self.config_file} failed, {e}{bcolors.ENDC}\")\n            sys.exit(1)\n\n        self.parsed_config = retval\n\n    def parse_global(self):\n        \"\"\"parse_global: This function parses the global section of the config.\n        \"\"\"\n        print('###################################')\n        print(f'{bcolors.HEADER}Processing global section{bcolors.ENDC}')\n        print('###################################')\n        config = self.parsed_config.get(CONFIG_GLOBAL)\n        if config is None:\n            print(f\"{bcolors.WARNING}Global: No global config found{bcolors.ENDC}\")\n            return\n\n        # Process the audit_period config\n        audit_period = config.get(GLOBAL_CONFIG_AUDIT_PERIOD_KEY_NAME, 
GLOBAL_DEFAULT_AUDIT_PERIOD)\n        print(f'Global: audit period {audit_period} days')\n        self.audit_period = audit_period\n\n    def parse_checks(self):\n        \"\"\"parse_checks: This function parses the checks section of the config.\n        \"\"\"\n        print('###################################')\n        print(f'{bcolors.HEADER}Processing checks section{bcolors.ENDC}')\n        print('###################################')\n        config = self.parsed_config.get(CONFIG_CHECKS)\n        if config is None:\n            print(f\"{bcolors.WARNING}Checks: No checks config{bcolors.ENDC}\")\n            return\n        arguments = config.get(CHECKS_ARGUMENTS_KEY_NAME)\n        if arguments is None:\n            print(f\"{bcolors.WARNING}Checks: No arguments config{bcolors.ENDC}\")\n            return\n        global_args = arguments.get(CHECKS_GLOBAL_KEY_NAME)\n        if global_args is None:\n            print(f\"{bcolors.WARNING}Checks: No global config{bcolors.ENDC}\")\n            return\n        # Ensure we have at most ONE matrix argument\n        matrix_args = global_args.get(CHECKS_MATRIX_KEY_NAME)\n        if matrix_args is None:\n            return\n        if len(matrix_args) > 1:\n            print(f'{bcolors.FAIL} Only one matrix argument supported {bcolors.ENDC}')\n            return\n\n    def configure_credential(self):\n        \"\"\"configure_credential: This function parses the creds_dict and calls the\n        add_creds.sh script to populate the respective credential json\n        \"\"\"\n        print('###################################')\n        print(f'{bcolors.HEADER}Processing credential section{bcolors.ENDC}')\n        print('###################################')\n        creds_dict = self.parsed_config.get(CONFIG_CREDENTIAL)\n        if creds_dict is None:\n            print(f\"{bcolors.WARNING}Credential: Nothing to configure credential with, found empty creds data{bcolors.ENDC}\")\n            return\n\n        for cred_type in creds_dict.keys():\n            cred_list = creds_dict.get(cred_type)\n            for cred in cred_list:\n                name = cred.get('name')\n                if cred.get('enable') is False:\n                    print(f'Credential: Skipping type {cred_type}, name {name}')\n                    continue\n                creds_cmd = ['/usr/local/bin/add_creds.sh', '-c', cred_type]\n                try:\n                    print(f'Credential: Programming type {cred_type}, name {name}')\n                    for cred_key in cred:\n                        # Skip name and enable keys\n                        if cred_key in ['name', 'enable']:\n                            continue\n                        # Certain flag arguments don't need an extra value part, e.g. --no-verify-certs\n                        if cred_key in CREDENTIAL_CONFIG_SKIP_VALUE_FOR_ARGUMENTS:\n                            creds_cmd.extend(['--'+cred_key])\n                        else:\n                            creds_cmd.extend(['--'+cred_key, str(cred.get(cred_key))])\n                    if creds_cmd:\n                        self.run_command(creds_cmd)\n                        print(f\"{bcolors.OKGREEN}Credential: Successfully programmed {cred_type}, name {name}{bcolors.ENDC}\")\n                except Exception as e:\n                    print(f'{bcolors.FAIL}Credential: Failed to program {cred_type}, name {name}, {e}{bcolors.ENDC}')\n                    continue\n\n    def validate_cron_format(self, cron_expression):\n        \"\"\"Validate if a cron expression is in proper format\n        Returns tuple: (is_valid: bool, error_message: str)\n        \"\"\"\n        if not cron_expression or not isinstance(cron_expression, str):\n            return False, \"Cron expression must be a non-empty string\"\n\n        # Remove extra whitespace and split\n        parts = cron_expression.strip().split()\n\n        # Standard cron 
should have 5 parts: minute hour day month day_of_week\n        if len(parts) != 5:\n            return False, f\"Cron expression must have exactly 5 parts (minute hour day month day_of_week), got {len(parts)}: {cron_expression}\"\n\n        # Define valid ranges for each field\n        field_ranges = [\n            (0, 59, \"minute\"),      # minute: 0-59\n            (0, 23, \"hour\"),        # hour: 0-23\n            (1, 31, \"day\"),         # day: 1-31\n            (1, 12, \"month\"),       # month: 1-12\n            (0, 7, \"day_of_week\")   # day_of_week: 0-7 (0 and 7 are Sunday)\n        ]\n\n        for i, (part, (min_val, max_val, field_name)) in enumerate(zip(parts, field_ranges)):\n            if not self._validate_cron_field(part, min_val, max_val, field_name):\n                return False, f\"Invalid {field_name} field: '{part}' (should be {min_val}-{max_val} or valid cron syntax)\"\n\n        return True, \"Valid cron expression\"\n\n    def _validate_cron_field(self, field, min_val, max_val, field_name):\n        \"\"\"Validate individual cron field\"\"\"\n        # Allow wildcards\n        if field == \"*\":\n            return True\n\n        # Allow step values (*/5, */10, etc.)\n        if field.startswith(\"*/\"):\n            try:\n                step = int(field[2:])\n                return step > 0 and step <= max_val\n            except ValueError:\n                return False\n\n        # Allow ranges (1-5, 10-15, etc.)\n        if \"-\" in field:\n            try:\n                start, end = field.split(\"-\", 1)\n                start_num = int(start)\n                end_num = int(end)\n                return (min_val <= start_num <= max_val and\n                       min_val <= end_num <= max_val and\n                       start_num <= end_num)\n            except ValueError:\n                return False\n\n        # Allow comma-separated lists (1,3,5 or 10,20,30, etc.)\n        if \",\" in field:\n            try:\n            
    values = [int(x.strip()) for x in field.split(\",\")]\n                return all(min_val <= val <= max_val for val in values)\n            except ValueError:\n                return False\n\n        # Allow single numbers\n        try:\n            num = int(field)\n            return min_val <= num <= max_val\n        except ValueError:\n            return False\n\n\n    def configure_schedule(self):\n        \"\"\"configure_schedule: configures the schedule settings\n        \"\"\"\n        print('###################################')\n        print(f'{bcolors.HEADER}Processing scheduler section{bcolors.ENDC}')\n        print('###################################')\n        config = self.parsed_config.get(CONFIG_SCHEDULER)\n        if config is None:\n            print(f\"{bcolors.WARNING}Scheduler: No scheduler configuration found{bcolors.ENDC}\")\n            return\n\n        # Check for LB_JOB_SCHEDULE environment variable\n        lb_job_schedule = os.environ.get('LB_JOB_SCHEDULE')\n        if lb_job_schedule:\n            print(f'{bcolors.OKGREEN}Found LB_JOB_SCHEDULE environment variable: {lb_job_schedule}{bcolors.ENDC}')\n\n            # Validate the cron format\n            is_valid, validation_message = self.validate_cron_format(lb_job_schedule)\n            if not is_valid:\n                print(f'{bcolors.FAIL}ERROR: LB_JOB_SCHEDULE has invalid cron format: {validation_message}{bcolors.ENDC}')\n                print(f'{bcolors.FAIL}Examples of valid cron formats:{bcolors.ENDC}')\n                print(f'{bcolors.FAIL}  */15 * * * *    (every 15 minutes){bcolors.ENDC}')\n                print(f'{bcolors.FAIL}  0 */2 * * *     (every 2 hours){bcolors.ENDC}')\n                print(f'{bcolors.FAIL}  30 9 * * 1-5    (9:30 AM, Monday to Friday){bcolors.ENDC}')\n                print(f'{bcolors.FAIL}  0 0 1 * *       (first day of every month){bcolors.ENDC}')\n                print(f'{bcolors.FAIL}Falling back to YAML configuration cadence 
values{bcolors.ENDC}')\n                lb_job_schedule = None  # Disable override\n            else:\n                print(f'{bcolors.OKGREEN}LB_JOB_SCHEDULE validation passed: {validation_message}{bcolors.ENDC}')\n                print(f'{bcolors.OKGREEN}This will override all cadence values in the scheduler configuration{bcolors.ENDC}')\n\n\n        unskript_crontab_file = \"/etc/unskript/unskript_crontab.tab\"\n        crons = []\n        # Initialize these so the except clause below can reference them safely.\n        cadence = script = None\n        try:\n            for schedule in config:\n                if schedule.get('enable') is False:\n                    print('Scheduler: Skipping disabled schedule entry')\n                    continue\n                if lb_job_schedule:\n                    cadence = lb_job_schedule\n                    print(f\"{bcolors.OKGREEN}Using LB_JOB_SCHEDULE override: {cadence}{bcolors.ENDC}\")\n                else:\n                    cadence = schedule.get('cadence')\n\n                job_name = schedule.get('job_name')\n                # look up the job name and get the commands\n                job = self.jobs.get(job_name)\n                if job is None:\n                    print(f'{bcolors.FAIL}Schedule: Unknown job name {job_name}. 
Please check the jobs section and ensure the job is defined{bcolors.ENDC}')\n                    continue\n                print(f'Schedule: cadence {cadence}, job name: {job_name}')\n                if len(job.cmds) == 0:\n                    print(f'{bcolors.WARNING}Scheduler: Empty job {job.job_name}, not adding to schedule{bcolors.ENDC}')\n                    continue\n                script = '; '.join(job.cmds)\n                # TBD: Validate that the cadence and script are valid\n                crons.append(f'{cadence} {script}')\n        except Exception as e:\n            print(f'{bcolors.FAIL}Schedule: Got error in programming cadence {cadence}, script {script}, {e}{bcolors.ENDC}')\n            #raise e\n            return\n\n        try:\n            with open(unskript_crontab_file, \"w\") as f:\n                # Since crontabs don't inherit the environment variables, we have to\n                # set them explicitly.\n                for name, value in os.environ.items():\n                    if value != \"\":\n                        f.write(f'{name}={value}')\n                        f.write(\"\\n\")\n                if crons:\n                    crons_per_line = \"\\n\".join(crons)\n                    print(f'Schedule: Programming crontab {crons_per_line}')\n                    f.write('\\n'.join(crons))\n                    f.write(\"\\n\")\n                # Add the audit period cron job as well, to be run daily.\n                audit_cadence = \"0 0 * * *\"\n                # delete_old_files_command = f'/usr/bin/find {UNSKRIPT_EXECUTION_DIR} -type f -mtime +{self.audit_period} -exec rm -f {{}} \\;'\n                delete_old_files_command = '/opt/conda/bin/python /usr/local/bin/unskript_audit_cleanup.py'\n                print(f'{bcolors.OKGREEN}Adding audit log deletion cron job entry, {audit_cadence} {delete_old_files_command}{bcolors.ENDC}')\n                f.write(f'{audit_cadence} {delete_old_files_command}')\n                f.write(\"\\n\")\n\n                # If there are remote_debugging commands, add them too\n                if self.tunnel_up_cmd:\n                    f.write(self.tunnel_up_cmd)\n                    f.write(\"\\n\")\n                if self.tunnel_down_cmd:\n                    f.write(self.tunnel_down_cmd)\n                    f.write(\"\\n\")\n                if self.upload_logs_files_cmd:\n                    f.write(self.upload_logs_files_cmd)\n                    f.write(\"\\n\")\n\n            cmds = ['crontab', unskript_crontab_file]\n            self.run_command(cmds)\n        except Exception as e:\n            print(f'{bcolors.FAIL}Schedule: Cron programming failed, {e}{bcolors.ENDC}')\n            #raise e\n\n    def parse_jobs(self):\n        print('###################################')\n        print(f'{bcolors.HEADER}Processing jobs section{bcolors.ENDC}')\n        print('###################################')\n        config = self.parsed_config.get(CONFIG_JOBS)\n        if config is None:\n            print(f'{bcolors.WARNING}Jobs: No jobs config found{bcolors.ENDC}')\n            return\n\n        for job in config:\n            job_name = job.get('name')\n            if job_name is None:\n                print(f\"{bcolors.OKBLUE}Jobs: Skipping invalid job, name not found{bcolors.ENDC}\")\n                continue\n            if job.get('enable') is False:\n                print(f'Jobs: Skipping {job_name}')\n                continue\n            # Check if the same job name exists\n            if job_name in self.jobs:\n                print(f'{bcolors.WARNING}Jobs: Skipping job name {job_name}, duplicate entry{bcolors.ENDC}')\n                continue\n            checks = job.get(JOB_CONFIG_CHECKS_KEY_NAME)\n            info = job.get(JOB_CONFIG_INFO_KEY_NAME)\n            suites = job.get(JOB_CONFIG_SUITES_KEY_NAME)\n            connectors = job.get(JOB_CONFIG_CONNECTORS_KEY_NAME)\n            custom_scripts = 
job.get(JOB_CONFIG_CUSTOM_SCRIPTS_KEY_NAME)\n            notify = job.get(JOB_CONFIG_NOTIFY_KEY_NAME, False)\n            if checks is not None and len(checks) > 1:\n                print(f'{job_name}: NOT SUPPORTED: more than 1 check')\n                continue\n            new_job = Job(job_name, checks, info, suites, connectors, custom_scripts, notify)\n            new_job.parse()\n            self.jobs[job_name] = new_job\n\n    def run_command(self, cmds: list) -> str:\n        \"\"\"run_command: Runs the command in a subprocess and returns the output\n        or raises an exception\n        \"\"\"\n        try:\n            result = subprocess.run(cmds,\n                                    capture_output=True,\n                                    check=True)\n        except Exception as e:\n            print(f'cmd: {\" \".join(cmds)} failed, {e}')\n            raise e\n\n        return result.stdout.decode()\n\n    def parse_remote_debugging(self):\n        print('###################################')\n        print(f'{bcolors.HEADER}Processing remote debugging section{bcolors.ENDC}')\n        print('###################################')\n        config = self.parsed_config.get(CONFIG_REMOTE_DEBUGGING)\n        if config is None:\n            print(f'{bcolors.WARNING}Remote_debugging: No remote_debugging config found{bcolors.ENDC}')\n            return\n        if config.get('enable') is False:\n            print(f'{bcolors.WARNING} Skipping remote_debugging section{bcolors.ENDC}')\n            return\n        upload_log_files_cadence = config.get('upload_log_files_cadence', None)\n        if upload_log_files_cadence is not None:\n            print(f'{bcolors.HEADER} Programming upload_log_files_cadence {upload_log_files_cadence}{bcolors.ENDC}')\n            self.upload_logs_files_cmd = f'{upload_log_files_cadence} /opt/unskript/bin/python /usr/local/bin/unskript_ctl_upload_session_logs.py'\n        ovpn_file = config.get('ovpn_file', None)\n        if ovpn_file is None:\n            print(f'{bcolors.FAIL}Please specify the ovpn file location{bcolors.ENDC}')\n            return\n        tunnel_up_cadence = config.get('tunnel_up_cadence', None)\n        tunnel_down_cadence = config.get('tunnel_down_cadence', None)\n        # Check that both of them are present.\n        if (tunnel_up_cadence is None and tunnel_down_cadence is not None) or (tunnel_up_cadence is not None and tunnel_down_cadence is None):\n            print(f'{bcolors.FAIL} Please ensure both tunnel_up_cadence and tunnel_down_cadence are configured{bcolors.ENDC}')\n            return\n        if tunnel_up_cadence is not None:\n            print(f'{bcolors.HEADER} Programming tunnel_up_cadence {tunnel_up_cadence}{bcolors.ENDC}')\n            self.tunnel_up_cmd = f'{tunnel_up_cadence} /usr/local/bin/unskript-ctl.sh debug --start {ovpn_file}'\n        if tunnel_down_cadence is not None:\n            print(f'{bcolors.HEADER} Programming tunnel_down_cadence {tunnel_down_cadence}{bcolors.ENDC}')\n            self.tunnel_down_cmd = f'{tunnel_down_cadence} /usr/local/bin/unskript-ctl.sh debug --stop'\n\ndef main():\n    \"\"\"main: This is the main function that gets called by the start.sh script\n    to parse the unskript_ctl_config.yaml file and program credential and schedule as configured\n    \"\"\"\n    config_parser = ConfigParser(UNSKRIPT_CTL_CONFIG_FILE)\n    config_parser.parse_config_yaml()\n\n    config_parser.parse_global()\n    config_parser.parse_checks()\n    config_parser.configure_credential()\n    config_parser.parse_jobs()\n    config_parser.parse_remote_debugging()\n    config_parser.configure_schedule()\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "unskript-ctl/unskript_ctl_custom_notification.py",
    "content": "\"\"\"\nEnhanced Email Notification System using Microsoft Graph API and HashiCorp Vault\n\"\"\"\n\nimport os\nimport requests\nimport json\nimport base64\nimport logging\nfrom typing import Optional, Dict, Any, Union\nfrom pathlib import Path\nfrom tenacity import retry, stop_after_attempt, wait_exponential\nfrom urllib3.exceptions import InsecureRequestWarning\nfrom dataclasses import dataclass\n\n# Suppress only the single warning from urllib3 needed.\nrequests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)\n\n# Constants\nDEFAULT_EMAIL_TEMPLATE = \"\"\"\n<html>\n    <body>\n        <h1>Hello!</h1>\n        <p>This is a <b>test email</b> sent using <i>Microsoft Graph API</i> with HTML content and an attachment.</p>\n        <p>Have a great day!</p>\n    </body>\n</html>\n\"\"\"\n\nGRAPH_API_BASE_URL = \"https://graph.microsoft.com/v1.0\"\nOAUTH_TOKEN_URL = \"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token\"\n\n@dataclass\nclass VaultConfig:\n    \"\"\"Configuration for HashiCorp Vault\"\"\"\n    addr: str\n    token: str\n    path: str = \"lb-secrets/smtp-server/credentials/smtp\"\n    verify_ssl: bool = True\n\nclass EmailNotificationError(Exception):\n    \"\"\"Base exception for email notification errors\"\"\"\n    pass\n\nclass VaultError(EmailNotificationError):\n    \"\"\"Exception for Vault-related errors\"\"\"\n    pass\n\nclass AuthenticationError(EmailNotificationError):\n    \"\"\"Exception for authentication-related errors\"\"\"\n    pass\n\nclass EmailSendError(EmailNotificationError):\n    \"\"\"Exception for email sending failures\"\"\"\n    pass\n\nclass CustomEmailNotification:\n    \"\"\"\n    A system for sending emails using Microsoft Graph API with Vault integration\n    \"\"\"\n    \n    def __init__(\n        self,\n        vault_config: VaultConfig,\n        logger: Optional[logging.Logger] = None\n    ):\n        \"\"\"\n        Initialize the email notification system.\n        \n      
  Args:\n            vault_config: VaultConfig object containing Vault settings\n            logger: Optional logger instance\n        \"\"\"\n        self.vault_config = vault_config\n        self.logger = logger or logging.getLogger(__name__)\n        self.credentials = self._fetch_vault_secret()\n\n    def _fetch_vault_secret(self) -> Dict[str, Any]:\n        \"\"\"\n        Fetch secrets from HashiCorp Vault.\n        \n        Returns:\n            Dict containing the secret data\n            \n        Raises:\n            VaultError: If secret fetching fails\n        \"\"\"\n        headers = {\"X-Vault-Token\": self.vault_config.token}\n        url = f\"{self.vault_config.addr}/v1/{self.vault_config.path}\"\n\n        try:\n            self.logger.debug(f\"Fetching secret from Vault at path: {self.vault_config.path}\")\n            response = requests.get(\n                url,\n                headers=headers,\n                verify=self.vault_config.verify_ssl,\n                timeout=10\n            )\n            response.raise_for_status()\n            \n            secret_data = response.json().get(\"data\", {}).get(\"value\")\n            if not secret_data:\n                raise VaultError(\"No secret found at the specified path\")\n            \n            return json.loads(secret_data)\n            \n        except requests.exceptions.RequestException as e:\n            raise VaultError(f\"Failed to fetch secret from Vault: {str(e)}\")\n        except json.JSONDecodeError as e:\n            raise VaultError(f\"Failed to parse secret data: {str(e)}\")\n\n    @retry(\n        stop=stop_after_attempt(3),\n        wait=wait_exponential(multiplier=1, min=4, max=10),\n        retry_error_callback=lambda retry_state: None\n    )\n    def _get_access_token(self) -> str:\n        \"\"\"\n        Get OAuth2 token from Microsoft Graph API.\n        \n        Returns:\n            str: Access token\n            \n        Raises:\n            
AuthenticationError: If token acquisition fails\n        \"\"\"\n        try:\n            url = OAUTH_TOKEN_URL.format(\n                tenant_id=self.credentials[\"credentials\"][\"tenantId\"]\n            )\n            \n            data = {\n                \"grant_type\": \"client_credentials\",\n                \"client_id\": self.credentials[\"credentials\"][\"clientId\"],\n                \"client_secret\": self.credentials[\"credentials\"][\"clientSecret\"],\n                \"scope\": self.credentials[\"credentials\"][\"scope\"]\n            }\n            \n            response = requests.post(\n                url,\n                headers={\"Content-Type\": \"application/x-www-form-urlencoded\"},\n                data=data,\n                timeout=10\n            )\n            response.raise_for_status()\n            \n            return response.json()[\"access_token\"]\n            \n        except requests.exceptions.RequestException as e:\n            raise AuthenticationError(f\"Failed to obtain access token: {str(e)}\")\n\n    def _create_attachment(self, file_path: Union[str, Path]) -> Dict[str, str]:\n        \"\"\"\n        Create file attachment payload.\n        \n        Args:\n            file_path: Path to the file to attach\n            \n        Returns:\n            Dict containing the attachment data\n            \n        Raises:\n            ValueError: If file operations fail\n        \"\"\"\n        path = Path(file_path)\n        if not path.exists():\n            raise ValueError(f\"File not found: {file_path}\")\n        if not path.is_file():\n            raise ValueError(f\"Not a file: {file_path}\")\n\n        try:\n            with path.open(\"rb\") as file:\n                file_content = file.read()\n                encoded_content = base64.b64encode(file_content).decode(\"utf-8\")\n                \n                return {\n                    \"@odata.type\": \"#microsoft.graph.fileAttachment\",\n                   
 \"name\": path.name,\n                    \"contentType\": \"application/octet-stream\",\n                    \"contentBytes\": encoded_content\n                }\n        except Exception as e:\n            raise ValueError(f\"Failed to create attachment: {str(e)}\")\n\n    @retry(\n        stop=stop_after_attempt(3),\n        wait=wait_exponential(multiplier=1, min=4, max=10),\n        retry_error_callback=lambda retry_state: False\n    )\n    def send_email(\n        self,\n        recipient_email: str,\n        subject: str,\n        email_content: Optional[str] = None,\n        file_path: Optional[Union[str, Path]] = None\n    ) -> bool:\n        \"\"\"\n        Send email using Microsoft Graph API.\n        \n        Args:\n            recipient_email: Email address of the recipient\n            subject: Email subject\n            email_content: HTML content of the email (optional)\n            file_path: Path to attachment file (optional)\n            \n        Returns:\n            bool: True if email was sent successfully\n            \n        Raises:\n            EmailSendError: If email sending fails\n            ValueError: If input validation fails\n        \"\"\"\n        # Get fresh access token\n        access_token = self._get_access_token()\n\n        email_data = {\n            \"message\": {\n                \"subject\": subject,\n                \"body\": {\n                    \"contentType\": \"HTML\",\n                    \"content\": email_content or DEFAULT_EMAIL_TEMPLATE\n                },\n                \"toRecipients\": [\n                    {\n                        \"emailAddress\": {\n                            \"address\": recipient_email\n                        }\n                    }\n                ]\n            }\n        }\n\n        # Add attachment if provided\n        if file_path:\n            attachment = self._create_attachment(file_path)\n            email_data[\"message\"][\"attachments\"] = [attachment]\n\n 
       url = f\"{GRAPH_API_BASE_URL}/users/{self.credentials['credentials']['smtpSender']}/sendMail\"\n        headers = {\n            \"Authorization\": f\"Bearer {access_token}\",\n            \"Content-Type\": \"application/json\"\n        }\n\n        try:\n            response = requests.post(\n                url,\n                headers=headers,\n                json=email_data,\n                timeout=30\n            )\n            response.raise_for_status()\n            \n            if response.status_code == 202:\n                self.logger.info(\"Email sent successfully!\")\n                return True\n            else:\n                raise EmailSendError(f\"Unexpected status code: {response.status_code}\")\n                \n        except requests.exceptions.RequestException as e:\n            self.logger.error(f\"Failed to send email: {str(e)}\")\n            raise EmailSendError(f\"Email sending failed: {str(e)}\")\n\ndef setup_logger(log_level: int = logging.INFO) -> logging.Logger:\n    \"\"\"\n    Set up a logger with the specified log level.\n    \n    Args:\n        log_level: Logging level (default: logging.INFO)\n        \n    Returns:\n        logging.Logger: Configured logger instance\n    \"\"\"\n    logger = logging.getLogger(\"email_notification_system\")\n    handler = logging.StreamHandler()\n    formatter = logging.Formatter(\n        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'\n    )\n    handler.setFormatter(formatter)\n    logger.addHandler(handler)\n    logger.setLevel(log_level)\n    return logger\n\ndef custom_email_notification_main(_logger,\n         email_subject,\n         email_content,\n         email_recipient,\n         file_path = None):\n    \"\"\"Main entry point for the email notification system.\"\"\"\n    retval = False\n    # Get environment variables\n    vault_addr = os.getenv(\"VAULT_ADDR\")\n    vault_token = os.getenv(\"VAULT_TOKEN\")\n    \n    # Setup logging\n    if _logger:\n        
logger = _logger\n    else:\n        logger = setup_logger()\n\n    if not vault_addr or not vault_token:\n        logger.error(\n            \"VAULT_ADDR and VAULT_TOKEN environment variables must be set.\"\n        )\n        return retval\n\n    # Initialize Vault configuration\n    vault_config = VaultConfig(\n        addr=vault_addr,\n        token=vault_token,\n        verify_ssl=False\n    )\n\n    try:\n        # Initialize the email notification system\n        email_system = CustomEmailNotification(vault_config, logger)\n\n        # Send the notification email\n        success = email_system.send_email(\n            recipient_email=email_recipient,\n            subject=email_subject,\n            email_content=email_content,\n            file_path=file_path\n        )\n\n        if success:\n            logger.info(\"Email notification sent successfully\")\n            retval = True\n        else:\n            logger.error(\"Failed to send email notification\")\n\n    except EmailNotificationError as e:\n        logger.error(f\"Email notification error: {str(e)}\")\n    except Exception as e:\n        logger.error(f\"Unexpected error: {str(e)}\")\n\n    return retval"
  },
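  The email module above posts to Microsoft Graph's `/users/{sender}/sendMail` endpoint and treats HTTP 202 as success. The sketch below shows the shape of the JSON body that endpoint expects; the helper name, recipient address, and defaults are illustrative, not part of the original module:

  ```python
  # Minimal sketch: assemble the request body for POST /users/{sender}/sendMail.
  # Function name and sample values are hypothetical; the payload shape follows
  # the Microsoft Graph sendMail request format.
  def build_graph_email_payload(recipient_email: str, subject: str,
                                html_content: str, save_to_sent: bool = True) -> dict:
      """Build the JSON body for a Microsoft Graph sendMail request."""
      return {
          "message": {
              "subject": subject,
              # contentType can be "HTML" or "Text"
              "body": {"contentType": "HTML", "content": html_content},
              "toRecipients": [{"emailAddress": {"address": recipient_email}}],
          },
          "saveToSentItems": save_to_sent,
      }

  # Example payload, posted by the module as `json=email_data` with a Bearer token.
  payload = build_graph_email_payload("ops@example.com", "Nightly checks", "<p>done</p>")
  ```

  Graph accepts the message asynchronously (202 with an empty body), which is why the module above checks the status code rather than parsing a response payload.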
  {
    "path": "unskript-ctl/unskript_ctl_database.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport os \nimport re\nimport ZODB\nimport zlib\nimport sqlite3\nimport json\nimport ZODB.FileStorage\n\nfrom unskript_ctl_factory import DatabaseFactory, UnskriptFactory\nfrom unskript_ctl_version import *\n\nfrom ZODB import DB\n\n# Class ZoDBInterface is a child class of DatabaseFactory\n# This class implements CRUD operation for ZODB. This class\n# Will be used for both Codesnippet as well as PSS that is\n# used by unskript-ctl \nclass ZoDBInterface(DatabaseFactory):\n    def __init__(self, **kwargs):\n        \"\"\"Constructor: Initializes class specific variables\"\"\"\n        super().__init__()\n        self.db_name = 'unskript_pss.db'\n        self.db_dir = '/unskript/db'\n        self.collection_name = 'audit_trail'\n        if 'db_name' in kwargs:\n            self.db_name = kwargs.get('db_name')\n        if 'db_dir' in kwargs:\n            self.db_dir = kwargs.get('db_dir')\n        if 'collection_name' in kwargs:\n            self.collection_name = kwargs['collection_name']\n\n        \n        self.db = self.create()\n\n    def create(self, **kwargs):\n        \"\"\"Create option of the CRUD\"\"\"\n        if 'db_name' in kwargs:\n            self.db_name = kwargs.get('db_name')\n        if 'db_dir' in kwargs:\n            self.db_dir = kwargs.get('db_dir')\n        if 'collection_name' in kwargs:\n            self.collection_name = kwargs['collection_name']\n\n        self.logger.debug(f'Checking if DB {self.db_name} exists')\n        if not os.path.exists(self.db_dir):\n            os.makedirs(self.db_dir, exist_ok=True)\n\n        if not os.path.exists(os.path.join(self.db_dir, self.db_name)):\n            pss = 
ZODB.FileStorage.FileStorage(os.path.join(self.db_dir, self.db_name), pack_keep_old=False)\n            db = DB(pss)\n            self.logger.debug(f'Creating DB {self.db_name}')\n            with db.transaction() as connection:\n                root = connection.root()\n                root[self.collection_name] = {} \n                root['schema_version'] = SCHEMA_VERSION\n                connection.transaction_manager.commit()\n                connection.close() \n                del root \n                del connection \n        else:\n            self.logger.debug(f'DB {self.db_name} Exists!')\n            db = DB(os.path.join(self.db_dir, self.db_name))\n        self.db = db \n        \n        return self.db \n\n    def read(self, **kwargs):\n        \"\"\"READ option of the CRUD\"\"\"\n        data = None\n        if not self.db:\n            self.logger.error(f\"DB {self.db_name} Not initialized or does not exist\")\n            return  \n        if 'collection_name' in kwargs:\n            self.collection_name = kwargs['collection_name']\n\n        with self.db.transaction() as connection:\n            root = connection.root()\n            data = root.get(self.collection_name)\n            if data is None:\n                # if data does not exist, lets create it\n                root[self.collection_name] = {}\n            connection.transaction_manager.commit()\n            connection.close()\n            del root\n            del connection \n        \n        return data \n\n    def update(self, **kwargs):\n        \"\"\"UPDATE option of CRUD\"\"\"\n        data = None\n        if not self.db:\n            self.logger.error(f\"DB {self.db_name} Not initialized or does not exist\")\n            return False\n        if 'collection_name' in kwargs:\n            self.collection_name = kwargs.get('collection_name')\n        if 'data' in kwargs:\n            data = kwargs.get('data')\n        try:\n            with self.db.transaction() as connection:\n  
              root = connection.root()\n                old_data = root[self.collection_name]\n                old_data.update(data)\n                root[self.collection_name] = old_data\n                connection.transaction_manager.commit()\n                connection.close()\n                del root\n                del connection\n        except Exception as e:\n            self.logger.error(f\"ERROR: Failed to update record in DB. {str(e)}\")\n            return False\n\n        return True\n\n\n    def delete(self, **kwargs):\n        \"\"\"DELETE option of CRUD\"\"\"\n        if 'db_name' in kwargs:\n            self.db_name = kwargs.get('db_name')\n        if 'db_dir' in kwargs:\n            self.db_dir = kwargs.get('db_dir')\n\n        if os.path.exists(os.path.join(self.db_dir, self.db_name)):\n            try:\n                os.remove(os.path.join(self.db_dir, self.db_name))\n                self.logger.debug(f'Deleted DB {self.db_name}')\n            except Exception as e:\n                self.logger.error(f'Deletion of DB {self.db_name} failed. {e}')\n                return False\n        return True\n\n\n# SQLInterface implements the same CRUD methods as ZoDBInterface.\n# It will be used if and when we decide to move from ZODB to SQL.
\nclass SQLInterface(DatabaseFactory):\n    def __init__(self, **kwargs):\n        \"\"\"Constructor: This sets some class specific variables\"\"\"\n        self.db_name = 'unskript_pss.db'\n        self.db_dir = '/unskript/db'\n        self.table_name = 'AUDIT_TRAIL'\n        self.db = None\n        if 'db_name' in kwargs:\n            self.db_name = kwargs.get('db_name')\n        if 'db_dir' in kwargs:\n            self.db_dir = kwargs.get('db_dir')\n        if 'table_name' in kwargs:\n            self.table_name = kwargs.get('table_name')\n\n\n        if not os.path.exists(self.db_dir):\n            os.makedirs(self.db_dir, exist_ok=True)\n        self.conn = sqlite3.connect(os.path.join(self.db_dir, self.db_name))\n        self.cursor = self.conn.cursor()\n        self.schema = self._read_schema(os.path.join(os.path.dirname(__file__), 'unskript_db_schema.json'))\n        self.create_table()\n\n    def _read_schema(self, schema_file):\n        with open(schema_file, 'r') as file:\n            return json.load(file)\n\n    def create_table(self):\n        \"\"\"create_table if it does not exist\"\"\"\n        # Create a table based on the schema read from the file\n        columns = ', '.join(f\"{col} {self.schema['properties'][col]['type']}\" for col in self.schema['properties'])\n        self.cursor.execute(f'''\n            CREATE TABLE IF NOT EXISTS {self.table_name} (\n                {columns}\n            )\n        ''')\n        self.conn.commit()\n\n    def create(self, execution_data):\n        \"\"\"CREATE of CRUD\"\"\"\n        # Create a new execution record\n        columns = ', '.join(self.schema['properties'].keys())\n        placeholders = ', '.join(['?'] * len(self.schema['properties']))\n        values = [execution_data[key] for key in self.schema['properties']]\n        self.cursor.execute(f'''\n            INSERT INTO {self.table_name} ({columns}) VALUES ({placeholders})\n        ''', values)\n        self.conn.commit()\n\n    def read(self, 
filters=None):\n        \"\"\"READ of CRUD\"\"\"\n        # Read data with optional filters\n        if filters is None:\n            # If no filters provided, fetch all data\n            self.cursor.execute(f'''\n                SELECT * FROM {self.table_name}\n            ''')\n        else:\n            # Construct the WHERE clause based on the filters\n            filter_conditions = ' AND '.join(f\"{key} = ?\" for key in filters)\n            filter_values = tuple(filters.values())\n            self.cursor.execute(f'''\n                SELECT * FROM {self.table_name} WHERE {filter_conditions}\n            ''', filter_values)\n\n        data = self.cursor.fetchall()\n        if data:\n            result = []\n            for row in data:\n                result.append(dict(zip(self.schema['properties'], row)))\n            return result\n        return None\n\n    def update(self, new_data=None, filters=None):\n        \"\"\"UPDATE of CRUD\"\"\"\n        # Update rows based on optional filters and new data\n        if new_data is None or filters is None:\n            # If no new_data or filters provided, do not perform update\n            return False\n\n        # Construct SET clause for new data\n        set_values = ', '.join(f\"{key} = ?\" for key in new_data)\n        set_params = tuple(new_data.values())\n\n        # Construct the WHERE clause based on the filters\n        filter_conditions = ' AND '.join(f\"{key} = ?\" for key in filters)\n        filter_values = tuple(filters.values())\n\n        self.cursor.execute(f'''\n            UPDATE {self.table_name} SET {set_values} WHERE {filter_conditions}\n        ''', (*set_params, *filter_values))\n\n        self.conn.commit()\n        return True\n\n    def delete(self, filters=None):\n        \"\"\"DELETE of CRUD\"\"\"\n        # Delete rows based on optional filters\n        if filters is None:\n            # If no filters provided, do not perform deletion\n            return False\n\n        # 
Construct the WHERE clause based on the filters\n        filter_conditions = ' AND '.join(f\"{key} = ?\" for key in filters)\n        filter_values = tuple(filters.values())\n\n        self.cursor.execute(f'''\n            DELETE FROM {self.table_name} WHERE {filter_conditions}\n        ''', filter_values)\n\n        self.conn.commit()\n        return True \n\n    def close_connection(self):\n        \"\"\"Utility function that closes the connection\"\"\"\n        # Close the database connection\n        self.conn.close()\n\n# SnippetsDB Interface\n# This class implements CodeSnippets methods that are used\n# to query Codesnippets database and return the checks that\n# are stored as python dictionary in the ZoDB database. \n# The Code snippets are saved with the dictionary key `unskript_cs`\nclass CodeSnippets(ZoDBInterface):\n    def __init__(self, **kwargs):\n        \"\"\" This Constructor initializes the Snippets DB and reads existing snippets to a local variable\"\"\"\n        self.db_dir = '/var/unskript'\n        self.db_name = 'snippets.db'\n        self.collection_name = 'unskript_cs'\n        \n        if 'db_dir' in kwargs:\n            self.db_dir = kwargs.get('db_dir')\n        if 'db_name' in kwargs:\n            self.db_name = kwargs.get('db_name')\n        if 'collection_name' in kwargs:\n            self.collection_name = kwargs.get('collection_name')\n        \n        super().__init__(db_dir=self.db_dir,\n                         db_name=self.db_name,\n                         collection_name=self.collection_name)\n        \n        self.snippets = self.read() or []\n    \n    def get_checks_by_uuid(self, check_uuid_list: list):\n        \"\"\"Given a list of UUID, this method queries self.snippets and return the checks that match the uuid\"\"\"\n        return [snippet for snippet in self.snippets\n                if snippet.get('metadata') and\n                snippet.get('metadata').get('uuid') in check_uuid_list]\n\n\n    def 
get_checks_by_connector(self, connector_names: list, full_snippet: bool = False):\n        \"\"\"Given a list of connectors, this method returns all checks for the given connectors\"\"\"\n        filtered_snippets = []\n        if not isinstance(connector_names, list):\n            connector_names = [connector_names]\n\n        for snippet in self.snippets:\n            metadata = snippet.get('metadata')\n            if metadata and metadata.get('action_is_check'):\n                connector = metadata.get('action_type')\n                connector = connector.split('_')[-1].lower()\n                if any(name.lower() == 'all' or re.match(name.lower(), connector) for name in connector_names):\n                    if full_snippet:\n                        filtered_snippets.append(snippet)\n                    else:\n                        filtered_snippets.append([\n                            connector.capitalize(),\n                            snippet.get('name'),\n                            metadata.get('action_entry_function')\n                        ])\n        return filtered_snippets\n    \n    def get_all_check_names(self):\n        \"\"\"Gets all checks available in the snippets db (from self.snippets)\"\"\"\n        return [snippet.get('metadata').get('action_entry_function') for snippet in self.snippets\n                if snippet.get('metadata') and snippet.get('metadata').get('action_is_check')]\n\n    def get_check_by_name(self, check_name: str):\n        \"\"\"Given the main function name, this routine returns the Check that matches the name\"\"\"\n        return [snippet for snippet in self.snippets\n                if snippet.get('metadata') and\n                snippet.get('metadata').get('action_is_check') and\n                snippet.get('metadata').get('action_entry_function') == check_name]\n    \n    def get_info_actions(self):\n        \"\"\"This routine returns the actions that has CATEGORY_TYPE_INFORMATION in action_category\"\"\"\n      
  return [snippet for snippet in self.snippets\n                if snippet.get('metadata') and\n                snippet.get('metadata').get('action_categories') and\n                'CATEGORY_TYPE_INFORMATION' in snippet.get('metadata').get('action_categories')]\n    \n    def get_info_action_by_name(self, action_name: str):\n        \"\"\"Given the action name, this routine returns the information action that matches the name\"\"\"\n        if not action_name:\n            return []\n\n        snippets = self.get_info_actions()\n        retVal = []\n        for snippet in snippets:\n            if snippet.get('metadata').get('action_entry_function').strip().lower() == action_name.strip().lower():\n                retVal = [snippet]\n                break\n    \n        return retVal \n    \n    def get_info_action_by_connector(self, connector_list: list):\n        \"\"\"Given the connectors, this routine returns the information actions that matches the connectors\"\"\"\n        if not connector_list:\n            return []\n        retVal = []\n        c_snippets = self.get_info_actions()\n        for connector in connector_list:\n            for c_snippet in c_snippets:\n                if connector.lower() == 'all':\n                    _connector = c_snippet.get('metadata').get('action_type').split('_')[-1]\n                else: \n                    _connector = connector\n                    if connector.upper() != c_snippet.get('metadata').get('action_type').split('_')[-1]:\n                        continue \n                retVal.append([\n                            _connector.capitalize(),\n                            c_snippet.get('name'),\n                            c_snippet.get('metadata').get('action_entry_function')\n                        ])\n        \n        return retVal \n        \n    def get_action_name_from_id(self, action_uuid: str):\n        \"\"\"Given a uuid, this method returns the Name of the action\"\"\"\n        matches = 
[snippet for snippet in self.snippets if snippet.get('metadata') and snippet.get('metadata').get('uuid') == action_uuid]\n        return matches[0] if matches else None\n\n    def get_connector_name_from_id(self, action_uuid: str):\n        \"\"\"Given an Action UUID, this method returns the connector type of the matching action\"\"\"\n        matches = [\n            snippet.get('metadata').get('action_type').replace('LEGO_TYPE_', '').lower()\n            for snippet in self.snippets\n            if snippet.get('metadata') and snippet.get('metadata').get('uuid') == action_uuid\n        ]\n        return matches[0] if matches else None\n\n# PSS Interface\n# This class implements a wrapper around ZoDBInterface as PSS, which is used\n# to update the audit trail.\nclass PSS(ZoDBInterface):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n\n\n# DBInterface\n# This is the external interface that implements the database interface.\n# It instantiates the PSS and CodeSnippets databases.\n# If we decide to move to SQL, all we need to do is implement\n# both PSS and CodeSnippets on top of SQLInterface.\nclass DBInterface(UnskriptFactory):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        # PSS Interface\n        self.pss = PSS(db_name='unskript_pss.db',\n                       db_dir='/unskript/db',\n                       collection_name='audit_trail')\n\n        # CodeSnippet Interface\n        self.cs = CodeSnippets(db_name='snippets.db',\n                               db_dir='/var/unskript',\n                               collection_name='unskript_cs')\n        if not self.pss or not self.cs:\n            self.logger.error(\"Unable to initialize CS and PSS databases! Check the log file\")\n            return\n\n        self.logger.debug(\"Initialized DBInterface\")"
  },
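  The `SQLInterface` above derives its `CREATE TABLE` column list and its `INSERT` placeholders from the `properties` keys of a JSON schema file, and `read()` assembles a parameterized `WHERE` clause from a filter dict. The following is a self-contained sketch of that pattern against an in-memory SQLite database; the schema, table contents, and values are illustrative, not taken from the real `unskript_db_schema.json`:

  ```python
  import sqlite3

  # Illustrative stand-in for the schema JSON the real class loads from disk.
  schema = {"properties": {"check_name": {"type": "TEXT"}, "status": {"type": "TEXT"}}}

  conn = sqlite3.connect(":memory:")
  cur = conn.cursor()

  # CREATE TABLE: column names and SQL types both come from the schema dict.
  cols = ", ".join(f"{c} {schema['properties'][c]['type']}" for c in schema["properties"])
  cur.execute(f"CREATE TABLE IF NOT EXISTS AUDIT_TRAIL ({cols})")

  # INSERT: positional '?' placeholders generated from the same schema keys,
  # so the statement stays in sync with the table definition.
  keys = list(schema["properties"])
  cur.execute(
      f"INSERT INTO AUDIT_TRAIL ({', '.join(keys)}) VALUES ({', '.join('?' * len(keys))})",
      ["ping_check", "PASS"],
  )

  # READ with filters: WHERE clause built from the filter dict's keys, values
  # passed separately as parameters (no string interpolation of user data).
  filters = {"status": "PASS"}
  conds = " AND ".join(f"{k} = ?" for k in filters)
  cur.execute(f"SELECT * FROM AUDIT_TRAIL WHERE {conds}", tuple(filters.values()))
  rows = [dict(zip(keys, row)) for row in cur.fetchall()]
  conn.close()
  ```

  Driving everything off one schema dict means adding a column is a one-line schema change; note, though, that only the *values* are parameterized, so column names must come from the trusted schema, never from user input.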
  {
    "path": "unskript-ctl/unskript_ctl_factory.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport os\nimport yaml\nimport logging\nimport json\nimport glob\n\nfrom abc import ABC, abstractmethod\nfrom datetime import datetime\nfrom unskript_utils import *\ntry:\n     from envyaml import EnvYAML\nexcept Exception as e:\n     print(\"ERROR: Unable to find required yaml package to parse the config file\")\n     raise e\n\n\n# This is a custom logger class to implement the following logic\n# Any logger.info(...) & logger.error(...) message should be shown on the console\n# Any logger.debug(...),  logger.warning(...)\n# Message should be dumped to a log file that can be used to debug\n# any issue.\nclass UctlLogger(logging.Logger):\n    def __init__(self, name, level=logging.NOTSET):\n        super().__init__(name, level)\n\n        if not self.handlers:\n            # Create a custom formatter\n            formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n\n            # Create a console handler for INFO level\n            console_handler = logging.StreamHandler()\n            console_handler.setLevel(logging.INFO)\n            console_handler.setFormatter(formatter)\n            self.addHandler(console_handler)\n\n            # Create File handler to dump all other level\n            self.log_file_name = os.path.join(os.path.expanduser('~'), 'unskript_ctl.log')\n            file_handler = logging.FileHandler(self.log_file_name)\n            file_handler.setLevel(logging.DEBUG)\n            file_handler.setFormatter(formatter)\n            self.addHandler(file_handler)\n\n            # Set Default logger level\n            self.setLevel(logging.DEBUG)\n            self.propagate = False\n\n    def info(self, msg, *args, 
**kwargs):\n        # Pass up the Info message to show the log to console\n        super().info(msg, *args, **kwargs)\n\n    def debug(self, msg, *args, **kwargs):\n        # Dump to logfile\n        self.dump_to_file(msg)\n\n    def warning(self, msg, *args, **kwargs):\n        # Warning to logfile\n        self.dump_to_file(msg)\n\n    def error(self, msg, *args, **kwargs):\n        # Error to logfile\n        self.dump_to_file(msg)\n        super().info(msg, *args, **kwargs)\n\n    def critical(self, msg, *args, **kwargs):\n        # Critical msg to logfile and to console\n        self.dump_to_file(msg)\n        super().info(msg, *args, **kwargs)\n\n    def dump_to_file(self, msg):\n        timestamp = datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\n        with open(self.log_file_name, 'a') as f:\n            f.write(timestamp + ' : ' + str(msg) + '\\n')\n\n# This is the Base class, Abstract class that shall be used by all the other\n# classes that are implemented. This class is implemented as a Singleton class\n# which means, the Child that inherits this class, will have a single copy in\n# memory. This saves Memory footprint! This class also implements a Logger\n# that is being used by individual child class. 
This class generates\n# unskript_ctl.log in the home directory of the user that runs unskript-ctl.\nclass UnskriptFactory(ABC):\n    _instance = None\n    log_file_name = os.path.join(os.path.expanduser('~'), 'unskript_ctl.log')\n\n    def __new__(cls, *args, **kwargs):\n        if not cls._instance:\n            cls._instance = super().__new__(cls)\n            if not os.path.exists(os.path.dirname(cls.log_file_name)):\n                os.makedirs(os.path.dirname(cls.log_file_name))\n            cls._instance.logger = cls._configure_logger()\n        return cls._instance\n\n\n    @staticmethod\n    def _configure_logger():\n        logger = UctlLogger('UnskriptCtlLogger')\n        return logger\n\n    def __init__(self):\n        self.uglobals = UnskriptGlobals()\n        self.update_credentials_to_uglobal()\n\n    def update_credentials_to_uglobal(self):\n        mapping = {}\n        home = os.path.expanduser('~')\n        creds_json_files = []\n        for dirpath, dirnames, filenames in os.walk(home):\n            if 'credential-save' in dirnames:\n                pattern = os.path.join(dirpath, 'credential-save') + '/*.json'\n                creds_json_files.extend(glob.glob(pattern, recursive=True))\n                break\n        self.creds_json_files = creds_json_files\n        c_data = {}\n       
 for creds_json_file in creds_json_files:\n\n            if is_creds_json_file_valid(creds_file=creds_json_file) is False:\n                raise ValueError(f\"Given Credential file {creds_json_file} is corrupt!\")\n\n            with open(creds_json_file, 'r', encoding='utf-8') as f:\n                try:\n                    c_data = json.load(f)\n                except Exception as e:\n                    # If creds file is corrupt, raise exception and bail out\n                    self.logger.error(f\"Exception occurred while parsing credential file {creds_json_file}: {str(e)}\")\n                    raise ValueError(e)\n            # Skip credentials that have no connector data configured\n            if c_data.get('metadata').get('connectorData') == '{}':\n                continue\n            mapping[c_data.get('metadata').get('type')] = {\"name\": c_data.get('metadata').get('name'),\n                                                           \"id\": c_data.get('id')}\n        self.uglobals['default_credentials'] = mapping\n\n    def _banner(self, msg: str):\n        print('\\033[4m\\x1B[1;20;42m' + msg + '\\x1B[0m\\033[0m')\n\n\n    def _error(self, msg: str):\n        print('\\x1B[1;20;41m' + msg + '\\x1B[0m')\n\n\n# This class implements an Abstract class for All Checks\nclass ChecksFactory(UnskriptFactory):\n    def __init__(self):\n        super().__init__()\n        self.logger.debug(f'{self.__class__.__name__} instance initialized')\n        self._config = ConfigParserFactory()\n\n    def run(self, **kwargs):\n        pass\n\n# This class implements an Abstract class for Executing Scripts\nclass ScriptsFactory(UnskriptFactory):\n    def __init__(self):\n        super().__init__()\n        self.logger.debug(f'{self.__class__.__name__} instance initialized')\n        self._config = ConfigParserFactory()\n\n    def run(self, *args, **kwargs):\n        pass\n\n# This class implements an Abstract class for Notification that is used by Slack and 
Email\nclass NotificationFactory(UnskriptFactory):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.logger.debug(f'{self.__class__.__name__} instance initialized')\n        self._config = ConfigParserFactory()\n        pass\n\n    def notify(self, **kwargs):\n        pass\n\n# This class implements the Database abstract class that is implemented by ZoDB and SQL\nclass DatabaseFactory(UnskriptFactory):\n    def __init__(self):\n        super().__init__()\n        self.logger.debug(f'{self.__class__.__name__} instance initialized')\n        pass\n\n    @abstractmethod\n    def create(self, **kwargs):\n        pass\n\n    @abstractmethod\n    def read(self, **kwargs):\n        pass\n\n    @abstractmethod\n    def update(self, **kwargs):\n        pass\n\n    @abstractmethod\n    def delete(self, **kwargs):\n        pass\n\n\n# This class implements the Config parser that is being used by the UnskriptFactory\n# This class looks the the unskript_ctl_config.yaml in known directories, parses it\n# and saves it as a local class specific variable.\nclass ConfigParserFactory(UnskriptFactory):\n    CONFIG_FILE_NAME = \"unskript_ctl_config.yaml\"\n    DEFAULT_DIRS = [\"/etc/unskript\", \"/opt/unskript\", \"/tmp\", \"./config\", \"./\"]\n\n    def __init__(self):\n        super().__init__()\n        self.logger.debug(f'{self.__class__.__name__} instance initialized')\n        self.yaml_content = self.load_config_file()\n        if not self.yaml_content:\n            raise FileNotFoundError(f\"{self.CONFIG_FILE_NAME} not found or empty!\")\n\n    def load_config_file(self):\n        for directory in self.DEFAULT_DIRS:\n            conf_file = os.path.join(directory, self.CONFIG_FILE_NAME)\n            if os.path.exists(conf_file):\n                yaml_content = EnvYAML(conf_file, strict=False)\n                return yaml_content\n        return {}  # Return an empty dictionary if file not found or empty\n\n    def _get(self, key, 
sub_key=None):\n        if self.yaml_content:\n            value = self.yaml_content.get(key)\n            if sub_key:\n                value = value.get(sub_key) if value else None\n            return value if value is not None else {}\n        return {}\n\n    def get_schedule(self):\n        return self._get('scheduler')[0]\n\n    def get_jobs(self):\n        return self._get('jobs')[0]\n\n    def get_checks(self):\n        return self.get_jobs().get('checks')\n\n\n    def get_notification(self):\n        return self._get('notification')\n\n    def get_credentials(self):\n        # FIXME: Not implemented\n        pass\n\n    def get_global(self):\n        return self._get('global')\n\n    def get_checks_params(self):\n        return self._get('checks', 'arguments')\n\n    def get_info_action_params(self):\n        return self._get('info', 'arguments')\n\n    def get_info(self):\n        return self.get_jobs().get('info',{})\n\n    def get_email_fmt(self):\n        notification_config = self.get_notification()\n        email_config = notification_config.get('Email', {})\n        email_fmt = email_config.get('email_fmt', {})\n        return email_fmt\n\n    def get_checks_section(self):\n        # Get the checks_section from email_fmt\n        checks_section = self.get_email_fmt().get('checks_section', {})\n        return checks_section.get('priority', {})\n\n    def get_info_section(self):\n        # Get the info_section from email_fmt\n        return self.get_email_fmt().get('info_section', [])\n\n    def get_checks_priority(self)->dict:\n        # This function reads the priority part of the config and converts it\n        # into a dict with check_name as the key and priority as the value.\n        # If the check is not found in the dict, its assigned priority P2.\n\n        email_fmt = self.get_email_fmt()\n        checks_section = email_fmt.get('checks_section', {})\n        priority_config = checks_section.get('priority', {})\n\n        checks_priority = 
{}\n\n        # Check if the 'priority' configuration is properly set; if not, return None\n        if not priority_config:\n            return None\n\n        # Explicitly fetch and map checks for each priority level using the constants\n        for priority_level in [CHECK_PRIORITY_P0, CHECK_PRIORITY_P1, CHECK_PRIORITY_P2]:\n            priority_checks = priority_config.get(priority_level, [])\n            if priority_checks:\n                for check_name in priority_checks:\n                    checks_priority[check_name] = priority_level\n\n        return checks_priority\n\n"
  },
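  `UnskriptFactory` above caches its instance inside `__new__`, so repeated constructor calls return the same object and share one logger, which is how the file keeps a single in-memory copy per class. A stripped-down, runnable sketch of that singleton pattern (class and attribute names here are illustrative, not from the module):

  ```python
  # Minimal singleton via __new__: the first call builds the instance and does
  # one-time setup; every later call returns the cached object unchanged.
  class SingletonBase:
      _instance = None

      def __new__(cls, *args, **kwargs):
          if not cls._instance:
              cls._instance = super().__new__(cls)
              # One-time setup, analogous to attaching the shared logger.
              cls._instance.logger_name = "UnskriptCtlLogger"
          return cls._instance

  a = SingletonBase()
  b = SingletonBase()
  ```

  One caveat of this pattern: because `_instance` is looked up through normal attribute resolution, a subclass instantiated after its base can inherit the base's cached instance, so subclasses that need their own singleton should declare their own `_instance = None`.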
  {
    "path": "unskript-ctl/unskript_ctl_main.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport os\nimport sys\nimport json\nimport psutil\n\nfrom datetime import datetime\nfrom argparse import ArgumentParser, REMAINDER, SUPPRESS\nfrom unskript_ctl_database import *\nfrom unskript_ctl_run import *\nfrom unskript_ctl_notification import *\nfrom unskript_utils import *\nfrom unskript_ctl_factory import *\nfrom unskript_ctl_version import *\nfrom unskript_ctl_upload_session_logs import upload_session_logs\nfrom diagnostics import main as diagnostics\nfrom unskript_upload_results_to_s3 import S3Uploader\n\nYAML_CONFIG_FILE = \"/etc/unskript/unskript_ctl_config.yaml\"\nON_DEMAND_SCRIPT_FOLDER = \"/unskript/ondemand\"\n\n# UnskriptCTL class that instantiates class instance of Checks, Script, Notification and DBInterface\n# This implementation is an example how to use the different components of unskript-ctl into a single\n# class.\nclass UnskriptCtl(UnskriptFactory):\n    def __init__(self, **kwargs):\n        \"\"\"Constructor: This class instantiates notification, checks, script and dbinterface class\"\"\"\n        super().__init__(**kwargs)\n        self.logger.debug(\"Initializing UnskriptCtl\")\n        self.logger.debug(f\"\\tVERSION: {VERSION} \\n\")\n        self.logger.debug(f\"\\tAUTHORS: {AUTHOR} \\n\")\n        self.logger.debug(f\"\\tBUILD_NUMBER: {get_version()} \\n\")\n        self._config = ConfigParserFactory()\n        self._notification = Notification()\n        self.uglobals = UnskriptGlobals()\n        self._check = Checks()\n        self._script = Script()\n        self._checks_priority = self._config.get_checks_priority()\n\n\n        self._db = DBInterface()\n        # Create execution directory so all results\n        # gets logged there\n   
     create_execution_run_directory()\n\n    def create_creds(self, args):\n        \"\"\"This method can be used to create a credential\"\"\"\n        try:\n            connector_type, connector_data_file = args.create_credential\n        except (TypeError, ValueError):\n            self.logger.error(\"Insufficient arguments given to create credential.\")\n            self.logger.error(\"Use: --create-credential --type /path/to/file\")\n            self.logger.error(\"Example:  --create-credential --k8s /tmp/kubeconfig.yaml\")\n            self._error(\"Usage: --create-credential --type /path/to/file\")\n            self.display_creds_ui()\n            return\n\n        connector_type = connector_type.replace('-', '')\n\n        if connector_type in (\"k8s\", \"kubernetes\"):\n            with open(connector_data_file, 'r', encoding='utf-8') as f:\n                creds_data = f.read()\n            homedir = os.path.expanduser('~')\n            k8s_creds_file = os.path.join(homedir + CREDENTIAL_DIR + '/k8screds.json')\n            with open(k8s_creds_file, 'r', encoding='utf-8') as f:\n                k8s_creds_content = json.loads(f.read())\n            try:\n                k8s_creds_content['metadata']['connectorData'] = json.dumps({\"kubeconfig\": creds_data})\n                with open(k8s_creds_file, 'w', encoding='utf-8') as f:\n                    f.write(json.dumps(k8s_creds_content, indent=2))\n            except Exception:\n                self.logger.error(\"Not able to write k8s creds data to k8screds.json, check permission\")\n                self._error(\"Not able to write k8s creds data to k8screds.json\")\n                sys.exit(1)\n            else:\n                print(\"Successfully Created K8S Credential\")\n        else:\n            self.display_creds_ui()\n\n    def display_creds_ui(self):\n        \"\"\"Wrapper for creds_ui to display npyscreen dialogs\"\"\"\n        try:\n            from creds_ui import main as ui\n            ui()\n        except Exception:\n            
self.logger.error(\"Required Python library creds_ui is not packaged\")\n            self._error(\"Required Python library creds_ui is not packaged\")\n\n    def save_check_names(self, args):\n        \"\"\"This method is called by the bash completion script to save all available check names to a file\"\"\"\n        if args.save_check_names:\n            filename = args.save_check_names\n        else:\n            filename = '/tmp/checknames.txt'\n        list_of_names = self._db.cs.get_all_check_names()\n        with open(filename, 'w', encoding='utf-8') as f:\n            for name in list_of_names:\n                f.write(name + '\\n')\n        self.logger.info(f\"Saved {len(list_of_names)} Check Names!\")\n\n    def run_main(self, **kwargs):\n        \"\"\"Main function to handle all options under the run command\"\"\"\n        args = parser = None\n        if 'args' in kwargs:\n            args = kwargs.get('args')\n        if 'parser' in kwargs:\n            parser = kwargs.get('parser')\n\n        if not args or not parser:\n            self.logger.error(\"ARGS and/or Parser sent to run_main is None!\")\n            self._error(\"ARGS and/or Parser sent to run_main is None\")\n            sys.exit(1)\n        status_of_run = []\n        if args.check_command == 'check':\n            if args.name is not None:\n                check_list = self._db.cs.get_check_by_name(check_name=str(args.name))\n            elif args.type is not None:\n                all_connectors = args.type\n                if not isinstance(all_connectors, list):\n                    all_connectors = [all_connectors]\n                if len(all_connectors) == 1 and ',' in all_connectors[0]:\n                    all_connectors = all_connectors[0].split(',')\n                all_connectors = [connector.replace(',', '') for connector in all_connectors]\n                temp_list = self._db.cs.get_checks_by_connector(all_connectors, True)\n                check_list = []\n           
     for t in temp_list:\n                    if t not in check_list:\n                        check_list.append(t)\n            elif args.all is not False:\n                check_list = self._db.cs.get_checks_by_connector(\"all\", True)\n            else:\n                parser.print_help()\n                sys.exit(0)\n\n            status_of_run = self._check.run(checks_list=check_list)\n            self.uglobals['status_of_run'] = status_of_run\n            self.update_audit_trail(collection_name='audit_trail', status_dict_list=status_of_run)\n\n        if 'script' in args and args.command == 'run' and args.script not in ('', None):\n            self._script.run(script=args.script)\n\n        if args.command == 'run' and args.info:\n            self.run_info()\n        \n        if args.command == 'run' and args.check_command == 'check':\n            # call diagnostics\n            if self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY'):\n                output_dir = self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY')\n            else:\n                output_dir = os.path.join(UNSKRIPT_EXECUTION_DIR, self.uglobals.get('exec_id'))           \n            \n            if not os.environ.get('SKIP_DEBUGS'):\n                failed_objects_file = os.path.join(output_dir, self.uglobals.get('exec_id')) + '_output.txt'\n                diag_args = [\n                    '--yaml-file',\n                    YAML_CONFIG_FILE,\n                    '--failed-objects-file',\n                    failed_objects_file,\n                    '--output-dir-path',\n                    output_dir\n                ]\n                diagnostics(diag_args)\n\n                print(\"Uploading run artifacts to S3...\")\n                uploader = S3Uploader()\n                uploader.rename_and_upload_other_items()\n        \n        # Run any on-demand scripts\n        self.run_on_demand_script()\n\n    def run_on_demand_script(self):\n        \"\"\"This function runs scripts 
that are present in ON_DEMAND_SCRIPT_FOLDER folder recursively.\"\"\"\n        if not os.path.exists(ON_DEMAND_SCRIPT_FOLDER):\n            self.logger.error(f\"Folder '{ON_DEMAND_SCRIPT_FOLDER}' does not exist\")\n            return\n\n        # List all files in the folder recursively\n        executable_found = False  # Track if any executables are found\n\n        for root, _, files in os.walk(ON_DEMAND_SCRIPT_FOLDER):\n            for file_name in files:\n                file_path = os.path.join(root, file_name)\n\n                # Check if it is a file and is executable\n                if os.path.isfile(file_path) and os.access(file_path, os.X_OK):\n                    executable_found = True\n                    self.logger.info(f\"Found executable: {file_name} at {file_path}\")\n                    try:\n                        self._script.run(script=file_path,  # Use the full path for execution\n                                        output_file=f\"{file_name}.output\")\n                        self.logger.info(f\"Executed '{file_name}' successfully.\")\n                    except Exception as e:\n                        self.logger.error(f\"Error executing '{file_name}': {e}\")\n                else:\n                    self.logger.debug(f\"'{file_name}' is not executable or not a regular file.\")\n\n        if not executable_found:\n            self.logger.warning(\"No executable scripts found in on-demand folder.\")\n\n\n    def run_info(self):\n        \"\"\"This function runs the info gathering actions\"\"\"\n        # Lets find out if any specific info action mentioned, if mentioned\n        # get the list and run them\n        self._info = InfoAction()\n        snippet_names = self._config.get_info()\n        list_of_snippets = []\n        if not snippet_names:\n            self.logger.error(\"No Information gathering action mentioned in the config file!\")\n            sys.exit(0)\n        else:\n            for snippet_name in 
snippet_names:\n                list_of_snippets.extend(self._db.cs.get_info_action_by_name(snippet_name))\n\n        if not list_of_snippets:\n            self.logger.error(f\"No Actions found for these names: {snippet_names}\")\n            sys.exit(0)\n\n        print(\"\\n\")\n        self._banner(\"Information Gathering Action Results\")\n        self._info.run(action_list=list_of_snippets)\n\n\n    def update_audit_trail(self, collection_name: str, status_dict_list: list):\n        \"\"\"This function updates PSS with the collection name audit-trail\"\"\"\n        trail_data = {}\n        id = ''\n        k = str(datetime.now())\n        p = f = e = 0\n        id = self.uglobals.get('exec_id')\n        if not id:\n            id = str(uuid.uuid4())\n\n        trail_data[id] = {}\n        trail_data[id]['time_stamp'] = k\n        trail_data[id]['runbook'] = id + '_output.txt'\n        trail_data[id]['check_status'] = {}\n        for sd in status_dict_list:\n            if sd == {}:\n                continue\n            if sd.get('result'):\n                for priority in [CHECK_PRIORITY_P0, CHECK_PRIORITY_P1, CHECK_PRIORITY_P2]:\n                    checks_per_priority = sd.get('result').get(priority)\n                    if checks_per_priority is None:\n                        continue\n                    for status in ['PASS', 'FAIL', 'ERROR']:\n                        checks = checks_per_priority.get(status)\n                        if checks is None or len(checks) == 0:\n                            continue\n                        for check in checks:\n                            check_name, check_id, connector = check\n                            if status == 'PASS':\n                                p += 1\n                            elif status == 'FAIL':\n                                f += 1\n                            elif status == 'ERROR':\n                                e += 1\n                            
trail_data[id]['check_status'][check_id] = {}\n                            trail_data[id]['check_status'][check_id]['check_name'] = check_name\n                            trail_data[id]['check_status'][check_id]['status'] = status\n                            trail_data[id]['check_status'][check_id]['connector'] = connector\n                            if self.uglobals.get('failed_result'):\n                                c_name = connector + ':' + check_name\n                                for name, obj in self.uglobals.get('failed_result').items():\n                                    if name in (c_name, check_name):\n                                        trail_data[id]['check_status'][check_id]['failed_objects'] = obj\n\n        trail_data[id]['summary'] = f'Summary (total/p/f/e): {p+e+f}/{p}/{f}/{e}'\n        self._db.pss.update(collection_name=collection_name, data=trail_data)\n        return id\n\n    def list_main(self, **kwargs):\n        \"\"\"This is the Main function to handle all list command\"\"\"\n        s = '\\x1B[1;20;42m' + \"~~~~ CLI Used ~~~~\" + '\\x1B[0m'\n        print(\"\")\n        print(s)\n        print(\"\")\n        print(f\"\\033[1m {sys.argv[0:]} \\033[0m\")\n        print(\"\")\n\n        args = kwargs.get('args')\n        if args.credential:\n            self.list_credentials()\n        elif args.sub_command == 'checks' and args.type:\n            self.list_checks_by_connector(args)\n        elif args.sub_command == 'checks' and args.all:\n            self.list_checks_by_connector(args)\n        elif args.command == 'list' and args.sub_command == 'failed-checks':\n            self.display_failed_checks(args)\n        elif args.command == 'list' and args.sub_command == 'info':\n            self.list_info_action_by_connector(args)\n\n\n    def list_credentials(self):\n        \"\"\"Function to handle displaying state of credentials\"\"\"\n        active_creds = []\n        incomplete_creds = []\n        for cred_file in 
self.creds_json_files:\n            if is_creds_json_file_valid(creds_file=cred_file) is False:\n                raise ValueError(f\"Given Credential file {cred_file} is corrupt!\")\n\n            with open(cred_file, 'r') as f:\n                c_data = json.load(f)\n\n                c_type = c_data.get('metadata').get('type')\n                c_name = c_data.get('metadata').get('name')\n                if c_data.get('metadata').get('connectorData') != \"{}\":\n                    active_creds.append((c_type, c_name))\n                else:\n                    incomplete_creds.append((c_type, c_name))\n        combined = active_creds + incomplete_creds\n        headers = [\"#\", \"Connector Type\", \"Connector Name\", \"Status\"]\n        table_data = [headers]\n\n        for index, (ctype, cname) in enumerate(combined, start=1):\n            status = \"Active\" if index <= len(active_creds) else \"Incomplete\"\n            table_data.append([index, ctype, cname, status])\n\n        print(tabulate(table_data, headers='firstrow', tablefmt='fancy_grid'))\n\n    def list_checks_by_connector(self, args):\n        \"\"\"List checks by connector\"\"\"\n        all_connectors = args.type\n        if not all_connectors:\n            all_connectors = 'all'\n\n        if not isinstance(all_connectors, list):\n            all_connectors = [all_connectors]\n        if len(all_connectors) == 1 and ',' in all_connectors[0]:\n            all_connectors = all_connectors[0].split(',')\n        all_connectors = [connector.replace(',', '') for connector in all_connectors]\n        list_connector_table = [\n            [TBL_HDR_LIST_CHKS_CONNECTOR, TBL_HDR_CHKS_NAME, TBL_HDR_CHKS_FN]]\n        checks_list = self._db.cs.get_checks_by_connector(all_connectors, False)\n        for cl in checks_list:\n            list_connector_table.append(cl)\n        print(\"\")\n        print(tabulate(list_connector_table, headers='firstrow', tablefmt='fancy_grid'))\n        print(\"\")\n\n  
  def list_info_action_by_connector(self, args):\n        \"\"\"List information gathering actions by connector\"\"\"\n        all_connectors = args.type\n        if not all_connectors:\n            all_connectors = 'all'\n\n        if not isinstance(all_connectors, list):\n            all_connectors = [all_connectors]\n        if len(all_connectors) == 1 and ',' in all_connectors[0]:\n            all_connectors = all_connectors[0].split(',')\n        all_connectors = [connector.replace(',', '') for connector in all_connectors]\n        list_connector_table = [\n            [TBL_HDR_LIST_INFO_CONNECTOR, TBL_HDR_INFO_NAME, TBL_HDR_INFO_FN]]\n        action_list = self._db.cs.get_info_action_by_connector(all_connectors)\n        for cl in action_list:\n            list_connector_table.append(cl)\n        print(\"\")\n        print(tabulate(list_connector_table, headers='firstrow', tablefmt='fancy_grid'))\n        print(\"\")\n\n\n    def display_failed_checks(self, args):\n        \"\"\"Display failed checks from the audit_trail\"\"\"\n        if args.all:\n            connector = 'all'\n        elif args.type:\n            connector = args.type\n        else:\n            connector = 'all'\n\n        pss_content = self._db.pss.read(collection_name='audit_trail')\n        failed_checks_table = [[TBL_HDR_DSPL_CHKS_NAME, TBL_HDR_FAILED_OBJECTS, TBL_HDR_DSPL_EXEC_ID]]\n        for exec_id in pss_content.keys():\n            execution_id = exec_id\n            for check_id in pss_content.get(exec_id).get('check_status').keys():\n                if pss_content.get(exec_id).get('check_status').get(check_id).get('status').lower() == \"fail\":\n                    if connector == 'all':\n                        failed_checks_table += [[\n                            pss_content.get(exec_id).get('check_status').get(check_id).get('check_name') + '\\n' + f\"(Test Failed on: {pss_content.get(exec_id).get('time_stamp')})\",\n                            
pprint.pformat(pss_content.get(exec_id).get('check_status').get(check_id).get('failed_objects'), width=10),\n                            execution_id\n                        ]]\n                    elif connector.lower() == pss_content.get(exec_id).get('check_status').get(check_id).get('connector').lower():\n                        failed_checks_table += [[\n                            pss_content.get(exec_id).get('check_status').get(check_id).get('check_name') + '\\n' + f\"(Test Failed on: {pss_content.get(exec_id).get('time_stamp')})\",\n                            pprint.pformat(pss_content.get(exec_id).get('check_status').get(check_id).get('failed_objects'), width=10),\n                            execution_id\n                        ]]\n\n        print(\"\")\n        print(tabulate(failed_checks_table, headers='firstrow', tablefmt='fancy_grid'))\n        print(\"\")\n\n\n    def show_main(self, **kwargs):\n        \"\"\"This function is the Main method to handle all show command\"\"\"\n        s = '\\x1B[1;20;42m' + \"~~~~ CLI Used ~~~~\" + '\\x1B[0m'\n        print(\"\")\n        print(s)\n        print(\"\")\n        print(f\"\\033[1m {sys.argv[0:]} \\033[0m\")\n        print(\"\")\n        args = None\n        parser = None\n        if \"args\" in kwargs:\n            args = kwargs.get('args')\n        if \"parser\" in kwargs:\n            parser = kwargs.get('parser')\n\n        if args.show_command == 'audit-trail':\n            pss_content = self._db.pss.read(collection_name='audit_trail')\n            if args.all:\n                self.print_all_result_table(pss_content=pss_content)\n            elif args.type:\n                self.print_connector_result_table(pss_content=pss_content, connector=args.type)\n            elif args.execution_id:\n                self.print_execution_result_table(pss_content=pss_content, execution_id=args.execution_id)\n            pass\n        elif args.show_command == 'failed-logs':\n            if args.execution_id:\n 
               output = os.path.join(self.uglobals.get('UNSKRIPT_EXECUTION_DIR'), f'{args.execution_id}_output.txt')\n                if os.path.exists(output) is False:\n                    self.logger.error(\"Failed Log file does not exist. Please check the path!\")\n                    self._error(f\"Unable to locate logs file for {args.execution_id}\")\n                    sys.exit(0)\n                with open(output, 'r') as f:\n                    output = json.loads(f.read())\n                    print(\"\\033[1mFAILED OBJECTS \\033[0m \\n\")\n                    for o in output:\n                        if o.get('status') != 1:\n                            print(f\"\\033[1m{o.get('name')} \\033[0m\")\n                            p = yaml.safe_dump(o.get('objects'))\n                            print(p)\n                            print(\"\\n\")\n            else:\n                self.logger.error(\"Execution ID Is Empty, cannot find any logs\")\n                self._error(f\"Execution ID {args.execution_id} Logs cannot be found!\")\n        else:\n            parser.print_help()\n\n    def print_all_result_table(self, pss_content: dict):\n        \"\"\"Prints result table in a tabular form\"\"\"\n        if not pss_content:\n            return\n\n        all_result_table = [[\"\\033[1m Execution ID \\033[0m\",\n                            \"\\033[1m Execution Summary \\033[0m\",\n                            \"\\033[1m Execution Timestamp \\033[0m\"]]\n        for item in pss_content.items():\n            k, v = item\n            summary_text = \"\\033[1m\" + v.get('summary') + \"\\033[0m\"\n            check_names = \"\\033[1m\" + str(k) + '\\n' + \"\\033[0m\"\n            for k1, v1 in v.get('check_status').items():\n                check_names += \"    \" + v1.get('check_name') + ' ['\n                check_names += \"\\033[1m\" + \\\n                    v1.get('status') + \"\\033[0m\" + ']' + '\\n'\n            each_row = [[check_names, summary_text, 
v.get('time_stamp')]]\n            all_result_table += each_row\n\n        print(tabulate(all_result_table, headers='firstrow', tablefmt='fancy_grid'))\n\n\n    def print_connector_result_table(self, pss_content: dict, connector: str):\n        \"\"\"Prints result table for the given connector\"\"\"\n        if not pss_content:\n            return\n\n        connector_result_table = [[\"\\033[1m Check Name \\033[0m\",\n                                \"\\033[1m Run Status \\033[0m\",\n                                \"\\033[1m Time Stamp \\033[0m\",\n                                \"\\033[1m Execution ID \\033[0m\"]]\n\n        for exec_id in pss_content.keys():\n            execution_id = exec_id\n            if pss_content.get(exec_id).get('check_status'):\n                for check_id in pss_content.get(exec_id).get('check_status').keys():\n                    if pss_content.get(exec_id).get('check_status').get(check_id).get('connector').lower() == connector.lower():\n                        connector_result_table += [[pss_content.get(exec_id).get('check_status').get(check_id).get('check_name'),\n                                                pss_content.get(exec_id).get('check_status').get(check_id).get('status'),\n                                                pss_content.get(exec_id).get('time_stamp'),\n                                                execution_id]]\n\n        print(tabulate(connector_result_table,\n                headers='firstrow', tablefmt='fancy_grid'))\n        return\n\n\n    def print_execution_result_table(self, pss_content: dict, execution_id: str):\n        \"\"\"Auxiliary function to show execution result for a given execution_id\"\"\"\n        execution_result_table = [[\"\\033[1m Check Name \\033[0m\",\n                                \"\\033[1m Failed Objects \\033[0m\",\n                                \"\\033[1m Run Status \\033[0m\",\n                                \"\\033[1m Time Stamp \\033[0m\"]]\n        for exec_id 
in pss_content.keys():\n            if exec_id == execution_id:\n                ts = pss_content.get(exec_id).get('time_stamp')\n                for check_ids in pss_content.get(exec_id).get('check_status').keys():\n                    execution_result_table += [[pss_content.get(exec_id).get('check_status').get(check_ids).get('check_name'),\n                                                pprint.pformat(pss_content.get(exec_id).get('check_status').get(check_ids).get('failed_objects'), width=30),\n                                                pss_content.get(exec_id).get('check_status').get(check_ids).get('status'),\n                                                ts]]\n\n        print(tabulate(execution_result_table,\n                headers='firstrow', tablefmt='fancy_grid'))\n\n    def service_main(self, **kwargs):\n        \"\"\"This is a placeholder implementation, think of it as a wrapper for gotty or any other similar service\"\"\"\n        raise NotImplementedError(\"NOT IMPLEMENTED\")\n\n    def debug_main(self, **kwargs):\n        \"\"\"Debug Main function\"\"\"\n        args = kwargs.get('args', None)\n        parser = kwargs.get('parser', None)\n\n        if args and args.command == 'debug':\n            if args.debug_command == 'start':\n                self.start_debug(args.config)\n            elif args.stop:\n                self.stop_debug()\n            else:\n                self.logger.error(\"WRONG OPTION: Only start and stop are supported for debug\")\n                self._error(\"Wrong Option, only start and stop are supported\")\n\n    def start_debug(self, args):\n        \"\"\"start_debug Starts Debug session. 
This function takes\n        the remote configuration as input and, if valid, starts\n        the debug session.\n        \"\"\"\n        if not args:\n            print(\"ERROR: Insufficient information provided\")\n            return\n\n        remote_config_file = args\n\n        if os.path.exists(remote_config_file) is False:\n            print(f\"ERROR: Required Remote Configuration not present. Ensure {remote_config_file} file is present.\")\n            return\n\n        openvpn_log_file = \"/tmp/openvpn_client.log\"\n        command = [f\"sudo openvpn --config {remote_config_file} > {openvpn_log_file}\"]\n        try:\n            process = subprocess.Popen(command,\n                                    stdout=subprocess.PIPE,\n                                    stderr=subprocess.PIPE,\n                                    shell=True)\n        except Exception as e:\n            print(f\"ERROR: Unable to run the command {command}, error {e}\")\n            return\n\n        # Give the subprocess a few seconds to spawn\n        try:\n            outs, errs = process.communicate(timeout=10)\n        except subprocess.TimeoutExpired:\n            # This is expected as the ovpn command needs to run indefinitely.\n            pass\n        except Exception as e:\n            print(f'ERROR: Unable to communicate to child process, {e}')\n            return\n\n        # Verify that the openvpn process is really running\n        running = False\n        for proc in psutil.process_iter(['pid', 'name']):\n            # Search for the openvpn process.\n            if proc.info['name'] == \"openvpn\":\n                # Make sure the tunnel interface is created and up!\n                try:\n                    intf_up_result = subprocess.run([\"ip\", \"link\", \"show\", \"tun0\"],\n                               
                     stdout=subprocess.PIPE,\n                                                    stderr=subprocess.PIPE)\n                    if intf_up_result.returncode == 0:\n                        running = True\n                    break\n                except Exception as e:\n                    print(f'ERROR: ip link show tun0 command failed, {e}')\n\n        if running is True:\n            print (\"Successfully Started the Debug Session\")\n        else:\n            self.logger.debug(f\"Error Occurred while starting the Debug Session. Here are the logs from openvpn\")\n            print(f\"{bcolors.FAIL}Error Occurred while starting the Debug Session. Here are the logs from openvpn{bcolors.ENDC}\")\n            print(\"===============================================================================================\")\n            with open(openvpn_log_file, \"r\") as fp:\n                print(fp.read())\n            # Bring down the ovpn process\n            print(\"===============================================================================================\")\n            self.stop_debug()\n\n    def _kill_process(self, name):\n        for proc in psutil.process_iter(['pid', 'name']):\n            if proc.info['name'] == name:\n                p_id = proc.info['pid']\n                # Attempt graceful termination first\n                command = [\"sudo\", \"kill\", \"-15\", str(p_id)]\n                try:\n                    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n                    stdout, stderr = process.communicate()\n                    if process.returncode != 0:\n                        self.logger.debug(f\"Unable to stop {name} process with PID {p_id}: {stderr.decode()}\")\n                        return False\n                except Exception as e:\n                    self.logger.debug(f\"Unable to stop {name} process: {e}\")\n                    return False\n                \n            
    # If not stopped, force termination\n                if psutil.pid_exists(p_id):\n                    command = [\"sudo\", \"kill\", \"-9\", str(p_id)]\n                    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n                    stdout, stderr = process.communicate()\n                    if process.returncode != 0:\n                        self.logger.debug(f\"Unable to force stop {name} process with PID {p_id}: {stderr.decode()}\")\n                        return False\n        return True\n    \n    def stop_debug(self):\n        \"\"\"stop_debug Stops the Active Debug session.\"\"\"\n        upload_session_logs()\n\n        if not self._kill_process(\"openvpn\"):\n            print(\"ERROR: Unable to stop the debug session\")\n            return\n\n        if not self._kill_process(\"script\"):\n            print(\"ERROR: Unable to stop session log capture\")\n            return\n\n        self.logger.debug(\"Stopped Active Debug session successfully\")\n        print(\"Stopped Active Debug session successfully\")\n\n    def notify(self, args):\n        \"\"\"Notification is called when the --report flag is used. 
This is a wrapper for both email and Slack notification.\"\"\"\n        output_dir = create_execution_run_directory()\n        summary_result = None\n        failed_objects = None\n        output_json_file = None\n        mode = None\n        if args.command == 'run' and args.check_command == 'check':\n            summary_result = self.uglobals.get('status_of_run')\n            failed_objects = self.uglobals.get('failed_result')\n            mode = 'both'\n        if args.script:\n            output_json_file = os.path.join(output_dir, UNSKRIPT_SCRIPT_RUN_OUTPUT_FILE_NAME + '.json')\n            mode = 'both'\n        if args.command == 'run' and args.info:\n            if args.check_command:\n                # Case when info is called with check\n                mode = \"both\"\n            else:\n                # Case when run is called with info only, or info with script; in that case we don't\n                # want to send any Slack notification because there is nothing to send\n                mode = \"email\"\n\n        self._notification.notify(summary_results=summary_result,\n                                  failed_objects=failed_objects,\n                                  output_metadata_file=output_json_file,\n                                  mode=mode)\n\n# This function is the main function. Unlike the previous implementation of\n# argparse, here, this function implements a sub-parser to differentiate all the\n# commands that are supported by unskript-ctl.\n# Because sub-parsers are used, a command keyword cannot start with a '-', like --run.\n# That is why --run is implemented as just run. 
Similarly, list, debug and show\n# options are all implemented the same way.\n#\ndef main():\n    uc = UnskriptCtl()\n    parser = ArgumentParser(prog='unskript-ctl')\n    description = \"\"\n    description = description + str(\"\\n\")\n    description = description + str(\"\\t  Welcome to unSkript CLI Interface \\n\")\n    description = description + str(f\"\\t\\t   VERSION: {VERSION} \\n\")\n    description = description + str(f\"\\t\\t   BUILD_NUMBER: {get_version()} \\n\")\n    parser.description = description\n\n    subparsers = parser.add_subparsers(dest='command', help='Available Commands')\n    # Run Option\n    run_parser = subparsers.add_parser('run', help='Run Options')\n    run_parser.add_argument('--script', type=str, help='Script name to run', required=False)\n    run_parser.add_argument('--report',\n                        action='store_true',\n                        help='Report Results')\n    run_parser.add_argument('--info',\n                        action='store_true',\n                        help='Run information gathering actions')\n    check_subparser = run_parser.add_subparsers(dest='check_command')\n    check_parser = check_subparser.add_parser('check', help='Run Check Option')\n    check_parser.add_argument('--name', type=str, help='Check name to run')\n    check_parser.add_argument('--type', type=str, help='Type of Check to run')\n    check_parser.add_argument('--all', action='store_true', help='Run all checks')\n\n    # List Option\n    list_parser = subparsers.add_parser('list', help='List Options')\n    list_parser.add_argument('--credential', action='store_true', help='List All credentials')\n    list_check_subparser = list_parser.add_subparsers(dest='sub_command')\n    list_check_parser  = list_check_subparser.add_parser('checks', help='List Check Options')\n    list_check_parser.add_argument('--all', action='store_true', help='List All Checks')\n    list_check_parser.add_argument('--type',\n                                   
type=str,\n                                   help='List All Checks of given connector type',\n                                   choices=CONNECTOR_LIST)\n    list_failed_check_parser = list_check_subparser.add_parser('failed-checks', help='List Failed check options')\n    list_failed_check_parser.add_argument('--all', action='store_true', help='Show All Failed Checks')\n    list_failed_check_parser.add_argument('--type',\n                                   type=str,\n                                   help='List All Checks of given connector type',\n                                   choices=CONNECTOR_LIST)\n    list_info_parser = list_check_subparser.add_parser('info', help='List information gathering actions')\n    list_info_parser.add_argument('--all', action='store_true', help='List all info gathering actions')\n    list_info_parser.add_argument('--type',\n                                  type=str,\n                                  help='List info gathering actions for given connector type',\n                                  choices=CONNECTOR_LIST)\n    # Show Option\n    show_parser = subparsers.add_parser('show', help='Show Options')\n    show_audit_subparser = show_parser.add_subparsers(dest='show_command')\n    show_audit_parser = show_audit_subparser.add_parser('audit-trail', help='Show Audit Trail option')\n    show_audit_parser.add_argument('--all',\n                                   action='store_true',\n                                   help='List trail of all checks across all connectors')\n    show_audit_parser.add_argument('--type',\n                                   type=str,\n                                   choices=CONNECTOR_LIST,\n                                   help='Show Audit trail for checks for given connector')\n    show_audit_parser.add_argument('--execution_id',\n                                   type=str,\n                                   help='Execution ID for which the audit trail should be shown')\n\n    
show_flogs_parser = show_audit_subparser.add_parser('failed-logs', help='Show Failed Logs option')\n    show_flogs_parser.add_argument('--execution_id',\n                                   type=str,\n                                   help='Execution ID for which the logs should be fetched')\n\n    # Debug / Service Option\n    debug_parser = subparsers.add_parser('debug', help='Debug Option')\n    debug_subparser = debug_parser.add_subparsers(dest='debug_command')\n\n    debug_start_parser = debug_subparser.add_parser('start', help='Start Debug Option')\n\n    debug_start_parser.add_argument('--config',\n                                help='Config File, OVPN File, eg: /tmp/test.ovpn',\n                                type=str)\n    debug_parser.add_argument('--stop',\n                                help='Stop debug session',\n                                action='store_true')\n\n    # Create Credential\n    parser.add_argument('--create-credential',\n                        type=str,\n                        nargs=REMAINDER,\n                        help='Create Credential [-creds-type creds_file_path]')\n    # Save Check Names\n    parser.add_argument('--save-check-names',\n                        type=str,\n                        help=SUPPRESS)\n\n\n\n    # Lets re-arrange arguments such that parse_args is efficient with\n    # the rules defined above\n    def rearrange_argv(argv):\n        script_idx = argv.index('--script') if '--script' in argv else -1\n        check_idx = argv.index('check') if 'check' in argv else -1\n        report_idx = argv.index('--report') if '--report' in argv else -1\n        info_idx = argv.index('--info') if '--info' in argv else -1\n        run_idx = argv.index('run') if 'run' in argv else -1\n\n        if script_idx != -1 and check_idx != -1:\n            if script_idx > check_idx:\n                argv.remove('--script')\n                script_name = argv.pop(script_idx)\n                argv.insert(run_idx + 1, 
'--script')\n                argv.insert(run_idx + 2, script_name)\n\n        if run_idx != -1 and info_idx != -1:\n            argv.remove('--info')\n            if check_idx != -1:\n                argv.insert(check_idx, '--info')\n            elif run_idx != -1:\n                argv.insert(run_idx + 1, '--info')\n\n        if report_idx != -1 and check_idx != -1:\n            if report_idx > check_idx:\n                argv.remove('--report')\n                argv.insert(run_idx + 1, '--report')\n\n        return argv\n\n    argv = sys.argv[1:].copy()\n    argv = rearrange_argv(argv)\n    args = parser.parse_args(argv)\n\n    if len(sys.argv) <= 2:\n        parser.print_help()\n        sys.exit(0)\n\n    if args.command == 'run':\n        uc.run_main(args=args, parser=parser)\n    elif args.command == 'list':\n        uc.list_main(args=args, parser=parser)\n    elif args.command == 'show':\n        uc.show_main(args=args, parser=parser)\n    elif args.command == 'debug':\n        uc.debug_main(args=args, parser=parser)\n    elif args.create_credential not in ('', None):\n        if len(args.create_credential) == 0:\n            uc.display_creds_ui()\n        else:\n            uc.create_creds(args)\n    elif args.save_check_names not in ('', None):\n        uc.save_check_names(args)\n    else:\n        parser.print_help()\n\n    if args.command == 'run' and  args.report:\n        uc.notify(args)\n\n    os._exit(0)\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "unskript-ctl/unskript_ctl_notification.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport json\nimport yaml\nimport requests\nimport subprocess\nimport smtplib\nimport os\nimport base64\nimport logging \n\nfrom pathlib import Path\nfrom datetime import datetime\nfrom email.mime.multipart import MIMEMultipart\nfrom email.mime.text import MIMEText\nfrom email.mime.application import MIMEApplication\n\nfrom jsonschema import validate, ValidationError\nfrom tenacity import retry, stop_after_attempt, wait_exponential, before_log, after_log\n\n\n\nfrom unskript_utils import *\nfrom unskript_ctl_version import *\nfrom unskript_ctl_factory import NotificationFactory\nfrom unskript_ctl_custom_notification import custom_email_notification_main\n\n# This class implements Notification function for Slack\nclass SlackNotification(NotificationFactory):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        config = self._config.get_notification()\n        self.slack_config = config.get('Slack')\n        self.schema_file = os.path.join(os.path.dirname(__file__), \"unskript_slack_notify_schema.json\")\n\n    def validate_data(self, data):\n        if not os.path.exists(self.schema_file):\n            self.logger.error(f\"Unable to find Notification Schema file {self.schema_file}!\")\n            return False\n        try:\n            with open(self.schema_file, 'r') as f:\n                schema = json.load(f)\n                validate(instance=data, schema=schema)\n                return True\n        except ValidationError as e:\n            self.logger.debug(str(e))\n            return False\n\n    def notify(self, **kwargs):\n        webhook = self.slack_config.get('web-hook-url')\n        summary_results = kwargs.get('summary_results', 
None)\n\n        if self.slack_config.get('enable') is False:\n            self.logger.error(\"Slack Notification disabled\")\n            return False\n\n        if not summary_results:\n            self.logger.error(\"Result Empty: No results to notify\")\n            return False\n\n        # if not self.validate_data(summary_results):\n        #     self.logger.debug(\"Given Summary Result does not validate against Slack Schema\")\n\n        message = self._generate_notification_message(summary_results)\n        if not message:\n            self.logger.error(\"ERROR: Nothing to send, Results Empty\")\n            return False\n\n        try:\n            to_send = {\"text\": message, \"mrkdwn\": True, \"type\": \"mrkdwn\"}\n            response = requests.post(webhook,\n                                     data=json.dumps(to_send, indent=4),\n                                     headers={\"Content-Type\": \"application/json\"})\n\n            if response.status_code == 200:\n                self.logger.info(\"Slack Message was sent successfully!\")\n                return True\n            else:\n                self.logger.error(f\"ERROR: Failed to send slack message {response.status_code}, {response.text}\")\n                return False\n        except requests.RequestException as e:\n            self.logger.error(f\"ERROR: Not able to send slack message: {str(e)}\")\n            return False\n\n    def _generate_notification_message(self, summary_results):\n        summary_message = ':wave: *unSkript Ctl Check Results* \\n'\n        status_count = {'PASS': 0, 'FAIL': 0, 'ERROR': 0}\n\n        if not summary_results:\n            return\n\n        for result_set in summary_results:\n            if not result_set or not result_set.get('result'):\n                continue\n            c_result = result_set.get('result')\n            for priority in [CHECK_PRIORITY_P0, CHECK_PRIORITY_P1, CHECK_PRIORITY_P2]:\n                
checks_per_priority = c_result.get(priority)\n                if checks_per_priority is None:\n                    continue\n                for status in ['FAIL', 'ERROR', 'PASS']:\n                    checks = checks_per_priority.get(status)\n                    if checks is None or len(checks) == 0:\n                        continue\n                    for check in checks:\n                        check_name = check[0]\n                        if status in status_count:\n                            status_count[status] += 1\n                        if status == 'PASS':\n                            summary_message += f':hash: *{check_name}*  :white_check_mark: ' + '\\n'\n                        elif status in ('FAIL', 'ERROR'):\n                            summary_message += f':hash: *{check_name}*  :x: ' + '\\n'\n\n        summary_message += f':trophy: *(Pass/Fail/Error)* <-> *({status_count[\"PASS\"]}/{status_count[\"FAIL\"]}/{status_count[\"ERROR\"]})*' + '\\n\\n'\n        return summary_message\n\n\n# This class implements Notification function for Email category\nclass EmailNotification(NotificationFactory):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        config = self._config.get_notification()\n        self.execution_dir = kwargs.get('execution_dir', create_execution_run_directory())\n        self.email_config = config.get('Email')\n        self.provider = self.email_config.get('provider', '').lower()\n        self.checks_schema_file = os.path.join(os.path.dirname(__file__), \"unskript_email_notify_check_schema.json\")\n        self.send_failed_objects_as_attachment = True\n\n    def notify(self, **kwargs):\n        failed_result = kwargs.get('failed_result', {})\n        if failed_result is None:\n            failed_result = {}\n        \n        failed_object_character_count = sum((len(str(value)) for value in failed_result.values()))\n\n        if failed_object_character_count >= MAX_CHARACTER_COUNT_FOR_FAILED_OBJECTS:\n 
           self.send_failed_objects_as_attachment = True\n        else:\n            self.send_failed_objects_as_attachment = False\n\n    def validate_data(self, data, schema_file):\n        if not os.path.exists(schema_file):\n            self.logger.error(f\"Unable to find Notification Schema file {schema_file}!\")\n            return False\n        try:\n            with open(schema_file, 'r') as f:\n                schema = json.load(f)\n                validate(instance=data, schema=schema)\n                return True\n        except ValidationError as e:\n            self.logger.debug(str(e))\n            return False\n\n    def create_tarball_archive(self,\n                               tar_file_name: str,\n                               output_metadata_file: str,\n                               parent_folder: str):\n        \n        log_file_path = None\n        \n        if os.path.exists(os.path.expanduser(\"~/unskript_ctl.log\")):\n            log_file_path = os.path.expanduser(\"~/unskript_ctl.log\")\n\n        if not tar_file_name.startswith('/tmp'):\n            tar_file_name = os.path.join('/tmp', tar_file_name)\n\n        # Use 'j' (bzip2) to match the .tar.bz2 file names used throughout\n        if output_metadata_file:\n            tar_cmd = [\"tar\", \"jcvf\", tar_file_name, f\"--exclude={output_metadata_file}\", \"-C\", parent_folder, \".\"]\n        else:\n            tar_cmd = [\"tar\", \"jcvf\", tar_file_name, \"-C\", parent_folder, \".\"]\n        \n        if log_file_path:\n            tar_cmd.append(str(log_file_path))\n\n        try:\n            result = subprocess.run(tar_cmd,\n                            stdout=subprocess.PIPE,\n                            stderr=subprocess.PIPE)\n            if result.returncode != 0:\n                self.logger.error(f\"ERROR: Tar command returned non-zero value {result.returncode}\")\n                return False\n        except Exception as e:\n            self.logger.error(f\"ERROR: {e}\")\n            return False\n\n        return True\n\n    def 
create_temp_files_of_failed_check_results(self,\n                                            failed_result: dict):\n        list_of_failed_files = []\n        self.logger.debug(f\"Creating {len(failed_result)} Temp Files for failed check results \")\n        if not failed_result:\n            self.logger.error(\"Failed Result is Empty\")\n            return list_of_failed_files\n        # if not self.validate_data(failed_result, self.checks_schema_file):\n        #     self.logger.debug(\"Validation of Given Result failed against Notification Schema\")\n\n        if failed_result and len(failed_result.get('result', [])):\n            for result_item in failed_result['result']:\n                for check_name, failed_obj in result_item.items():\n                    connector = check_name.split(':')[0]\n                    connector_file = f\"{self.execution_dir}/{connector}_failed_objects.txt\"\n                    with open(connector_file, 'a', encoding='utf-8') as f:\n                        f.write('\\n' + check_name + '\\n')\n                        yaml.dump(failed_obj, f, default_flow_style=False)\n                    if connector_file not in list_of_failed_files:\n                        list_of_failed_files.append(connector_file)\n\n        return list_of_failed_files\n\n    def create_script_summary_message(self, output_metadata_file: str):\n        message = ''\n        if os.path.exists(output_metadata_file) is False:\n            self.logger.error(f\"ERROR: The metadata file is missing, please check if file exists? {output_metadata_file}\")\n            return message\n\n        metadata = ''\n        with open(output_metadata_file, 'r', encoding='utf-8') as f:\n            metadata = json.loads(f.read())\n\n        if not metadata:\n            self.logger.error(f'ERROR: Metadata is empty for the script. 
Please check content of {output_metadata_file}')\n            raise ValueError(\"Metadata is empty\")\n\n        # Log the custom script run result at debug level.\n        self.logger.debug(f\"\\tStatus: {metadata.get('status')} \\n\\tTime (in seconds): {metadata.get('time_taken')} \\n\\tError: {metadata.get('error')} \\n\")\n\n        # Include the run result table in the email body as well.\n        message += f'''\n                <br>\n                <h3> Custom Script Run Result </h3>\n                <table border=\"1\">\n                    <tr>\n                        <th> Status </th>\n                        <th> Time (in seconds) </th>\n                        <th> Error </th>\n                    </tr>\n                    <tr>\n                        <td>{metadata.get('status')}</td>\n                        <td>{metadata.get('time_taken')}</td>\n                        <td>{metadata.get('error')}</td>\n                    </tr>\n                </table>\n        '''\n\n        return message\n\n    def create_info_legos_output_file(self):\n        \"\"\"create_info_legos_output_file: This function creates a file that will\n           be added to the final tarball\n        \"\"\"\n        parent_folder = self.execution_dir\n        if self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY'):\n            parent_folder = self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY')\n        info_legos_output_file_path = os.path.join(parent_folder, \"info_legos_output.txt\")\n        info_action_results = self.uglobals.get('info_action_results')\n\n        # Write all info lego outputs to a single file\n        if info_action_results:\n            with open(info_legos_output_file_path, 'w', encoding='utf-8') as f:\n                for action_name, action_output in info_action_results.items():\n                    content = f\"{action_name}:\\n{action_output if action_output else 'NO OUTPUT'}\\n\\n\"\n                    f.write(content)\n        else:\n            
self.logger.error(\"No information gathering action result available\")\n            return None\n\n        return info_legos_output_file_path\n\n\n    def create_info_gathering_action_result(self):\n        \"\"\"create_info_gathering_action_result: This function creates an inline\n           results of all the output from info gathering action\n        \"\"\"\n        message = ''\n        # Fetch the list of actions specified in the YAML under info_section\n        specified_actions = self._config.get_email_fmt().get('info_section', [])\n        info_action_results = self.uglobals.get('info_action_results')\n        if not specified_actions:\n            return message \n\n        if info_action_results:\n            message += '''\n                    <br>\n                    <h3> Information Gathering Action Result </h3>\n                    <br>\n            '''\n            for specified_action in specified_actions:\n                action_found = False\n                for full_action_name, action_output in info_action_results.items():\n                    # Extract the part of the action name after '/'\n                    _, action_name_suffix = full_action_name.split('/', 1)\n                    if action_name_suffix == specified_action:\n                        message += f'<h4>{specified_action}</h4> <pre>'\n                        message += action_output if action_output else 'NO OUTPUT'\n                        message += '</pre>'\n                        action_found = True\n                        break  # Stop looking once found\n                if not action_found:\n                    # If the specified action was not found in the results, show no output\n                    message += f'<h4>{specified_action}</h4> <pre>Action not executed</pre>'\n\n            message += '<br>'\n\n        return message\n\n\n    def create_email_attachment(self, output_metadata_file: str = None):\n        \"\"\"create_email_attachment: This function reads the 
output_metadata_file\n        to find out the name of the attachment, the output that should be included as the attachment\n        of the test run as listed in the output_metadata_file.\n        \"\"\"\n        metadata = ''\n        with open(output_metadata_file, 'r', encoding='utf-8') as f:\n            metadata = json.loads(f.read())\n\n        if not metadata:\n            self.logger.error(f'ERROR: Metadata is empty for the script. Please check content of {output_metadata_file}')\n            raise ValueError(\"Metadata is empty\")\n\n        # if the status is FAIL, then there is no file to attach, so just send the message.\n        multipart_content_subtype = 'mixed'\n        attachment_ = MIMEMultipart(multipart_content_subtype)\n\n        target_file_name = None\n        if metadata.get('output_file'):\n            target_file_name  = os.path.basename(metadata.get('output_file'))\n        else:\n            target_file_name = \"unskript_ctl_result\"\n\n        if metadata.get('compress') is True:\n            parent_folder = os.path.dirname(output_metadata_file)\n            target_name = os.path.basename(parent_folder)\n            tar_file_name = f\"{target_name}\" + '.tar.bz2'\n            output_metadata_file = output_metadata_file.split('/')[-1]\n            if self.create_tarball_archive(tar_file_name=tar_file_name,\n                                    output_metadata_file=output_metadata_file,\n                                    parent_folder=parent_folder) is False:\n                raise ValueError(\"ERROR: Archiving attachments failed!\")\n            # With the non-root user support. 
Let's create the tar file in a\n            # commonly accessible area like /tmp\n            target_file_name = os.path.join(\"/tmp\", tar_file_name)\n\n        with open(target_file_name, 'rb') as f:\n            part = MIMEApplication(f.read())\n            part.add_header('Content-Disposition', 'attachment', filename=target_file_name)\n            attachment_.attach(part)\n        try:\n            if metadata.get('compress') is True:\n                os.remove(target_file_name)\n        except Exception as e:\n            self.logger.error(f\"ERROR: {e}\")\n\n        return attachment_\n\n    def create_priority_message_table(self, priority: str, checks_per_status: dict) -> tuple:\n        pass_count = len(checks_per_status['PASS'])\n        fail_count = len(checks_per_status['FAIL'])\n        error_count = len(checks_per_status['ERROR'])\n        if pass_count == 0 and fail_count == 0 and error_count == 0:\n            return '', 0, 0, 0\n        print_priority = priority.capitalize()\n        tr_message = f'''\n            <table border=\"1\">\n            <tr>\n            <th> {print_priority} Checks </th>\n            <th> RESULT </th>\n            </tr>\n        '''\n        for status in ['FAIL', 'ERROR', 'PASS']:\n            checks = checks_per_status.get(status)\n            for st in checks:\n                check_name = st[0]\n                if status in ['ERROR', 'PASS']:\n                    tr_message += f'<tr> <td> {check_name}</td> <td> <strong>{status}</strong> </td></tr>' + '\\n'\n                else:\n                    check_link = f\"{check_name}\".lower().replace(' ','_')\n                    tr_message += f'<tr><td> <a href=\"#{check_link}\">{check_name}</a></td><td>  <strong>FAIL</strong> </td></tr>' + '\\n'\n        tr_message += '</table><br>' + '\\n'\n        return tr_message, pass_count, fail_count, error_count\n\n\n    def create_checks_summary_message(self,\n                                      summary_results: list,\n          
                            failed_result: dict):\n        message = ''\n        if not summary_results:\n            return message\n\n        if len(summary_results):\n            p = f = e = 0\n            tr_message = ''\n            for sd in summary_results:\n                if sd == {}:\n                    continue\n                # sd.get('result') will return a map [priority][status]{list of checks}\n                # Check if there are any P0 checks\n                table_part_of_the_message = ''\n                for priority in [CHECK_PRIORITY_P0, CHECK_PRIORITY_P1, CHECK_PRIORITY_P2]:\n                    if len(sd.get('result').get(priority)) > 0:\n                        tr_message, pass_count, fail_count, error_count = self.create_priority_message_table(priority, sd.get('result').get(priority))\n                        p += pass_count\n                        f += fail_count\n                        e += error_count\n                        table_part_of_the_message += tr_message + '\\n'\n\n            message += f'<center><h3>Checks Summary<br>Pass : {p}  Fail: {f}  Error: {e}</h3></center><br>' + '\\n'\n            message += '''\n                <br>\n                <h3> Check Summary Result </h3>\n                '''\n            message += table_part_of_the_message + '\\n'\n\n            if failed_result and len(failed_result) and not self.send_failed_objects_as_attachment:\n                message += '<br> <ul>' + '\\n'\n                message += '<h2> FAILED OBJECTS </h2>' + '\\n'\n                if failed_result.get('result'):\n                    for r in failed_result.get('result'):\n                        for k,v in r.items():\n                            check_link = f\"{k}\".split(':')[-1].lower().replace(' ', '_')\n                            message += f'<li> <strong id=\"{check_link}\">{k}</strong> </li>' + '\\n'\n                            message += f'<pre>{yaml.dump(v,default_flow_style=False)}</pre>' + '\\n'\n               
 message += '</ul> <br>' + '\\n'\n\n        return message\n\n    def create_email_header(self, title: str = None):\n        email_title = title or \"Run result\"\n        message = f'''\n            <!DOCTYPE html>\n            <html>\n            <head>\n            </head>\n            <body>\n            <center>\n            <h1> {email_title} </h1>\n            <h3> <strong>Performed On <br> {datetime.now().strftime(\"%a %b %d %I:%M:%S %p %Y %Z\")} </strong></h3>\n            <h4> <strong>Version : {get_version()} </strong></h4><br>\n            </center>\n            '''\n        return message\n\n\n    def prepare_combined_email(self,\n                               summary_results: list,\n                               failed_result: dict,\n                               output_metadata_file: str,\n                               title: str,\n                               attachment: MIMEMultipart,\n                               **kwargs):\n        message = self.create_email_header(title=title)\n        temp_attachment = msg = None\n        parent_folder = self.execution_dir\n        target_name = os.path.basename(parent_folder)\n        tar_file_name = f\"{target_name}\" + '.tar.bz2'\n        target_file_name = os.path.join('/tmp', tar_file_name)\n\n        if summary_results and len(summary_results):\n            message += self.create_checks_summary_message(summary_results=summary_results,\n                                                    failed_result=failed_result)\n            if len(failed_result) and self.send_failed_objects_as_attachment:\n                self.create_temp_files_of_failed_check_results(failed_result=failed_result)\n\n            if len(os.listdir(self.execution_dir)) == 0 or not self.create_tarball_archive(tar_file_name=tar_file_name, output_metadata_file=None, parent_folder=parent_folder):\n                self.logger.error(\"Execution directory is empty, tarball creation unsuccessful!\")\n                return attachment\n 
           \n            msg = MIMEMultipart('mixed')\n            with open(target_file_name, 'rb') as f:\n                part = MIMEApplication(f.read())\n                part.add_header('Content-Disposition', 'attachment', filename=target_file_name)\n                msg.attach(part)\n\n        if failed_result and len(failed_result) and self.send_failed_objects_as_attachment:\n            message += '<br> <ul>' + '\\n'\n            message += '<h3> DETAILS ABOUT THE FAILED OBJECTS CAN BE FOUND IN THE ATTACHMENTS </h3>' + '\\n'\n            message += '</ul> <br>' + '\\n'\n\n\n        info_result = self.create_info_gathering_action_result()\n        if info_result:\n            message += info_result\n            self.create_info_legos_output_file()\n        # print(\"Output Metadata File\\n\",output_metadata_file)\n        if output_metadata_file:\n            message += self.create_script_summary_message(output_metadata_file=output_metadata_file)\n            temp_attachment = self.create_email_attachment(output_metadata_file=output_metadata_file)\n\n        if len(os.listdir(self.execution_dir)) == 0 or not self.create_tarball_archive(tar_file_name=tar_file_name, output_metadata_file=None, parent_folder=parent_folder):\n            self.logger.error(\"Execution directory is empty , tarball creation unsuccessful!\")\n            return attachment\n        \n        msg = MIMEMultipart('mixed')\n        with open(target_file_name, 'rb') as f:\n            part = MIMEApplication(f.read())\n            part.add_header('Content-Disposition', 'attachment', filename=target_file_name)\n            msg.attach(part)\n\n\n        message += \"</body> </html>\"\n        attachment.attach(MIMEText(message, 'html'))\n        if temp_attachment:\n            attachment.attach(temp_attachment)\n        elif msg:\n            attachment.attach(msg)\n\n        return attachment              \n\n\n# Sendgrid specific implementation\nclass 
SendgridNotification(EmailNotification):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.sendgrid_config = self.email_config.get('Sendgrid')\n\n    def notify(self, **kwargs):\n        super().notify(**kwargs)\n        summary_results = kwargs.get('summary_result', [])\n        failed_result = kwargs.get('failed_result', {})\n        output_metadata_file = kwargs.get('output_metadata_file')\n        from_email = kwargs.get('from_email', self.sendgrid_config.get('from-email'))\n        to_email = kwargs.get('to_email', self.sendgrid_config.get('to-email'))\n        api_key = kwargs.get('api_key', self.sendgrid_config.get('api_key'))\n        subject = kwargs.get('subject', self.email_config.get('email_subject_line', 'Run Result'))\n\n        retval = self.send_sendgrid_notification(summary_results=summary_results,\n                                               failed_result=failed_result,\n                                               output_metadata_file=output_metadata_file,\n                                               from_email=from_email,\n                                               to_email=to_email,\n                                               api_key=api_key,\n                                               subject=subject)\n\n        if retval:\n            self.logger.info(\"Successfully sent Email notification via Sendgrid.\")\n        else:\n            self.logger.error(\"Failed to send email notification via Sendgrid!\")\n\n        return retval\n    def send_sendgrid_notification(self,\n                                summary_results: list,\n                                failed_result: dict,\n                                output_metadata_file: str,\n                                from_email: str,\n                                to_email: str,\n                                api_key: str,\n                                subject: str):\n        # Dynamic Load (Import) necessary libraries for 
sendgrid\n        import sendgrid\n        from sendgrid import SendGridAPIClient\n        from sendgrid.helpers.mail import Mail, Attachment, FileContent, FileName, FileType\n\n        if not from_email or not to_email or not api_key:\n            self.logger.error(\"ERROR: From Email, To Email and API Key are mandatory parameters to send email notification\")\n            return False\n\n        html_message = ''\n        email_subject = subject\n        parent_folder = self.execution_dir\n        target_name = os.path.basename(parent_folder)\n        tar_file_name = f\"{target_name}\" + '.tar.bz2'\n        target_file_name = os.path.join('/tmp', tar_file_name)\n        metadata = None\n\n        try:\n            # We can have custom Title here\n            html_message += self.create_email_header(title=None)\n            if summary_results and len(summary_results):\n                html_message += self.create_checks_summary_message(summary_results=summary_results,\n                                                            failed_result=failed_result)\n                if failed_result and len(failed_result) and self.send_failed_objects_as_attachment:\n                    self.create_temp_files_of_failed_check_results(failed_result=failed_result)\n            self.create_info_legos_output_file()\n\n            # Check conditions for creating tarball\n            if len(os.listdir(self.execution_dir)) == 0 or not self.create_tarball_archive(tar_file_name=tar_file_name, output_metadata_file=None, parent_folder=parent_folder):\n                self.logger.error(\"Execution directory is empty , tarball creation unsuccessful!\")\n\n            if output_metadata_file:\n                html_message += self.create_script_summary_message(output_metadata_file=output_metadata_file)\n\n            info_result = self.create_info_gathering_action_result()\n            if info_result:\n                html_message += info_result\n\n            to_email_list = []\n            
if isinstance(to_email, list):\n                to_email_list = to_email\n            else:\n                to_email_list = [to_email]\n\n            email_message = Mail(\n                from_email=from_email,\n                to_emails=to_email_list,\n                subject=email_subject,\n                html_content=html_message\n            )\n            if os.path.exists(target_file_name) and os.path.getsize(target_file_name) > 0:\n                email_message = self.sendgrid_add_email_attachment(email_message=email_message,\n                                                            file_to_attach=target_file_name,\n                                                            compress=True)\n            try:\n                if target_file_name:\n                    os.remove(target_file_name)\n            except Exception as e:\n                self.logger.error(f\"ERROR: {e}\")\n\n            sg = sendgrid.SendGridAPIClient(api_key)\n            sg.send(email_message)\n            self.logger.info(f\"Notification sent successfully to {to_email}\")\n        except Exception as e:\n            self.logger.error(f\"ERROR: Unable to send notification as email. 
{e}\")\n            return False\n\n        return True\n\n    def sendgrid_add_email_attachment(self,\n                                      email_message,\n                                      file_to_attach: str,\n                                      compress: bool = True):\n        from sendgrid.helpers.mail import Attachment, FileContent, FileName, FileType\n        with open(file_to_attach, 'rb') as f:\n            file_data = f.read()\n\n            encoded = base64.b64encode(file_data).decode()\n            attachment = Attachment()\n            attachment.file_content = FileContent(encoded)\n            file_name = os.path.basename(file_to_attach)\n            attachment.file_name = FileName(file_name)\n            if compress is True:\n                attachment.file_type = FileType('application/zip')\n            else:\n                attachment.file_type = FileType('application/text')\n            attachment.disposition = 'attachment'\n            email_message.add_attachment(attachment)\n\n        return email_message\n\n\n# SES specific implementation\nclass AWSEmailNotification(EmailNotification):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.aws_config = self.email_config.get('SES')\n\n    def notify(self, **kwargs):\n        super().notify(**kwargs)\n        summary_results = kwargs.get('summary_result', [])\n        failed_result = kwargs.get('failed_result', {})\n        output_metadata_file = kwargs.get('output_metadata_file')\n        access_key = kwargs.get('access_key', self.aws_config.get('access_key'))\n        secret_key = kwargs.get('secret_key', self.aws_config.get('secret_access'))\n        to_email = kwargs.get('to_email', self.aws_config.get('to-email'))\n        from_email = kwargs.get('from_email', self.aws_config.get('from-email'))\n        region = kwargs.get('region', self.aws_config.get('region'))\n        subject = kwargs.get('subject', self.email_config.get('email_subject_line', 'Run 
Result'))\n\n        retval =  self.prepare_to_send_awsses_notification(summary_results=summary_results,\n                                    failed_result=failed_result,\n                                    output_metadata_file=output_metadata_file,\n                                    access_key=access_key,\n                                    secret_key=secret_key,\n                                    to_email=to_email,\n                                    from_email=from_email,\n                                    region=region,\n                                    subject=subject)\n        if retval:\n            self.logger.info(\"Successfully sent Email notification via AWS SES.\")\n        else:\n            self.logger.error(\"Failed to send email notification via AWS SES!\")\n\n        return retval\n\n    def prepare_to_send_awsses_notification(self, summary_results: list,\n                                failed_result: dict,\n                                output_metadata_file: str,\n                                access_key: str,\n                                secret_key: str,\n                                to_email: str,\n                                from_email: str,\n                                region: str,\n                                subject: str):\n        if not access_key or not secret_key:\n            self.logger.error(\"ERROR: Cannot send AWS SES Notification without access and/or secret_key\")\n            return  False\n\n        # WE CAN TAKE A TITLE AS WELL, IF WE WANT CUSTOM TITLE IN THE REPORT\n        attachment_ = MIMEMultipart('mixed')\n        attachment_['Subject'] = subject\n\n        attachment_ = self.prepare_combined_email(summary_results=summary_results,\n                                            failed_result=failed_result,\n                                            output_metadata_file=output_metadata_file,\n                                            title=None,\n                                         
   attachment=attachment_)\n\n        return self.do_send_awsses_email(from_email=from_email,\n                                    to_email=to_email,\n                                    attachment_=attachment_,\n                                    access_key=access_key,\n                                    secret_key=secret_key,\n                                    region=region)\n\n    def do_send_awsses_email(self, from_email: str,\n                            to_email: str,\n                            attachment_,\n                            access_key: str,\n                            secret_key: str,\n                            region: str):\n        # Boto3 client needs AWS Access Key and Secret Key\n        # to be able to initialize the SES client.\n        # We do it by setting the os.environ variables\n        # for access and secret key\n        import boto3\n        from botocore.exceptions import NoCredentialsError\n\n        if access_key is not None and secret_key is not None:\n            os.environ['AWS_ACCESS_KEY_ID'] = access_key\n            os.environ['AWS_SECRET_ACCESS_KEY'] = secret_key\n\n        client = boto3.client('ses', region_name=region)\n        to_email_list = []\n        if isinstance(to_email, list):\n            to_email_list = to_email\n        else:\n            to_email_list = [to_email]\n        \n        try:\n            response = client.send_raw_email(\n                Source=from_email,\n                Destinations=to_email_list,\n                RawMessage={'Data': attachment_.as_string()}\n            )\n            if response.get('ResponseMetadata') and response.get('ResponseMetadata').get('HTTPStatusCode') == 200:\n                self.logger.info(f\"Email notification sent to {to_email}\")\n            return True\n        except NoCredentialsError:\n            self.logger.error(f\"ERROR: Unable to send email notification to {to_email}, credentials are invalid\")\n            return False\n        except 
client.exceptions.MessageRejected:\n            self.logger.error(f\"ERROR: Unable to send email. Message was rejected by the SES server; check that the email-id {to_email} is valid!\")\n            return False\n        except client.exceptions.MailFromDomainNotVerifiedException:\n            self.logger.error(\"ERROR: Unable to send email. Domain of the from email-id is not verified! Please use a valid from email-id\")\n            return False\n        except client.exceptions.ConfigurationSetDoesNotExistException:\n            self.logger.error(\"ERROR: Unable to send email. Email Configuration set does not exist. Please check SES policy\")\n            return False\n        except client.exceptions.ConfigurationSetSendingPausedException:\n            self.logger.error(f\"ERROR: Unable to send email. Email sending is paused for the from email id {from_email}!\")\n            return False\n        except client.exceptions.AccountSendingPausedException:\n            self.logger.error(\"ERROR: Unable to send email. 
Sending email is paused for the AWS Account!\")\n            return False\n        except client.exceptions.ClientError as e:\n            self.logger.error(f\"ERROR: {e}\")\n            return False\n        except Exception as e:\n            self.logger.error(f\"ERROR: {e}\")\n            return False\n\n# SMTP Implementation, like Gmail, etc..\nclass SmtpNotification(EmailNotification):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.SMTP_TLS_PORT = 587\n        self.smtp_config = self.email_config.get('SMTP')\n\n    def notify(self, **kwargs):\n        super().notify(**kwargs)\n\n        summary_results = kwargs.get('summary_result', [])\n        failed_result = kwargs.get('failed_result', {})\n        output_metadata_file = kwargs.get('output_metadata_file')\n        smtp_host = kwargs.get('smtp-host', self.smtp_config.get('smtp-host'))\n        smtp_user = kwargs.get('smtp-user', self.smtp_config.get('smtp-user'))\n        smtp_password = kwargs.get('smtp-password', self.smtp_config.get('smtp-password'))\n        to_email = kwargs.get('to_email', self.smtp_config.get('to-email'))\n        from_email = kwargs.get('from_email', self.smtp_config.get('from-email'))\n        subject = kwargs.get('subject', self.email_config.get('email_subject_line', 'Run Result'))\n        retval =  self.send_smtp_notification(summary_results=summary_results,\n                                           failed_result=failed_result,\n                                           smtp_host=smtp_host,\n                                           output_metadata_file=output_metadata_file,\n                                           smtp_user=smtp_user,\n                                           smtp_password=smtp_password,\n                                           to_email=to_email,\n                                           from_email=from_email,\n                                           subject=subject)\n        if retval:\n            
self.logger.info(\"Successfully sent Email notification via SMTP.\")\n        else:\n            self.logger.error(\"Failed to send email notification via SMTP!\")\n\n        return retval\n\n    def send_smtp_notification(self,\n                                summary_results: list,\n                                failed_result: dict,\n                                output_metadata_file: str,\n                                smtp_host: str,\n                                smtp_user: str,\n                                smtp_password: str,\n                                to_email: str,\n                                from_email: str,\n                                subject: str):\n        \"\"\"send_smtp_notification: This function sends the summary result\n        in the form of an email for smtp option.\n        \"\"\"\n        msg = MIMEMultipart('mixed')\n        if from_email:\n            msg['From'] =  from_email\n        else:\n            msg['From'] = smtp_user\n\n        to_email_list = []\n        if isinstance(to_email, list):\n            to_email_list = to_email\n        else:\n            to_email_list = [to_email]\n\n        msg['To'] = \", \".join(to_email_list)\n        msg['Subject'] = subject\n        try:\n            server = smtplib.SMTP(smtp_host, self.SMTP_TLS_PORT)\n            server.starttls()\n            server.login(smtp_user, smtp_password)\n        except Exception as e:\n            self.logger.error(e)\n            return False\n\n        msg = self.prepare_combined_email(summary_results=summary_results,\n                                    failed_result=failed_result,\n                                    output_metadata_file=output_metadata_file,\n                                    title=None,\n                                    attachment=msg)\n\n        try:\n            server.sendmail(smtp_user, to_email, msg.as_string())\n        except Exception as e:\n            self.logger.error(f\"ERROR: {e}\")\n        
    return False\n        self.logger.info(f\"Notification sent successfully to {to_email}\")\n        return True\n\n\nclass CustomSMTPNotification(EmailNotification):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n\n    def notify(self, **kwargs):\n        super().notify(**kwargs)\n\n        summary_results = kwargs.get('summary_result', [])\n        failed_result = kwargs.get('failed_result', {})\n        output_metadata_file = kwargs.get('output_metadata_file')\n        to_email = os.environ.get(\"LB_NOTIFICATION_RECEIVER_EMAIL\",\n                                  \"name@example.com\")\n\n        subject = kwargs.get('subject', self.email_config.get('email_subject_line', 'Run Result'))\n\n        parent_folder = self.execution_dir\n        target_file_name = None\n        tar_file_name = None\n        if output_metadata_file:\n            parent_folder = os.path.dirname(output_metadata_file)\n            target_name = os.path.basename(parent_folder)\n            tar_file_name = f\"{target_name}\" + '.tar.bz2'\n            target_file_name = os.path.join('/tmp', tar_file_name)\n            output_metadata_file = output_metadata_file.split('/')[-1]\n\n        message = self.create_email_header(title=None)\n        if summary_results and len(summary_results):\n            message += self.create_checks_summary_message(\n                summary_results=summary_results,\n                failed_result=failed_result\n            )\n        if len(failed_result) and self.send_failed_objects_as_attachment:\n            self.create_temp_files_of_failed_check_results(failed_result=failed_result)\n\n\n        if tar_file_name and len(os.listdir(self.execution_dir)):\n            self.create_tarball_archive(\n                tar_file_name=tar_file_name,\n                output_metadata_file=output_metadata_file,\n                parent_folder=parent_folder\n            )\n\n        email_attach_name = None\n        if target_file_name and 
os.path.exists(target_file_name):\n            email_attach_name = target_file_name\n\n        info_result = self.create_info_gathering_action_result()\n        if info_result:\n            message += info_result\n            self.create_info_legos_output_file()\n\n        if output_metadata_file:\n            message += self.create_script_summary_message(\n                output_metadata_file=output_metadata_file\n            )\n        message += \"</body></html>\"\n\n\n        retval = custom_email_notification_main(\n            _logger=self.logger,\n            email_subject=subject,\n            email_content = message,\n            email_recipient = to_email,\n            file_path = email_attach_name\n        )\n        if retval:\n            self.logger.info(\"Successfully Sent Email via SMTP Relay\")\n        else:\n            self.logger.error(\"Failed to send email via SMTP Relay\")\n\n        return retval\n\n\n# Usage:\n# n = Notification()\n# n.notify(\n#          mode='slack',   # slack, email or both, Mandatory parameter\n#          failed_objects=failed_objects,  # Failed objects from the checks run, Mandatory parameter\n#          output_metadata_file=None,  # Metadata that is generated after script run, Optional\n#          summary_result=summary_result,  # Summary result of the run that includes pass,fail,error, Mandatory parameter\n#          to_email=to_email,   # Only applicable for `email` mode, Optional\n#          from_email=from_email,  # Only applicable for `email` mode, Optional\n#          subject=subject, # Email Subject, Only applicable for `email` mode, Optional\n#          access_key=access_key, # Only applicable for AWS SES email, Optional\n#          secret_access=secret_access,  # Only applicable for AWS SES email, Optional\n#          api_key=api_key, # Only applicable for sendgrid email, Optional\n#          smtp_host=smtp_host, # Only applicable for SMTP email, Optional\n#          
smtp_user=smtp_user, # Only applicable for SMTP email, Optional\n#          smtp_password=smtp_password # Only applicable for SMTP email, Optional\n#          )\n#\n\n# This class can be used as a reusable component by any other class, as long as\n# the data that is used for slack or email follows the schema.\nclass Notification(NotificationFactory):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.notify_config = self._config.get_notification()\n        self.email_config = self.notify_config.get('Email')\n\n    def notify(self, **kwargs):\n        retval = False\n        mode = kwargs.get('mode', 'slack')\n\n        if mode.lower() == 'slack':\n            retval = SlackNotification().notify(**kwargs)\n        elif mode.lower() == 'email':\n            retval = self._send_email(**kwargs)\n        elif mode.lower() == 'both':\n            retval = SlackNotification().notify(**kwargs)\n            retval = self._send_email(**kwargs) and retval\n        return retval\n\n    def _send_email(self, **kwargs):\n        @retry(\n            stop=stop_after_attempt(5),  # Retry up to 5 times\n            wait=wait_exponential(multiplier=1, min=4, max=60),  # Exponential backoff, min 4s, max 60s\n            before=before_log(self.logger, logging.INFO),\n            after=after_log(self.logger, logging.INFO)\n        )\n        def _do_send_email():\n            retval = False\n            # Accept both 'summary_results' and the documented 'summary_result' key\n            summary_results = kwargs.get('summary_results', kwargs.get('summary_result'))\n            failed_objects = kwargs.get('failed_objects')\n            if failed_objects is None:\n                failed_objects = {}\n\n\n            if self.email_config.get('provider').lower() == 'smtp':\n                smtp = self.email_config.get('SMTP')\n                retval = SmtpNotification().notify(\n                            summary_result = summary_results,\n                            failed_result = failed_objects,\n                            output_metadata_file = 
kwargs.get('output_metadata_file'),\n                            smtp_host = kwargs.get('smtp_host', smtp.get('smtp-host')),\n                            smtp_user = kwargs.get('smtp_user', smtp.get('smtp-user')),\n                            smtp_password = kwargs.get('smtp_password', smtp.get('smtp-password')),\n                            to_email = kwargs.get('to_email', smtp.get('to-email')),\n                            from_email = kwargs.get('from_email', smtp.get('from-email')),\n                            subject = kwargs.get('subject', self.email_config.get('email_subject_line', 'Run Result'))\n                            )\n            elif self.email_config.get('provider').lower() == 'custom':\n                retval = CustomSMTPNotification().notify(\n                            summary_result = summary_results,\n                            failed_result = failed_objects,\n                            output_metadata_file = kwargs.get('output_metadata_file'),\n                            subject = kwargs.get('subject', self.email_config.get('email_subject_line', 'Run Result'))\n                            )\n            elif self.email_config.get('provider').lower() == 'sendgrid':\n                sendgrid = self.email_config.get('Sendgrid')\n                retval = SendgridNotification().notify(\n                            summary_result = summary_results,\n                            failed_result = failed_objects,\n                            output_metadata_file = kwargs.get('output_metadata_file'),\n                            from_email = kwargs.get('from_email', sendgrid.get('from-email')),\n                            to_email = kwargs.get('to_email', sendgrid.get('to-email')),\n                            api_key = kwargs.get('api_key', sendgrid.get('api_key')),\n                            subject = kwargs.get('subject', self.email_config.get('email_subject_line', 'Run Result'))\n                            )\n            elif 
self.email_config.get('provider').lower() == 'ses':\n                aws = self.email_config.get('SES')\n                retval = AWSEmailNotification().notify(\n                            summary_result = summary_results,\n                            failed_result = failed_objects,\n                            output_metadata_file = kwargs.get('output_metadata_file'),\n                            access_key = kwargs.get('access_key', aws.get('access_key')),\n                            secret_access = kwargs.get('secret_access', aws.get('secret_access')),\n                            to_email = kwargs.get('to_email', aws.get('to-email')),\n                            from_email = kwargs.get('from_email', aws.get('from-email')),\n                            region = kwargs.get('region', aws.get('region')),\n                            subject = kwargs.get('subject', self.email_config.get('email_subject_line', 'Run Result'))\n                            )\n\n            return retval\n        \n        try:\n            # Call the retry-wrapped do send email\n            return _do_send_email()\n\n        except Exception as e:\n            self.logger.error(f\"Error sending email: {e}\")\n            return False"
  },
  {
    "path": "unskript-ctl/unskript_ctl_run.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport json\nimport yaml\nimport os\nimport shutil\nimport uuid\nimport pprint\nimport time\nimport subprocess\nimport concurrent.futures\n\nfrom jinja2 import Template\nfrom tabulate import tabulate\nfrom tqdm import tqdm\n\nfrom unskript_utils import *\nfrom unskript_ctl_factory import ChecksFactory, ScriptsFactory\nfrom unskript.legos.utils import CheckOutputStatus\nfrom unskript_upload_results_to_s3 import S3Uploader\n\n\n# Implements Checks Class that is wrapper for All Checks Function\nclass Checks(ChecksFactory):\n    TBL_CELL_CONTENT_PASS=\"\\033[1m PASS \\033[0m\"\n    TBL_CELL_CONTENT_SKIPPED=\"\\033[1m SKIPPED \\033[0m\"\n    TBL_CELL_CONTENT_FAIL=\"\\033[1m FAIL \\033[0m\"\n    TBL_CELL_CONTENT_ERROR=\"\\033[1m ERROR \\033[0m\"\n\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.logger.debug(\"Initialized Checks Class\")\n        if self._config.get_checks_params():\n            self.checks_globals = self._config.get_checks_params().get('global')\n            self.matrix = self.checks_globals.get('matrix')\n        else:\n            self.checks_globals = None\n            self.matrix = None\n        self.temp_jit_file = \"/tmp/jit_script.py\"\n        self.check_names = []\n        self.check_entry_functions = []\n        self.check_uuids = []\n        self.connector_types = []\n        self.status_list_of_dict = []\n        self.uglobals = UnskriptGlobals()\n        self._common = CommonAction()\n        self.update_credentials_to_uglobal()\n        self.uglobals['global'] = self.checks_globals\n        self.checks_priority = self._config.get_checks_priority()\n        self.script_to_check_mapping = {}\n        # Prioritized 
checks to uuid mapping\n        self.prioritized_checks_to_id_mapping = {}\n        self.map_entry_function_to_check_name = {}\n        self.map_check_name_to_connector = {}\n        self.check_name_to_id_mapping = {}\n\n        for k,v in self.checks_globals.items():\n            os.environ[k] = json.dumps(v)\n\n    def run(self, **kwargs):\n        if \"checks_list\" not in kwargs:\n            self.logger.error(\"ERROR: checks_list is a mandatory parameter to be sent, cannot run without the checks list\")\n            raise ValueError(\"Parameter check_list is not present in the argument, please call run with the check_list=[list_of_checks]\")\n        checks_list = kwargs.get('checks_list')\n        if len(checks_list) == 0:\n            self.logger.error(\"ERROR: Checks list is empty, Cannot run anything\")\n            self.logger.info(\"ERROR: There are no checks found that match! Please check if the connector is active.\")\n            sys.exit(0) \n\n        checks_list = self.create_checks_for_matrix_argument(checks_list)\n        checks_list = self.insert_task_lines(checks_list=checks_list)\n        if not self._create_jit_script(checks_list=checks_list):\n            self.logger.error(\"ERROR: Cannot create JIT script to run the checks, please look at logs\")\n            raise ValueError(\"Unable to create JIT script to run the checks\")\n        outputs = None\n        try:\n            if \"/tmp\" not in sys.path:\n                sys.path.append(\"/tmp/\")\n            from jit_script import do_run_\n            temp_output = do_run_(self.logger, self.script_to_check_mapping)\n            output_list = []\n            # Combine all parts of all_outputs in template_script.j2 do_run function into a single string\n            combined_output = ''.join(temp_output)\n\n            # Correct the formatting to ensure it's proper JSON\n            formatted_output = combined_output.replace('}\\n{', '},\\n{')\n            if not 
formatted_output.endswith('\\n'):\n                formatted_output += '\\n'\n\n            # Strip trailing comma and newline, then wrap in array brackets\n            formatted_output = formatted_output.rstrip(',\\n')\n            json_output = f\"[{formatted_output}]\"\n\n            try:\n                # Parse the JSON array into a list of dictionaries\n                data = json.loads(json_output)\n            except json.JSONDecodeError as e:\n                # Handle the case where the JSON could not be decoded\n                self.logger.error(f\"Failed to decode JSON: {e}\")\n                raise ValueError(\"Invalid JSON format of output\") from e\n            for d in data:\n                # Assign appropriate check names\n                d['name'] = self.check_names[self.check_uuids.index(d.get('id'))]\n                d['check_entry_function'] = self.check_entry_functions[self.check_uuids.index(d.get('id'))]\n                output_list.append(d)\n            outputs = output_list\n        except Exception as e:\n            self.logger.error(e)\n            self._error(str(e))\n        finally:\n            self._common.update_exec_id()\n            output_file = os.path.join(self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY'), \n                                       self.uglobals.get('exec_id')) + '_output.txt'\n            if not outputs:\n                self.logger.error(\"Output is None from check's output\")\n                self._error('OUTPUT IS EMPTY FROM CHECKS RUN!')\n                sys.exit(0)\n            with open(output_file, 'w', encoding='utf-8') as f:\n                f.write(json.dumps(outputs))\n            if len(outputs) == 0:\n                self.logger.error(f\"Output from checks execution is empty, pls check {self.temp_jit_file}\")\n                self._error(f\" Output from checks execution is empty, pls check {self.temp_jit_file}\")\n                return\n\n        
self.display_check_result(checks_output=outputs)\n        self.uglobals['status_of_run'] = self.status_list_of_dict\n\n        return self.status_list_of_dict\n\n    def parse_failed_objects(self, failed_object):\n        retVal = \"N/A\"\n        for line in failed_object:\n            if not line:\n                continue\n            if \"forbidden\" in line:\n                retVal = \"Forbidden \"\n            if \"permission\" in line:\n                retVal = \"Access Denied\"\n            if \"not reachable\" in line:\n                retVal = \"Network error\"\n        return retVal\n\n\n    def display_check_result(self, checks_output):\n        if not checks_output:\n            self.logger.error(\"Check's Output is None!\")\n            self._error(\" Check's Output is None\")\n            return\n\n        result_table = [[\"Checks Name\", \"Result\", \"Failed Count\", \"Error\"]]\n        status_dict = {}\n        status_dict['runbook'] = os.path.join(UNSKRIPT_EXECUTION_DIR, self.uglobals.get('exec_id') + '_output.txt')\n        status_dict['result'] = []\n        checks_per_priority_per_result_list = {CHECK_PRIORITY_P0: {'PASS':[], 'FAIL':[], 'ERROR': []},\n                                               CHECK_PRIORITY_P1: {'PASS':[], 'FAIL':[], 'ERROR': []},\n                                               CHECK_PRIORITY_P2: {'PASS':[], 'FAIL':[], 'ERROR': []}}\n        if self.uglobals.get('skipped'):\n            for check_name,connector in self.uglobals.get('skipped'):\n                result_table.append([\n                    check_name,\n                    self.TBL_CELL_CONTENT_SKIPPED,\n                    \"N/A\",\n                    \"Credential Incomplete\"\n                ])\n                if self.checks_priority is None:\n                    priority = CHECK_PRIORITY_P2\n                else:\n                    priority = self.checks_priority.get(check_name, CHECK_PRIORITY_P2)\n                
checks_per_priority_per_result_list[priority]['ERROR'].append([\n                    check_name,\n                    \"\",\n                    connector\n                    ])\n        idx = 0\n        ids = self.check_uuids\n        failed_result_available = False\n        failed_result = {}\n        checks_output = self.output_after_merging_checks(checks_output, self.check_uuids)\n        self.uglobals.create_property('CHECKS_OUTPUT')\n        self.uglobals['CHECKS_OUTPUT'] = checks_output\n        self.logger.debug(\"Creating checks output JSON to upload to S3\")\n        # print(\"Uploading failed objects to S3...\")\n        # uploader = S3Uploader()\n        # uploader.rename_and_upload_failed_objects(checks_output)\n        now = datetime.now()\n        rfc3339_timestamp = now.isoformat() + 'Z'\n        parent_folder = '/tmp'\n        if self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY'):\n            parent_folder = self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY')\n        dashboard_checks_output_file = f\"dashboard_{rfc3339_timestamp}.json\"\n        dashboard_checks_output_file_path = os.path.join(parent_folder, dashboard_checks_output_file)\n        try:\n            # Convert checks_output to JSON format\n            checks_output_json = json.dumps(checks_output, indent=2)\n        except json.JSONDecodeError:\n            self.logger.debug(f\"Failed to decode JSON response for {self.customer_name}\")\n            return\n\n        # Write checks output JSON to a separate file\n        try:\n            if checks_output_json:\n                self.logger.debug(f\"Writing JSON data to dashboard json file\")\n                with open(dashboard_checks_output_file_path, 'w') as json_file:\n                    json_file.write(checks_output_json)\n        except IOError as e:\n            self.logger.debug(f\"Failed to write JSON data to {dashboard_checks_output_file_path}: {e}\")\n            return\n\n        for result in checks_output:\n         
   if result.get('skip') and result.get('skip') is True:\n                idx += 1\n                continue\n            payload = result\n            try:\n                _action_uuid = payload.get('id')\n                if self.checks_priority is None:\n                    priority = CHECK_PRIORITY_P2\n                else:\n                    # priority = self.checks_priority.get(self.check_entry_functions[idx], CHECK_PRIORITY_P2)\n                    priority = self.checks_priority.get(self.check_name_to_id_mapping.get(_action_uuid), CHECK_PRIORITY_P2)\n\n                if _action_uuid:\n                    #c_name = self.connector_types[idx] + ':' + self.prioritized_checks_to_id_mapping[_action_uuid]\n                    p_check_name = self.prioritized_checks_to_id_mapping[_action_uuid]\n                else:\n                    #c_name = self.connector_types[idx] + ':' + self.check_names[idx]\n                    p_check_name = self.check_names[idx]\n                if p_check_name in self.check_entry_functions:\n                    p_check_name = self.map_entry_function_to_check_name.get(p_check_name)\n                if ids and CheckOutputStatus(payload.get('status')) == CheckOutputStatus.SUCCESS:\n                    result_table.append([\n                        p_check_name,\n                        self.TBL_CELL_CONTENT_PASS,\n                        0,\n                        'N/A'\n                        ])\n                    checks_per_priority_per_result_list[priority]['PASS'].append([\n                        p_check_name,\n                        ids[idx],\n                        # self.connector_types[idx]]\n                        self.map_check_name_to_connector[p_check_name]]\n                        )\n                elif ids and CheckOutputStatus(payload.get('status')) == CheckOutputStatus.FAILED:\n                    failed_objects = payload.get('objects')\n                    c_name = 
self.map_check_name_to_connector[p_check_name] + ':' + p_check_name\n                    failed_result[c_name] = failed_objects\n                    result_table.append([\n                        p_check_name,\n                        self.TBL_CELL_CONTENT_FAIL,\n                        len(failed_objects),\n                        self.parse_failed_objects(failed_object=failed_objects)\n                        ])\n                    failed_result_available = True\n                    checks_per_priority_per_result_list[priority]['FAIL'].append([\n                        p_check_name,\n                        ids[idx],\n                        # self.connector_types[idx]\n                        self.map_check_name_to_connector[p_check_name]\n                        ])\n                elif ids and CheckOutputStatus(payload.get('status')) == CheckOutputStatus.RUN_EXCEPTION:\n                    if payload.get('error') is not None:\n                        failed_objects = payload.get('error')\n                        if isinstance(failed_objects, str) is True:\n                            failed_objects = [failed_objects]\n                        c_name = self.map_check_name_to_connector[p_check_name] + ':' + p_check_name\n                        failed_result[c_name] = failed_objects\n                        failed_result_available = True\n                    error_msg = payload.get('error') if payload.get('error') else self.parse_failed_objects(failed_object=failed_objects)\n                    result_table.append([\n                        p_check_name,\n                        self.TBL_CELL_CONTENT_ERROR,\n                        0,\n                        pprint.pformat(error_msg, width=30)\n                        ])\n                    checks_per_priority_per_result_list[priority]['ERROR'].append([\n                        # self.check_names[idx],\n                        p_check_name,\n                        ids[idx],\n                        # 
self.connector_types[idx]\n                        self.map_check_name_to_connector[p_check_name]\n                        ])\n            except Exception as e:\n                self.logger.error(e)\n                pass\n            idx += 1\n\n        status_dict['result'] = checks_per_priority_per_result_list\n        print(\"\")\n        print(tabulate(result_table, headers='firstrow', tablefmt='fancy_grid'))\n\n        if failed_result_available is True:\n            self.uglobals['failed_result'] = {'result': []}\n            for k,v in failed_result.items():\n                d = {}\n                if not v:\n                    continue\n                d[k] = {'failed_object': v}\n                self.uglobals['failed_result']['result'].append(d)\n\n        print(\"\")\n        self.status_list_of_dict.append(status_dict)\n        for k,v in failed_result.items():\n            check_name = '\\x1B[1;4m' + k + '\\x1B[0m'\n            print(check_name)\n            self._error(\"Failed Objects:\")\n            print(yaml.safe_dump(v))\n            print('\\x1B[1;4m', '\\x1B[0m')\n        return\n\n    def output_after_merging_checks(self, outputs: list, ids: list) -> list:\n        \"\"\"output_after_merging_checks: this function combines the output from duplicated\n        checks and stores the combined output.\n        TBD: What if one duplicated check returns an ERROR\n        Status:\n            1 : PASS\n            2 : FAIL\n            3 : ERROR\n        \"\"\"\n        result_dict = {}\n\n        for output in outputs:\n            if not output:\n                continue\n\n            check_id = output.get('id')\n            current_output = result_dict.get(check_id)\n\n            if current_output is None:\n                # If no entry exists, directly use this output\n                result_dict[check_id] = output\n            else:\n                # If an entry exists, merge this output with the existing one\n                if 
current_output['status'] < output['status']:\n                    # If the new status is more severe, overwrite the old status\n                    current_output['status'] = output['status']\n                    current_output['objects'] = output.get('objects', [])\n\n                if output['status'] == 2 and output.get('objects'):\n                    # Append objects if status is FAILED and objects are non-empty\n                    if 'objects' not in current_output or not isinstance(current_output['objects'], list):\n                        current_output['objects'] = []\n                    current_output['objects'].extend(output.get('objects', []))\n\n                # Update error message if there's a new one and it's non-empty\n                if 'error' in output and output['error']:\n                    current_output['error'] = output['error']\n\n        return list(result_dict.values())\n\n    def calculate_combined_check_status(self, outputs:list):\n        combined_output = {}\n        status = CheckOutputStatus.SUCCESS\n        failed_objects = []\n        error = None\n        for output in outputs:\n            if CheckOutputStatus(output.get('status')) == CheckOutputStatus.FAILED:\n                status = CheckOutputStatus.FAILED\n                failed_objects.append(output.get('objects'))\n            elif CheckOutputStatus(output.get('status')) == CheckOutputStatus.RUN_EXCEPTION:\n                status = CheckOutputStatus.RUN_EXCEPTION\n                error = output.get('error')\n\n        combined_output['status'] = status\n        combined_output['objects'] = failed_objects\n        combined_output['error'] = error\n        return combined_output\n\n    def _create_jit_script(self, checks_list: list = None):\n        if not checks_list:\n            self.logger.error(\"Checks List Cannot be empty. 
Please verify the checks_list is valid\")\n            return False\n\n        execution_timeout = self._config._get('global').get('execution_timeout', 60)\n        exec_timeout = execution_timeout\n        per_check_timeout = {}\n        g = self._config._get('checks')\n        if g and g.get('execution_timeout'):\n            if isinstance(g.get('execution_timeout'), dict):\n               per_check_timeout = g.get('execution_timeout')\n        \n        with open(self.temp_jit_file, 'w', encoding='utf-8') as f:\n            f.write(self.get_first_cell_content(checks_list))\n            f.write('\\n\\n')\n            f.write(self.get_timeout_decorator_function(execution_timeout=execution_timeout))\n            f.write('\\n\\n')\n            for idx,c in enumerate(checks_list[:]):\n                _entry_func = c.get('metadata', {}).get('action_entry_function', '')\n                _action_uuid = c.get('metadata', {}).get('action_uuid', '')\n                _check_name = c.get('metadata', {}).get('action_title', '')\n                idx += 1\n                self.script_to_check_mapping[f\"check_{idx}\"] =  _entry_func\n                self.prioritized_checks_to_id_mapping[str(_action_uuid)] = _entry_func\n                self.map_entry_function_to_check_name[_entry_func] = _check_name\n\n                exec_timeout = per_check_timeout.get(_entry_func, execution_timeout)\n                f.write(f\"@timeout(seconds={exec_timeout}, error_message=\\\"Check check_{idx} timed out\\\")\\n\")\n                check_name = f\"def check_{idx}():\"\n                f.write(check_name + '\\n')\n                f.write('    global w' + '\\n')\n                for line in c.get('code'):\n                    line = line.replace('\\n', '')\n                    for l in line.split('\\n'):\n                        l = l.replace('\\n', '')\n                        if l.startswith(\"from __future__\"):\n                            continue\n                        if 
'task.execute' in l:\n                            f.write('        output =' + l.replace('\\n', '') + '\\n')\n                        else:\n                            f.write('    ' + l.replace('\\n', '') + '\\n')\n                f.write('        return output \\n')\n            f.write('\\n')\n            # Lets create the last cell content\n            f.write('def last_cell():' + '\\n')\n            last_cell_content = self.get_last_cell_content()\n            for line in last_cell_content.split('\\n'):\n                f.write('    ' + line + '\\n')\n            f.write('\\n')\n\n            post_check_content = self.get_after_check_content(len(checks_list), exec_timeout=exec_timeout)\n            f.write(post_check_content + '\\n')\n\n        if os.path.exists(self.temp_jit_file) is True:\n            return True\n\n        return False\n\n\n    def get_first_cell_content(self, list_of_checks: list):\n        if len(list_of_checks) == 0:\n            return None\n        self.check_uuids, self.check_names, self.connector_types, self.check_entry_functions = self._common.get_code_cell_name_and_uuid(list_of_actions=list_of_checks)\n        self.map_check_name_to_connector = dict(zip(self.check_names, self.connector_types))\n        first_cell_content = self._common.get_first_cell_content()\n\n        if self.checks_globals and len(self.checks_globals):\n            for k,v in self.checks_globals.items():\n                if k == 'matrix':\n                    continue\n                if isinstance(v,str) is True:\n                    first_cell_content += f'{k} = \\\"{v}\\\"' + '\\n'\n                else:\n                    first_cell_content += f'{k} = {v}' + '\\n'\n        if self.matrix:\n            for k,v in self.matrix.items():\n                if v:\n                    for index, value in enumerate(v):\n                        first_cell_content += f'{k}{index} = \\\"{value}\\\"' + '\\n'\n        first_cell_content += f'''w = Workflow(env, 
secret_store_cfg, None, global_vars=globals(), check_uuids={self.check_uuids})''' + '\\n'\n        # temp_map = {key: value for key, value in zip(self.check_entry_functions, self.check_uuids)}\n        # temp_map = dict(zip(self.check_entry_functions, self.check_uuids))\n        temp_map = {}\n        for index,value in enumerate(self.check_uuids):\n            temp_map[self.check_entry_functions[index]] = value\n            self.check_name_to_id_mapping[value] = self.check_entry_functions[index]\n\n        first_cell_content += f'''w.check_uuid_entry_function_map = {temp_map}''' + '\\n'\n        first_cell_content += '''w.errored_checks = {}''' + '\\n'\n        first_cell_content += '''w.timeout_checks = {}''' + '\\n'\n\n        return first_cell_content\n\n    def get_timeout_decorator_function(self, execution_timeout):\n        with open(os.path.join(os.path.dirname(__file__), 'templates/timeout_handler.j2'), 'r') as f:\n            content_template = f.read()\n\n        template = Template(content_template)\n        return  template.render(execution_timeout=execution_timeout)\n\n\n    def get_last_cell_content(self):\n        with open(os.path.join(os.path.dirname(__file__), 'templates/last_cell_content.j2'), 'r') as f:\n            content_template = f.read()\n\n        template = Template(content_template)\n        return  template.render()\n\n\n    def get_after_check_content(self, len_of_checks, exec_timeout=60):\n        with open(os.path.join(os.path.dirname(__file__), 'templates/template_script.j2'), 'r') as f:\n            content_template = f.read()\n\n        template = Template(content_template)\n        return  template.render(num_checks=len_of_checks,\n                                execution_timeout=exec_timeout)\n\n\n    def insert_task_lines(self, checks_list: list):\n        if checks_list and len(checks_list):\n            return self._common.insert_task_lines(list_of_actions=checks_list)\n\n    def create_checks_for_matrix_argument(self, 
checks: list):\n        checks_list = []\n        if self.checks_globals and len(self.checks_globals):\n            checks_list = self._common.create_checks_for_matrix_argument(actions=checks, matrix=self.matrix)\n\n        return checks_list\n\n\n# This class implements Script interface for ScriptsFactory.\nclass Script(ScriptsFactory):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.logger.debug(\"Initialized Script Class\")\n        self.uglobals = UnskriptGlobals()\n\n    def run(self, **kwargs):\n        if 'script' not in kwargs:\n            self.logger.error(\"ERROR: script is a mandatory parameter to be sent, cannot run without the scripts list\")\n            raise ValueError(\"Parameter script is not present in the argument, please call run with the scripts_list=[scripts]\")\n        script = kwargs.get('script')\n        output_file = kwargs.get('output_file', UNSKRIPT_SCRIPT_RUN_OUTPUT_FILE_NAME)\n        if not self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY'):\n            output_dir = create_execution_run_directory()\n        else:\n            output_dir = self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY')\n        output_file_txt = os.path.join(output_dir, output_file + \".txt\")\n        output_file_json = os.path.join(output_dir, output_file + \".json\")\n        execution_timeout = self._config._get('global').get('execution_timeout', 60)\n        current_env = os.environ.copy()\n        current_env[UNSKRIPT_SCRIPT_RUN_OUTPUT_DIR_ENV] = output_dir\n        if isinstance(script, list) is False:\n            script = [script]\n        script_to_print = ' '.join(script)\n        self._banner(f\"Execution script {script_to_print}\")\n        self._banner(f\"OUTPUT FILE {output_file_txt}\")\n        st = time.time()\n        status = \"SUCCESS\"\n        error = None\n        try:\n            with open(output_file_txt, \"w\") as f:\n                subprocess.run(script,\n                               
check=True,\n                               env=current_env,\n                               shell=True,\n                               stdout=f,\n                               stderr=f,\n                               timeout=execution_timeout)\n        except subprocess.TimeoutExpired:\n            self._error(f'{\" \".join(script)} Timed out')\n            error = \"Script Execution Timeout\"\n            status = \"TIMEOUT\"\n        except subprocess.CalledProcessError as e:\n            self._error(f'{\" \".join(script)} error, {e}')\n            error = str(e)\n            status = \"FAIL\"\n        except Exception as e:\n            self._error(f'{\" \".join(script)} failed, {e}')\n            error = str(e)\n            status = \"FAIL\"\n\n        et = time.time()\n        elapsed_time = et - st\n\n        json_output = {}\n        json_output['status'] = status\n        json_output['time_taken'] = f'{elapsed_time:.2f}'\n        json_output['error'] = error\n        json_output['output_file'] = output_file_txt\n        json_output['compress'] = True\n\n        try:\n            with open(output_file_json, 'w') as f:\n                json.dump(json_output, fp=f)\n        except Exception as e:\n            self._error(str(e))\n            sys.exit(0)\n\n\nclass CommonAction(ChecksFactory):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n\n    def get_code_cell_name_and_uuid(self, list_of_actions: list):\n        action_uuids, action_names, connector_types, action_entry_functions = [], [], [], []\n        if len(list_of_actions) == 0:\n            self.logger.error(\"List of actions is empty!\")\n            return action_uuids, action_names, connector_types, action_entry_functions\n\n        for action in list_of_actions:\n            metadata = action.get('metadata')\n            action_uuid = action.get('uuid')\n            if metadata:\n                action_name = metadata.get('name')\n                action_entry_function = 
metadata.get('action_entry_function')\n                connector_type = metadata.get('action_type').replace('LEGO_TYPE_', '').lower()\n\n                if action_uuid:\n                    action_uuids.append(action_uuid)\n                if action_name:\n                    action_names.append(action_name)\n                if action_entry_function:\n                    action_entry_functions.append(action_entry_function)\n                if connector_type:\n                    connector_types.append(connector_type)\n\n        self.logger.debug(f\"Returning {len(action_uuids)} UUIDs and {len(action_names)} names\")\n        return action_uuids, action_names, connector_types, action_entry_functions\n\n    def update_exec_id(self):\n        self.uglobals = UnskriptGlobals()\n        if not self.uglobals.get('exec_id'):\n            self.uglobals['exec_id'] = str(uuid.uuid4())\n\n\n    def get_first_cell_content(self):\n        runbook_params = {}\n        if os.environ.get('ACA_RUNBOOK_PARAMS') is not None:\n            runbook_params = json.loads(os.environ.get('ACA_RUNBOOK_PARAMS'))\n        runbook_variables = ''\n        if runbook_params:\n            for k, v in runbook_params.items():\n                runbook_variables = runbook_variables + \\\n                    f\"{k} = nbParamsObj.get('{k}')\" + '\\n'\n\n        with open(os.path.join(os.path.dirname(__file__), 'templates/first_cell_content.j2'), 'r') as f:\n            first_cell_content_template = f.read()\n\n        template = Template(first_cell_content_template)\n        first_cell_content = template.render(runbook_params=runbook_params,\n                                             runbook_variables=runbook_variables)\n        return first_cell_content\n\n    def create_checks_for_matrix_argument(self, actions: list, matrix: dict):\n        \"\"\"create_checks_for_matrix_argument: This function generates the inputJson line of\n        code for a check. 
It handles the matrix case wherein you need to use the\n        appropriate variable name for argument assignment.\n        \"\"\"\n        self.matrix = matrix\n        action_list = []\n        for action in actions:\n            input_schema = action.get('inputschema')\n            if input_schema is None:\n                action_list.append(action)\n                continue\n            add_check_to_list = True\n\n            input_json_line = ''\n            try:\n                schema = input_schema[0]\n                if schema.get('properties'):\n                    for key in schema.get('properties').keys():\n                        # Check if the property is a matrix argument.\n                        # If that's the case, replicate the check once for each\n                        # entry in that argument.\n                        duplicate_count = 1\n                        if self.matrix:\n                            matrix_value = self.matrix.get(key)\n                            if matrix_value is not None:\n                                duplicate_count += len(matrix_value)\n                                # Duplicate this check len(matrix_argument) times.\n                                # Also, for each check, you need to use a different\n                                # argument, so store that in a field named\n                                # matrixinputline.\n                                # The UUID mapping needs to be initialized before assigning it a value!\n                                if not isinstance(self.uglobals.get('uuid_mapping'), dict):\n                                    self.uglobals['uuid_mapping'] = {}\n                                is_first = True\n                                for dup in range(duplicate_count-1):\n                                    add_check_to_list = False\n                                    input_json_line = ''\n                                    input_json_line += f\"\\\"{key}\\\":  
\\\"{matrix_value[dup]}\\\" ,\"\n                                    newcheck = action.copy()\n                                    if is_first is False:\n                                        # Maintain the uuid mapping that this uuid is the same as\n                                        # as the one its copied from.\n                                        new_uuid = str(uuid.uuid4())\n                                        self.uglobals[\"uuid_mapping\"][new_uuid] = action[\"uuid\"]\n                                        # newcheck['uuid'] = new_uuid\n                                        newcheck['uuid'] = action[\"uuid\"]\n                                        newcheck['id'] = str(action[\"uuid\"])\n                                        #print(f'Adding duplicate check {new_uuid}, parent_uuid {check.get(\"uuid\")}')\n                                    newcheck['matrixinputline'] = input_json_line.rstrip(',')\n                                    action_list.append(newcheck)\n                                    is_first = False\n            except Exception as e:\n                self.logger.error(f\"EXCEPTION {e}\")\n                self._error(str(e))\n                pass\n            if add_check_to_list:\n                    action_list.append(action)\n\n        return action_list\n\n    def insert_task_lines(self, list_of_actions: list):\n        self.update_credentials_to_uglobal()\n\n        for action in list_of_actions:\n            s_connector = action.get('metadata').get('action_type')\n            s_connector = s_connector.replace('LEGO', 'CONNECTOR')\n            cred_name, cred_id = None, None\n            for k,v in self.uglobals.get('default_credentials').items():\n                if k == s_connector:\n                    cred_name, cred_id = v.get('name'), v.get('id')\n                    break\n            if cred_name is None or cred_id is None:\n                if self.uglobals.get('skipped') is None:\n                    
self.uglobals['skipped'] = []\n                _t = [action.get('name'), s_connector]\n                if _t not in self.uglobals.get('skipped'):\n                    self.uglobals['skipped'].append(_t)\n                    continue\n            task_lines = '''\ntask.configure(printOutput=True)\ntask.configure(credentialsJson=\\'\\'\\'{\n        \\\"credential_name\\\":''' + f\" \\\"{cred_name}\\\"\" + ''',\n        \\\"credential_type\\\":''' + f\" \\\"{s_connector}\\\"\" + '''}\\'\\'\\')\n'''\n            input_json = self.replace_input_with_globals(action)\n            if input_json:\n                task_lines += input_json\n\n            try:\n                c = action.get('code')\n                idx = c.index(\"task = Task(Workflow())\")\n                if c[idx+1].startswith(\"task.configure(credentialsJson\"):\n                    # With credential caching now packged in, we need to\n                    # Skip the credential line and let the normal credential\n                    # logic work.\n                    c = c[:idx+1] + task_lines.split('\\n') + c[idx+2:]\n                else:\n                    c = c[:idx+1] + task_lines.split('\\n') + c[idx+1:]\n                action['code'] = []\n                for line in c[:]:\n                    action['code'].append(str(line + \"\\n\"))\n\n                action['metadata']['action_uuid'] = action['uuid']\n                action['metadata']['name'] = action['name']\n\n            except Exception as e:\n                self.logger.error(f\"Unable to insert Task lines {e}\")\n                self._error(str(e))\n                sys.exit(0)\n\n        return list_of_actions\n\n\n    def replace_input_with_globals(self, action: dict):\n        inputSchema = action.get('inputschema')\n        retval = None\n        if not inputSchema:\n            return retval\n        input_json_start_line = '''\ntask.configure(inputParamsJson=\\'\\'\\'{\n        '''\n        input_json_end_line = '''}\\'\\'\\')\n   
     '''\n        input_json_line = ''\n        try:\n            schema = inputSchema[0]\n            if schema.get('properties'):\n                for key in schema.get('properties').keys():\n                    if self.uglobals.get('global') and key in self.uglobals.get('global').keys():\n                        value = self.uglobals.get('global').get(key)\n                        if value:\n                            # Value in the YAML file could be a list. Which means we just cannot \n                            # use the value as is, need to convert it to json decodable/compatible string.\n                            if isinstance(value, list):\n                                value = json.dumps(value)\n                                value = value.replace('\"', '\\\\\\\\\"')\n                            input_json_line += f\"\\\"{key}\\\":  \\\"{value}\\\" ,\"\n                        else:\n                            input_json_line += f\"\\\"{key}\\\":  \\\"{key}\\\" ,\"\n        except Exception as e:\n            self.logger.error(str(e))\n            self._error(str(e))\n        # Handle Matrix argument\n        matrix_argument_line = action.get('matrixinputline')\n        if matrix_argument_line:\n            input_json_line += matrix_argument_line\n        retval = input_json_start_line + input_json_line.rstrip(',') + '\\n' + input_json_end_line\n\n        return retval\n\n\n\n# Implements Info class that is a wrapper to run all info gathering function\nclass InfoAction(ChecksFactory):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.logger.debug(\"Initialized InfoAction class\")\n        if self._config.get_info_action_params():\n            self.info_globals = self._config.get_info_action_params().get('global')\n            self.matrix = self.info_globals.get('matrix')\n        else:\n            self.info_globals = None\n            self.matrix = None\n        self.temp_jit_dir = '/tmp/jit'\n        
self.temp_jit_base_name = 'jit_info_script'\n        self._common = CommonAction()\n        self.uglobals = UnskriptGlobals()\n        self.uglobals['global'] = self.info_globals\n        self.jit_mapping = {}\n\n        if self.info_globals:\n            for k,v in self.info_globals.items():\n                os.environ[k] =  json.dumps(v)\n\n    def run(self, **kwargs):\n        if \"action_list\" not in kwargs:\n            self.logger.error(\"ERROR: action_list is a mandatory parameter to be sent, cannot run without the action_list\")\n            raise ValueError(\"Parameter action_list is not present in the argument, please call run with the action_list=[list_of_action]\")\n        action_list = kwargs.get('action_list')\n        if len(action_list) == 0:\n            self.logger.error(\"ERROR: Action list is empty, Cannot run anything\")\n            raise ValueError(\"Action List is empty!\")\n\n        self.action_uuid, self.check_names, self.connector_types, self.check_entry_functions = \\\n                self._common.get_code_cell_name_and_uuid(list_of_actions=action_list)\n\n        action_list = self.create_checks_for_matrix_argument(action_list)\n        action_list = self.insert_task_lines(list_of_actions=action_list)\n\n        self.uglobals['info_action_results'] = {}\n        if not self._create_jit_script(action_list=action_list):\n            self.logger.error(\"Cannot create JIT scripts to run the checks, please look at logs\")\n            raise ValueError(\"Unable to create JIT script to run the checks\")\n\n        execution_timeout = self._config._get('global').get('execution_timeout', 60)\n        # Internal routine to run through all python JIT script and return the output\n        def _execute_script(script, idx):\n            script = script.strip()\n            # Lets get the result_key from the jit_mapping. Why? 
because\n            # action_entry_function list will fail in case of matrix argument\n            result_key = self.jit_mapping.get(script)\n            self.logger.debug(f\"Starting to Run {script} for {result_key}\")\n            if not self.uglobals['info_action_results'].get(result_key):\n                self.uglobals['info_action_results'][result_key] = ''\n\n            try:\n                # TODO: We should consider adding Timeout to subprocess.run.\n                result = subprocess.run(['python', script], \n                                        capture_output=True, \n                                        check=True, \n                                        text=True,\n                                        timeout=execution_timeout)\n                self.logger.debug(result.stdout)\n            except subprocess.TimeoutExpired as e:\n                self.logger.error(f\"Timeout occurred while executing {script}: {str(e)}\")\n                self.uglobals['info_action_results'][result_key] += '\\n' + 'ACTION TIMEOUT'\n                return None\n            except subprocess.CalledProcessError as e:\n                self.logger.error(f\"Error executing {script}: {str(e)}\")\n                raise ValueError(e)\n\n            self.uglobals['info_action_results'][result_key] += '\\n' + result.stdout\n            self.logger.debug(f\"Completed running {script}\")\n            return result.stdout\n\n        script_files = [f for f in os.listdir(self.temp_jit_dir) if f.endswith('.py')]\n        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor, tqdm(total=len(script_files), desc=\"Running\") as pbar:\n            futures = {executor.submit(_execute_script, os.path.join(self.temp_jit_dir, script), idx): idx for idx, script in enumerate(script_files)}\n            # Wait for all scripts to complete\n            for future in concurrent.futures.as_completed(futures):\n                pbar.update(1)\n                try:\n            
        _ = future.result()\n                    # This means the script run was complete\n                except Exception as e:\n                    # Error case\n                    self.logger.error(f\"Exception Caught while executing info legos. Please see the unskript_ctl.log for more details. {str(e)}\") \n\n        # Lets remove the directory if it exists\n        try:\n            shutil.rmtree(self.temp_jit_dir)\n        except OSError as e:\n            self.logging.error(str(e))\n\n        self.display_action_result()\n\n\n    def _create_jit_script(self, action_list: list = None):\n        if not action_list:\n            self.logger.error(\"Action cannot be Empty. Nothing to create!\")\n            return False\n\n        try:\n            shutil.rmtree(self.temp_jit_dir)\n        except:\n            pass\n        os.makedirs(self.temp_jit_dir, exist_ok=True)\n        first_cell_content = self.get_first_cell_content()\n\n        for index, action in enumerate(action_list):\n            jit_file = os.path.join(self.temp_jit_dir, self.temp_jit_base_name + str(index) + '.py')\n            # Lets create a mapping of which action_entry_function maps to which script file\n            _name = action.get('metadata', {}).get('action_entry_function', '')\n            _connector = action.get('metadata', {}).get('action_type', '').replace('LEGO_TYPE_', '').lower()\n            self.jit_mapping[jit_file] = _connector + '/' + _name\n            with open(jit_file, 'w') as f:\n                f.write(first_cell_content)\n                f.write('\\n\\n')\n                f.write('def action():' + '\\n')\n                f.write('    global w' + '\\n')\n                for lines in action.get('code'):\n                    lines = lines.rstrip().split('\\n')\n                    for line in lines:\n                        line = line.replace('\\n', '')\n                        if line.startswith(\"from __future__\"):\n                            continue\n            
            f.write('    ' + line.rstrip() + '\\n')\n                f.write('\\n')\n                # Now the Main section\n                f.write(self.get_main_section_of_info_lego())\n                f.write('\\n')\n                \n\n        if os.path.exists(self.temp_jit_dir) is True:\n            return True\n\n        return False\n\n    def display_action_result(self):\n        if self.uglobals.get('info_action_results'):\n            for k,v in self.uglobals.get('info_action_results').items():\n                self._banner('')\n                print(bcolors.UNDERLINE + bcolors.HIGHLIGHT + k + bcolors.ARG_END + bcolors.ENDC)\n                print('\\n')\n                print(v)\n                print('###')\n        else:\n            self.logger.info(\"Information gathering actions: No Results to display\")\n\n\n    def get_first_cell_content(self):\n        first_cell_content = self._common.get_first_cell_content()\n\n        if self.info_globals and len(self.info_globals):\n            for k,v in self.info_globals.items():\n                if k == 'matrix':\n                    continue\n                if isinstance(v, str) is True:\n                    first_cell_content += f'{k} = \\\"{v}\\\"' + '\\n'\n                else:\n                    first_cell_content += f'{k} = {v}' + '\\n'\n\n        if self.matrix:\n            for k,v in self.matrix.items():\n                if v:\n                    for index, value in enumerate(v):\n                        first_cell_content += f'{k}{index} = \\\"{value}\\\"' + '\\n'\n\n        first_cell_content += '''w = Workflow(env, secret_store_cfg, None, global_vars=globals(), check_uuids=None)'''\n        return first_cell_content\n\n    def get_timeout_decorator_function(self, execution_timeout):\n        with open(os.path.join(os.path.dirname(__file__), 'templates/timeout_handler.j2'), 'r') as f:\n            content_template = f.read()\n\n        template = Template(content_template)\n        return  
template.render(execution_timeout=execution_timeout)\n\n    def get_main_section_of_info_lego(self):\n        with open(os.path.join(os.path.dirname(__file__), 'templates/template_info_lego.j2'), 'r') as f:\n            content_template = f.read()\n\n        template = Template(content_template)\n        return template.render()\n    \n\n    def insert_task_lines(self, list_of_actions: list):\n        if list_of_actions and len(list_of_actions):\n            return self._common.insert_task_lines(list_of_actions=list_of_actions)\n\n    def create_checks_for_matrix_argument(self, list_of_actions: list):\n        action_list = list_of_actions\n        if self.info_globals and len(self.info_globals):\n            action_list = self._common.create_checks_for_matrix_argument(actions=list_of_actions,\n                                                                         matrix=self.matrix)\n        return action_list\n"
  },
  {
    "path": "unskript-ctl/unskript_ctl_upload_session_logs.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport os\nimport shutil\nimport tarfile\nimport logging\nfrom datetime import datetime\nimport requests\nimport json\n\nTEMP_FOLDER = '/var/unskript/sessions/temp'\nCOMPLETED_LOGS_FOLDER = '/var/unskript/sessions/completed-logs'\nTAR_FILE_PATH = '/var/unskript/sessions/session_logs.tgz'\nRTS_HOST = 'http://10.8.0.1:6443'\nURL_PATH = '/v1alpha1/sessions/logs'\nLOG_FILE_PATH = '/var/log/unskript/upload_script.log'\n\nsession_end_times = {}\n\n# Set logging config\nlogger = logging.getLogger(__name__)\nlogger.setLevel(logging.DEBUG)\nformatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')\nfile_handler = logging.FileHandler(LOG_FILE_PATH)\nfile_handler.setLevel(logging.DEBUG)\nfile_handler.setFormatter(formatter)\nlogger.addHandler(file_handler)\n\n\n\ndef upload_session_logs():    \n    if not os.path.exists(COMPLETED_LOGS_FOLDER):\n        os.makedirs(COMPLETED_LOGS_FOLDER)\n\n    # If completed-logs folder is not empty, the get timestamps from the files in it\n    # (To handle case where uploads may fail, hence logs files will be retained in the completed-logs folder)\n    if any(os.scandir(COMPLETED_LOGS_FOLDER)):\n        for filename in os.listdir(COMPLETED_LOGS_FOLDER):\n            # Get the end time from the file\n            try:\n                get_session_timestamp(filename)\n            except Exception as e:\n                logger.error(\"get session timestamp: %s\", str(e))\n                return\n            \n    # If temp folder is not empty, then move the files from temp to completed-logs\n    if any(os.scandir(TEMP_FOLDER)):\n        for filename in os.listdir(TEMP_FOLDER):\n            if not filename.endswith(\".log\"):\n           
     continue\n            # Get the end time from the file\n            try:\n                get_session_timestamp(filename)\n            except Exception as e:\n                logger.error(\"get session timestamp: %s\", str(e))\n                return\n            source_path = os.path.join(TEMP_FOLDER, filename)\n            destination_path = os.path.join(COMPLETED_LOGS_FOLDER, filename)\n            try:\n                shutil.move(source_path, destination_path)\n            except Exception as e:\n                logger.error(\"File move error: %s\", str(e))\n                return\n    # Cancel upload if there are no files to upload\n    if len(session_end_times) == 0:\n        return\n    # Create a tar.gz archive\n    with tarfile.open(TAR_FILE_PATH, 'w:gz') as tar:\n        tar.add(COMPLETED_LOGS_FOLDER, arcname='completed-logs')\n\n    # Capture start time\n    start_time = datetime.now()\n    logger.info(f'Start Time: {start_time}')\n    # Upload to rts\n    try:\n        upload_logs_files(session_end_times)\n    except Exception as e:\n        logger.error(str(e))\n    \n    # Capture end time\n    end_time = datetime.now()\n    logger.info(f'End Time: {end_time}')\n\ndef upload_logs_files(session_end_times):\n    # Open the file in binary mode\n    try:\n        with open(TAR_FILE_PATH, 'rb') as file:\n            # Set up the files parameter with a tuple containing the filename and file object\n            files = {'file': (TAR_FILE_PATH, file)}\n            url = f'{RTS_HOST}{URL_PATH}'\n            # Make the POST request with the files parameter\n            try:\n                # payload contains session_ids and their corresponding session end times\n                payload = {'session_end_times': json.dumps(session_end_times)}\n                response = requests.post(url, files=files, data=payload)\n                # Check the response\n                if response.status_code == 204:\n                    logger.info(\"%d file(s) uploaded 
successfully\", len(session_end_times))\n                    # Remove the files from the completed-logs folder\n                    shutil.rmtree(COMPLETED_LOGS_FOLDER)\n                else:\n                    logger.error(\"Status Code: %s. Response: %s\", response.status_code, response.text)\n            except Exception as err:\n                logger.error(\"Error Occurred while uploading: %s\", str(err))\n    except FileNotFoundError:\n        logger.error(\"File not found. Tar file path: %s\",TAR_FILE_PATH)\n        return\n    \n    # Remove Tar file\n    os.remove(TAR_FILE_PATH)\n\ndef get_session_timestamp(filename):\n    split_file = filename.split('.')[0].split('-time-')\n    if len(split_file) != 2:\n        logger.error(\"timestamp or session id missing from file name: %s\",filename)\n        return\n    session_end_times[split_file[0]] = split_file[1]\n    \nif __name__ == \"__main__\":\n    upload_session_logs()"
  },
  {
    "path": "unskript-ctl/unskript_ctl_version.py",
    "content": "import os \n\n# Version\nVERSION = '1.2.0'\nif os.environ.get('VERSION'):\n    VERSION = os.environ.get('VERSION')\n\ndef get_version():\n    if 'BUILD_NUMBER' in globals():\n        return globals().get('BUILD_NUMBER')\n    else:\n        return VERSION \n\n# Author\nAUTHOR = 'unSkript Authors'\n\n# PSS DB Schema Version\nSCHEMA_VERSION = '1.0.0'\nif os.environ.get('SCHEMA_VERSION'):\n    SCHEMA_VERSION = os.environ.get('SCHEMA_VERSION')\n\n"
  },
  {
    "path": "unskript-ctl/unskript_db_schema.json",
    "content": "{\n    \"properties\": {\n        \"execution_id\": {\n            \"title\": \"Execution Id\",\n            \"type\": \"string\"\n        },\n        \"time_stamp\": {\n            \"title\": \"Time Stamp\",\n            \"type\": \"string\"\n        },\n        \"connector\": {\n            \"title\": \"Connector\",\n            \"type\": \"string\"\n        },\n        \"runbook\": {\n            \"title\": \"Runbook\",\n            \"type\": \"string\"\n        },\n        \"summary\": {\n            \"title\": \"Summary\",\n            \"type\": \"string\"\n        },\n        \"check_name\": {\n            \"title\": \"Check Name\",\n            \"type\": \"string\"\n        },\n        \"failed_objects\": {\n            \"items\": {\n                \"type\": \"string\"\n            },\n            \"title\": \"Failed Objects\",\n            \"type\": \"array\"\n        },\n        \"status\": {\n            \"title\": \"Status\",\n            \"type\": \"string\"\n        }\n    },\n    \"required\": [\n        \"execution_id\",\n        \"time_stamp\",\n        \"connector\",\n        \"runbook\",\n        \"summary\",\n        \"check_name\",\n        \"failed_objects\",\n        \"status\"\n    ],\n    \"title\": \"Schema\",\n    \"type\": \"object\"\n}"
  },
  {
    "path": "unskript-ctl/unskript_email_notify_check_schema.json",
    "content": "{\n    \"$defs\": {\n        \"FailedResult\": {\n            \"properties\": {\n                \"failed_object\": {\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"items\": {\n                            \"type\": \"string\"\n                        }\n                    },\n                    \"title\": \"Failed Object\",\n                    \"type\": \"array\"\n                }\n            },\n            \"required\": [\n                \"failed_object\"\n            ],\n            \"title\": \"FailedResult\",\n            \"type\": \"object\"\n        }\n    },\n    \"properties\": {\n        \"result\": {\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": {\n                    \"$ref\": \"#/$defs/FailedResult\"\n                }\n            }\n        }\n    },\n    \"required\": [\n        \"result\"\n    ],\n    \"title\": \"Notification\",\n    \"type\": \"object\"\n}"
  },
  {
    "path": "unskript-ctl/unskript_slack_notify_schema.json",
    "content": "{\n    \"$schema\": \"http://json-schema.org/draft-07/schema#\",\n    \"type\": \"array\",\n    \"items\": {\n      \"type\": \"object\",\n      \"properties\": {\n        \"result\": {\n          \"type\": \"array\",\n          \"items\": {\n            \"type\": \"array\",\n            \"items\": [\n              { \"type\": \"string\" },\n              { \"type\": \"string\" }\n            ]\n          }\n        },\n        \"runbook\": {\n          \"type\": \"string\",\n          \"format\": \"uri\"\n        }\n      },\n      \"required\": [\"result\"],\n      \"additionalProperties\": false\n    }\n  }\n  "
  },
  {
    "path": "unskript-ctl/unskript_upload_results_to_s3.py",
    "content": "import boto3\nfrom botocore.exceptions import NoCredentialsError, PartialCredentialsError, ClientError\nimport os\nfrom datetime import datetime\nimport json\nfrom unskript_ctl_factory import UctlLogger\nfrom unskript_utils import *\n\nlogger = UctlLogger('UnskriptDiagnostics')\n\nclass S3Uploader:\n    def __init__(self):\n        logger.debug(\"Initializing S3Uploader\")\n        aws_access_key_id = os.getenv('LIGHTBEAM_AWS_ACCESS_KEY_ID')\n        aws_secret_access_key = os.getenv('LIGHTBEAM_AWS_SECRET_ACCESS_KEY')\n        self.bucket_name = 'lightbeam-reports'\n        now = datetime.now()\n        rfc3339_timestamp = now.isoformat() + 'Z'\n        self.ts = rfc3339_timestamp\n        year = now.strftime(\"%Y\")\n        month = now.strftime(\"%m\")\n        day = now.strftime(\"%d\")\n        self.customer_name = os.getenv('CUSTOMER_NAME','UNKNOWN_CUSTOMER_NAME')\n        self.file_name = f\"dashboard_{rfc3339_timestamp}.json\"\n        self.folder_path = f\"{self.customer_name}/{year}/{month}/{day}/\"\n        self.file_path = f\"{self.folder_path}{self.file_name}\"\n        self.local_file_name = f\"/tmp/{self.file_name}\"\n\n        if not aws_access_key_id or not aws_secret_access_key:\n            logger.debug(\"AWS credentials are not set in environment variables\")\n            return\n\n        self.uglobals = UnskriptGlobals()\n\n        try:\n            self.s3_client = boto3.client('s3',\n                                          aws_access_key_id=aws_access_key_id,\n                                          aws_secret_access_key=aws_secret_access_key,\n                                          )\n            self.s3_client.list_buckets()                       \n            logger.debug(\"AWS credentials are valid\")\n        except (NoCredentialsError, PartialCredentialsError) as e:\n            logger.debug(\"Invalid AWS credentials\")\n        except ClientError as e:\n            logger.debug(f\"Client error: {e}\")\n\n    def 
create_s3_folder_path(self):\n        # Initialize folder_exists\n        folder_exists = False\n\n        # Ensure the bucket exists\n        try:\n            self.s3_client.head_bucket(Bucket=self.bucket_name)\n            logger.debug(f\"S3 bucket {self.bucket_name} exists\")\n        except ClientError as e:\n            if e.response['Error']['Code'] == '404':\n                logger.debug(f\"S3 bucket {self.bucket_name} does not exist, creating bucket\")\n                try:\n                    self.s3_client.create_bucket(Bucket=self.bucket_name)\n                    logger.debug(f\"S3 bucket {self.bucket_name} created\")\n                except ClientError as e:\n                    logger.debug(f\"Failed to create bucket: {e}\")\n                    return False  # Exit if the bucket cannot be created\n            else:\n                logger.debug(f\"Error checking bucket existence: {e}\")\n                return False  # Exit if there is any other error\n        \n        # Ensure the folder structure exists in the bucket\n        try:\n            self.s3_client.head_object(Bucket=self.bucket_name, Key=self.folder_path)\n            folder_exists = True\n            logger.debug(f\"S3 folder {self.folder_path} exists\")\n        except ClientError as e:\n            if e.response['Error']['Code'] == '404':\n                folder_exists = False\n                logger.debug(f\"S3 folder {self.folder_path} does not exist\")\n            else:\n                logger.debug(f\"Error checking folder existence: {e}\")\n\n        # Create folder if it doesn't exist\n        if not folder_exists:\n            logger.debug(f\"Creating folder {self.folder_path} in the bucket\")\n            try:\n                self.s3_client.put_object(Bucket=self.bucket_name, Key=self.folder_path)\n            except ClientError as e:\n                logger.debug(f\"Failed to create folder: {e}\")\n    \n        return True\n\n    # def 
rename_and_upload_failed_objects(self, checks_output):\n    #     try:\n    #         # Convert checks_output to JSON format\n    #         checks_output_json = json.dumps(checks_output, indent=2)\n    #     except json.JSONDecodeError:\n    #         logger.debug(f\"Failed to decode JSON response for {self.customer_name}\")\n    #         return\n\n    #     # Write JSON data to a local file\n    #     try:\n    #         logger.debug(f\"Writing JSON data to local file: {self.local_file_name}\")\n    #         with open(self.local_file_name, 'w') as json_file:\n    #             json_file.write(checks_output_json)\n    #     except IOError as e:\n    #         logger.debug(f\"Failed to write JSON data to local file: {e}\")\n    #         return\n\n    #     if not self.create_s3_folder_path():\n    #         logger.debug(\"Unable to create bucket\")\n    #         return\n\n    #     # Upload the JSON file\n    #     try:\n    #         logger.debug(f\"Uploading file {self.file_name} to {self.bucket_name}/{self.file_path}\")\n    #         self.s3_client.upload_file(self.local_file_name, self.bucket_name, self.file_path)\n    #         logger.debug(f\"File {self.file_name} uploaded successfully to {self.bucket_name}/{self.folder_path}\")\n    #     except NoCredentialsError:\n    #         logger.debug(\"Credentials not available\")\n    #     except Exception as e:\n    #         logger.debug(f\"Unable to upload failed objetcs file to S3 bucket: {e}\")\n    #     # Remove the local file after upload\n    #     logger.debug(f\"Removing local file of check outputs json from /tmp: {self.local_file_name}\")\n    #     os.remove(self.local_file_name)\n\n    def rename_and_upload_other_items(self):\n        if not self.create_s3_folder_path():\n            logger.debug(\"Unable to create bucket\")\n            return\n        # Upload the files in the CURRENT_EXECUTION_RUN_DIRECTORY \n        file_list_to_upload = [self.local_file_name]\n        if 
self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY') and \\\n            os.path.exists(self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY')):\n            try:\n                for parent_dir, _, _files in os.walk(self.uglobals.get('CURRENT_EXECUTION_RUN_DIRECTORY')):\n                    # Currently there is no need to read the sub_directories (child_dir) under CURRENT_EXECUTION_RUN_DIRECTORY\n                    # So we can ignore it. Lets create list of files that needs to be uploaded\n                    # to S3.\n                    for _file in _files:\n                        file_list_to_upload.append(os.path.join(parent_dir, _file))\n            except:\n                logger.debug(\"Failed to get contents of Execution Run directory\")\n        \n        for _file in file_list_to_upload:\n            base_name, extension = os.path.splitext(os.path.basename(_file))\n            if base_name.startswith(\"dashboard\"):\n                file_path = os.path.join(self.folder_path, os.path.basename(_file))\n            else:\n                temp_fp = f\"{base_name}_{self.ts}{extension}\"\n                file_path = os.path.join(self.folder_path, temp_fp)\n            \n            if not self.do_upload_(_file, file_path):\n                logger.debug(f\"ERROR: Uploading error for {_file}\")\n\n    def do_upload_(self, file_name, file_path):\n        \"\"\"Uploads the given file_name to s3 bucket defined in file_path\n        \"\"\"\n        try:\n            logger.debug(f\"Uploading file {file_name} to {self.bucket_name}/{file_path}\")\n            self.s3_client.upload_file(file_name, self.bucket_name, file_path)\n            logger.debug(f\"File {file_name} uploaded successfully to {self.bucket_name}/{file_path}\")\n            return True\n        except NoCredentialsError:\n            logger.debug(\"Credentials not available\")\n        except Exception as e:\n            logger.debug(f\"Unable to upload failed objects file to S3 bucket: {e}\")\n       
\n        return False\n"
  },
  {
    "path": "unskript-ctl/unskript_utils.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE\n#\n#\nimport os\nimport sys\n\nfrom datetime import datetime\n\nUNSKRIPT_EXECUTION_DIR=\"/unskript/data/execution/\"\nPSS_DB_PATH=\"/unskript/db/unskript_pss.db\"\nGLOBAL_CTL_CONFIG=\"/etc/unskript/unskript_ctl_config.yaml\"\nCREDENTIAL_DIR=\"/.local/share/jupyter/metadata/credential-save\"\n\nUNSKRIPT_SCRIPT_RUN_OUTPUT_FILE_NAME = \"run_output\"\nUNSKRIPT_SCRIPT_RUN_OUTPUT_DIR_ENV = \"UNSKRIPT_SCRIPT_OUTPUT_DIR\"\nJIT_PYTHON_SCRIPT = \"/tmp/jit_script.py\"\n\n# With introduction emal_fmt, we want all failed objects to be included\n# in the attachment. MAX_CHARACTER COUNT is therefore set to very low value\nMAX_CHARACTER_COUNT_FOR_FAILED_OBJECTS = 10\n\nTBL_HDR_CHKS_NAME=\"\\033[36m Checks Name \\033[0m\"\nTBL_HDR_INFO_NAME=\"\\033[36m Action Name \\033[0m\"\nTBL_HDR_DSPL_CHKS_NAME=\"\\033[35m Check Name \\n (Last Failed) \\033[0m\"\nTBL_HDR_DSPL_EXEC_ID=\"\\033[1m Failed Execution ID \\033[0m\"\nTBL_HDR_FAILED_OBJECTS=\"\\033[1m Failed Objects \\033[0m\"\nTBL_HDR_CHKS_FN=\"\\033[1m Function Name \\033[0m\"\nTBL_HDR_INFO_FN=\"\\033[1m Function Name \\033[0m\"\nTBL_HDR_LIST_CHKS_CONNECTOR=\"\\033[36m Connector Name \\033[0m\"\nTBL_HDR_LIST_INFO_CONNECTOR=\"\\033[36m Connector Name \\033[0m\"\n\n# Check priority related constants\nCHECKS_PRIORITY_KEY = \"priority\"\nCHECK_PRIORITY_P0 = \"p0\"\nCHECK_PRIORITY_P1 = \"p1\"\nCHECK_PRIORITY_P2 = \"p2\"\n\nCONNECTOR_LIST = [\n    'aws',\n    'gcp',\n    'k8s',\n    'elasticsearch',\n    'grafana',\n    'redis',\n    'jenkins',\n    'github',\n    'netbox',\n    'nomad',\n    'jira',\n    'kafka',\n    'keycloak',\n    'mongodb',\n    'mysql',\n    'postgresql',\n    'rest',\n    'slack',\n    'ssh',\n    'vault',\n    
'salesforce'\n]\n\n# Unskript Global is a singleton class that\n# will replace the Global variable UNSKRIPT_GLOBAL\n# It becomes essential to use this class to keep the spread of\n# Variable to a minimum and access it every where within the scope\n# of the program\n\nclass GenericSingleton(type):\n    _instances = {}\n\n    def __call__(cls, *args, **kwargs):\n        if cls not in cls._instances:\n            instance = super(GenericSingleton, cls).__call__(*args, **kwargs)\n            cls._instances[cls] = instance\n        return cls._instances[cls]\n\n\nclass UnskriptGlobals(metaclass=GenericSingleton):\n    def __init__(self):\n        self._data = {}\n\n    def __getitem__(self, key):\n        return self._data.get(key, None)\n\n    def __setitem__(self, key, value):\n        self._data[key] = value\n\n    def __delitem__(self, key):\n        del self._data[key]\n\n    def get(self, key):\n        return self._data.get(key, None)\n\n    def keys(self):\n        return self._data.keys()\n\n    def values(self):\n        return self._data.values()\n\n    def items(self):\n        return self._data.items()\n\n    def create_property(self, prop_name):\n        def getter(self):\n            return self._data.get(prop_name, None)\n\n        def setter(self, value):\n            self._data[prop_name] = value\n\n        setattr(UnskriptGlobals, prop_name, property(getter, setter))\n\n\n\n# Lets create an Alias so that any reference to UNSKRIPT_GLOBAL\n# refers to the class. 
In this way no change has to be done\n# when UNSKRIPT_GLOBALS variable is used.\nUNSKRIPT_GLOBALS = UnskriptGlobals()\n\nclass bcolors:\n    HEADER = '\\033[95m'\n    OKBLUE = '\\033[94m'\n    OKCYAN = '\\033[96m'\n    OKGREEN = '\\033[92m'\n    WARNING = '\\033[93m'\n    FAIL = '\\033[91m'\n    ENDC = '\\033[0m'\n    BOLD = '\\033[1m'\n    UNDERLINE = '\\033[4m'\n    ARG_START = '\\x1B[1;20;42m'\n    ARG_END = '\\x1B[0m'\n    HIGHLIGHT = '\\x1B[1;20;40m'\n\n# Utility Functions\ndef create_execution_run_directory(file_prefix: str = None):\n    if UNSKRIPT_GLOBALS.get('CURRENT_EXECUTION_RUN_DIRECTORY') is None:\n        current_time = datetime.now().isoformat().replace(':', '_')\n        if not file_prefix:\n            output_dir = UNSKRIPT_EXECUTION_DIR + f\"{UNSKRIPT_SCRIPT_RUN_OUTPUT_FILE_NAME}-{current_time}\"\n        else:\n            output_dir = UNSKRIPT_EXECUTION_DIR +  f\"{file_prefix}-{current_time}\"\n\n        try:\n            os.makedirs(output_dir)\n        except Exception as e:\n            print(f'{bcolors.FAIL} output dir {output_dir} creation failed{bcolors.ENDC}')\n            sys.exit(0)\n        finally:\n            UNSKRIPT_GLOBALS.create_property('CURRENT_EXECUTION_RUN_DIRECTORY')\n            UNSKRIPT_GLOBALS['CURRENT_EXECUTION_RUN_DIRECTORY'] = output_dir\n    else:\n        output_dir = UNSKRIPT_GLOBALS.get('CURRENT_EXECUTION_RUN_DIRECTORY')\n    return output_dir\n\n# Utility Function\ndef is_creds_json_file_valid(creds_file: str = None):\n    if not creds_file:\n        return False\n\n    if os.path.getsize(creds_file) == 0:\n        return False\n\n    # If reached to this point, return true\n    return True\n"
  },
  {
    "path": "validator.py",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2023 unSkript.com\n# All rights reserved.\n#\n#\nimport os\nimport json \nimport glob \nimport sys \nimport time \n\nfrom subprocess import run\n\ndef git_top_dir() -> str:\n    \"\"\"git_top_dir returns the output of git rev-parse --show-toplevel \n\n    :rtype: string, the output of the git rev-parse --show-toplevel comand\n    \"\"\"\n    run_output = run([\"git\", \"rev-parse\", \"--show-toplevel\"], capture_output=True)\n    top_dir = run_output.stdout.strip()\n    top_dir = top_dir.decode('utf-8')\n    return top_dir \n\ndef check_action_by_connector_names(connector: str = '') -> bool:\n    \"\"\"check_action_by_connector_names This function takes the connector name and\n       verifies if the contents under the <connector>/legos/<lego_for_connector>/ \n       has the same name. For examples AWS/legos/aws_check_expired_keys \n       should have the following contents.\n       AWS/legos/aws_check_expired_keys \n               + aws_check_expired_keys.py\n               + aws_check_expired_keys.json\n               + README.md\n               + __init__.py \n               + (Optional) 1.png (Sample output screenshot)\n    \n       :type connector: string\n       :param connector: The Connector name that the Action is written for. 
\n\n       :rtype: bool (T/F) True if validate passes, False otherwise.\n    \"\"\"\n    if connector == '':\n        return False\n\n    top_dir = git_top_dir()\n    dirs_under_connector = [] \n    if top_dir not in ('', None):\n        dirs_under_connector = os.listdir(os.path.join(top_dir, connector, 'legos'))\n    \n    if len(dirs_under_connector) == 0:\n        print(f\"ERROR: No contents found under {connector}\")\n        return False\n    \n    process_list = []\n    ret_val = {}\n    #  Spawn multiple processes to verify for parallel processing\n    #  Let's batch it and process it for every 20 \n    idx = 0\n    print(f\"CONNECTOR {connector} ({len(dirs_under_connector)})\")\n    for _dir in dirs_under_connector:\n        if _dir in ('templates', '__init__.py'):\n            continue \n        check_dir_contents(os.path.join(connector, 'legos', _dir), ret_val)\n\n    for k,v in ret_val.items():\n        if v is False:\n            print(f\"CHECK FAILED FOR {k}\")\n            return False\n    \n    return True\n\ndef check_dir_contents(_dir: str, ret_val) -> bool:\n    \"\"\"check_dir_contents This is a worker function that goes into each of\n       the given directories and checks the following.\n       1. Name of the python file and json file should be the directory name with .py and .json extension\n       2. action_entry_function should be the same as directory name in the json file\n       3. README.md and __init__.py should be present in the given directory\n       \n       If all the above cases are met, the function returns True, else False\n\n       :type _dir: string\n       :param _dir: Name of the directory\n\n       :rtype: bool. 
True if all the checks pass, False otherwise\n    \"\"\"\n    dir_content = glob.glob(os.path.join(_dir, '*'))\n    if len(dir_content) < 4:\n        return False\n\n    pyfile = os.path.join(_dir, os.path.basename(_dir) + '.py')\n    jsonfile = os.path.join(_dir, os.path.basename(_dir) + '.json')\n    readmefile = os.path.join(_dir, 'README.md')\n    initfile = os.path.join(_dir,'__init__.py')\n    if pyfile not in dir_content \\\n        or jsonfile not in dir_content \\\n        or initfile not in dir_content \\\n        or readmefile not in dir_content:\n        ret_val[_dir] = False\n        print(f\"ERROR: Missing File {dir_content} \")\n        return None\n    \n    try:\n        with open(jsonfile, 'r') as f:\n            print(f\"Processing JSON File {jsonfile}\")\n            d = json.load(f)\n\n        if d.get('action_entry_function') != os.path.basename(_dir):\n            print(f\"ERROR: ENTRY FUNCTION IN {jsonfile} is Wrong. Expecting: {os.path.basename(_dir)} Has: {d.get('action_entry_function')}\")\n            ret_val[_dir] = False\n            return None\n    except Exception as e:\n        ret_val[_dir] = False\n        raise e\n\n    expected_function_definition = f\"def {os.path.basename(_dir)}(\"\n    \n    try:\n        with open(pyfile, 'r') as f:\n            if not any(line.strip().startswith(expected_function_definition) for line in f):\n                print(f\"ERROR: FUNCTION DEFINITION in {pyfile} is missing or incorrect. Expecting: {expected_function_definition}\")\n                ret_val[_dir] = False\n                return None\n    except Exception as e:\n        print(f\"ERROR: Issue while processing {pyfile}. Error: {e}\")\n        ret_val[_dir] = False\n        return None\n \n    ret_val[_dir] = True\n    return None\n\n\ndef main():\n    \"\"\"main: Main function that gets called. 
This function finds all the Lego directories\n       and calls check_action_by_connector_names for each of the connectors.\n    \"\"\"\n    all_connectors = glob.glob(git_top_dir() + '/*/legos')\n    result = []\n    for connector in all_connectors:\n        connector = connector.replace(git_top_dir() + '/', '')\n        result.append(check_action_by_connector_names(os.path.dirname(connector)))\n    \n    for r in result:\n        if r is False:\n            print(\"ERROR: Check Failed. Please note the Validation process checks -\")\n            print(\"ERROR:     1. Lego Directory name should match python and json file name \") \n            print(\"ERROR:     2. Action Entry function should be the same as Lego directory name\") \n            print(\"ERROR:     3. A __init__.py File should exist for every Lego directory\")\n            print(\"ERROR:     4. A README.md should be present for every Lego\")\n            print(\"ERROR:     5. (Optional) A Screenshot that is referenced in README.md that shows output of the Action\")\n            sys.exit(-1)\n\n    print(\"Checks were successful\")\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "xrunbooks-directory.md",
    "content": "# xRunbooks directory\n\nThese xRunBooks are included in every install.  Use them as is, or make a copy to modify for your use!\n\n  | **Category**                                                                                               | **Runbooks**                                                                                                                                                                 | **URL**                                                                                                    |\n  | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------ |\n  |AWS |[AWS Access Key Rotation for IAM users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Access_Key_Rotation.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/AWS_Access_Key_Rotation.ipynb) | \n|AWS |[AWS Add Mandatory tags to EC2](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/AWS_Add_Mandatory_tags_to_EC2.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/AWS_Add_Mandatory_tags_to_EC2.ipynb) | \n|AWS |[Create a new AWS IAM User](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Add_new_IAM_user.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Add_new_IAM_user.ipynb) | \n|AWS |[Change AWS EBS Volume To GP3 Type](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Change_AWS_EBS_Volume_To_GP3_Type.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Change_AWS_EBS_Volume_To_GP3_Type.ipynb) | \n|AWS |[Change AWS Route53 
TTL](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Change_AWS_Route53_TTL.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Change_AWS_Route53_TTL.ipynb) | \n|AWS |[Configure URL endpoint on a AWS CloudWatch alarm](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Configure_url_endpoint_on_a_cloudwatch_alarm.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Configure_url_endpoint_on_a_cloudwatch_alarm.ipynb) | \n|AWS |[Copy AMI to All Given AWS Regions](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Copy_ami_to_all_given_AWS_regions.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Copy_ami_to_all_given_AWS_regions.ipynb) | \n|AWS |[Create IAM User with policy](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Create_IAM_User_with_policy.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Create_IAM_User_with_policy.ipynb) | \n|AWS |[Delete EBS Volume With Low Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_EBS_Volumes_With_Low_Usage.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Delete_EBS_Volumes_With_Low_Usage.ipynb) | \n|AWS |[Delete Old EBS Snapshots](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_Old_EBS_Snapshots.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Delete_Old_EBS_Snapshots.ipynb) | \n|AWS |[Delete Unattached AWS EBS Volumes](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_Unattached_EBS_Volume.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Delete_Unattached_EBS_Volume.ipynb) | \n|AWS |[Delete Unused AWS Log Streams](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_Unused_AWS_Log_Streams.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Delete_Unused_AWS_Log_Streams.ipynb) | \n|AWS |[Delete Unused AWS NAT 
Gateways](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_Unused_AWS_NAT_Gateways.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Delete_Unused_AWS_NAT_Gateways.ipynb) | \n|AWS |[Delete Unused AWS Secrets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Delete_Unused_AWS_Secrets.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Delete_Unused_AWS_Secrets.ipynb) | \n|AWS |[Detach EC2 Instance from ASG](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detach_Instance_from_ASG.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Detach_Instance_from_ASG.ipynb) | \n|AWS |[Detach EC2 Instance from ASG and Load balancer](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detach_ec2_Instance_from_ASG.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Detach_ec2_Instance_from_ASG.ipynb) | \n|AWS |[Detect ECS failed deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Detect_ECS_failed_deployment.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Detect_ECS_failed_deployment.ipynb) | \n|AWS |[AWS EC2 Disk Cleanup](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/EC2_Disk_Cleanup.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/EC2_Disk_Cleanup.ipynb) | \n|AWS |[Enforce HTTP Redirection across all AWS ALB instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Enforce_HTTP_Redirection_across_AWS_ALB.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Enforce_HTTP_Redirection_across_AWS_ALB.ipynb) | \n|AWS |[Enforce Mandatory Tags Across All AWS Resources](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Enforce_Mandatory_Tags_Across_All_AWS_Resources.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Enforce_Mandatory_Tags_Across_All_AWS_Resources.ipynb) | \n|AWS |[Handle AWS EC2 
Instance Scheduled to retire](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Find_EC2_Instances_Scheduled_to_retire.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Find_EC2_Instances_Scheduled_to_retire.ipynb) | \n|AWS |[Get unhealthy EC2 instances from ELB](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Get_Aws_Elb_Unhealthy_Instances.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Get_Aws_Elb_Unhealthy_Instances.ipynb) | \n|AWS |[Create an IAM user using Principle of Least Privilege](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/IAM_security_least_privilege.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/IAM_security_least_privilege.ipynb) | \n|AWS |[Lowering AWS CloudTrail Costs by Removing Redundant Trails](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Lowering_AWS_CloudTrail_Costs_by_Removing_Redundant_Trails.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Lowering_AWS_CloudTrail_Costs_by_Removing_Redundant_Trails.ipynb) | \n|AWS |[Monitor AWS DynamoDB provision capacity](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Monitor_AWS_DynamoDB_provision_capacity.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Monitor_AWS_DynamoDB_provision_capacity.ipynb) | \n|AWS |[List unused Amazon EC2 key pairs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Notify_about_unused_keypairs.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Notify_about_unused_keypairs.ipynb) | \n|AWS |[Publicly Accessible Amazon RDS Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Publicly_Accessible_Amazon_RDS_Instances.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Publicly_Accessible_Amazon_RDS_Instances.ipynb) | \n|AWS |[Release Unattached AWS Elastic 
IPs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Release_Unattached_AWS_Elastic_IPs.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Release_Unattached_AWS_Elastic_IPs.ipynb) | \n|AWS |[Remediate unencrypted S3 buckets](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Remediate_unencrypted_S3_buckets.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Remediate_unencrypted_S3_buckets.ipynb) | \n|AWS |[Renew AWS SSL Certificates that are close to expiration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Renew_SSL_Certificate.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Renew_SSL_Certificate.ipynb) | \n|AWS |[Resize EBS Volume](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_EBS_Volume.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Resize_EBS_Volume.ipynb) | \n|AWS |[Resize list of pvcs.](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_List_Of_Pvcs.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Resize_List_Of_Pvcs.ipynb) | \n|AWS |[Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Resize_PVC.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Resize_PVC.ipynb) | \n|AWS |[Restart AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Restart_AWS_EC2_Instances_By_Tag.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Restart_AWS_EC2_Instances_By_Tag.ipynb) | \n|AWS |[Restart AWS Instances with a given tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Restart_Aws_Instance_given_Tag.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Restart_Aws_Instance_given_Tag.ipynb) | \n|AWS |[Restart unhealthy services in a Target 
Group](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Restart_Unhealthy_Services_Target_Group.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Restart_Unhealthy_Services_Target_Group.ipynb) | \n|AWS |[Restrict S3 Buckets with READ/WRITE Permissions to all Authenticated Users](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Restrict_S3_Buckets_with_READ_WRITE_Permissions.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Restrict_S3_Buckets_with_READ_WRITE_Permissions.ipynb) | \n|AWS |[Launch AWS EC2 from AMI](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Run_EC2_from_AMI.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Run_EC2_from_AMI.ipynb) | \n|AWS |[Secure Publicly accessible Amazon RDS Snapshot](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Secure_Publicly_accessible_Amazon_RDS_Snapshot.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Secure_Publicly_accessible_Amazon_RDS_Snapshot.ipynb) | \n|AWS |[Stop Untagged AWS EC2 Instances](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Stop_Untagged_EC2_Instances.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Stop_Untagged_EC2_Instances.ipynb) | \n|AWS |[Terminate EC2 Instances Without Valid Lifetime Tag](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Terminate_EC2_Instances_Without_Valid_Lifetime_Tag.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Terminate_EC2_Instances_Without_Valid_Lifetime_Tag.ipynb) | \n|AWS |[Troubleshooting Your EC2 Configuration in a Private Subnet](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Troubleshooting_Your_EC2_Configuration_in_Private_Subnet.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Troubleshooting_Your_EC2_Configuration_in_Private_Subnet.ipynb) | \n|AWS |[Update and Manage AWS User 
permission](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/Update_and_Manage_AWS_User_Permission.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/Update_and_Manage_AWS_User_Permission.ipynb) | \n|AWS |[AWS Redshift Get Daily Costs from AWS Products](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/aws_redshift_get_daily_product_costs.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/aws_redshift_get_daily_product_costs.ipynb) | \n|AWS |[AWS Redshift Get Daily Costs from EC2 Usage](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/aws_redshift_get_ec2_daily_costs.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/aws_redshift_get_ec2_daily_costs.ipynb) | \n|AWS |[AWS Redshift Update Database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/aws_redshift_update_database.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/aws_redshift_update_database.ipynb) | \n|AWS |[Delete IAM profile](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/delete_iam_user.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/AWS/delete_iam_user.ipynb) | \n|ElasticSearch |[Elasticsearch Rolling restart](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/ElasticSearch/Elasticsearch_Rolling_Restart.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/ElasticSearch/Elasticsearch_Rolling_Restart.ipynb) | \n|Jenkins |[Fetch Jenkins Build Logs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jenkins/Fetch_Jenkins_Build_Logs.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Jenkins/Fetch_Jenkins_Build_Logs.ipynb) | \n|Jira |[Jira Visualize Issue Time to Resolution](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Jira/jira_visualize_time_to_resolution.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Jira/jira_visualize_time_to_resolution.ipynb) | 
\n|Kubernetes |[k8s: Delete Evicted Pods From All Namespaces](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Delete_Evicted_Pods_From_Namespaces.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Kubernetes/Delete_Evicted_Pods_From_Namespaces.ipynb) | \n|Kubernetes |[k8s: Get kube system config map](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Get_Kube_System_Config_Map.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Kubernetes/Get_Kube_System_Config_Map.ipynb) | \n|Kubernetes |[k8s: Get candidate nodes for given configuration](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Get_Candidate_Nodes_Given_Config.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Kubernetes/K8S_Get_Candidate_Nodes_Given_Config.ipynb) | \n|Kubernetes |[Kubernetes Log Healthcheck](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Log_Healthcheck.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Kubernetes/K8S_Log_Healthcheck.ipynb) | \n|Kubernetes |[k8s: Pod Stuck in CrashLoopBackoff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_CrashLoopBack_State.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Kubernetes/K8S_Pod_Stuck_In_CrashLoopBack_State.ipynb) | \n|Kubernetes |[k8s: Pod Stuck in ImagePullBackOff State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Kubernetes/K8S_Pod_Stuck_In_ImagePullBackOff_State.ipynb) | \n|Kubernetes |[k8s: Pod Stuck in Terminating State](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/K8S_Pod_Stuck_In_Terminating_State.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Kubernetes/K8S_Pod_Stuck_In_Terminating_State.ipynb) | \n|Kubernetes |[k8s: Resize List of 
PVCs](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_List_of_PVCs.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Kubernetes/Resize_List_of_PVCs.ipynb) | \n|Kubernetes |[k8s: Resize PVC](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Resize_PVC.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Kubernetes/Resize_PVC.ipynb) | \n|Kubernetes |[Rollback Kubernetes Deployment](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Kubernetes/Rollback_k8s_Deployment_and_Update_Jira.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Kubernetes/Rollback_k8s_Deployment_and_Update_Jira.ipynb) | \n|Postgresql |[Display long running queries in a PostgreSQL database](https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Postgresql/Display_Postgresql_Long_Running.ipynb) | [Open in Browser](http://127.0.0.1:8888/lab/tree/Postgresql/Display_Postgresql_Long_Running.ipynb) | \n\n<br/>\n<br/>"
  }
]