[
  {
    "path": ".github/CODEOWNERS",
    "content": "* @aws/aws-firelens\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "*Issue #, if available:*\n\n*Description of changes:*\n\n\nBy submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "version: 2\nupdates:\n  - package-ecosystem: gomod\n    directory: \"/\" # Location of package manifests\n    schedule:\n      interval: \"weekly\"\n"
  },
  {
    "path": ".github/workflows/build.yml",
    "content": "name: Build\n\non:\n  push:\n    branches: [ mainline ]\n  pull_request:\n    branches: [ mainline ]\n\njobs:\n\n  build:\n    name: Build\n    runs-on: ubuntu-latest\n    steps:\n\n    - name: Set up Go 1.24.11\n      uses: actions/setup-go@v5\n      with:\n        go-version: '1.24.11'\n      id: go\n\n    - name: Install cross-compiler for Windows\n      run: sudo apt-get update && sudo apt-get install -y -o Acquire::Retries=3 gcc-multilib gcc-mingw-w64\n\n    - name: Check out code into the Go module directory\n      uses: actions/checkout@v5\n\n    - name: Install golint\n      run: go install golang.org/x/lint/golint@latest\n\n    - name: Build\n      run: make build windows-release test\n"
  },
  {
    "path": ".gitignore",
    "content": "\n# Binaries for programs and plugins\n*.exe\n*.exe~\n*.dll\n*.so\n*.dylib\n\n# Test binary, built with `go test -c`\n*.test\n\n# build output dir\nbin\n\n# Output of the go coverage tool, specifically when used with LiteIDE\n*.out\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "# Changelog\n\n## 1.9.5\n* Enhancement - Update github.com/aws/aws-sdk-go to v1.55.8\n* Enhancement - Update github.com/sirupsen/logrus to v1.9.4\n* Enhancement - Update github.com/stretchr/testify to v1.11.1\n\n## 1.9.4\n* Bug - Fix utf-8 calculation of payload length to account for invalid unicode bytes that will be replaced with the 3 byte unicode replacement character. This bug can lead to an `InvalidParameterException` from CloudWatch when the payload sent is calculated to be over the limit due to character replacement.\n\n## 1.9.3\n* Enhancement - Upgrade Go version to 1.20\n\n## 1.9.2\n* Bug - Fixed log loss that can occur when log group creation or retention policy API calls fail. (#314)\n\n## 1.9.1\n* Enhancement - Added different base user agent for Linux and Windows\n\n## 1.9.0\n* Feature - Add support for building this plugin on Windows. *Note that this only adds support for Windows compilation in this plugin repo.*\n\n## 1.8.0\n* Feature - Add `auto_create_stream` option (#257)\n* Bug - Allow recovery from a stream being deleted and created by a user (#257)\n\n## 1.7.0\n* Feature - Add support for external_id (#226)\n\n## 1.6.4\n* Bug - Remove corrupted unicode fragments on truncation (#208)\n\n## 1.6.3\n* Enhancement - Upgrade Go version to 1.17\n\n## 1.6.2\n* Enhancement - Add validation to stop accepting both of `log_stream_name` and `log_stream_prefix` together (#190)\n\n## 1.6.1\n* Enhancement - Delete debug messages which make log info useless (#146)\n\n## 1.6.0\n* Enhancement - Add support for updating the retention policy of existing log groups (#121)\n\n## 1.5.0\n* Feature - Automatically re-create CloudWatch log groups and log streams if they are deleted (#95)\n* Feature - Add default fallback log group and stream names (#99)\n* Feature - Add support for ECS Metadata and UUID via special variables in log stream and group names (#108)\n* Enhancement - Remove invalid characters in log stream and log group names (#103)\n\n## 1.4.1\n* 
Bug - Add back `auto_create_group` option (#96)\n* Bug - Truncate log events to max size (#85)\n\n## 1.4.0\n* Feature - Add support for dynamic log group names (#46)\n* Feature - Add support for dynamic log stream names (#16)\n* Feature - Support tagging of newly created log groups (#51)\n* Feature - Support setting log group retention policies (#50)\n\n## 1.3.1\n* Bug - Check for empty logEvents before calling PutLogEvents (#66)\n\n## 1.3.0\n* Feature - Add sts_endpoint param for custom STS API endpoint (#55)\n\n## 1.2.0\n* Feature - Add support for Embedded Metric Format (#27)\n\n## 1.1.1\n* Bug - Discard and do not send empty messages (#40)\n\n## 1.1.0\n* Bug - A single CloudWatch Logs PutLogEvents request can not contain logs that span more than 24 hours (#29)\n* Feature - Add `credentials_endpoint` option (#36)\n* Feature - Support IAM Roles for Service Accounts in Amazon EKS (#33)\n\n## 1.0.0\nInitial versioned release of the Amazon CloudWatch Logs for Fluent Bit Plugin\n"
  },
  {
    "path": "CODEOWNERS",
    "content": "* @aws/aws-firelens\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "## Code of Conduct\nThis project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).\nFor more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact\nopensource-codeofconduct@amazon.com with any additional questions or comments.\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing Guidelines\n\nThank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional\ndocumentation, we greatly value feedback and contributions from our community.\n\nPlease read through this document before submitting any issues or pull requests to ensure we have all the necessary\ninformation to effectively respond to your bug report or contribution.\n\n\n## Reporting Bugs/Feature Requests\n\nWe welcome you to use the GitHub issue tracker to report bugs or suggest features.\n\nWhen filing an issue, please check [existing open](https://github.com/awslabs/cloudwatch-logs-for-fluent-bit/issues), or [recently closed](https://github.com/awslabs/cloudwatch-logs-for-fluent-bit/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already\nreported the issue. Please try to include as much information as you can. Details like these are incredibly useful:\n\n* A reproducible test case or series of steps\n* The version of our code being used\n* Any modifications you've made relevant to the bug\n* Anything unusual about your environment or deployment\n\n\n## Contributing via Pull Requests\nContributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:\n\n1. You are working against the latest source on the *master* branch.\n2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.\n3. You open an issue to discuss any significant work - we would hate for your time to be wasted.\n\nTo send us a pull request, please:\n\n1. Fork the repository.\n2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.\n3. Ensure local tests pass.\n4. Commit to your fork using clear commit messages.\n5. 
Send us a pull request, answering any default questions in the pull request interface.\n6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.\n\nGitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and\n[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).\n\n\n## Finding contributions to work on\nLooking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/awslabs/cloudwatch-logs-for-fluent-bit/labels/help%20wanted) issues is a great place to start.\n\n\n## Code of Conduct\nThis project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).\nFor more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact\nopensource-codeofconduct@amazon.com with any additional questions or comments.\n\n\n## Security issue notifications\nIf you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.\n\n\n## Licensing\n\nSee the [LICENSE](https://github.com/awslabs/cloudwatch-logs-for-fluent-bit/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.\n\nWe may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.\n"
  },
  {
    "path": "LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n"
  },
  {
    "path": "Makefile",
    "content": "# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# \thttp://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\n# Build settings.\nGOARCH ?= amd64\nCOMPILER ?= x86_64-w64-mingw32-gcc # Cross-compiler for Windows\n\nROOT := $(shell pwd)\n\nall: build\n\nSCRIPT_PATH := $(ROOT)/scripts/:${PATH}\nSOURCES := $(shell find . -name '*.go')\nPLUGIN_BINARY := ./bin/cloudwatch.so\nPLUGIN_BINARY_WINDOWS := ./bin/cloudwatch.dll\n\n.PHONY: build\nbuild: $(PLUGIN_BINARY)\n\n$(PLUGIN_BINARY): $(SOURCES)\n\tPATH=${PATH} golint ./cloudwatch\n\tmkdir -p ./bin\n\tgo build -buildmode c-shared -o $(PLUGIN_BINARY) ./\n\t@echo \"Built Amazon CloudWatch Logs Fluent Bit Plugin\"\n\n.PHONY: release\nrelease:\n\tmkdir -p ./bin\n\tgo build -buildmode c-shared -o $(PLUGIN_BINARY) ./\n\t@echo \"Built Amazon CloudWatch Logs Fluent Bit Plugin\"\n\n.PHONY: windows-release\nwindows-release:\n\tmkdir -p ./bin\n\tGOOS=windows GOARCH=$(GOARCH) CGO_ENABLED=1 CC=$(COMPILER) go build -buildmode c-shared -o $(PLUGIN_BINARY_WINDOWS) ./\n\t@echo \"Built Amazon CloudWatch Logs Fluent Bit Plugin for Windows\"\n\n.PHONY: generate\ngenerate: $(SOURCES)\n\tPATH=$(SCRIPT_PATH) go generate ./...\n\n\n.PHONY: test\ntest:\n\tgo test -timeout=120s -v -cover ./...\n\n.PHONY: clean\nclean:\n\trm -rf ./bin/*\n"
  },
  {
    "path": "NOTICE",
    "content": "Fluent Bit Plugin for CloudWatch Logs\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. \n"
  },
  {
    "path": "README.md",
    "content": "[![Test Actions Status](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/workflows/Build/badge.svg)](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/actions)\n\n## Fluent Bit Plugin for CloudWatch Logs\n\n**NOTE: A new higher performance Fluent Bit CloudWatch Logs Plugin has been released.** Check out our [official guidance](#new-higher-performance-core-fluent-bit-plugin).\n\nA Fluent Bit output plugin for CloudWatch Logs\n\n#### Security disclosures\n\nIf you think you’ve found a potential security issue, please do not post it in the Issues. Instead, please follow the instructions [here](https://aws.amazon.com/security/vulnerability-reporting/) or email AWS security directly at [aws-security@amazon.com](mailto:aws-security@amazon.com).\n\n### Usage\n\nRun `make` to build `./bin/cloudwatch.so`. Then use with Fluent Bit:\n```\n./fluent-bit -e ./cloudwatch.so -i cpu \\\n-o cloudwatch \\\n-p \"region=us-west-2\" \\\n-p \"log_group_name=fluent-bit-cloudwatch\" \\\n-p \"log_stream_name=testing\" \\\n-p \"auto_create_group=true\"\n```\n\nTo build Windows binaries, you need to install `mingw-w64` for cross-compilation. This can be done with:\n```\nsudo apt-get install -y gcc-multilib gcc-mingw-w64\n```\nAfter this step, run `make windows-release`. Then use with Fluent Bit on Windows:\n```\n./fluent-bit.exe -e ./cloudwatch.dll -i dummy `\n-o cloudwatch `\n-p \"region=us-west-2\" `\n-p \"log_group_name=fluent-bit-cloudwatch\" `\n-p \"log_stream_name=testing\" `\n-p \"auto_create_group=true\"\n```\n\n### Plugin Options\n\n* `region`: The AWS region.\n* `log_group_name`: The name of the CloudWatch Log Group that you want log records sent to. This value allows a template in the form of `$(variable)`. See section [Templating Log Group and Stream Names](#templating-log-group-and-stream-names) for more. 
Fluent Bit will create missing log groups if `auto_create_group` is set, and will throw an error if it does not have permission.\n* `log_stream_name`: The name of the CloudWatch Log Stream that you want log records sent to. This value allows a template in the form of `$(variable)`. See section [Templating Log Group and Stream Names](#templating-log-group-and-stream-names) for more.\n* `default_log_group_name`: This optional variable is the fallback in case any variables in `log_group_name` fail to parse. Defaults to `fluentbit-default`.\n* `default_log_stream_name`: This optional variable is the fallback in case any variables in `log_stream_name` fail to parse. Defaults to `/fluentbit-default`.\n* `log_stream_prefix`: (deprecated) Prefix for the Log Stream name. Setting this to `prefix-` is the same as setting `log_stream_name = prefix-$(tag)`.\n* `log_key`: By default, the whole log record will be sent to CloudWatch. If you specify a key name with this option, then only the value of that key will be sent to CloudWatch. For example, if you are using the Fluentd Docker log driver, you can specify `log_key log` and only the log message will be sent to CloudWatch.\n* `log_format`: An optional parameter that can be used to tell CloudWatch the format of the data. A value of `json/emf` enables CloudWatch to extract custom metrics embedded in a JSON payload. See the [Embedded Metric Format](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html).\n* `role_arn`: ARN of an IAM role to assume (for cross-account access).\n* `auto_create_group`: Automatically create log groups (and add tags). Valid values are \"true\" or \"false\" (case insensitive). Defaults to false. If you use dynamic variables in your log group name, you may need this to be `true`.\n* `auto_create_stream`: Automatically create log streams. Valid values are \"true\" or \"false\" (case insensitive). 
Defaults to true.\n* `new_log_group_tags`: Comma/equal delimited string of tags to include with _auto created_ log groups. Example: `\"tag=val,cooltag2=my other value\"`\n* `log_retention_days`: If set to a number greater than zero, any newly created log group's retention policy is set to this many days.\n* `endpoint`: Specify a custom endpoint for the CloudWatch Logs API.\n* `sts_endpoint`: Specify a custom endpoint for the STS API; used to assume your custom role provided with `role_arn`.\n* `credentials_endpoint`: Specify a custom HTTP endpoint to pull credentials from. The HTTP response body should look like the following:\n```\n{\n    \"AccessKeyId\": \"ACCESS_KEY_ID\",\n    \"Expiration\": \"EXPIRATION_DATE\",\n    \"SecretAccessKey\": \"SECRET_ACCESS_KEY\",\n    \"Token\": \"SECURITY_TOKEN_STRING\"\n}\n```\n\n**Note**: The plugin will always create the log stream if it does not exist.\n\n### Permissions\n\nThis plugin requires the following permissions:\n* CreateLogGroup (useful when using dynamic groups)\n* CreateLogStream\n* DescribeLogStreams\n* PutLogEvents\n* PutRetentionPolicy (if `log_retention_days` is set > 0)\n\n### Credentials\n\nThis plugin uses the AWS SDK for Go and its [default credential provider chain](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html). If you are using the plugin on Amazon EC2, Amazon ECS, or Amazon EKS, the plugin will use your EC2 instance role or [ECS Task role permissions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) or [EKS IAM Roles for Service Accounts for pods](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html). 
The plugin can also retrieve credentials from a [shared credentials file](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html), or from the standard `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN` environment variables.\n\n### Environment Variables\n\n* `FLB_LOG_LEVEL`: Set the log level for the plugin. Valid values are: `debug`, `info`, and `error` (case insensitive). Default is `info`. **Note**: Setting log level in the Fluent Bit Configuration file using the Service key will not affect the plugin log level (because the plugin is external).\n* `SEND_FAILURE_TIMEOUT`: Allows you to configure a timeout if the plugin cannot send logs to CloudWatch. The timeout is specified as a [Golang duration](https://golang.org/pkg/time/#ParseDuration), for example: `5m30s`. If the plugin has failed to make any progress for the given period of time, then it will exit and kill Fluent Bit. This is useful in scenarios where you want your logging solution to fail fast if it has been misconfigured (i.e., the network or credentials have not been set up to allow it to send to CloudWatch).\n\n### Retries and Buffering\n\nBuffering and retries are managed by the Fluent Bit core engine, not by the plugin. Whenever the plugin encounters an error, it signals failure to the engine, which schedules a retry. 
This means that log group creation, log stream creation, or log retention policy calls can consume a retry if they fail.\n\n* [Fluent Bit upstream documentation on retries](https://docs.fluentbit.io/manual/administration/scheduling-and-retries)\n* [Fluent Bit upstream documentation on buffering](https://docs.fluentbit.io/manual/administration/buffering-and-storage)\n* [FireLens OOMKill prevention example for buffering](https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/oomkill-prevention)\n\n### Templating Log Group and Stream Names\n\n A template in the form of `$(variable)` can be set in `log_group_name` or `log_stream_name`. `variable` can be a map key name in the log message. To access sub-values in the map, use the form `$(variable['subkey'])`. The variable can also be one of several special values that insert the tag, ECS metadata, or a random string into the name.\n\n Special Values:\n *  `$(tag)` references the full tag name, `$(tag[0])` and `$(tag[1])` are the first and second values of the log tag split on periods. You may access any member by index, 0 through 9.\n *  `$(uuid)` will insert a random string in the names. The random string is generated automatically with the format: 4 bytes of time (seconds) + 16 random bytes. It is created when the plugin starts up and uniquely identifies the output, which means that until Fluent Bit is restarted, it will be the same. If you have multiple CloudWatch outputs, each one will get a unique UUID.\n * If your container is running in ECS, `$(variable)` can be set as `$(ecs_task_id)`, `$(ecs_cluster)` or `$(ecs_task_arn)`. 
It will set ECS metadata into `log_group_name` or `log_stream_name`.\n\n Here is an example for `fluent-bit.conf`:\n\n```\n[INPUT]\n    Name        dummy\n    Tag         dummy.data\n    Dummy {\"pam\": {\"item\": \"soup\", \"item2\":{\"subitem\": \"rice\"}}}\n\n[OUTPUT]\n    Name cloudwatch\n    Match   *\n    region us-east-1\n    log_group_name fluent-bit-cloudwatch-$(uuid)-$(tag)\n    log_stream_name from-fluent-bit-$(pam['item2']['subitem'])-$(ecs_task_id)-$(ecs_cluster)\n    auto_create_group true\n```\n\nAnd here is the resulting log stream name and log group name:\n\n```\nlog_group_name fluent-bit-cloudwatch-1jD7P6bbSRtbc9stkWjJZYerO6s-dummy.data\nlog_stream_name from-fluent-bit-rice-37e873f6-37b4-42a7-af47-eac7275c6152-ecs-local-cluster\n```\n\n#### Templating Log Group and Stream Names based on Kubernetes metadata\n\nIf you enable the kubernetes filter, then metadata like the following will be added to each log:\n\n```\nkubernetes: {\n    annotations: {\n        \"kubernetes.io/psp\": \"eks.privileged\"\n    },\n    container_hash: \"<some hash>\",\n    container_name: \"myapp\",\n    docker_id: \"<some id>\",\n    host: \"ip-10-1-128-166.us-east-2.compute.internal\",\n    labels: {\n        app: \"myapp\",\n        \"pod-template-hash\": \"<some hash>\"\n    },\n    namespace_name: \"default\",\n    pod_id: \"198f7dd2-2270-11ea-be47-0a5d932f5920\",\n    pod_name: \"myapp-5468c5d4d7-n2swr\"\n}\n```\n\nFor help setting up Fluent Bit with kubernetes please see [Kubernetes Logging Powered by AWS for Fluent Bit](https://aws.amazon.com/blogs/containers/kubernetes-logging-powered-by-aws-for-fluent-bit/) or [Set up Fluent Bit as a DaemonSet to send logs to CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-logs-FluentBit.html).\n\nThe kubernetes metadata can be referenced just like any other keys using the templating feature, for example, the following will result in a log group name which is 
`/eks/{namespace_name}/{pod_name}`. \n\n```\n    [OUTPUT]\n      Name              cloudwatch\n      Match             kube.*\n      region            us-east-1\n      log_group_name    /eks/$(kubernetes['namespace_name'])/$(kubernetes['pod_name'])\n      log_stream_name   $(kubernetes['namespace_name'])/$(kubernetes['container_name'])\n      auto_create_group true\n```\n\n### New Higher Performance Core Fluent Bit Plugin\n\nIn the summer of 2020, we released a [new higher performance CloudWatch Logs plugin](https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch) named `cloudwatch_logs`.\n\nThat plugin has a core subset of the features of this older, lower performance and less efficient plugin. Check out its [documentation](https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch).\n\n#### Do you plan to deprecate this older plugin?\n\nAt this time, we do not. This plugin will continue to be supported. It contains features that have not been ported to the higher performance version. Specifically, the feature for [templating of log group name and streams with ECS Metadata or values in the logs](#templating-log-group-and-stream-names). While [simple templating support](https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch#log-stream-and-group-name-templating-using-record_accessor-syntax) now exists in the high performance plugin, it does not have all of the features of the plugin in this repo. Some users will continue to need the features in this repo. \n\n#### Which plugin should I use?\n\nIf the features of the higher performance plugin are sufficient for your use cases, please use it. It can achieve higher throughput and will consume less CPU and memory.\n\n#### How can I migrate to the higher performance plugin?\n\nIt supports a subset of the options of this plugin. For many users, you can simply replace the plugin name `cloudwatch` with the new name `cloudwatch_logs`. 
Check out its [documentation](https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch).\n\n#### Do you accept contributions to both plugins?\n\nYes. The high performance plugin is written in C, and this plugin is written in Go. We understand that Go is an easier language for new contributors to write code in; that is a key reason why we continue to maintain this plugin.\n\nHowever, if you can write code in C, please consider contributing new features to the [higher performance plugin](https://github.com/fluent/fluent-bit/tree/master/plugins/out_cloudwatch_logs).\n\n### Fluent Bit Versions\n\nThis plugin has been tested with Fluent Bit 1.2.0+. It may not work with older Fluent Bit versions. We recommend using the latest version of Fluent Bit, as it will contain the newest features and bug fixes.\n\n### Example Fluent Bit Config File\n\n```\n[INPUT]\n    Name        forward\n    Listen      0.0.0.0\n    Port        24224\n\n[OUTPUT]\n    Name cloudwatch\n    Match   *\n    region us-east-1\n    log_group_name fluent-bit-cloudwatch\n    log_stream_prefix from-fluent-bit-\n    auto_create_group true\n```\n\n### AWS for Fluent Bit\n\nWe distribute a container image with Fluent Bit and these plugins.\n\n##### GitHub\n\n[github.com/aws/aws-for-fluent-bit](https://github.com/aws/aws-for-fluent-bit)\n\n##### Amazon ECR Public Gallery\n\n[aws-for-fluent-bit](https://gallery.ecr.aws/aws-observability/aws-for-fluent-bit)\n\nOur images are available in the Amazon ECR Public Gallery. 
You can download images with different tags by running the following command:\n\n```\ndocker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:<tag>\n```\n\nFor example, you can pull the image with the latest version by running:\n\n```\ndocker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:latest\n```\n\nIf you see errors due to image pull rate limits, try logging in to Amazon ECR Public with your AWS credentials:\n\n```\naws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws\n```\n\nSee the [Amazon ECR Public official documentation](https://docs.aws.amazon.com/AmazonECR/latest/public/get-set-up-for-amazon-ecr.html) for more details.\n\n##### Docker Hub\n\n[amazon/aws-for-fluent-bit](https://hub.docker.com/r/amazon/aws-for-fluent-bit/tags)\n\n##### Amazon ECR\n\nYou can use our SSM Public Parameters to find the Amazon ECR image URI in your region:\n\n```\naws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/\n```\n\nFor more details, see [our docs](https://github.com/aws/aws-for-fluent-bit#public-images).\n\n## License\n\nThis library is licensed under the Apache 2.0 License.\n"
  },
  {
    "path": "THIRD-PARTY",
    "content": "** github.com/aws/amazon-kinesis-firehose-for-fluent-bit; version c41b42995068\n-- https://github.com/aws/amazon-kinesis-firehose-for-fluent-bit\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n** github.com/aws/aws-sdk-go; version v1.20.6 --\nhttps://github.com/aws/aws-sdk-go\nCopyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nCopyright 2014-2015 Stripe, Inc.\n** github.com/fluent/fluent-bit-go; version fc386d263885 --\nhttps://github.com/fluent/fluent-bit-go\nCopyright (C) 2015-2017 Treasure Data Inc.\n** github.com/golang/mock; version 1.3.1 -- https://github.com/golang/mock\nCopyright 2010 Google Inc.\n** github.com/jmespath/go-jmespath; version c2b33e8439af --\nhttps://github.com/jmespath/go-jmespath\nCopyright 2015 James Saryerwinnie\n** github.com/modern-go/concurrent; version bacd9c7ef1dd --\nhttps://github.com/modern-go/concurrent\nNone\n** github.com/modern-go/reflect2; version v1.0.1 --\nhttps://github.com/modern-go/reflect2\nNone\n\nApache License\n\nVersion 2.0, January 2004\n\nhttp://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND\nDISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction, and\n      distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by the\n      copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all other\n      entities that control, are controlled by, or are under common control\n      with that entity. 
For the purposes of this definition, \"control\" means\n      (i) the power, direct or indirect, to cause the direction or management\n      of such entity, whether by contract or otherwise, or (ii) ownership of\n      fifty percent (50%) or more of the outstanding shares, or (iii)\n      beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising\n      permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation source,\n      and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but not limited\n      to compiled object code, generated documentation, and conversions to\n      other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or Object\n      form, made available under the License, as indicated by a copyright\n      notice that is included in or attached to the work (an example is\n      provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object form,\n      that is based on (or derived from) the Work and for which the editorial\n      revisions, annotations, elaborations, or other modifications represent,\n      as a whole, an original work of authorship. 
For the purposes of this\n      License, Derivative Works shall not include works that remain separable\n      from, or merely link (or bind by name) to the interfaces of, the Work and\n      Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including the original\n      version of the Work and any modifications or additions to that Work or\n      Derivative Works thereof, that is intentionally submitted to Licensor for\n      inclusion in the Work by the copyright owner or by an individual or Legal\n      Entity authorized to submit on behalf of the copyright owner. For the\n      purposes of this definition, \"submitted\" means any form of electronic,\n      verbal, or written communication sent to the Licensor or its\n      representatives, including but not limited to communication on electronic\n      mailing lists, source code control systems, and issue tracking systems\n      that are managed by, or on behalf of, the Licensor for the purpose of\n      discussing and improving the Work, but excluding communication that is\n      conspicuously marked or otherwise designated in writing by the copyright\n      owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity on\n      behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of this\n   License, each Contributor hereby grants to You a perpetual, worldwide,\n   non-exclusive, no-charge, royalty-free, irrevocable copyright license to\n   reproduce, prepare Derivative Works of, publicly display, publicly perform,\n   sublicense, and distribute the Work and such Derivative Works in Source or\n   Object form.\n\n   3. Grant of Patent License. 
Subject to the terms and conditions of this\n   License, each Contributor hereby grants to You a perpetual, worldwide,\n   non-exclusive, no-charge, royalty-free, irrevocable (except as stated in\n   this section) patent license to make, have made, use, offer to sell, sell,\n   import, and otherwise transfer the Work, where such license applies only to\n   those patent claims licensable by such Contributor that are necessarily\n   infringed by their Contribution(s) alone or by combination of their\n   Contribution(s) with the Work to which such Contribution(s) was submitted.\n   If You institute patent litigation against any entity (including a\n   cross-claim or counterclaim in a lawsuit) alleging that the Work or a\n   Contribution incorporated within the Work constitutes direct or contributory\n   patent infringement, then any patent licenses granted to You under this\n   License for that Work shall terminate as of the date such litigation is\n   filed.\n\n   4. Redistribution. You may reproduce and distribute copies of the Work or\n   Derivative Works thereof in any medium, with or without modifications, and\n   in Source or Object form, provided that You meet the following conditions:\n\n      (a) You must give any other recipients of the Work or Derivative Works a\n      copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices stating\n      that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works that You\n      distribute, all copyright, patent, trademark, and attribution notices\n      from the Source form of the Work, excluding those notices that do not\n      pertain to any part of the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n      distribution, then any Derivative Works that You distribute must include\n      a readable copy of the attribution notices contained within such NOTICE\n      file, excluding those 
notices that do not pertain to any part of the\n      Derivative Works, in at least one of the following places: within a\n      NOTICE text file distributed as part of the Derivative Works; within the\n      Source form or documentation, if provided along with the Derivative\n      Works; or, within a display generated by the Derivative Works, if and\n      wherever such third-party notices normally appear. The contents of the\n      NOTICE file are for informational purposes only and do not modify the\n      License. You may add Your own attribution notices within Derivative Works\n      that You distribute, alongside or as an addendum to the NOTICE text from\n      the Work, provided that such additional attribution notices cannot be\n      construed as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and may\n      provide additional or different license terms and conditions for use,\n      reproduction, or distribution of Your modifications, or for any such\n      Derivative Works as a whole, provided Your use, reproduction, and\n      distribution of the Work otherwise complies with the conditions stated in\n      this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise, any\n   Contribution intentionally submitted for inclusion in the Work by You to the\n   Licensor shall be under the terms and conditions of this License, without\n   any additional terms or conditions. Notwithstanding the above, nothing\n   herein shall supersede or modify the terms of any separate license agreement\n   you may have executed with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n   names, trademarks, service marks, or product names of the Licensor, except\n   as required for reasonable and customary use in describing the origin of the\n   Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in\n   writing, Licensor provides the Work (and each Contributor provides its\n   Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n   KIND, either express or implied, including, without limitation, any\n   warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or\n   FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining\n   the appropriateness of using or redistributing the Work and assume any risks\n   associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory, whether\n   in tort (including negligence), contract, or otherwise, unless required by\n   applicable law (such as deliberate and grossly negligent acts) or agreed to\n   in writing, shall any Contributor be liable to You for damages, including\n   any direct, indirect, special, incidental, or consequential damages of any\n   character arising as a result of this License or out of the use or inability\n   to use the Work (including but not limited to damages for loss of goodwill,\n   work stoppage, computer failure or malfunction, or any and all other\n   commercial damages or losses), even if such Contributor has been advised of\n   the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing the Work\n   or Derivative Works thereof, You may choose to offer, and charge a fee for,\n   acceptance of support, warranty, indemnity, or other liability obligations\n   and/or rights consistent with this License. 
However, in accepting such\n   obligations, You may act only on Your own behalf and on Your sole\n   responsibility, not on behalf of any other Contributor, and only if You\n   agree to indemnify, defend, and hold each Contributor harmless for any\n   liability incurred by, or claims asserted against, such Contributor by\n   reason of your accepting any such warranty or additional liability. END OF\n   TERMS AND CONDITIONS\n\nAPPENDIX: How to apply the Apache License to your work.\n\nTo apply the Apache License to your work, attach the following boilerplate\nnotice, with the fields enclosed by brackets \"[]\" replaced with your own\nidentifying information. (Don't include the brackets!) The text should be\nenclosed in the appropriate comment syntax for the file format. We also\nrecommend that a file or class name and description of purpose be included on\nthe same \"printed page\" as the copyright notice for easier identification\nwithin third-party archives.\n\nCopyright [yyyy] [name of copyright owner]\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\n\nyou may not use this file except in compliance with the License.\n\nYou may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\n\ndistributed under the License is distributed on an \"AS IS\" BASIS,\n\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\nSee the License for the specific language governing permissions and\n\nlimitations under the License.\n\n* For github.com/aws/amazon-kinesis-firehose-for-fluent-bit see also this\nrequired NOTICE:\n    Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n* For github.com/aws/aws-sdk-go see also this required NOTICE:\n    Copyright 2015 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\n    Copyright 2014-2015 Stripe, Inc.\n* For github.com/fluent/fluent-bit-go see also this required NOTICE:\n    Copyright (C) 2015-2017 Treasure Data Inc.\n* For github.com/golang/mock see also this required NOTICE:\n    Copyright 2010 Google Inc.\n* For github.com/jmespath/go-jmespath see also this required NOTICE:\n    Copyright 2015 James Saryerwinnie\n* For github.com/modern-go/concurrent see also this required NOTICE:\n    None\n* For github.com/modern-go/reflect2 see also this required NOTICE:\n    None\n\n------\n\n** golang.org; version go1.12 -- https://golang.org/\nCopyright (c) 2009 The Go Authors. All rights reserved.\n\nCopyright (c) 2009 The Go Authors. All rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\n   * Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n   * Redistributions in binary form must reproduce the above\ncopyright notice, this list of conditions and the following disclaimer\nin the documentation and/or other materials provided with the\ndistribution.\n   * Neither the name of Google Inc. nor the names of its\ncontributors may be used to endorse or promote products derived from\nthis software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\nOWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\nLIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n------\n\n** github.com/pmezard/go-difflib; version v1.0.0 --\nhttps://github.com/pmezard/go-difflib\nCopyright (c) 2013, Patrick Mezard\n\nCopyright (c) 2013, Patrick Mezard\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\n    Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n    Redistributions in binary form must reproduce the above copyright\nnotice, this list of conditions and the following disclaimer in the\ndocumentation and/or other materials provided with the distribution.\n    The names of its contributors may not be used to endorse or promote\nproducts derived from this software without specific prior written\npermission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS\nIS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED\nTO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A\nPARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\nHOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED\nTO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\nPROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\nLIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\nNEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n------\n\n** github.com/davecgh/go-spew; version v1.1.1 --\nhttps://github.com/davecgh/go-spew\nCopyright (c) 2012-2016 Dave Collins <dave@davec.name>\n** github.com/davecgh/go-spew; version v1.1.1 --\nhttps://github.com/davecgh/go-spew\nCopyright (c) 2012-2016 Dave Collins <dave@davec.name>\n\nISC License\n\nCopyright (c) 2012-2016 Dave Collins <dave@davec.name>\n\nPermission to use, copy, modify, and/or distribute this software for any\npurpose with or without fee is hereby granted, provided that the above\ncopyright notice and this permission notice appear in all copies.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\nWITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\nMERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\nANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\nWHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\nACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\nOR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n\n------\n\n** github.com/stretchr/testify; version v1.3.0 --\nhttps://github.com/stretchr/testify\nCopyright (c) 2012-2018 Mat Ryer and Tyler Bunnell\n\nMIT License\n\nCopyright (c) 2012-2018 Mat Ryer and Tyler Bunnell\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n------\n\n** github.com/sirupsen/logrus; version v1.4.2 --\nhttps://github.com/sirupsen/logrus\nCopyright (c) 2014 Simon Eskildsen\n\nThe MIT License (MIT)\n\nCopyright (c) 2014 Simon Eskildsen\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n\n------\n\n** github.com/konsorten/go-windows-terminal-sequences; version v1.0.1 --\nhttps://github.com/konsorten/go-windows-terminal-sequences\nCopyright (c) 2017 marvin + konsorten GmbH (open-source@konsorten.de)\n\n(The MIT License)\n\nCopyright (c) 2017 marvin + konsorten GmbH (open-source@konsorten.de)\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the 'Software'), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\nof the Software, and to permit persons to whom the Software is furnished to do\nso, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n------\n\n** github.com/ugorji/go; version v1.1.4 -- https://github.com/ugorji/go\nCopyright (c) 2012-2015 Ugorji Nwoke.\n\nThe MIT License (MIT)\n\nCopyright (c) 2012-2015 Ugorji Nwoke.\nAll rights reserved.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n------\n\n** github.com/json-iterator/go; version v1.1.6 --\nhttps://github.com/json-iterator/go\nCopyright (c) 2016 json-iterator\n\nMIT License\n\nCopyright (c) 2016 json-iterator\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n------\n\n** github.com/cenkalti/backoff; version v2.1.1 --\nhttps://github.com/cenkalti/backoff\nCopyright (c) 2014 Cenk Altı\n\nThe MIT License (MIT)\n\nCopyright (c) 2014 Cenk Altı\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\nof\nthe Software, and to permit persons to whom the Software is furnished to do so,\nsubject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS\nFOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR\nCOPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER\nIN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN\nCONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n------\n\n** github.com/stretchr/objx; version v0.1.1 -- https://github.com/stretchr/objx\nCopyright (c) 2014 Stretchr, Inc.\nCopyright (c) 2017-2018 objx contributors\n\nThe MIT License\n\nCopyright (c) 2014 Stretchr, Inc.\nCopyright (c) 2017-2018 objx contributors\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "VERSION",
    "content": "1.9.5\n"
  },
  {
    "path": "cloudwatch/cloudwatch.go",
    "content": "// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\"). You may\n// not use this file except in compliance with the License. A copy of the\n// License is located at\n//\n//\thttp://aws.amazon.com/apache2.0/\n//\n// or in the \"license\" file accompanying this file. This file is distributed\n// on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n// express or implied. See the License for the specific language governing\n// permissions and limitations under the License.\n\npackage cloudwatch\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"net/http\"\n\t\"os\"\n\t\"runtime\"\n\t\"sort\"\n\t\"strings\"\n\t\"time\"\n\t\"unicode/utf8\"\n\n\t\"github.com/aws/amazon-kinesis-firehose-for-fluent-bit/plugins\"\n\t\"github.com/aws/aws-sdk-go/aws\"\n\t\"github.com/aws/aws-sdk-go/aws/arn\"\n\t\"github.com/aws/aws-sdk-go/aws/awserr\"\n\t\"github.com/aws/aws-sdk-go/aws/credentials/endpointcreds\"\n\t\"github.com/aws/aws-sdk-go/aws/credentials/stscreds\"\n\t\"github.com/aws/aws-sdk-go/aws/endpoints\"\n\t\"github.com/aws/aws-sdk-go/aws/request\"\n\t\"github.com/aws/aws-sdk-go/aws/session\"\n\t\"github.com/aws/aws-sdk-go/service/cloudwatchlogs\"\n\tfluentbit \"github.com/fluent/fluent-bit-go/output\"\n\tjsoniter \"github.com/json-iterator/go\"\n\t\"github.com/segmentio/ksuid\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/valyala/bytebufferpool\"\n\t\"github.com/valyala/fasttemplate\"\n)\n\nconst (\n\t// See: http://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html\n\tperEventBytes          = 26\n\tmaximumBytesPerPut     = 1048576\n\tmaximumLogEventsPerPut = 10000\n\tmaximumBytesPerEvent   = 1024 * 256 //256KB\n\tmaximumTimeSpanPerPut  = time.Hour * 24\n\ttruncatedSuffix        = \"[Truncated...]\"\n\tmaxGroupStreamLength   = 512\n)\n\nconst (\n\t// Log stream objects that are empty and inactive 
for longer than the timeout get cleaned up\n\tlogStreamInactivityTimeout = time.Hour\n\t// Check for expired log streams every 10 minutes\n\tlogStreamInactivityCheckInterval = 10 * time.Minute\n\t// linuxBaseUserAgent is the base user agent string used for Linux.\n\tlinuxBaseUserAgent = \"aws-fluent-bit-plugin\"\n\t// windowsBaseUserAgent is the base user agent string used for Windows.\n\twindowsBaseUserAgent = \"aws-fluent-bit-plugin-windows\"\n)\n\n// LogsClient contains the CloudWatch API calls used by this plugin\ntype LogsClient interface {\n\tCreateLogGroup(input *cloudwatchlogs.CreateLogGroupInput) (*cloudwatchlogs.CreateLogGroupOutput, error)\n\tPutRetentionPolicy(input *cloudwatchlogs.PutRetentionPolicyInput) (*cloudwatchlogs.PutRetentionPolicyOutput, error)\n\tCreateLogStream(input *cloudwatchlogs.CreateLogStreamInput) (*cloudwatchlogs.CreateLogStreamOutput, error)\n\tDescribeLogStreams(input *cloudwatchlogs.DescribeLogStreamsInput) (*cloudwatchlogs.DescribeLogStreamsOutput, error)\n\tPutLogEvents(input *cloudwatchlogs.PutLogEventsInput) (*cloudwatchlogs.PutLogEventsOutput, error)\n}\n\ntype logStream struct {\n\tlogEvents         []*cloudwatchlogs.InputLogEvent\n\tcurrentByteLength int\n\tcurrentBatchStart *time.Time\n\tcurrentBatchEnd   *time.Time\n\tnextSequenceToken *string\n\tlogStreamName     string\n\tlogGroupName      string\n\texpiration        time.Time\n}\n\n// Event is the input data and contains a log entry.\n// The group and stream are added during processing.\ntype Event struct {\n\tTS     time.Time\n\tRecord map[interface{}]interface{}\n\tTag    string\n\tgroup  string\n\tstream string\n}\n\n// TaskMetadata is the task metadata from the ECS V3 endpoint\ntype TaskMetadata struct {\n\tCluster string `json:\"Cluster,omitempty\"`\n\tTaskARN string `json:\"TaskARN,omitempty\"`\n\tTaskID  string `json:\"TaskID,omitempty\"`\n}\n\ntype streamDoesntExistError struct {\n\tstreamName string\n\tgroupName  string\n}\n\nfunc (stream *logStream) isExpired() 
bool {\n\tif len(stream.logEvents) == 0 && stream.expiration.Before(time.Now()) {\n\t\treturn true\n\t}\n\treturn false\n}\n\nfunc (stream *logStream) updateExpiration() {\n\tstream.expiration = time.Now().Add(logStreamInactivityTimeout)\n}\n\ntype fastTemplate struct {\n\tString string\n\t*fasttemplate.Template\n}\n\n// OutputPlugin is the CloudWatch Logs Fluent Bit output plugin\ntype OutputPlugin struct {\n\tlogGroupName                  *fastTemplate\n\tdefaultLogGroupName           string\n\tlogStreamPrefix               string\n\tlogStreamName                 *fastTemplate\n\tdefaultLogStreamName          string\n\tlogKey                        string\n\tclient                        LogsClient\n\tstreams                       map[string]*logStream\n\tgroups                        map[string]struct{}\n\ttimer                         *plugins.Timeout\n\tnextLogStreamCleanUpCheckTime time.Time\n\tPluginInstanceID              int\n\tlogGroupTags                  map[string]*string\n\tlogGroupRetention             int64\n\tautoCreateGroup               bool\n\tautoCreateStream              bool\n\tbufferPool                    bytebufferpool.Pool\n\tecsMetadata                   TaskMetadata\n\trunningInECS                  bool\n\tuuid                          string\n\textraUserAgent                string\n}\n\n// OutputPluginConfig is the input information used by NewOutputPlugin to create a new OutputPlugin\ntype OutputPluginConfig struct {\n\tRegion               string\n\tLogGroupName         string\n\tDefaultLogGroupName  string\n\tLogStreamPrefix      string\n\tLogStreamName        string\n\tDefaultLogStreamName string\n\tLogKey               string\n\tRoleARN              string\n\tAutoCreateGroup      bool\n\tAutoCreateStream     bool\n\tNewLogGroupTags      string\n\tLogRetentionDays     int64\n\tCWEndpoint           string\n\tSTSEndpoint          string\n\tExternalID           string\n\tCredsEndpoint        string\n\tPluginInstanceID     
int\n\tLogFormat            string\n\tExtraUserAgent       string\n}\n\n// Validate checks the configuration input for an OutputPlugin instance\nfunc (config OutputPluginConfig) Validate() error {\n\terrorStr := \"%s is a required parameter\"\n\tif config.Region == \"\" {\n\t\treturn fmt.Errorf(errorStr, \"region\")\n\t}\n\tif config.LogGroupName == \"\" {\n\t\treturn fmt.Errorf(errorStr, \"log_group_name\")\n\t}\n\tif config.LogStreamName == \"\" && config.LogStreamPrefix == \"\" {\n\t\treturn fmt.Errorf(\"log_stream_name or log_stream_prefix is required\")\n\t}\n\n\tif config.LogStreamName != \"\" && config.LogStreamPrefix != \"\" {\n\t\treturn fmt.Errorf(\"either log_stream_name or log_stream_prefix can be configured. They cannot be provided together\")\n\t}\n\n\treturn nil\n}\n\n// NewOutputPlugin creates an OutputPlugin object\nfunc NewOutputPlugin(config OutputPluginConfig) (*OutputPlugin, error) {\n\tlogrus.Debugf(\"[cloudwatch %d] Initializing NewOutputPlugin\", config.PluginInstanceID)\n\n\tclient, err := newCloudWatchLogsClient(config)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\ttimer, err := plugins.NewTimeout(func(d time.Duration) {\n\t\tlogrus.Errorf(\"[cloudwatch %d] timeout threshold reached: Failed to send logs for %s\\n\", config.PluginInstanceID, d.String())\n\t\tlogrus.Fatalf(\"[cloudwatch %d] Quitting Fluent Bit\", config.PluginInstanceID) // exit the plugin and kill Fluent Bit\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tlogGroupTemplate, err := newTemplate(config.LogGroupName)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tlogStreamTemplate, err := newTemplate(config.LogStreamName)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\trunningInECS := true\n\t// check if it is running in ECS\n\tif os.Getenv(\"ECS_CONTAINER_METADATA_URI\") == \"\" {\n\t\trunningInECS = false\n\t}\n\n\treturn &OutputPlugin{\n\t\tlogGroupName:                  logGroupTemplate,\n\t\tlogStreamName:                 
logStreamTemplate,\n\t\tlogStreamPrefix:               config.LogStreamPrefix,\n\t\tdefaultLogGroupName:           config.DefaultLogGroupName,\n\t\tdefaultLogStreamName:          config.DefaultLogStreamName,\n\t\tlogKey:                        config.LogKey,\n\t\tclient:                        client,\n\t\ttimer:                         timer,\n\t\tstreams:                       make(map[string]*logStream),\n\t\tnextLogStreamCleanUpCheckTime: time.Now().Add(logStreamInactivityCheckInterval),\n\t\tPluginInstanceID:              config.PluginInstanceID,\n\t\tlogGroupTags:                  tagKeysToMap(config.NewLogGroupTags),\n\t\tlogGroupRetention:             config.LogRetentionDays,\n\t\tautoCreateGroup:               config.AutoCreateGroup,\n\t\tautoCreateStream:              config.AutoCreateStream,\n\t\tgroups:                        make(map[string]struct{}),\n\t\tecsMetadata:                   TaskMetadata{},\n\t\trunningInECS:                  runningInECS,\n\t\tuuid:                          ksuid.New().String(),\n\t\textraUserAgent:                config.ExtraUserAgent,\n\t}, nil\n}\n\nfunc newCloudWatchLogsClient(config OutputPluginConfig) (*cloudwatchlogs.CloudWatchLogs, error) {\n\tcustomResolverFn := func(service, region string, optFns ...func(*endpoints.Options)) (endpoints.ResolvedEndpoint, error) {\n\t\tif service == endpoints.LogsServiceID && config.CWEndpoint != \"\" {\n\t\t\treturn endpoints.ResolvedEndpoint{\n\t\t\t\tURL: config.CWEndpoint,\n\t\t\t}, nil\n\t\t} else if service == endpoints.StsServiceID && config.STSEndpoint != \"\" {\n\t\t\treturn endpoints.ResolvedEndpoint{\n\t\t\t\tURL: config.STSEndpoint,\n\t\t\t}, nil\n\t\t}\n\t\treturn endpoints.DefaultResolver().EndpointFor(service, region, optFns...)\n\t}\n\n\t// Fetch base credentials\n\tbaseConfig := &aws.Config{\n\t\tRegion:                        aws.String(config.Region),\n\t\tEndpointResolver:              endpoints.ResolverFunc(customResolverFn),\n\t\tCredentialsChainVerboseErrors: 
aws.Bool(true),\n\t}\n\n\tif config.CredsEndpoint != \"\" {\n\t\tcreds := endpointcreds.NewCredentialsClient(*baseConfig, request.Handlers{}, config.CredsEndpoint,\n\t\t\tfunc(provider *endpointcreds.Provider) {\n\t\t\t\tprovider.ExpiryWindow = 5 * time.Minute\n\t\t\t})\n\t\tbaseConfig.Credentials = creds\n\t}\n\n\tsess, err := session.NewSession(baseConfig)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar svcSess = sess\n\tvar svcConfig = baseConfig\n\teksRole := os.Getenv(\"EKS_POD_EXECUTION_ROLE\")\n\tif eksRole != \"\" {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Fetching EKS pod credentials.\\n\", config.PluginInstanceID)\n\t\teksConfig := &aws.Config{}\n\t\tcreds := stscreds.NewCredentials(svcSess, eksRole)\n\t\teksConfig.Credentials = creds\n\t\teksConfig.Region = aws.String(config.Region)\n\t\tsvcConfig = eksConfig\n\n\t\tsvcSess, err = session.NewSession(svcConfig)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif config.RoleARN != \"\" {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Fetching credentials for %s\\n\", config.PluginInstanceID, config.RoleARN)\n\t\tstsConfig := &aws.Config{}\n\t\tcreds := stscreds.NewCredentials(svcSess, config.RoleARN, func(p *stscreds.AssumeRoleProvider) {\n\t\t\tif config.ExternalID != \"\" {\n\t\t\t\tp.ExternalID = aws.String(config.ExternalID)\n\t\t\t}\n\t\t})\n\t\tstsConfig.Credentials = creds\n\t\tstsConfig.Region = aws.String(config.Region)\n\t\tsvcConfig = stsConfig\n\n\t\tsvcSess, err = session.NewSession(svcConfig)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tclient := cloudwatchlogs.New(svcSess, svcConfig)\n\tclient.Handlers.Build.PushBackNamed(customUserAgentHandler(config))\n\tif config.LogFormat != \"\" {\n\t\tclient.Handlers.Build.PushBackNamed(LogFormatHandler(config.LogFormat))\n\t}\n\treturn client, nil\n}\n\n// customUserAgentHandler returns an HTTP request handler that sets a custom user agent on all AWS requests\nfunc customUserAgentHandler(config OutputPluginConfig) 
request.NamedHandler {\n\tconst userAgentHeader = \"User-Agent\"\n\n\tbaseUserAgent := linuxBaseUserAgent\n\tif runtime.GOOS == \"windows\" {\n\t\tbaseUserAgent = windowsBaseUserAgent\n\t}\n\n\treturn request.NamedHandler{\n\t\tName: \"ECSLocalEndpointsAgentHandler\",\n\t\tFn: func(r *request.Request) {\n\t\t\tcurrentAgent := r.HTTPRequest.Header.Get(userAgentHeader)\n\t\t\tif config.ExtraUserAgent != \"\" {\n\t\t\t\tr.HTTPRequest.Header.Set(userAgentHeader,\n\t\t\t\t\tfmt.Sprintf(\"%s-%s (%s) %s\", baseUserAgent, config.ExtraUserAgent, runtime.GOOS, currentAgent))\n\t\t\t} else {\n\t\t\t\tr.HTTPRequest.Header.Set(userAgentHeader,\n\t\t\t\t\tfmt.Sprintf(\"%s (%s) %s\", baseUserAgent, runtime.GOOS, currentAgent))\n\t\t\t}\n\t\t},\n\t}\n}\n\n// AddEvent accepts a record and adds it to the buffer for its stream, flushing the buffer if it is full\n// the return value is one of: FLB_OK, FLB_RETRY\n// API Errors lead to an FLB_RETRY, and all other errors are logged, the record is discarded and FLB_OK is returned\nfunc (output *OutputPlugin) AddEvent(e *Event) int {\n\t// Step 1: convert the Event data to strings, and check for a log key.\n\tdata, err := output.processRecord(e)\n\tif err != nil {\n\t\tlogrus.Errorf(\"[cloudwatch %d] %v\\n\", output.PluginInstanceID, err)\n\t\t// discard this single bad record and let the batch continue\n\t\treturn fluentbit.FLB_OK\n\t}\n\n\t// Step 2. Make sure the Event data isn't empty.\n\teventString := logString(data)\n\tif len(eventString) == 0 {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Discarding an event from publishing as it is empty\\n\", output.PluginInstanceID)\n\t\t// discard this single empty record and let the batch continue\n\t\treturn fluentbit.FLB_OK\n\t}\n\n\t// Step 3. 
Extract the Task Metadata if applicable.\n\tif output.runningInECS && output.ecsMetadata.TaskID == \"\" {\n\t\terr := output.getECSMetadata()\n\t\tif err != nil {\n\t\t\tlogrus.Errorf(\"[cloudwatch %d] Failed to get ECS Task Metadata with error: %v\\n\", output.PluginInstanceID, err)\n\t\t\treturn fluentbit.FLB_RETRY\n\t\t}\n\t}\n\n\t// Step 4. Assign a log group and log stream name to the Event.\n\toutput.setGroupStreamNames(e)\n\n\t// Step 5. Create a missing log group for this Event.\n\tif _, ok := output.groups[e.group]; !ok {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Finding log group: %s\", output.PluginInstanceID, e.group)\n\n\t\tif err := output.createLogGroup(e); err != nil {\n\t\t\tlogrus.Error(err)\n\t\t\treturn fluentbit.FLB_RETRY\n\t\t}\n\n\t\toutput.groups[e.group] = struct{}{}\n\t}\n\n\t// Step 6. Create or retrieve an existing log stream for this Event.\n\tstream, err := output.getLogStream(e)\n\tif err != nil {\n\t\tlogrus.Errorf(\"[cloudwatch %d] %v\\n\", output.PluginInstanceID, err)\n\t\t// an error means that the log stream was not created; this is retryable\n\t\treturn fluentbit.FLB_RETRY\n\t}\n\n\t// Step 7. Check batch limits and flush the buffer if any of these limits will be exceeded by this log entry.\n\tcountLimit := len(stream.logEvents) == maximumLogEventsPerPut\n\tsizeLimit := (stream.currentByteLength + cloudwatchLen(eventString)) >= maximumBytesPerPut\n\tspanLimit := stream.logBatchSpan(e.TS) >= maximumTimeSpanPerPut\n\tif countLimit || sizeLimit || spanLimit {\n\t\terr = output.putLogEvents(stream)\n\t\tif err != nil {\n\t\t\tlogrus.Errorf(\"[cloudwatch %d] %v\\n\", output.PluginInstanceID, err)\n\t\t\t// send failures are retryable\n\t\t\treturn fluentbit.FLB_RETRY\n\t\t}\n\t}\n\n\t// Step 8. 
Add this event to the running tally.\n\tstream.logEvents = append(stream.logEvents, &cloudwatchlogs.InputLogEvent{\n\t\tMessage:   aws.String(eventString),\n\t\tTimestamp: aws.Int64(e.TS.UnixNano() / 1e6), // CloudWatch uses milliseconds since epoch\n\t})\n\tstream.currentByteLength += cloudwatchLen(eventString)\n\tif stream.currentBatchStart == nil || stream.currentBatchStart.After(e.TS) {\n\t\tstream.currentBatchStart = &e.TS\n\t}\n\tif stream.currentBatchEnd == nil || stream.currentBatchEnd.Before(e.TS) {\n\t\tstream.currentBatchEnd = &e.TS\n\t}\n\n\treturn fluentbit.FLB_OK\n}\n\n// This plugin tracks CW Log streams\n// We need to periodically delete any streams that haven't been written to in a while\n// Because each stream incurs some memory for its buffer of log events\n// (Which would be empty for an unused stream)\nfunc (output *OutputPlugin) cleanUpExpiredLogStreams() {\n\tif output.nextLogStreamCleanUpCheckTime.Before(time.Now()) {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Checking for expired log streams\", output.PluginInstanceID)\n\n\t\tfor name, stream := range output.streams {\n\t\t\tif stream.isExpired() {\n\t\t\t\tlogrus.Debugf(\"[cloudwatch %d] Removing internal buffer for log stream %s in group %s; the stream has not been written to for %s\",\n\t\t\t\t\toutput.PluginInstanceID, stream.logStreamName, stream.logGroupName, logStreamInactivityTimeout.String())\n\t\t\t\tdelete(output.streams, name)\n\t\t\t}\n\t\t}\n\t\toutput.nextLogStreamCleanUpCheckTime = time.Now().Add(logStreamInactivityCheckInterval)\n\t}\n}\n\nfunc (err *streamDoesntExistError) Error() string {\n\treturn fmt.Sprintf(\"error: stream %s doesn't exist in log group %s\", err.streamName, err.groupName)\n}\n\nfunc (output *OutputPlugin) getLogStream(e *Event) (*logStream, error) {\n\tstream, ok := output.streams[e.group+e.stream]\n\tif !ok {\n\t\t// assume the stream exists\n\t\tstream, err := output.existingLogStream(e)\n\t\tif err != nil {\n\t\t\t// if it doesn't then create it\n\t\t\tif 
_, ok := err.(*streamDoesntExistError); ok {\n\t\t\t\treturn output.createStream(e)\n\t\t\t}\n\t\t}\n\t\treturn stream, err\n\t}\n\treturn stream, nil\n}\n\nfunc (output *OutputPlugin) existingLogStream(e *Event) (*logStream, error) {\n\tvar nextToken *string\n\tvar stream *logStream\n\n\tfor stream == nil {\n\t\tresp, err := output.describeLogStreams(e, nextToken)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tfor _, result := range resp.LogStreams {\n\t\t\tif aws.StringValue(result.LogStreamName) == e.stream {\n\t\t\t\tstream = &logStream{\n\t\t\t\t\tlogGroupName:      e.group,\n\t\t\t\t\tlogStreamName:     e.stream,\n\t\t\t\t\tlogEvents:         make([]*cloudwatchlogs.InputLogEvent, 0, maximumLogEventsPerPut),\n\t\t\t\t\tnextSequenceToken: result.UploadSequenceToken,\n\t\t\t\t}\n\t\t\t\toutput.streams[e.group+e.stream] = stream\n\n\t\t\t\tlogrus.Debugf(\"[cloudwatch %d] Initializing internal buffer for existing log stream %s\\n\", output.PluginInstanceID, e.stream)\n\t\t\t\tstream.updateExpiration() // initialize\n\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\n\t\tif stream == nil && resp.NextToken == nil {\n\t\t\tlogrus.Infof(\"[cloudwatch %d] Log stream %s does not exist in log group %s\", output.PluginInstanceID, e.stream, e.group)\n\t\t\treturn nil, &streamDoesntExistError{\n\t\t\t\tstreamName: e.stream,\n\t\t\t\tgroupName:  e.group,\n\t\t\t}\n\t\t}\n\n\t\tnextToken = resp.NextToken\n\t}\n\treturn stream, nil\n}\n\nfunc (output *OutputPlugin) describeLogStreams(e *Event, nextToken *string) (*cloudwatchlogs.DescribeLogStreamsOutput, error) {\n\toutput.timer.Check()\n\tresp, err := output.client.DescribeLogStreams(&cloudwatchlogs.DescribeLogStreamsInput{\n\t\tLogGroupName:        aws.String(e.group),\n\t\tLogStreamNamePrefix: aws.String(e.stream),\n\t\tNextToken:           nextToken,\n\t})\n\n\tif err != nil {\n\t\toutput.timer.Start()\n\t\treturn nil, err\n\t}\n\toutput.timer.Reset()\n\n\treturn resp, err\n}\n\n// setGroupStreamNames adds the log group and log 
stream names to the event struct.\n// This happens by parsing (any) template data in either configured name.\nfunc (output *OutputPlugin) setGroupStreamNames(e *Event) {\n\t// This happens here to avoid running Split more than once per log Event.\n\tlogTagSplit := strings.SplitN(e.Tag, \".\", 10)\n\ts := &sanitizer{sanitize: sanitizeGroup, buf: output.bufferPool.Get()}\n\n\tif _, err := parseDataMapTags(e, logTagSplit, output.logGroupName, output.ecsMetadata, output.uuid, s); err != nil {\n\t\te.group = output.defaultLogGroupName\n\t\tlogrus.Errorf(\"[cloudwatch %d] parsing log_group_name template '%s' \"+\n\t\t\t\"(using value of default_log_group_name instead): %v\",\n\t\t\toutput.PluginInstanceID, output.logGroupName.String, err)\n\t} else if e.group = s.buf.String(); len(e.group) == 0 {\n\t\te.group = output.defaultLogGroupName\n\t} else if len(e.group) > maxGroupStreamLength {\n\t\te.group = e.group[:maxGroupStreamLength]\n\t}\n\n\tif output.logStreamPrefix != \"\" {\n\t\te.stream = output.logStreamPrefix + e.Tag\n\t\toutput.bufferPool.Put(s.buf)\n\n\t\treturn\n\t}\n\n\ts.sanitize = sanitizeStream\n\ts.buf.Reset()\n\n\tif _, err := parseDataMapTags(e, logTagSplit, output.logStreamName, output.ecsMetadata, output.uuid, s); err != nil {\n\t\te.stream = output.defaultLogStreamName\n\t\tlogrus.Errorf(\"[cloudwatch %d] parsing log_stream_name template '%s': %v\",\n\t\t\toutput.PluginInstanceID, output.logStreamName.String, err)\n\t} else if e.stream = s.buf.String(); len(e.stream) == 0 {\n\t\te.stream = output.defaultLogStreamName\n\t} else if len(e.stream) > maxGroupStreamLength {\n\t\te.stream = e.stream[:maxGroupStreamLength]\n\t}\n\n\toutput.bufferPool.Put(s.buf)\n}\n\nfunc (output *OutputPlugin) createStream(e *Event) (*logStream, error) {\n\tif !output.autoCreateStream {\n\t\treturn nil, fmt.Errorf(\"error: attempting to create log Stream %s in log group %s however auto_create_stream is disabled\", e.stream, e.group)\n\t}\n\toutput.timer.Check()\n\t_, err := 
output.client.CreateLogStream(&cloudwatchlogs.CreateLogStreamInput{\n\t\tLogGroupName:  aws.String(e.group),\n\t\tLogStreamName: aws.String(e.stream),\n\t})\n\n\tif err != nil {\n\t\toutput.timer.Start()\n\t\treturn nil, err\n\t}\n\toutput.timer.Reset()\n\n\tstream := &logStream{\n\t\tlogStreamName:     e.stream,\n\t\tlogGroupName:      e.group,\n\t\tlogEvents:         make([]*cloudwatchlogs.InputLogEvent, 0, maximumLogEventsPerPut),\n\t\tnextSequenceToken: nil, // sequence token not required for a new log stream\n\t}\n\toutput.streams[e.group+e.stream] = stream\n\tstream.updateExpiration() // initialize\n\tlogrus.Infof(\"[cloudwatch %d] Created log stream %s in group %s\", output.PluginInstanceID, e.stream, e.group)\n\n\treturn stream, nil\n}\n\nfunc (output *OutputPlugin) createLogGroup(e *Event) error {\n\tif !output.autoCreateGroup {\n\t\treturn nil\n\t}\n\n\t_, err := output.client.CreateLogGroup(&cloudwatchlogs.CreateLogGroupInput{\n\t\tLogGroupName: aws.String(e.group),\n\t\tTags:         output.logGroupTags,\n\t})\n\tif err == nil {\n\t\tlogrus.Infof(\"[cloudwatch %d] Created log group %s\\n\", output.PluginInstanceID, e.group)\n\t\treturn output.setLogGroupRetention(e.group)\n\t}\n\n\tif awsErr, ok := err.(awserr.Error); !ok ||\n\t\tawsErr.Code() != cloudwatchlogs.ErrCodeResourceAlreadyExistsException {\n\t\treturn err\n\t}\n\n\tlogrus.Infof(\"[cloudwatch %d] Log group %s already exists\\n\", output.PluginInstanceID, e.group)\n\treturn output.setLogGroupRetention(e.group)\n}\n\nfunc (output *OutputPlugin) setLogGroupRetention(name string) error {\n\tif output.logGroupRetention < 1 {\n\t\treturn nil\n\t}\n\n\t_, err := output.client.PutRetentionPolicy(&cloudwatchlogs.PutRetentionPolicyInput{\n\t\tLogGroupName:    aws.String(name),\n\t\tRetentionInDays: aws.Int64(output.logGroupRetention),\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tlogrus.Infof(\"[cloudwatch %d] Set retention policy on log group %s to %dd\\n\", output.PluginInstanceID, name, 
output.logGroupRetention)\n\n\treturn nil\n}\n\n// Takes the byte slice and returns a string\n// Also removes leading and trailing whitespace\nfunc logString(record []byte) string {\n\treturn strings.TrimSpace(string(record))\n}\n\nfunc (output *OutputPlugin) processRecord(e *Event) ([]byte, error) {\n\tvar err error\n\te.Record, err = plugins.DecodeMap(e.Record)\n\tif err != nil {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Failed to decode record: %v\\n\", output.PluginInstanceID, e.Record)\n\t\treturn nil, err\n\t}\n\n\tvar json = jsoniter.ConfigCompatibleWithStandardLibrary\n\tvar data []byte\n\n\tif output.logKey != \"\" {\n\t\tlog, err := plugins.LogKey(e.Record, output.logKey)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tdata, err = plugins.EncodeLogKey(log)\n\t} else {\n\t\tdata, err = json.Marshal(e.Record)\n\t}\n\n\tif err != nil {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Failed to marshal record: %v\\nLog Key: %s\\n\", output.PluginInstanceID, e.Record, output.logKey)\n\t\treturn nil, err\n\t}\n\n\t// append newline\n\tdata = append(data, []byte(\"\\n\")...)\n\n\tif (len(data) + perEventBytes) > maximumBytesPerEvent {\n\t\tlogrus.Warnf(\"[cloudwatch %d] Found record with %d bytes, truncating to 256KB, logGroup=%s, stream=%s\\n\",\n\t\t\toutput.PluginInstanceID, len(data)+perEventBytes, e.group, e.stream)\n\n\t\t/*\n\t\t * Find last byte of trailing unicode character via efficient byte scanning\n\t\t * Avoids corrupting rune\n\t\t *\n\t\t * A unicode character may be composed of 1 - 4 bytes\n\t\t *   bytes [11, 01, 00]xx xxxx: represent the first byte in a unicode character\n\t\t *   byte 10xx xxxx: represent all bytes following the first byte.\n\t\t *\n\t\t * nextByte is the first byte that is truncated,\n\t\t * so nextByte should be the start of a new unicode character in first byte format.\n\t\t */\n\t\tnextByte := (maximumBytesPerEvent - len(truncatedSuffix) - perEventBytes)\n\t\tfor (data[nextByte]&0xc0 == 0x80) && nextByte > 0 
{\n\t\t\tnextByte--\n\t\t}\n\n\t\tdata = data[:nextByte]\n\t\tdata = append(data, []byte(truncatedSuffix)...)\n\t}\n\n\treturn data, nil\n}\n\nfunc (output *OutputPlugin) getECSMetadata() error {\n\tecsTaskMetadataEndpointV3 := os.Getenv(\"ECS_CONTAINER_METADATA_URI\")\n\tvar metadata TaskMetadata\n\tres, err := http.Get(fmt.Sprintf(\"%s/task\", ecsTaskMetadataEndpointV3))\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Failed to get endpoint response: %w\", err)\n\t}\n\tresponse, err := ioutil.ReadAll(res.Body)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Failed to read response '%v' from URL: %w\", res, err)\n\t}\n\tres.Body.Close()\n\n\terr = json.Unmarshal(response, &metadata)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Failed to unmarshal ECS metadata '%+v': %w\", metadata, err)\n\t}\n\n\tarnInfo, err := arn.Parse(metadata.TaskARN)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Failed to parse ECS TaskARN '%s': %w\", metadata.TaskARN, err)\n\t}\n\tresourceID := strings.Split(arnInfo.Resource, \"/\")\n\ttaskID := resourceID[len(resourceID)-1]\n\tmetadata.TaskID = taskID\n\n\toutput.ecsMetadata = metadata\n\treturn nil\n}\n\n// Flush sends the current buffer of records.\nfunc (output *OutputPlugin) Flush() error {\n\tlogrus.Debugf(\"[cloudwatch %d] Flush() Called\", output.PluginInstanceID)\n\n\tfor _, stream := range output.streams {\n\t\tif err := output.flushStream(stream); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (output *OutputPlugin) flushStream(stream *logStream) error {\n\toutput.cleanUpExpiredLogStreams() // will periodically clean up, otherwise is no-op\n\treturn output.putLogEvents(stream)\n}\n\nfunc (output *OutputPlugin) putLogEvents(stream *logStream) error {\n\t// return in case of empty logEvents\n\tif len(stream.logEvents) == 0 {\n\t\treturn nil\n\t}\n\n\toutput.timer.Check()\n\tstream.updateExpiration()\n\n\t// Log events in a single PutLogEvents request must be in chronological order.\n\tsort.SliceStable(stream.logEvents, 
func(i, j int) bool {\n\t\treturn aws.Int64Value(stream.logEvents[i].Timestamp) < aws.Int64Value(stream.logEvents[j].Timestamp)\n\t})\n\tresponse, err := output.client.PutLogEvents(&cloudwatchlogs.PutLogEventsInput{\n\t\tLogEvents:     stream.logEvents,\n\t\tLogGroupName:  aws.String(stream.logGroupName),\n\t\tLogStreamName: aws.String(stream.logStreamName),\n\t\tSequenceToken: stream.nextSequenceToken,\n\t})\n\tif err != nil {\n\t\tif awsErr, ok := err.(awserr.Error); ok {\n\t\t\tif awsErr.Code() == cloudwatchlogs.ErrCodeDataAlreadyAcceptedException {\n\t\t\t\t// already submitted, just grab the correct sequence token\n\t\t\t\tparts := strings.Split(awsErr.Message(), \" \")\n\t\t\t\tstream.nextSequenceToken = &parts[len(parts)-1]\n\t\t\t\tstream.logEvents = stream.logEvents[:0]\n\t\t\t\tstream.currentByteLength = 0\n\t\t\t\tstream.currentBatchStart = nil\n\t\t\t\tstream.currentBatchEnd = nil\n\t\t\t\tlogrus.Infof(\"[cloudwatch %d] Encountered error %v; data already accepted, ignoring error\\n\", output.PluginInstanceID, awsErr)\n\t\t\t\treturn nil\n\t\t\t} else if awsErr.Code() == cloudwatchlogs.ErrCodeInvalidSequenceTokenException {\n\t\t\t\t// sequence code is bad, grab the correct one and retry\n\t\t\t\tparts := strings.Split(awsErr.Message(), \" \")\n\t\t\t\tnextSequenceToken := &parts[len(parts)-1]\n\t\t\t\t// If this is a new stream then the error will end like \"The next expected sequenceToken is: null\" and sequenceToken should be nil\n\t\t\t\tif strings.HasPrefix(*nextSequenceToken, \"null\") {\n\t\t\t\t\tnextSequenceToken = nil\n\t\t\t\t}\n\t\t\t\tstream.nextSequenceToken = nextSequenceToken\n\n\t\t\t\treturn output.putLogEvents(stream)\n\t\t\t} else if awsErr.Code() == cloudwatchlogs.ErrCodeResourceNotFoundException {\n\t\t\t\t// a log group or a log stream should be re-created after it is deleted and then retry\n\t\t\t\tlogrus.Errorf(\"[cloudwatch %d] Encountered error %v; detailed information: %s\\n\", output.PluginInstanceID, awsErr, 
awsErr.Message())\n\t\t\t\tif strings.Contains(awsErr.Message(), \"group\") {\n\t\t\t\t\tif err := output.createLogGroup(&Event{group: stream.logGroupName}); err != nil {\n\t\t\t\t\t\tlogrus.Errorf(\"[cloudwatch %d] Encountered error %v\\n\", output.PluginInstanceID, err)\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t} else if strings.Contains(awsErr.Message(), \"stream\") {\n\t\t\t\t\tif _, err := output.createStream(&Event{group: stream.logGroupName, stream: stream.logStreamName}); err != nil {\n\t\t\t\t\t\tlogrus.Errorf(\"[cloudwatch %d] Encountered error %v\\n\", output.PluginInstanceID, err)\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\treturn fmt.Errorf(\"A Log group/stream did not exist, re-created it. Will retry PutLogEvents on next flush\")\n\t\t\t} else {\n\t\t\t\toutput.timer.Start()\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else {\n\t\t\treturn err\n\t\t}\n\t}\n\toutput.processRejectedEventsInfo(response)\n\toutput.timer.Reset()\n\tlogrus.Debugf(\"[cloudwatch %d] Sent %d events to CloudWatch for stream '%s' in group '%s'\",\n\t\toutput.PluginInstanceID, len(stream.logEvents), stream.logStreamName, stream.logGroupName)\n\n\tstream.nextSequenceToken = response.NextSequenceToken\n\tstream.logEvents = stream.logEvents[:0]\n\tstream.currentByteLength = 0\n\tstream.currentBatchStart = nil\n\tstream.currentBatchEnd = nil\n\n\treturn nil\n}\n\nfunc (output *OutputPlugin) processRejectedEventsInfo(response *cloudwatchlogs.PutLogEventsOutput) {\n\tif response.RejectedLogEventsInfo != nil {\n\t\tif response.RejectedLogEventsInfo.ExpiredLogEventEndIndex != nil {\n\t\t\tlogrus.Warnf(\"[cloudwatch %d] %d log events were marked as expired by CloudWatch\\n\", output.PluginInstanceID, aws.Int64Value(response.RejectedLogEventsInfo.ExpiredLogEventEndIndex))\n\t\t}\n\t\tif response.RejectedLogEventsInfo.TooNewLogEventStartIndex != nil {\n\t\t\tlogrus.Warnf(\"[cloudwatch %d] %d log events were marked as too new by CloudWatch\\n\", output.PluginInstanceID, 
aws.Int64Value(response.RejectedLogEventsInfo.TooNewLogEventStartIndex))\n\t\t}\n\t\tif response.RejectedLogEventsInfo.TooOldLogEventEndIndex != nil {\n\t\t\tlogrus.Warnf(\"[cloudwatch %d] %d log events were marked as too old by CloudWatch\\n\", output.PluginInstanceID, aws.Int64Value(response.RejectedLogEventsInfo.TooOldLogEventEndIndex))\n\t\t}\n\t}\n}\n\n// counts the effective number of bytes in the string, after\n// UTF-8 normalization.  UTF-8 normalization includes replacing bytes that do\n// not constitute valid UTF-8 encoded Unicode codepoints with the Unicode\n// replacement codepoint U+FFFD (a 3-byte UTF-8 sequence, represented in Go as\n// utf8.RuneError)\n// this works because Go range will parse the string as UTF-8 runes\n// copied from AWSLogs driver: https://github.com/moby/moby/commit/1e8ef386279e2e28aff199047e798fad660efbdd\nfunc cloudwatchLen(event string) int {\n\teffectiveBytes := perEventBytes\n\tfor _, rune := range event {\n\t\teffectiveBytes += utf8.RuneLen(rune)\n\t}\n\treturn effectiveBytes\n}\n\nfunc (stream *logStream) logBatchSpan(timestamp time.Time) time.Duration {\n\tif stream.currentBatchStart == nil || stream.currentBatchEnd == nil {\n\t\treturn 0\n\t}\n\n\tif stream.currentBatchStart.After(timestamp) {\n\t\treturn stream.currentBatchEnd.Sub(timestamp)\n\t} else if stream.currentBatchEnd.Before(timestamp) {\n\t\treturn timestamp.Sub(*stream.currentBatchStart)\n\t}\n\n\treturn stream.currentBatchEnd.Sub(*stream.currentBatchStart)\n}\n"
  },
  {
    "path": "cloudwatch/cloudwatch_test.go",
    "content": "// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\"). You may\n// not use this file except in compliance with the License. A copy of the\n// License is located at\n//\n//\thttp://aws.amazon.com/apache2.0/\n//\n// or in the \"license\" file accompanying this file. This file is distributed\n// on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n// express or implied. See the License for the specific language governing\n// permissions and limitations under the License.\n\npackage cloudwatch\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/cloudwatch/mock_cloudwatch\"\n\t\"github.com/aws/amazon-kinesis-firehose-for-fluent-bit/plugins\"\n\t\"github.com/aws/aws-sdk-go/aws\"\n\t\"github.com/aws/aws-sdk-go/aws/awserr\"\n\t\"github.com/aws/aws-sdk-go/service/cloudwatchlogs\"\n\tfluentbit \"github.com/fluent/fluent-bit-go/output\"\n\t\"github.com/golang/mock/gomock\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nconst (\n\ttestRegion          = \"us-west-2\"\n\ttestLogGroup        = \"my-logs\"\n\ttestLogStreamPrefix = \"my-prefix\"\n\ttestTag             = \"tag\"\n\ttestNextToken       = \"next-token\"\n\ttestSequenceToken   = \"sequence-token\"\n)\n\ntype configTest struct {\n\tname          string\n\tconfig        OutputPluginConfig\n\tisValidConfig bool\n\texpectedError string\n}\n\nvar (\n\tconfigValidationTestCases = []configTest{\n\t\t{\n\t\t\tname: \"ValidConfiguration\",\n\t\t\tconfig: OutputPluginConfig{\n\t\t\t\tRegion:          testRegion,\n\t\t\t\tLogGroupName:    testLogGroup,\n\t\t\t\tLogStreamPrefix: testLogStreamPrefix,\n\t\t\t},\n\t\t\tisValidConfig: true,\n\t\t\texpectedError: \"\",\n\t\t},\n\t\t{\n\t\t\tname: \"MissingRegion\",\n\t\t\tconfig: 
OutputPluginConfig{\n\t\t\t\tLogGroupName:    testLogGroup,\n\t\t\t\tLogStreamPrefix: testLogStreamPrefix,\n\t\t\t},\n\t\t\tisValidConfig: false,\n\t\t\texpectedError: \"region is a required parameter\",\n\t\t},\n\t\t{\n\t\t\tname: \"MissingLogGroup\",\n\t\t\tconfig: OutputPluginConfig{\n\t\t\t\tRegion:          testRegion,\n\t\t\t\tLogStreamPrefix: testLogStreamPrefix,\n\t\t\t},\n\t\t\tisValidConfig: false,\n\t\t\texpectedError: \"log_group_name is a required parameter\",\n\t\t},\n\t\t{\n\t\t\tname: \"OnlyLogStreamNameProvided\",\n\t\t\tconfig: OutputPluginConfig{\n\t\t\t\tRegion:        testRegion,\n\t\t\t\tLogGroupName:  testLogGroup,\n\t\t\t\tLogStreamName: \"testLogStream\",\n\t\t\t},\n\t\t\tisValidConfig: true,\n\t\t},\n\t\t{\n\t\t\tname: \"OnlyLogStreamPrefixProvided\",\n\t\t\tconfig: OutputPluginConfig{\n\t\t\t\tRegion:          testRegion,\n\t\t\t\tLogGroupName:    testLogGroup,\n\t\t\t\tLogStreamPrefix: testLogStreamPrefix,\n\t\t\t},\n\t\t\tisValidConfig: true,\n\t\t},\n\t\t{\n\t\t\tname: \"LogStreamAndPrefixBothProvided\",\n\t\t\tconfig: OutputPluginConfig{\n\t\t\t\tRegion:          testRegion,\n\t\t\t\tLogGroupName:    testLogGroup,\n\t\t\t\tLogStreamName:   \"testLogStream\",\n\t\t\t\tLogStreamPrefix: testLogStreamPrefix,\n\t\t\t},\n\t\t\tisValidConfig: false,\n\t\t\texpectedError: \"either log_stream_name or log_stream_prefix can be configured. 
They cannot be provided together\",\n\t\t},\n\t\t{\n\t\t\tname: \"LogStreamAndPrefixBothMissing\",\n\t\t\tconfig: OutputPluginConfig{\n\t\t\t\tRegion:       testRegion,\n\t\t\t\tLogGroupName: testLogGroup,\n\t\t\t},\n\t\t\tisValidConfig: false,\n\t\t\texpectedError: \"log_stream_name or log_stream_prefix is required\",\n\t\t},\n\t}\n)\n\n// helper function to make a log stream/log group name template from a string.\nfunc testTemplate(template string) *fastTemplate {\n\tt, _ := newTemplate(template)\n\treturn t\n}\n\nfunc TestAddEvent(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be 
FLB_OK\")\n}\n\nfunc TestTruncateLargeLogEvent(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": make([]byte, 256*1024+100),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tactualData, err := output.processRecord(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\n\tif err != nil {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Failed to process record: %v\\n\", output.PluginInstanceID, record)\n\t}\n\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be FLB_OK\")\n\tassert.Len(t, actualData, 256*1024-26, \"Expected length to be 256*1024-26\")\n}\n\nfunc TestTruncateLargeLogEventWithSpecialCharacterOneTrailingFragments(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := 
mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\tvar b bytes.Buffer\n\tfor i := 0; i < 262095; i++ {\n\t\tb.WriteString(\"x\")\n\t}\n\tb.WriteString(\"𒁈zrgchimqigtm\")\n\n\trecord := map[interface{}]interface{}{\n\t\t\"key\": b.String(),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tactualData, err := output.processRecord(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\n\tif err != nil {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Failed to process record: %v\\n\", output.PluginInstanceID, record)\n\t}\n\n\t/* invalid characters will be expanded when sent as request */\n\tactualDataString := logString(actualData)\n\tactualDataString = fmt.Sprintf(\"%q\", actualDataString) /* converts: <invalid> -> \\x<hex> */\n\n\texampleWorkingData := \"{\\\"key\\\":\\\"x\\\"}\"\n\taddedLength := len(fmt.Sprintf(\"%q\", exampleWorkingData)) - 
len(exampleWorkingData)\n\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be FLB_OK\")\n\tassert.LessOrEqual(t, len(actualDataString), 256*1024-26+addedLength, \"Expected length to be less than or equal to 256*1024-26\")\n}\n\nfunc TestTruncateLargeLogEventWithSpecialCharacterTwoTrailingFragments(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\tvar b bytes.Buffer\n\tfor i := 0; i < 262094; i++ {\n\t\tb.WriteString(\"x\")\n\t}\n\tb.WriteString(\"𒁈zrgchimqigtm\")\n\n\trecord := map[interface{}]interface{}{\n\t\t\"key\": b.String(),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tactualData, err := output.processRecord(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\n\tif err != nil {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Failed to process record: %v\\n\", 
output.PluginInstanceID, record)\n\t}\n\n\t/* invalid characters will be expanded when sent as request */\n\tactualDataString := logString(actualData)\n\tactualDataString = fmt.Sprintf(\"%q\", actualDataString) /* converts: <invalid> -> \\x<hex> */\n\n\texampleWorkingData := \"{\\\"key\\\":\\\"x\\\"}\"\n\taddedLength := len(fmt.Sprintf(\"%q\", exampleWorkingData)) - len(exampleWorkingData)\n\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be FLB_OK\")\n\tassert.LessOrEqual(t, len(actualDataString), 256*1024-26+addedLength, \"Expected length to be less than or equal to 256*1024-26\")\n}\n\nfunc TestTruncateLargeLogEventWithSpecialCharacterThreeTrailingFragments(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\tvar b bytes.Buffer\n\tfor i := 0; i < 262093; i++ 
{\n\t\tb.WriteString(\"x\")\n\t}\n\tb.WriteString(\"𒁈zrgchimqigtm\")\n\n\trecord := map[interface{}]interface{}{\n\t\t\"key\": b.String(),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tactualData, err := output.processRecord(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\n\tif err != nil {\n\t\tlogrus.Debugf(\"[cloudwatch %d] Failed to process record: %v\\n\", output.PluginInstanceID, record)\n\t}\n\n\t/* invalid characters will be expanded when sent as request */\n\tactualDataString := logString(actualData)\n\tactualDataString = fmt.Sprintf(\"%q\", actualDataString) /* converts: <invalid> -> \\x<hex> */\n\n\texampleWorkingData := \"{\\\"key\\\":\\\"x\\\"}\"\n\taddedLength := len(fmt.Sprintf(\"%q\", exampleWorkingData)) - len(exampleWorkingData)\n\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be FLB_OK\")\n\tassert.LessOrEqual(t, len(actualDataString), 256*1024-26+addedLength, \"Expected length to be less than or equal to 256*1024-26\")\n}\n\nfunc TestAddEventCreateLogGroup(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().CreateLogGroup(gomock.Any()).Return(&cloudwatchlogs.CreateLogGroupOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().PutRetentionPolicy(gomock.Any()).Return(&cloudwatchlogs.PutRetentionPolicyOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, 
aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:      testTemplate(testLogGroup),\n\t\tlogStreamPrefix:   testLogStreamPrefix,\n\t\tclient:            mockCloudWatch,\n\t\ttimer:             setupTimeout(),\n\t\tstreams:           make(map[string]*logStream),\n\t\tgroups:            make(map[string]struct{}),\n\t\tlogGroupRetention: 14,\n\t\tautoCreateGroup:   true,\n\t\tautoCreateStream:  true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be FLB_OK\")\n\n}\n\n// Existing Log Stream that requires 2 API calls to find\nfunc TestAddEventExistingStream(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamNamePrefix), testLogStreamPrefix+testTag, \"Expected log stream name prefix to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{\n\t\t\tLogStreams: []*cloudwatchlogs.LogStream{\n\t\t\t\t&cloudwatchlogs.LogStream{\n\t\t\t\t\tLogStreamName: aws.String(\"wrong stream\"),\n\t\t\t\t},\n\t\t\t},\n\t\t\tNextToken: aws.String(testNextToken),\n\t\t}, nil),\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, 
aws.StringValue(input.LogStreamNamePrefix), testLogStreamPrefix+testTag, \"Expected log stream name prefix to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.NextToken), testNextToken, \"Expected next token to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{\n\t\t\tLogStreams: []*cloudwatchlogs.LogStream{\n\t\t\t\t&cloudwatchlogs.LogStream{\n\t\t\t\t\tLogStreamName: aws.String(testLogStreamPrefix + testTag),\n\t\t\t\t},\n\t\t\t},\n\t\t\tNextToken: aws.String(testNextToken),\n\t\t}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:    testTemplate(testLogGroup),\n\t\tlogStreamPrefix: testLogStreamPrefix,\n\t\tclient:          mockCloudWatch,\n\t\ttimer:           setupTimeout(),\n\t\tstreams:         make(map[string]*logStream),\n\t\tgroups:          map[string]struct{}{testLogGroup: {}},\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be FLB_OK\")\n\n}\n\nfunc TestAddEventDescribeStreamsException(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeResourceNotFoundException, \"The specified log group does not exist.\", fmt.Errorf(\"API Error\")))\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := 
output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_RETRY, \"Expected return code to be FLB_RETRY\")\n}\n\nfunc TestAddEventAutoCreateDisabled(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil)\n\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil).Times(0)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: false,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_RETRY, \"Expected return code to be FLB_RETRY\")\n}\n\nfunc TestAddEventExistingStreamNotFound(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamNamePrefix), testLogStreamPrefix+testTag, \"Expected log stream name prefix to 
match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{\n\t\t\tLogStreams: []*cloudwatchlogs.LogStream{\n\t\t\t\t&cloudwatchlogs.LogStream{\n\t\t\t\t\tLogStreamName: aws.String(\"wrong stream\"),\n\t\t\t\t},\n\t\t\t},\n\t\t\tNextToken: aws.String(testNextToken),\n\t\t}, nil),\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamNamePrefix), testLogStreamPrefix+testTag, \"Expected log stream name prefix to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.NextToken), testNextToken, \"Expected next token to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{\n\t\t\tLogStreams: []*cloudwatchlogs.LogStream{\n\t\t\t\t&cloudwatchlogs.LogStream{\n\t\t\t\t\tLogStreamName: aws.String(\"another wrong stream\"),\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeResourceAlreadyExistsException, \"Log Stream already exists\", fmt.Errorf(\"API Error\"))),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), 
Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_RETRY, \"Expected return code to be FLB_RETRY\")\n\n}\n\nfunc TestAddEventEmptyRecord(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:    testTemplate(testLogGroup),\n\t\tlogStreamPrefix: testLogStreamPrefix,\n\t\tclient:          mockCloudWatch,\n\t\ttimer:           setupTimeout(),\n\t\tstreams:         make(map[string]*logStream),\n\t\tlogKey:          \"somekey\",\n\t\tgroups:          map[string]struct{}{testLogGroup: {}},\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be FLB_OK\")\n\n}\n\nfunc TestAddEventAndFlush(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, 
aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.PutLogEventsOutput{\n\t\t\tNextSequenceToken: aws.String(\"token\"),\n\t\t}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be FLB_OK\")\n\toutput.Flush()\n}\n\nfunc TestPutLogEvents(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:    testTemplate(testLogGroup),\n\t\tlogStreamPrefix: testLogStreamPrefix,\n\t\tclient:          mockCloudWatch,\n\t\ttimer:           setupTimeout(),\n\t\tstreams:         make(map[string]*logStream),\n\t\tlogKey:          \"somekey\",\n\t\tgroups:          map[string]struct{}{testLogGroup: {}},\n\t}\n\n\tstream := &logStream{}\n\terr := output.putLogEvents(stream)\n\tassert.Nil(t, err)\n}\n\nfunc TestSetGroupStreamNames(t *testing.T) {\n\trecord := map[interface{}]interface{}{\n\t\t\"ident\": \"cron\",\n\t\t\"msg\":   \"my cool log message\",\n\t\t\"details\": map[interface{}]interface{}{\n\t\t\t\"region\": \"us-west-2\",\n\t\t\t\"az\":     \"a\",\n\t\t},\n\t}\n\n\te := &Event{Tag: \"syslog.0\", Record: record}\n\n\t// Test against non-template name.\n\toutput := OutputPlugin{\n\t\tlogStreamName:        testTemplate(\"/aws/ecs/test-stream-name\"),\n\t\tlogGroupName:         testTemplate(\"\"),\n\t\tdefaultLogGroupName:  
\"fluentbit-default\",\n\t\tdefaultLogStreamName: \"/fluentbit-default\",\n\t}\n\n\toutput.setGroupStreamNames(e)\n\tassert.Equal(t, \"/aws/ecs/test-stream-name\", e.stream,\n\t\t\"The provided stream name must be returned exactly, without modifications.\")\n\n\toutput.logStreamName = testTemplate(\"\")\n\toutput.setGroupStreamNames(e)\n\tassert.Equal(t, output.defaultLogStreamName, e.stream,\n\t\t\"The default stream name must be set when no stream name is provided.\")\n\n\t// Test against a simple log stream prefix.\n\toutput.logStreamPrefix = \"/aws/ecs/test-stream-prefix/\"\n\toutput.setGroupStreamNames(e)\n\tassert.Equal(t, output.logStreamPrefix+\"syslog.0\", e.stream,\n\t\t\"The provided stream prefix must be prefixed to the provided tag name.\")\n\n\t// Test replacing items from template variables.\n\toutput.logStreamPrefix = \"\"\n\toutput.logStreamName = testTemplate(\"/aws/ecs/$(tag[0])/$(tag[1])/$(details['region'])/$(details['az'])/$(ident)\")\n\toutput.setGroupStreamNames(e)\n\tassert.Equal(t, \"/aws/ecs/syslog/0/us-west-2/a/cron\", e.stream,\n\t\t\"The stream name template was not correctly parsed.\")\n\tassert.Equal(t, output.defaultLogGroupName, e.group,\n\t\t\"The default log group name must be set when no log group is provided.\")\n\n\t// Test another bad template with a missing closing ].\n\toutput.logStreamName = testTemplate(\"/aws/ecs/$(details['region')\")\n\toutput.setGroupStreamNames(e)\n\tassert.Equal(t, \"/aws/ecs/['region'\", e.stream,\n\t\t\"The provided stream name must match when the template is incomplete.\")\n\n\t// Make sure we get default group and stream names when their variables cannot be parsed.\n\toutput.logStreamName = testTemplate(\"/aws/ecs/$(details['activity'])\")\n\toutput.logGroupName = testTemplate(\"$(details['activity'])\")\n\toutput.setGroupStreamNames(e)\n\tassert.Equal(t, output.defaultLogStreamName, e.stream,\n\t\t\"The default stream name must return when elements are missing.\")\n\tassert.Equal(t, output.defaultLogGroupName, 
e.group,\n\t\t\"The default group name must return when elements are missing.\")\n\n\t// Test that log stream and log group names get truncated to the maximum allowed.\n\tb := make([]byte, maxGroupStreamLength*2)\n\tfor i := range b { // make a string twice the max\n\t\tb[i] = '_'\n\t}\n\n\tident := string(b)\n\tassert.True(t, len(ident) > maxGroupStreamLength, \"test string creation failed\")\n\n\te.Record = map[interface{}]interface{}{\"ident\": ident} // set the long string into our record.\n\toutput.logStreamName = testTemplate(\"/aws/ecs/$(ident)\")\n\toutput.logGroupName = testTemplate(\"/aws/ecs/$(ident)\")\n\n\toutput.setGroupStreamNames(e)\n\tassert.Equal(t, maxGroupStreamLength, len(e.stream), \"the stream name should be truncated to the maximum size\")\n\tassert.Equal(t, maxGroupStreamLength, len(e.group), \"the group name should be truncated to the maximum size\")\n\tassert.Equal(t, \"/aws/ecs/\"+string(b[:maxGroupStreamLength-len(\"/aws/ecs/\")]),\n\t\te.stream, \"the stream name was incorrectly truncated\")\n\tassert.Equal(t, \"/aws/ecs/\"+string(b[:maxGroupStreamLength-len(\"/aws/ecs/\")]),\n\t\te.group, \"the group name was incorrectly truncated\")\n}\n\nfunc TestAddEventAndFlushDataAlreadyAcceptedException(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected 
log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeDataAlreadyAcceptedException, \"Data already accepted; The next expected sequenceToken is: \"+testSequenceToken, fmt.Errorf(\"API Error\"))),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be FLB_OK\")\n\toutput.Flush()\n}\n\nfunc TestAddEventAndFlushDataInvalidSequenceTokenException(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to 
match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeInvalidSequenceTokenException, \"The given sequenceToken is invalid; The next expected sequenceToken is: \"+testSequenceToken, fmt.Errorf(\"API Error\"))),\n\t\tmockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.SequenceToken), testSequenceToken, \"Expected sequence token to match response from previous error\")\n\t\t}).Return(&cloudwatchlogs.PutLogEventsOutput{\n\t\t\tNextSequenceToken: aws.String(\"token\"),\n\t\t}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to be 
FLB_OK\")\n\toutput.Flush()\n}\n\nfunc TestAddEventAndFlushDataInvalidSequenceTokenNextNullException(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeInvalidSequenceTokenException, \"The given sequenceToken is invalid; The next expected sequenceToken is: null\", fmt.Errorf(\"API Error\"))),\n\t\tmockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t\tassert.Nil(t, input.SequenceToken, \"Expected sequence token to be nil\")\n\t\t}).Return(&cloudwatchlogs.PutLogEventsOutput{\n\t\t\tNextSequenceToken: 
aws.String(\"token\"),\n\t\t}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to FLB_OK\")\n\toutput.Flush()\n}\n\nfunc TestAddEventAndDataResourceNotFoundExceptionWithNoLogGroup(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(nil, 
awserr.New(cloudwatchlogs.ErrCodeResourceNotFoundException, \"The specified log group does not exist.\", fmt.Errorf(\"API Error\"))),\n\t\tmockCloudWatch.EXPECT().CreateLogGroup(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogGroupInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogGroupOutput{}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t\tautoCreateGroup:  true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to FLB_OK\")\n\t// Flush triggers PutLogEvents which will encounter the ResourceNotFoundException\n\terr := output.Flush()\n\t// After creating the missing log group, the code returns an error and expects retry on next flush\n\tassert.Error(t, err, \"Expected error after flush when log group is missing\")\n}\n\nfunc TestAddEventAndDataResourceNotFoundExceptionWithNoLogStream(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, 
aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeResourceNotFoundException, \"The specified log stream does not exist.\", fmt.Errorf(\"API Error\"))),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t)\n\n\toutput := OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tretCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to FLB_OK\")\n\t// Flush triggers PutLogEvents which will encounter the ResourceNotFoundException\n\terr := output.Flush()\n\t// After creating the missing log stream, the code returns an error and 
expects retry on next flush\n\tassert.Error(t, err, \"Expected error after flush when log stream is missing\")\n}\n\nfunc TestAddEventAndBatchSpanLimit(t *testing.T) {\n\toutput := setupLimitTestOutput(t, 2)\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tbefore := time.Now()\n\tstart := before.Add(time.Nanosecond)\n\tend := start.Add(time.Hour*24 - time.Nanosecond)\n\tafter := start.Add(time.Hour * 24)\n\n\tretCode := output.AddEvent(&Event{TS: start, Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to FLB_OK\")\n\n\tretCode = output.AddEvent(&Event{TS: end, Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to FLB_OK\")\n\n\tretCode = output.AddEvent(&Event{TS: before, Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_RETRY, \"Expected return code to FLB_RETRY\")\n\n\tretCode = output.AddEvent(&Event{TS: after, Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_RETRY, \"Expected return code to FLB_RETRY\")\n}\n\nfunc TestAddEventAndBatchSpanLimitOnReverseOrder(t *testing.T) {\n\toutput := setupLimitTestOutput(t, 2)\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tbefore := time.Now()\n\tstart := before.Add(time.Nanosecond)\n\tend := start.Add(time.Hour*24 - time.Nanosecond)\n\tafter := start.Add(time.Hour * 24)\n\n\tretCode := output.AddEvent(&Event{TS: end, Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to FLB_OK\")\n\n\tretCode = output.AddEvent(&Event{TS: start, Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to FLB_OK\")\n\n\tretCode = output.AddEvent(&Event{TS: before, Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_RETRY, \"Expected return code to FLB_RETRY\")\n\n\tretCode = 
output.AddEvent(&Event{TS: after, Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_RETRY, \"Expected return code to FLB_RETRY\")\n}\n\nfunc TestAddEventAndEventsCountLimit(t *testing.T) {\n\toutput := setupLimitTestOutput(t, 1)\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(\"some value\"),\n\t}\n\n\tnow := time.Now()\n\n\tfor i := 0; i < 10000; i++ {\n\t\tretCode := output.AddEvent(&Event{TS: now, Tag: testTag, Record: record})\n\t\tassert.Equal(t, retCode, fluentbit.FLB_OK, fmt.Sprintf(\"Expected return code to FLB_OK on %d iteration\", i))\n\t}\n\tretCode := output.AddEvent(&Event{TS: now, Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_RETRY, \"Expected return code to FLB_RETRY\")\n}\n\nfunc TestAddEventAndBatchSizeLimit(t *testing.T) {\n\toutput := setupLimitTestOutput(t, 1)\n\n\trecord := map[interface{}]interface{}{\n\t\t\"somekey\": []byte(strings.Repeat(\"some value\", 100)),\n\t}\n\n\tnow := time.Now()\n\n\tfor i := 0; i < 104; i++ { // 104 * 10_000 < 1_048_576\n\t\tretCode := output.AddEvent(&Event{TS: now, Tag: testTag, Record: record})\n\t\tassert.Equal(t, retCode, fluentbit.FLB_OK, \"Expected return code to FLB_OK\")\n\t}\n\n\t// 105 * 10_000 > 1_048_576\n\tretCode := output.AddEvent(&Event{TS: now.Add(time.Hour*24 + time.Nanosecond), Tag: testTag, Record: record})\n\tassert.Equal(t, retCode, fluentbit.FLB_RETRY, \"Expected return code to FLB_RETRY\")\n}\n\nfunc setupLimitTestOutput(t *testing.T, times int) OutputPlugin {\n\tctrl := gomock.NewController(t)\n\tmockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)\n\n\tgomock.InOrder(\n\t\tmockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, 
nil),\n\t\tmockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).AnyTimes().Do(func(input *cloudwatchlogs.CreateLogStreamInput) {\n\t\t\tassert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, \"Expected log group name to match\")\n\t\t\tassert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, \"Expected log stream name to match\")\n\t\t}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),\n\t\tmockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Times(times).Return(nil, errors.New(\"should fail\")),\n\t)\n\n\treturn OutputPlugin{\n\t\tlogGroupName:     testTemplate(testLogGroup),\n\t\tlogStreamPrefix:  testLogStreamPrefix,\n\t\tclient:           mockCloudWatch,\n\t\ttimer:            setupTimeout(),\n\t\tstreams:          make(map[string]*logStream),\n\t\tgroups:           map[string]struct{}{testLogGroup: {}},\n\t\tautoCreateStream: true,\n\t}\n}\n\nfunc setupTimeout() *plugins.Timeout {\n\ttimer, _ := plugins.NewTimeout(func(d time.Duration) {\n\t\tlogrus.Errorf(\"[cloudwatch] timeout threshold reached: Failed to send logs for %v\\n\", d)\n\t\tlogrus.Error(\"[cloudwatch] Quitting Fluent Bit\")\n\t\tos.Exit(1)\n\t})\n\treturn timer\n}\n\nfunc TestValidate(t *testing.T) {\n\tfor _, test := range configValidationTestCases {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\terr := test.config.Validate()\n\n\t\t\tif test.isValidConfig {\n\t\t\t\tassert.Nil(t, err)\n\t\t\t} else {\n\t\t\t\tassert.NotNil(t, err)\n\t\t\t\tassert.Equal(t, err.Error(), test.expectedError)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "cloudwatch/generate_mock.go",
    "content": "// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\"). You may\n// not use this file except in compliance with the License. A copy of the\n// License is located at\n//\n//\thttp://aws.amazon.com/apache2.0/\n//\n// or in the \"license\" file accompanying this file. This file is distributed\n// on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n// express or implied. See the License for the specific language governing\n// permissions and limitations under the License.\n\npackage cloudwatch\n\n//go:generate ../scripts/mockgen.sh github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/cloudwatch LogsClient mock_cloudwatch/mock.go\n"
  },
  {
    "path": "cloudwatch/handlers.go",
    "content": "// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\"). You may\n// not use this file except in compliance with the License. A copy of the\n// License is located at\n//\n//\thttp://aws.amazon.com/apache2.0/\n//\n// or in the \"license\" file accompanying this file. This file is distributed\n// on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n// express or implied. See the License for the specific language governing\n// permissions and limitations under the License.\n\npackage cloudwatch\n\nimport \"github.com/aws/aws-sdk-go/aws/request\"\n\nconst logFormatHeader = \"x-amzn-logs-format\"\n\n// LogFormatHandler returns an http request handler that sets an HTTP header.\n// The header is used to indicate the format of the logs being sent.\nfunc LogFormatHandler(format string) request.NamedHandler {\n\treturn request.NamedHandler{\n\t\tName: \"LogFormatHandler\",\n\t\tFn: func(req *request.Request) {\n\t\t\treq.HTTPRequest.Header.Set(logFormatHeader, format)\n\t\t},\n\t}\n}\n"
  },
  {
    "path": "cloudwatch/handlers_test.go",
    "content": "// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\"). You may\n// not use this file except in compliance with the License. A copy of the\n// License is located at\n//\n//\thttp://aws.amazon.com/apache2.0/\n//\n// or in the \"license\" file accompanying this file. This file is distributed\n// on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n// express or implied. See the License for the specific language governing\n// permissions and limitations under the License.\n\npackage cloudwatch\n\nimport (\n\t\"net/http\"\n\t\"testing\"\n\n\t\"github.com/aws/aws-sdk-go/aws/request\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestLogFormatHandler(t *testing.T) {\n\thttpReq, _ := http.NewRequest(\"POST\", \"\", nil)\n\tr := &request.Request{\n\t\tHTTPRequest: httpReq,\n\t\tBody:        nil,\n\t}\n\tr.SetBufferBody([]byte{})\n\n\thandler := LogFormatHandler(\"json/emf\")\n\thandler.Fn(r)\n\n\theader := r.HTTPRequest.Header.Get(logFormatHeader)\n\tassert.Equal(t, \"json/emf\", header)\n}\n"
  },
  {
    "path": "cloudwatch/helpers.go",
    "content": "package cloudwatch\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/valyala/bytebufferpool\"\n\t\"github.com/valyala/fasttemplate\"\n)\n\n// Errors output by the help procedures.\nvar (\n\tErrNoTagValue     = fmt.Errorf(\"not enough dots in the tag to satisfy the index position\")\n\tErrMissingTagName = fmt.Errorf(\"tag name not found\")\n\tErrMissingSubName = fmt.Errorf(\"sub-tag name not found\")\n)\n\n// newTemplate is the only place you'll find the template start and end tags.\nfunc newTemplate(template string) (*fastTemplate, error) {\n\tt, err := fasttemplate.NewTemplate(template, \"$(\", \")\")\n\n\treturn &fastTemplate{Template: t, String: template}, err\n}\n\n// tagKeysToMap converts a raw string into a go map.\n// This is used by input data to create AWS tags applied to newly-created log groups.\n//\n// The input string should be match this: \"key=value,key2=value2\".\n// Spaces are trimmed, empty values are permitted, empty keys are ignored.\n// The final value in the input string wins in case of duplicate keys.\nfunc tagKeysToMap(tags string) map[string]*string {\n\toutput := make(map[string]*string)\n\n\tfor _, tag := range strings.Split(strings.TrimSpace(tags), \",\") {\n\t\tsplit := strings.SplitN(tag, \"=\", 2)\n\t\tkey := strings.TrimSpace(split[0])\n\t\tvalue := \"\"\n\n\t\tif key == \"\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tif len(split) > 1 {\n\t\t\tvalue = strings.TrimSpace(split[1])\n\t\t}\n\n\t\toutput[key] = &value\n\t}\n\n\tif len(output) == 0 {\n\t\treturn nil\n\t}\n\n\treturn output\n}\n\n// parseKeysTemplate takes in an interface map and a list of nested keys. 
It returns\n// the value of the final key, or the name of the first key not found in the chain.\n// example keys := \"['level1']['level2']['level3']\"\n// This is called by parseDataMapTags any time a nested value is found in a log Event.\n// This procedure checks if any of the nested values match variable identifiers in the logStream or logGroups.\nfunc parseKeysTemplate(data map[interface{}]interface{}, keys string, w io.Writer) (int64, error) {\n\treturn fasttemplate.ExecuteFunc(keys, \"['\", \"']\", w, func(w io.Writer, tag string) (int, error) {\n\t\tswitch val := data[tag].(type) {\n\t\tcase []byte:\n\t\t\treturn w.Write(val)\n\t\tcase string:\n\t\t\treturn w.Write([]byte(val))\n\t\tcase map[interface{}]interface{}:\n\t\t\tdata = val // drill down another level.\n\t\t\treturn 0, nil\n\t\tdefault: // missing\n\t\t\treturn 0, fmt.Errorf(\"%s: %w\", tag, ErrMissingSubName)\n\t\t}\n\t})\n}\n\n// parseDataMapTags parses the provided tag values in template form,\n// from an interface{} map (expected to contain strings or more interface{} maps).\n// This runs once for every log line.\n// Used to fill in any template variables that may exist in the logStream or logGroup names.\nfunc parseDataMapTags(e *Event, logTags []string, t *fastTemplate, metadata TaskMetadata, uuid string, w io.Writer) (int64, error) {\n\treturn t.ExecuteFunc(w, func(w io.Writer, tag string) (int, error) {\n\t\tswitch tag {\n\t\tcase \"ecs_task_id\":\n\t\t\tif metadata.TaskID != \"\" {\n\t\t\t\treturn w.Write([]byte(metadata.TaskID))\n\t\t\t}\n\n\t\t\treturn 0, fmt.Errorf(\"Failed to fetch ecs_task_id; The container is not running in ECS\")\n\t\tcase \"ecs_cluster\":\n\t\t\tif metadata.Cluster != \"\" {\n\t\t\t\treturn w.Write([]byte(metadata.Cluster))\n\t\t\t}\n\n\t\t\treturn 0, fmt.Errorf(\"Failed to fetch ecs_cluster; The container is not running in ECS\")\n\t\tcase \"ecs_task_arn\":\n\t\t\tif metadata.TaskARN != \"\" {\n\t\t\t\treturn 
w.Write([]byte(metadata.TaskARN))\n\t\t\t}\n\n\t\t\treturn 0, fmt.Errorf(\"Failed to fetch ecs_task_arn; The container is not running in ECS\")\n\t\tcase \"uuid\":\n\t\t\treturn w.Write([]byte(uuid))\n\t\t}\n\n\t\tv := strings.Index(tag, \"[\")\n\t\tif v == -1 {\n\t\t\tv = len(tag)\n\t\t}\n\n\t\tif tag[:v] == \"tag\" {\n\t\t\tswitch {\n\t\t\tdefault: // input string is either `tag` or `tag[`, so return the $tag.\n\t\t\t\treturn w.Write([]byte(e.Tag))\n\t\t\tcase len(tag) >= 5: // input string is at least \"tag[x\" where x is hopefully an integer 0-9.\n\t\t\t\t// The index value is always in the same position: 4:5 (this is why supporting more than 0-9 is rough)\n\t\t\t\tif v, _ = strconv.Atoi(tag[4:5]); len(logTags) <= v {\n\t\t\t\t\treturn 0, fmt.Errorf(\"%s: %w\", tag, ErrNoTagValue)\n\t\t\t\t}\n\n\t\t\t\treturn w.Write([]byte(logTags[v]))\n\t\t\t}\n\t\t}\n\n\t\tswitch val := e.Record[tag[:v]].(type) {\n\t\tcase string:\n\t\t\treturn w.Write([]byte(val))\n\t\tcase map[interface{}]interface{}:\n\t\t\ti, err := parseKeysTemplate(val, tag[v:], w)\n\n\t\t\treturn int(i), err\n\t\tcase []byte:\n\t\t\t// we should never land here because the interface{} map should have already been converted to strings.\n\t\t\treturn w.Write(val)\n\t\tdefault: // missing\n\t\t\treturn 0, fmt.Errorf(\"%s: %w\", tag, ErrMissingTagName)\n\t\t}\n\t})\n}\n\n// sanitizer implements io.Writer for fasttemplate usage.\n// Instead of just writing bytes to a buffer, sanitize them first.\ntype sanitizer struct {\n\tsanitize func(b []byte) []byte\n\tbuf      *bytebufferpool.ByteBuffer\n}\n\n// Write completes the io.Writer implementation.\nfunc (s *sanitizer) Write(b []byte) (int, error) {\n\treturn s.buf.Write(s.sanitize(b))\n}\n\n// sanitizeGroup removes special characters from the log group names bytes.\n// https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html\nfunc sanitizeGroup(b []byte) []byte {\n\tfor i, r := range b {\n\t\t// 45-47 = / . 
-\n\t\t// 48-57 = 0-9\n\t\t// 65-90 = A-Z\n\t\t// 95 = _\n\t\t// 97-122 = a-z\n\t\tif r == 95 || (r > 44 && r < 58) ||\n\t\t\t(r > 64 && r < 91) || (r > 96 && r < 123) {\n\t\t\tcontinue\n\t\t}\n\n\t\tb[i] = '.'\n\t}\n\n\treturn b\n}\n\n// sanitizeStream removes : and * from the log stream bytes.\n// https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-logstream.html\nfunc sanitizeStream(b []byte) []byte {\n\tfor i, r := range b {\n\t\tif r == '*' || r == ':' {\n\t\t\tb[i] = '.'\n\t\t}\n\t}\n\n\treturn b\n}\n"
  },
  {
    "path": "cloudwatch/helpers_test.go",
    "content": "package cloudwatch\n\nimport (\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/valyala/bytebufferpool\"\n)\n\nfunc TestTagKeysToMap(t *testing.T) {\n\tt.Parallel()\n\n\t// Testable values. Purposely \"messed up\" - they should all parse out OK.\n\tvalues := \" key1 =value , key2=value2, key3= value3 ,key4=, key5  = v5,,key7==value7,\" +\n\t\t\" k8, k9,key1=value1,space key = space value\"\n\t// The values above should return a map like this.\n\texpect := map[string]string{\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\",\n\t\t\"key4\": \"\", \"key5\": \"v5\", \"key7\": \"=value7\", \"k8\": \"\", \"k9\": \"\", \"space key\": \"space value\"}\n\n\tfor k, v := range tagKeysToMap(values) {\n\t\tassert.Equal(t, *v, expect[k], \"Tag key or value failed parser.\")\n\t}\n}\n\nfunc TestParseDataMapTags(t *testing.T) {\n\tt.Parallel()\n\n\ttemplate := testTemplate(\"$(ecs_task_id).$(ecs_cluster).$(ecs_task_arn).$(uuid).$(tag).$(pam['item2']['subitem2']['more']).$(pam['item']).$(pam['item2']).\" +\n\t\t\"$(pam['item2']['subitem'])-$(pam['item2']['subitem2']['more'])-$(tag[1])\")\n\tdata := map[interface{}]interface{}{\n\t\t\"pam\": map[interface{}]interface{}{\n\t\t\t\"item\": \"soup\",\n\t\t\t\"item2\": map[interface{}]interface{}{\"subitem\": []byte(\"SubIt3m\"),\n\t\t\t\t\"subitem2\": map[interface{}]interface{}{\"more\": \"final\"}},\n\t\t},\n\t}\n\n\ts := &sanitizer{buf: bytebufferpool.Get(), sanitize: sanitizeGroup}\n\tdefer bytebufferpool.Put(s.buf)\n\n\t_, err := parseDataMapTags(&Event{Record: data, Tag: \"syslog.0\"}, []string{\"syslog\", \"0\"}, template, TaskMetadata{Cluster: \"cluster\", TaskARN: \"taskARN\", TaskID: \"taskID\"}, \"123\", s)\n\n\tassert.Nil(t, err, err)\n\tassert.Equal(t, \"taskID.cluster.taskARN.123.syslog.0.final.soup..SubIt3m-final-0\", s.buf.String(), \"Rendered string is incorrect.\")\n\n\t// Test missing variables. 
These should always return an error and an empty string.\n\ts.buf.Reset()\n\ttemplate = testTemplate(\"$(missing-variable).stuff\")\n\t_, err = parseDataMapTags(&Event{Record: data, Tag: \"syslog.0\"}, []string{\"syslog\", \"0\"}, template, TaskMetadata{Cluster: \"cluster\", TaskARN: \"taskARN\", TaskID: \"taskID\"}, \"123\", s)\n\tassert.EqualError(t, err, \"missing-variable: \"+ErrMissingTagName.Error(), \"the wrong error was returned\")\n\tassert.Empty(t, s.buf.String())\n\n\ts.buf.Reset()\n\ttemplate = testTemplate(\"$(pam['item6']).stuff\")\n\t_, err = parseDataMapTags(&Event{Record: data, Tag: \"syslog.0\"}, []string{\"syslog\", \"0\"}, template, TaskMetadata{}, \"\", s)\n\tassert.EqualError(t, err, \"item6: \"+ErrMissingSubName.Error(), \"the wrong error was returned\")\n\tassert.Empty(t, s.buf.String())\n\n\ts.buf.Reset()\n\ttemplate = testTemplate(\"$(tag[9]).stuff\")\n\t_, err = parseDataMapTags(&Event{Record: data, Tag: \"syslog.0\"}, []string{\"syslog\", \"0\"}, template, TaskMetadata{}, \"\", s)\n\tassert.EqualError(t, err, \"tag[9]: \"+ErrNoTagValue.Error(), \"the wrong error was returned\")\n\tassert.Empty(t, s.buf.String())\n}\n\nfunc TestSanitizeGroup(t *testing.T) {\n\tt.Parallel()\n\n\ttests := map[string]string{ // \"send\": \"expect\",\n\t\t\"this.is.a.log.group.name\":             \"this.is.a.log.group.name\",\n\t\t\"1234567890abcdefghijklmnopqrstuvwxyz\": \"1234567890abcdefghijklmnopqrstuvwxyz\",\n\t\t\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\":           \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\",\n\t\t`!@#$%^&*()_+}{][=-';\":/.?>,<~\"']}`:    \".........._......-..../..........\",\n\t\t\"\":                                     \"\",\n\t}\n\n\tfor send, expect := range tests {\n\t\tactual := sanitizeGroup([]byte(send))\n\t\tassert.Equal(t, expect, string(actual), \"the wrong characters were modified in sanitizeGroup\")\n\t}\n}\n\nfunc TestSanitizeStream(t *testing.T) {\n\tt.Parallel()\n\n\ttests := map[string]string{ // \"send\": 
\"expect\",\n\t\t\"this.is.a.log.group.name\":             \"this.is.a.log.group.name\",\n\t\t\"1234567890abcdefghijklmnopqrstuvwxyz\": \"1234567890abcdefghijklmnopqrstuvwxyz\",\n\t\t\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\":           \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\",\n\t\t`!@#$%^&*()_+}{][=-';\":/.?>,<~\"']}`:    `!@#$%^&.()_+}{][=-';\"./.?>,<~\"']}`,\n\t\t\"\":                                     \"\",\n\t}\n\n\tfor send, expect := range tests {\n\t\tactual := sanitizeStream([]byte(send))\n\t\tassert.Equal(t, expect, string(actual), \"the wrong characters were modified in sanitizeStream\")\n\t}\n}\n"
  },
  {
    "path": "cloudwatch/mock_cloudwatch/mock.go",
    "content": "// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\"). You may\n// not use this file except in compliance with the License. A copy of the\n// License is located at\n//\n//     http://aws.amazon.com/apache2.0/\n//\n// or in the \"license\" file accompanying this file. This file is distributed\n// on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n// express or implied. See the License for the specific language governing\n// permissions and limitations under the License.\n\n// Code generated by MockGen. DO NOT EDIT.\n// Source: github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/cloudwatch (interfaces: LogsClient)\n\n// Package mock_cloudwatch is a generated GoMock package.\npackage mock_cloudwatch\n\nimport (\n\treflect \"reflect\"\n\n\tcloudwatchlogs \"github.com/aws/aws-sdk-go/service/cloudwatchlogs\"\n\tgomock \"github.com/golang/mock/gomock\"\n)\n\n// MockLogsClient is a mock of LogsClient interface\ntype MockLogsClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockLogsClientMockRecorder\n}\n\n// MockLogsClientMockRecorder is the mock recorder for MockLogsClient\ntype MockLogsClientMockRecorder struct {\n\tmock *MockLogsClient\n}\n\n// NewMockLogsClient creates a new mock instance\nfunc NewMockLogsClient(ctrl *gomock.Controller) *MockLogsClient {\n\tmock := &MockLogsClient{ctrl: ctrl}\n\tmock.recorder = &MockLogsClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockLogsClient) EXPECT() *MockLogsClientMockRecorder {\n\treturn m.recorder\n}\n\n// CreateLogGroup mocks base method\nfunc (m *MockLogsClient) CreateLogGroup(arg0 *cloudwatchlogs.CreateLogGroupInput) (*cloudwatchlogs.CreateLogGroupOutput, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CreateLogGroup\", arg0)\n\tret0, _ := 
ret[0].(*cloudwatchlogs.CreateLogGroupOutput)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CreateLogGroup indicates an expected call of CreateLogGroup\nfunc (mr *MockLogsClientMockRecorder) CreateLogGroup(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateLogGroup\", reflect.TypeOf((*MockLogsClient)(nil).CreateLogGroup), arg0)\n}\n\n// CreateLogStream mocks base method\nfunc (m *MockLogsClient) CreateLogStream(arg0 *cloudwatchlogs.CreateLogStreamInput) (*cloudwatchlogs.CreateLogStreamOutput, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CreateLogStream\", arg0)\n\tret0, _ := ret[0].(*cloudwatchlogs.CreateLogStreamOutput)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CreateLogStream indicates an expected call of CreateLogStream\nfunc (mr *MockLogsClientMockRecorder) CreateLogStream(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateLogStream\", reflect.TypeOf((*MockLogsClient)(nil).CreateLogStream), arg0)\n}\n\n// DescribeLogStreams mocks base method\nfunc (m *MockLogsClient) DescribeLogStreams(arg0 *cloudwatchlogs.DescribeLogStreamsInput) (*cloudwatchlogs.DescribeLogStreamsOutput, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DescribeLogStreams\", arg0)\n\tret0, _ := ret[0].(*cloudwatchlogs.DescribeLogStreamsOutput)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// DescribeLogStreams indicates an expected call of DescribeLogStreams\nfunc (mr *MockLogsClientMockRecorder) DescribeLogStreams(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DescribeLogStreams\", reflect.TypeOf((*MockLogsClient)(nil).DescribeLogStreams), arg0)\n}\n\n// PutLogEvents mocks base method\nfunc (m *MockLogsClient) PutLogEvents(arg0 *cloudwatchlogs.PutLogEventsInput) (*cloudwatchlogs.PutLogEventsOutput, error) 
{\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PutLogEvents\", arg0)\n\tret0, _ := ret[0].(*cloudwatchlogs.PutLogEventsOutput)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PutLogEvents indicates an expected call of PutLogEvents\nfunc (mr *MockLogsClientMockRecorder) PutLogEvents(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PutLogEvents\", reflect.TypeOf((*MockLogsClient)(nil).PutLogEvents), arg0)\n}\n\n// PutRetentionPolicy mocks base method\nfunc (m *MockLogsClient) PutRetentionPolicy(arg0 *cloudwatchlogs.PutRetentionPolicyInput) (*cloudwatchlogs.PutRetentionPolicyOutput, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PutRetentionPolicy\", arg0)\n\tret0, _ := ret[0].(*cloudwatchlogs.PutRetentionPolicyOutput)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PutRetentionPolicy indicates an expected call of PutRetentionPolicy\nfunc (mr *MockLogsClientMockRecorder) PutRetentionPolicy(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PutRetentionPolicy\", reflect.TypeOf((*MockLogsClient)(nil).PutRetentionPolicy), arg0)\n}\n"
  },
  {
    "path": "fluent-bit-cloudwatch.go",
    "content": "// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\"). You may\n// not use this file except in compliance with the License. A copy of the\n// License is located at\n//\n//\thttp://aws.amazon.com/apache2.0/\n//\n// or in the \"license\" file accompanying this file. This file is distributed\n// on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n// express or implied. See the License for the specific language governing\n// permissions and limitations under the License.\n\npackage main\n\nimport (\n\t\"C\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"time\"\n\t\"unsafe\"\n\n\t\"github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/cloudwatch\"\n\t\"github.com/aws/amazon-kinesis-firehose-for-fluent-bit/plugins\"\n\t\"github.com/fluent/fluent-bit-go/output\"\n\n\t\"github.com/sirupsen/logrus\"\n)\nimport (\n\t\"strings\"\n)\n\nvar (\n\tpluginInstances []*cloudwatch.OutputPlugin\n)\n\nfunc addPluginInstance(ctx unsafe.Pointer) error {\n\tpluginID := len(pluginInstances)\n\n\tconfig := getConfiguration(ctx, pluginID)\n\terr := config.Validate()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tinstance, err := cloudwatch.NewOutputPlugin(config)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\toutput.FLBPluginSetContext(ctx, pluginID)\n\tpluginInstances = append(pluginInstances, instance)\n\n\treturn nil\n}\n\nfunc getPluginInstance(ctx unsafe.Pointer) *cloudwatch.OutputPlugin {\n\tpluginID := output.FLBPluginGetContext(ctx).(int)\n\treturn pluginInstances[pluginID]\n}\n\n//export FLBPluginRegister\nfunc FLBPluginRegister(ctx unsafe.Pointer) int {\n\treturn output.FLBPluginRegister(ctx, \"cloudwatch\", \"AWS CloudWatch Fluent Bit Plugin!\")\n}\n\nfunc getConfiguration(ctx unsafe.Pointer, pluginID int) cloudwatch.OutputPluginConfig {\n\tconfig := cloudwatch.OutputPluginConfig{}\n\tconfig.PluginInstanceID = pluginID\n\n\tconfig.LogGroupName = 
output.FLBPluginConfigKey(ctx, \"log_group_name\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter log_group_name = '%s'\", pluginID, config.LogGroupName)\n\n\tconfig.DefaultLogGroupName = output.FLBPluginConfigKey(ctx, \"default_log_group_name\")\n\tif config.DefaultLogGroupName == \"\" {\n\t\tconfig.DefaultLogGroupName = \"fluentbit-default\"\n\t}\n\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter default_log_group_name = '%s'\", pluginID, config.DefaultLogGroupName)\n\n\tconfig.LogStreamPrefix = output.FLBPluginConfigKey(ctx, \"log_stream_prefix\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter log_stream_prefix = '%s'\", pluginID, config.LogStreamPrefix)\n\n\tconfig.LogStreamName = output.FLBPluginConfigKey(ctx, \"log_stream_name\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter log_stream_name = '%s'\", pluginID, config.LogStreamName)\n\n\tconfig.DefaultLogStreamName = output.FLBPluginConfigKey(ctx, \"default_log_stream_name\")\n\tif config.DefaultLogStreamName == \"\" {\n\t\tconfig.DefaultLogStreamName = \"/fluentbit-default\"\n\t}\n\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter default_log_stream_name = '%s'\", pluginID, config.DefaultLogStreamName)\n\n\tconfig.Region = output.FLBPluginConfigKey(ctx, \"region\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter region = '%s'\", pluginID, config.Region)\n\n\tconfig.LogKey = output.FLBPluginConfigKey(ctx, \"log_key\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter log_key = '%s'\", pluginID, config.LogKey)\n\n\tconfig.RoleARN = output.FLBPluginConfigKey(ctx, \"role_arn\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter role_arn = '%s'\", pluginID, config.RoleARN)\n\n\tconfig.AutoCreateGroup = getBoolParam(ctx, \"auto_create_group\", false)\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter auto_create_group = '%v'\", pluginID, config.AutoCreateGroup)\n\n\tconfig.AutoCreateStream = getBoolParam(ctx, \"auto_create_stream\", true)\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter 
auto_create_stream = '%v'\", pluginID, config.AutoCreateStream)\n\n\tconfig.NewLogGroupTags = output.FLBPluginConfigKey(ctx, \"new_log_group_tags\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter new_log_group_tags = '%s'\", pluginID, config.NewLogGroupTags)\n\n\tconfig.LogRetentionDays, _ = strconv.ParseInt(output.FLBPluginConfigKey(ctx, \"log_retention_days\"), 10, 64)\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter log_retention_days = '%d'\", pluginID, config.LogRetentionDays)\n\n\tconfig.CWEndpoint = output.FLBPluginConfigKey(ctx, \"endpoint\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter endpoint = '%s'\", pluginID, config.CWEndpoint)\n\n\tconfig.STSEndpoint = output.FLBPluginConfigKey(ctx, \"sts_endpoint\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter sts_endpoint = '%s'\", pluginID, config.STSEndpoint)\n\n\tconfig.ExternalID = output.FLBPluginConfigKey(ctx, \"external_id\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter external_id = '%s'\", pluginID, config.ExternalID)\n\n\tconfig.CredsEndpoint = output.FLBPluginConfigKey(ctx, \"credentials_endpoint\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter credentials_endpoint = %s\", pluginID, config.CredsEndpoint)\n\n\tconfig.LogFormat = output.FLBPluginConfigKey(ctx, \"log_format\")\n\tlogrus.Infof(\"[cloudwatch %d] plugin parameter log_format = '%s'\", pluginID, config.LogFormat)\n\n\tconfig.ExtraUserAgent = output.FLBPluginConfigKey(ctx, \"extra_user_agent\")\n\n\treturn config\n}\n\nfunc getBoolParam(ctx unsafe.Pointer, param string, defaultVal bool) bool {\n\tval := strings.ToLower(output.FLBPluginConfigKey(ctx, param))\n\tif val == \"true\" {\n\t\treturn true\n\t} else if val == \"false\" {\n\t\treturn false\n\t} else {\n\t\treturn defaultVal\n\t}\n}\n\n//export FLBPluginInit\nfunc FLBPluginInit(ctx unsafe.Pointer) int {\n\tplugins.SetupLogger()\n\n\tlogrus.Debug(\"A new higher performance CloudWatch Logs plugin has been released; \" +\n\t\t\"you are using the old plugin. 
Check out the new plugin's documentation and \" +\n\t\t\"determine if you can migrate.\\n\" +\n\t\t\"https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch\")\n\n\terr := addPluginInstance(ctx)\n\tif err != nil {\n\t\tlogrus.Error(err)\n\t\treturn output.FLB_ERROR\n\t}\n\treturn output.FLB_OK\n}\n\n//export FLBPluginFlushCtx\nfunc FLBPluginFlushCtx(ctx, data unsafe.Pointer, length C.int, tag *C.char) int {\n\tvar count int\n\tvar ret int\n\tvar ts interface{}\n\tvar record map[interface{}]interface{}\n\n\t// Create Fluent Bit decoder\n\tdec := output.NewDecoder(data, int(length))\n\n\tcloudwatchLogs := getPluginInstance(ctx)\n\n\tfluentTag := C.GoString(tag)\n\tlogrus.Debugf(\"[cloudwatch %d] Found logs with tag: %s\", cloudwatchLogs.PluginInstanceID, fluentTag)\n\n\tfor {\n\t\t// Extract Record\n\t\tret, ts, record = output.GetRecord(dec)\n\t\tif ret != 0 {\n\t\t\tbreak\n\t\t}\n\n\t\tvar timestamp time.Time\n\t\tswitch tts := ts.(type) {\n\t\tcase output.FLBTime:\n\t\t\ttimestamp = tts.Time\n\t\tcase uint64:\n\t\t\t// when ts is of type uint64 it appears to\n\t\t\t// be the amount of seconds since unix epoch.\n\t\t\ttimestamp = time.Unix(int64(tts), 0)\n\t\tdefault:\n\t\t\ttimestamp = time.Now()\n\t\t}\n\n\t\tretCode := cloudwatchLogs.AddEvent(&cloudwatch.Event{Tag: fluentTag, Record: record, TS: timestamp})\n\t\tif retCode != output.FLB_OK {\n\t\t\treturn retCode\n\t\t}\n\t\tcount++\n\t}\n\terr := cloudwatchLogs.Flush()\n\tif err != nil {\n\t\tfmt.Println(err)\n\t\t// TODO: Better error handling\n\t\treturn output.FLB_RETRY\n\t}\n\n\tlogrus.Debugf(\"[cloudwatch %d] Processed %d events\", cloudwatchLogs.PluginInstanceID, count)\n\n\t// Return options:\n\t//\n\t// output.FLB_OK    = data have been processed.\n\t// output.FLB_ERROR = unrecoverable error, do not try this again. 
Never returned by flush.\n\t// output.FLB_RETRY = retry to flush later.\n\treturn output.FLB_OK\n}\n\n//export FLBPluginExit\nfunc FLBPluginExit() int {\n\treturn output.FLB_OK\n}\n\nfunc main() {\n}\n"
  },
  {
    "path": "go.mod",
    "content": "module github.com/aws/amazon-cloudwatch-logs-for-fluent-bit\n\ngo 1.24.11\n\nrequire (\n\tgithub.com/aws/amazon-kinesis-firehose-for-fluent-bit v1.7.2\n\tgithub.com/aws/aws-sdk-go v1.55.8\n\tgithub.com/fluent/fluent-bit-go v0.0.0-20201210173045-3fd1e0486df2\n\tgithub.com/golang/mock v1.6.0\n\tgithub.com/json-iterator/go v1.1.12\n\tgithub.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect\n\tgithub.com/segmentio/ksuid v1.0.4\n\tgithub.com/sirupsen/logrus v1.9.4\n\tgithub.com/stretchr/testify v1.11.1\n\tgithub.com/valyala/bytebufferpool v1.0.0\n\tgithub.com/valyala/fasttemplate v1.2.2\n\tgolang.org/x/sys v0.40.0 // indirect\n)\n\nrequire (\n\tgithub.com/davecgh/go-spew v1.1.1 // indirect\n\tgithub.com/jmespath/go-jmespath v0.4.0 // indirect\n\tgithub.com/modern-go/reflect2 v1.0.2 // indirect\n\tgithub.com/pmezard/go-difflib v1.0.0 // indirect\n\tgithub.com/ugorji/go/codec v1.2.6 // indirect\n\tgopkg.in/yaml.v3 v3.0.1 // indirect\n)\n"
  },
  {
    "path": "go.sum",
    "content": "github.com/aws/amazon-kinesis-firehose-for-fluent-bit v1.7.2 h1:uKb5MpgJ0aDa/TkAPWQIExnO/caFEpTyWTXiZz+hPkA=\ngithub.com/aws/amazon-kinesis-firehose-for-fluent-bit v1.7.2/go.mod h1:bs9SAGlYjBbPqBpoopCAuYG/vHTgiucyb71mp8wyF88=\ngithub.com/aws/aws-sdk-go v1.55.8 h1:JRmEUbU52aJQZ2AjX4q4Wu7t4uZjOu71uyNmaWlUkJQ=\ngithub.com/aws/aws-sdk-go v1.55.8/go.mod h1:ZkViS9AqA6otK+JBBNH2++sx1sgxrPKcSzPPvQkUtXk=\ngithub.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=\ngithub.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/fluent/fluent-bit-go v0.0.0-20201210173045-3fd1e0486df2 h1:G57WNyWS0FQf43hjRXLy5JT1V5LWVsSiEpkUcT67Ugk=\ngithub.com/fluent/fluent-bit-go v0.0.0-20201210173045-3fd1e0486df2/go.mod h1:L92h+dgwElEyUuShEwjbiHjseW410WIcNz+Bjutc8YQ=\ngithub.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=\ngithub.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=\ngithub.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=\ngithub.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=\ngithub.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=\ngithub.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=\ngithub.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=\ngithub.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=\ngithub.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=\ngithub.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=\ngithub.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd 
h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=\ngithub.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=\ngithub.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=\ngithub.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=\ngithub.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=\ngithub.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/segmentio/ksuid v1.0.4 h1:sBo2BdShXjmcugAMwjugoGUdUV0pcxY5mW4xKRn3v4c=\ngithub.com/segmentio/ksuid v1.0.4/go.mod h1:/XUiZBD3kVx5SmUOl55voK5yeAbBNNIed+2O73XgrPE=\ngithub.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=\ngithub.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=\ngithub.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=\ngithub.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=\ngithub.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=\ngithub.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=\ngithub.com/ugorji/go v1.2.6/go.mod h1:anCg0y61KIhDlPZmnH+so+RQbysYVyDko0IMgJv0Nn0=\ngithub.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=\ngithub.com/ugorji/go/codec v1.2.6 h1:7kbGefxLoDBuYXOms4yD7223OpNMMPNPZxXk5TvFcyQ=\ngithub.com/ugorji/go/codec v1.2.6/go.mod h1:V6TCNZ4PHqoHGFZuSG1W8nrCzzdgA2DozYxWFFpvxTw=\ngithub.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=\ngithub.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=\ngithub.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo=\ngithub.com/valyala/fasttemplate v1.2.2/go.mod 
h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=\ngithub.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=\ngolang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=\ngolang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=\ngolang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=\ngolang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=\ngolang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=\ngolang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=\ngolang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=\ngolang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=\ngolang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=\ngolang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=\ngolang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=\ngolang.org/x/tools 
v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=\ngolang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=\ngolang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=\ngolang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=\ngopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=\ngopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\n"
  },
  {
    "path": "scripts/mockgen.sh",
    "content": "#!/bin/bash\n# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the\n# \"License\"). You may not use this file except in compliance\n#  with the License. A copy of the License is located at\n#\n#     http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n# CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and\n# limitations under the License.\n#\n# This script wraps the mockgen tool and inserts licensing information.\n\nset -e\npackage=${1?Must provide package}\ninterfaces=${2?Must provide interface names}\noutputfile=${3?Must provide an output file}\n\nexport PATH=\"${GOPATH//://bin:}/bin:$PATH\"\n\ndata=$(\ncat << EOF\n// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\"). You may\n// not use this file except in compliance with the License. A copy of the\n// License is located at\n//\n//     http://aws.amazon.com/apache2.0/\n//\n// or in the \"license\" file accompanying this file. This file is distributed\n// on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n// express or implied. See the License for the specific language governing\n// permissions and limitations under the License.\n\n$(mockgen \"$package\" \"$interfaces\")\nEOF\n)\n\necho \"$data\" | goimports > \"${outputfile}\"\n"
  }
]