Repository: aws/amazon-cloudwatch-logs-for-fluent-bit
Branch: mainline
Commit: eba24827191e
Files: 27
Total size: 166.4 KB

Directory structure:
gitextract_6qru89jo/

├── .github/
│   ├── CODEOWNERS
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── dependabot.yml
│   └── workflows/
│       └── build.yml
├── .gitignore
├── CHANGELOG.md
├── CODEOWNERS
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── Makefile
├── NOTICE
├── README.md
├── THIRD-PARTY
├── VERSION
├── cloudwatch/
│   ├── cloudwatch.go
│   ├── cloudwatch_test.go
│   ├── generate_mock.go
│   ├── handlers.go
│   ├── handlers_test.go
│   ├── helpers.go
│   ├── helpers_test.go
│   └── mock_cloudwatch/
│       └── mock.go
├── fluent-bit-cloudwatch.go
├── go.mod
├── go.sum
└── scripts/
    └── mockgen.sh

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/CODEOWNERS
================================================
* @aws/aws-firelens


================================================
FILE: .github/PULL_REQUEST_TEMPLATE.md
================================================
*Issue #, if available:*

*Description of changes:*


By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.


================================================
FILE: .github/dependabot.yml
================================================
version: 2
updates:
  - package-ecosystem: gomod
    directory: "/" # Location of package manifests
    schedule:
      interval: "weekly"


================================================
FILE: .github/workflows/build.yml
================================================
name: Build

on:
  push:
    branches: [ mainline ]
  pull_request:
    branches: [ mainline ]

jobs:

  build:
    name: Build
    runs-on: ubuntu-latest
    steps:

    - name: Set up Go 1.24.11
      uses: actions/setup-go@v5
      with:
        go-version: '1.24.11'
      id: go

    - name: Install cross-compiler for Windows
      run: sudo apt-get update && sudo apt-get install -y -o Acquire::Retries=3 gcc-multilib gcc-mingw-w64

    - name: Check out code into the Go module directory
      uses: actions/checkout@v5

    - name: golint
      run: go install golang.org/x/lint/golint@latest

    - name: Build
      run: make build windows-release test


================================================
FILE: .gitignore
================================================

# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib

# Test binary, built with `go test -c`
*.test

# build output dir
bin

# Output of the go coverage tool, specifically when used with LiteIDE
*.out


================================================
FILE: CHANGELOG.md
================================================
# Changelog

## 1.9.5
* Enhancement - Update github.com/aws/aws-sdk-go to v1.55.8
* Enhancement - Update github.com/sirupsen/logrus to v1.9.4
* Enhancement - Update github.com/stretchr/testify to v1.11.1

## 1.9.4
* Bug - Fix utf-8 calculation of payload length to account for invalid unicode bytes that will be replaced with the 3 byte unicode replacement character. This bug can lead to an `InvalidParameterException` from CloudWatch when the payload sent is calculated to be over the limit due to character replacement.

## 1.9.3
* Enhancement - Upgrade Go version to 1.20

## 1.9.2
* Bug - Fix log loss that can occur when log group creation or retention policy API calls fail (#314)

## 1.9.1
* Enhancement - Added different base user agent for Linux and Windows

## 1.9.0
* Feature - Add support for building this plugin on Windows. *Note that this repo only supports compiling the plugin for Windows.*

## 1.8.0
* Feature - Add `auto_create_stream` option (#257)
* Bug - Allow recovery from a stream being deleted and created by a user (#257)

## 1.7.0
* Feature - Add support for external_id (#226)

## 1.6.4
* Bug - Remove corrupted unicode fragments on truncation (#208)

## 1.6.3
* Enhancement - Upgrade Go version to 1.17

## 1.6.2
* Enhancement - Add validation to stop accepting both of `log_stream_name` and `log_stream_prefix` together (#190)

## 1.6.1
* Enhancement - Delete debug messages which make log info useless (#146)

## 1.6.0
* Enhancement - Add support for updating the retention policy of existing log groups (#121)

## 1.5.0
* Feature - Automatically re-create CloudWatch log groups and log streams if they are deleted (#95)
* Feature - Add default fallback log group and stream names (#99)
* Feature - Add support for ECS Metadata and UUID via special variables in log stream and group names (#108)
* Enhancement - Remove invalid characters in log stream and log group names (#103)

## 1.4.1
* Bug - Add back `auto_create_group` option (#96)
* Bug - Truncate log events to max size (#85)

## 1.4.0
* Feature - Add support for dynamic log group names (#46)
* Feature - Add support for dynamic log stream names (#16)
* Feature - Support tagging of newly created log groups (#51)
* Feature - Support setting log group retention policies (#50)

## 1.3.1
* Bug - Check for empty logEvents before calling PutLogEvents (#66)

## 1.3.0
* Feature - Add sts_endpoint param for custom STS API endpoint (#55)

## 1.2.0
* Feature - Add support for Embedded Metric Format (#27)

## 1.1.1
* Bug - Discard and do not send empty messages (#40)

## 1.1.0
* Bug - A single CloudWatch Logs PutLogEvents request can not contain logs that span more than 24 hours (#29)
* Feature - Add `credentials_endpoint` option (#36)
* Feature - Support IAM Roles for Service Accounts in Amazon EKS (#33)

## 1.0.0
Initial versioned release of the Amazon CloudWatch Logs for Fluent Bit Plugin


================================================
FILE: CODEOWNERS
================================================
/ @aws/aws-firelens


================================================
FILE: CODE_OF_CONDUCT.md
================================================
## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.


================================================
FILE: CONTRIBUTING.md
================================================
# Contributing Guidelines

Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
documentation, we greatly value feedback and contributions from our community.

Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
information to effectively respond to your bug report or contribution.


## Reporting Bugs/Feature Requests

We welcome you to use the GitHub issue tracker to report bugs or suggest features.

When filing an issue, please check [existing open](https://github.com/awslabs/cloudwatch-logs-for-fluent-bit/issues), or [recently closed](https://github.com/awslabs/cloudwatch-logs-for-fluent-bit/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already
reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:

* A reproducible test case or series of steps
* The version of our code being used
* Any modifications you've made relevant to the bug
* Anything unusual about your environment or deployment


## Contributing via Pull Requests
Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

1. You are working against the latest source on the *master* branch.
2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
3. You open an issue to discuss any significant work - we would hate for your time to be wasted.

To send us a pull request, please:

1. Fork the repository.
2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
3. Ensure local tests pass.
4. Commit to your fork using clear commit messages.
5. Send us a pull request, answering any default questions in the pull request interface.
6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).


## Finding contributions to work on
Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/awslabs/cloudwatch-logs-for-fluent-bit/labels/help%20wanted) issues is a great place to start.


## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.


## Security issue notifications
If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.


## Licensing

See the [LICENSE](https://github.com/awslabs/cloudwatch-logs-for-fluent-bit/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.

We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.


================================================
FILE: LICENSE
================================================

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.


================================================
FILE: Makefile
================================================
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# 	http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.

# Build settings.
GOARCH ?= amd64
COMPILER ?= x86_64-w64-mingw32-gcc # Cross-compiler for Windows

ROOT := $(shell pwd)

all: build

SCRIPT_PATH := $(ROOT)/scripts/:${PATH}
SOURCES := $(shell find . -name '*.go')
PLUGIN_BINARY := ./bin/cloudwatch.so
PLUGIN_BINARY_WINDOWS := ./bin/cloudwatch.dll

.PHONY: build
build: $(PLUGIN_BINARY)

$(PLUGIN_BINARY): $(SOURCES)
	PATH=${PATH} golint ./cloudwatch
	mkdir -p ./bin
	go build -buildmode c-shared -o $(PLUGIN_BINARY) ./
	@echo "Built Amazon CloudWatch Logs Fluent Bit Plugin"

.PHONY: release
release:
	mkdir -p ./bin
	go build -buildmode c-shared -o $(PLUGIN_BINARY) ./
	@echo "Built Amazon CloudWatch Logs Fluent Bit Plugin"

.PHONY: windows-release
windows-release:
	mkdir -p ./bin
	GOOS=windows GOARCH=$(GOARCH) CGO_ENABLED=1 CC=$(COMPILER) go build -buildmode c-shared -o $(PLUGIN_BINARY_WINDOWS) ./
	@echo "Built Amazon CloudWatch Logs Fluent Bit Plugin for Windows"

.PHONY: generate
generate: $(SOURCES)
	PATH=$(SCRIPT_PATH) go generate ./...


.PHONY: test
test:
	go test -timeout=120s -v -cover ./...

.PHONY: clean
clean:
	rm -rf ./bin/*


================================================
FILE: NOTICE
================================================
Fluent Bit Plugin for CloudWatch Logs
Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. 


================================================
FILE: README.md
================================================
[![Test Actions Status](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/workflows/Build/badge.svg)](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/actions)

## Fluent Bit Plugin for CloudWatch Logs

**NOTE: A new higher performance Fluent Bit CloudWatch Logs Plugin has been released.** Check out our [official guidance](#new-higher-performance-core-fluent-bit-plugin).

A Fluent Bit output plugin for CloudWatch Logs

#### Security disclosures

If you think you’ve found a potential security issue, please do not post it in the Issues.  Instead, please follow the instructions [here](https://aws.amazon.com/security/vulnerability-reporting/) or email AWS security directly at [aws-security@amazon.com](mailto:aws-security@amazon.com).

### Usage

Run `make` to build `./bin/cloudwatch.so`. Then use with Fluent Bit:
```
./fluent-bit -e ./cloudwatch.so -i cpu \
-o cloudwatch \
-p "region=us-west-2" \
-p "log_group_name=fluent-bit-cloudwatch" \
-p "log_stream_name=testing" \
-p "auto_create_group=true"
```

To build Windows binaries, install `mingw-w64` for cross-compilation:
```
sudo apt-get install -y gcc-multilib gcc-mingw-w64
```
After this step, run `make windows-release`. Then use with Fluent Bit on Windows:
```
./fluent-bit.exe -e ./cloudwatch.dll -i dummy `
-o cloudwatch `
-p "region=us-west-2" `
-p "log_group_name=fluent-bit-cloudwatch" `
-p "log_stream_name=testing" `
-p "auto_create_group=true"
```

### Plugin Options

* `region`: The AWS region.
* `log_group_name`: The name of the CloudWatch Log Group that you want log records sent to. This value allows a template in the form of `$(variable)`. See section [Templating Log Group and Stream Names](#templating-log-group-and-stream-names) for more. Fluent Bit will create missing log groups if `auto_create_group` is set, and will throw an error if it does not have permission.
* `log_stream_name`: The name of the CloudWatch Log Stream that you want log records sent to. This value allows a template in the form of `$(variable)`. See section [Templating Log Group and Stream Names](#templating-log-group-and-stream-names) for more.
* `default_log_group_name`: The fallback log group name used if any variable in `log_group_name` fails to parse. Defaults to `fluentbit-default`.
* `default_log_stream_name`: The fallback log stream name used if any variable in `log_stream_name` fails to parse. Defaults to `/fluentbit-default`.
* `log_stream_prefix`: (deprecated) Prefix for the Log Stream name. Setting this to `prefix-` is the same as setting `log_stream_name = prefix-$(tag)`.
* `log_key`: By default, the whole log record will be sent to CloudWatch. If you specify a key name with this option, then only the value of that key will be sent to CloudWatch. For example, if you are using the Fluentd Docker log driver, you can specify `log_key log` and only the log message will be sent to CloudWatch.
* `log_format`: An optional parameter that can be used to tell CloudWatch the format of the data. A value of `json/emf` enables CloudWatch to extract custom metrics embedded in a JSON payload. See the [Embedded Metric Format](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html).
* `role_arn`: ARN of an IAM role to assume (for cross account access).
* `auto_create_group`: Automatically create log groups (and add tags). Valid values are "true" or "false" (case insensitive). Defaults to false. If you use dynamic variables in your log group name, you may need this to be `true`.
* `auto_create_stream`: Automatically create log streams. Valid values are "true" or "false" (case insensitive). Defaults to true.
* `new_log_group_tags`: Comma/equal delimited string of tags to include with _auto created_ log groups. Example: `"tag=val,cooltag2=my other value"`
* `log_retention_days`: If set to a number greater than zero, any newly created log group's retention policy is set to this many days.
* `endpoint`: Specify a custom endpoint for the CloudWatch Logs API.
* `sts_endpoint`: Specify a custom endpoint for the STS API; used to assume your custom role provided with `role_arn`.
* `credentials_endpoint`: Specify a custom HTTP endpoint to pull credentials from. The HTTP response body should look like the following:
```
{
    "AccessKeyId": "ACCESS_KEY_ID",
    "Expiration": "EXPIRATION_DATE",
    "SecretAccessKey": "SECRET_ACCESS_KEY",
    "Token": "SECURITY_TOKEN_STRING"
}
```

**Note**: The plugin will always create the log stream if it does not exist.
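The `new_log_group_tags` format described above splits pairs on commas and keys from values on equals signs. The Go sketch below is a hypothetical illustration of that format (`parseTags` is not the plugin's actual parser, and it assumes values themselves contain no commas):

```go
package main

import (
	"fmt"
	"strings"
)

// parseTags illustrates the "key=value,key2=value2" tag format.
// Naive sketch: values containing commas are not supported.
func parseTags(s string) map[string]string {
	tags := map[string]string{}
	for _, pair := range strings.Split(s, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) == 2 {
			tags[kv[0]] = kv[1]
		}
	}
	return tags
}

func main() {
	fmt.Println(parseTags("tag=val,cooltag2=my other value"))
}
```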

### Permissions

This plugin requires the following permissions:
* CreateLogGroup (useful when using dynamic groups)
* CreateLogStream
* DescribeLogStreams
* PutLogEvents
* PutRetentionPolicy (if `log_retention_days` is set > 0)
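As a hedged sketch, an IAM policy granting the actions above could look like the following (all actions are in the `logs:` namespace; scope `Resource` down to specific log group ARNs in production):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents",
                "logs:PutRetentionPolicy"
            ],
            "Resource": "*"
        }
    ]
}
```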

### Credentials

This plugin uses the AWS SDK for Go and its [default credential provider chain](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html). If you are running the plugin on Amazon EC2, Amazon ECS, or Amazon EKS, it will use your EC2 instance role, [ECS Task role permissions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html), or [EKS IAM Roles for Service Accounts for pods](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html), respectively. The plugin can also retrieve credentials from a [shared credentials file](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html), or from the standard `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` environment variables.

### Environment Variables

* `FLB_LOG_LEVEL`: Set the log level for the plugin. Valid values are: `debug`, `info`, and `error` (case insensitive). Default is `info`. **Note**: Setting log level in the Fluent Bit Configuration file using the Service key will not affect the plugin log level (because the plugin is external).
* `SEND_FAILURE_TIMEOUT`: Allows you to configure a timeout if the plugin can not send logs to CloudWatch. The timeout is specified as a [Golang duration](https://golang.org/pkg/time/#ParseDuration), for example: `5m30s`. If the plugin has failed to make any progress for the given period of time, then it will exit and kill Fluent Bit. This is useful in scenarios where you want your logging solution to fail fast if it has been misconfigured (i.e. network or credentials have not been set up to allow it to send to CloudWatch).

### Retries and Buffering

Buffering and retries are managed by the Fluent Bit core engine, not by the plugin. Whenever the plugin encounters an error, it returns a retry status to the engine, which then schedules a retry. This means that log group creation, log stream creation, or log retention policy calls can consume a retry if they fail.

* [Fluent Bit upstream documentation on retries](https://docs.fluentbit.io/manual/administration/scheduling-and-retries)
* [Fluent Bit upstream documentation on buffering](https://docs.fluentbit.io/manual/administration/buffering-and-storage)
* [FireLens OOMKill prevent example for buffering](https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/oomkill-prevention)

### Templating Log Group and Stream Names

 A template in the form of `$(variable)` can be used in `log_group_name` or `log_stream_name`, where `variable` is a key name in the log message. To access sub-values in a map, use the form `$(variable['subkey'])`. Special values can also be used to insert the tag, ECS metadata, or a random string into the name.

 Special Values:
 *  `$(tag)` references the full tag name; `$(tag[0])` and `$(tag[1])` are the first and second values of the log tag split on periods. You may access any member by index, 0 through 9.
 *  `$(uuid)` will insert a random string in the names. The random string is generated automatically with format: 4 bytes of time (seconds) + 16 random bytes. It is created when the plugin starts up and uniquely identifies the output - which means that until Fluent Bit is restarted, it will be the same. If you have multiple CloudWatch outputs, each one will get a unique UUID.
 * If your container is running in ECS, `$(variable)` can be set as `$(ecs_task_id)`, `$(ecs_cluster)` or `$(ecs_task_arn)`. It will set ECS metadata into `log_group_name` or `log_stream_name`.
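The `$(tag[N])` behavior above can be sketched in Go; `tagPart` is a hypothetical helper showing how indexing into the period-split tag behaves, not the plugin's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// tagPart returns the Nth segment of a tag split on periods,
// or "" if the index is out of range.
func tagPart(tag string, i int) string {
	parts := strings.Split(tag, ".")
	if i < 0 || i >= len(parts) {
		return ""
	}
	return parts[i]
}

func main() {
	fmt.Println(tagPart("dummy.data", 0)) // dummy
	fmt.Println(tagPart("dummy.data", 1)) // data
}
```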

 Here is an example for `fluent-bit.conf`:

```
[INPUT]
    Name        dummy
    Tag         dummy.data
    Dummy {"pam": {"item": "soup", "item2":{"subitem": "rice"}}}

[OUTPUT]
    Name cloudwatch
    Match   *
    region us-east-1
    log_group_name fluent-bit-cloudwatch-$(uuid)-$(tag)
    log_stream_name from-fluent-bit-$(pam['item2']['subitem'])-$(ecs_task_id)-$(ecs_cluster)
    auto_create_group true
```

And here is the resulting log stream name and log group name:

```
log_group_name fluent-bit-cloudwatch-1jD7P6bbSRtbc9stkWjJZYerO6s-dummy.data
log_stream_name from-fluent-bit-rice-37e873f6-37b4-42a7-af47-eac7275c6152-ecs-local-cluster
```

#### Templating Log Group and Stream Names based on Kubernetes metadata

If you enable the kubernetes filter, then metadata like the following will be added to each log:

```
kubernetes: {
    annotations: {
        "kubernetes.io/psp": "eks.privileged"
    },
    container_hash: "<some hash>",
    container_name: "myapp",
    docker_id: "<some id>",
    host: "ip-10-1-128-166.us-east-2.compute.internal",
    labels: {
        app: "myapp",
        "pod-template-hash": "<some hash>"
    },
    namespace_name: "default",
    pod_id: "198f7dd2-2270-11ea-be47-0a5d932f5920",
    pod_name: "myapp-5468c5d4d7-n2swr"
}
```

For help setting up Fluent Bit with kubernetes please see [Kubernetes Logging Powered by AWS for Fluent Bit](https://aws.amazon.com/blogs/containers/kubernetes-logging-powered-by-aws-for-fluent-bit/) or [Set up Fluent Bit as a DaemonSet to send logs to CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-logs-FluentBit.html).

The kubernetes metadata can be referenced just like any other key using the templating feature. For example, the following results in a log group name of `/eks/{namespace_name}/{pod_name}`:

```
    [OUTPUT]
      Name              cloudwatch
      Match             kube.*
      region            us-east-1
      log_group_name    /eks/$(kubernetes['namespace_name'])/$(kubernetes['pod_name'])
      log_stream_name   $(kubernetes['namespace_name'])/$(kubernetes['container_name'])
      auto_create_group true
```
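Conceptually, each `$(a['b']['c'])` reference is a nested map lookup against the decoded log record. The following is a hedged, stdlib-only sketch of that lookup (the plugin's real implementation uses `fasttemplate`; the `lookup` helper here is illustrative):

```go
package main

import "fmt"

// lookup walks a decoded record (nested maps of string to interface{})
// along the given key path, returning "" when any key is missing or a
// non-map value is reached early. Illustrative only.
func lookup(record map[string]interface{}, keys ...string) string {
	var cur interface{} = record
	for _, k := range keys {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return ""
		}
		if cur, ok = m[k]; !ok {
			return ""
		}
	}
	s, _ := cur.(string)
	return s
}

func main() {
	record := map[string]interface{}{
		"kubernetes": map[string]interface{}{
			"namespace_name": "default",
			"pod_name":       "myapp-5468c5d4d7-n2swr",
		},
	}
	// /eks/$(kubernetes['namespace_name'])/$(kubernetes['pod_name'])
	group := "/eks/" + lookup(record, "kubernetes", "namespace_name") +
		"/" + lookup(record, "kubernetes", "pod_name")
	fmt.Println(group) // /eks/default/myapp-5468c5d4d7-n2swr
}
```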

### New Higher Performance Core Fluent Bit Plugin

In the summer of 2020, we released a [new higher performance CloudWatch Logs plugin](https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch) named `cloudwatch_logs`.

That plugin has a core subset of the features of this older, less efficient plugin. Check out its [documentation](https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch).

#### Do you plan to deprecate this older plugin?

At this time, we do not. This plugin will continue to be supported. It contains features that have not been ported to the higher performance version, specifically the [templating of log group and stream names with ECS metadata or values in the logs](#templating-log-group-and-stream-names). While [simple templating support](https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch#log-stream-and-group-name-templating-using-record_accessor-syntax) now exists in the high performance plugin, it does not have all of the features of the plugin in this repo, so some users will continue to need the features here.

#### Which plugin should I use?

If the features of the higher performance plugin are sufficient for your use cases, please use it. It can achieve higher throughput and will consume less CPU and memory.

#### How can I migrate to the higher performance plugin?

It supports a subset of this plugin's options. For many users, migrating is as simple as replacing the plugin name `cloudwatch` with the new name `cloudwatch_logs`. Check out its [documentation](https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch).

#### Do you accept contributions to both plugins?

Yes. The high performance plugin is written in C, and this plugin is written in Golang. We understand that Go is an easier language for new contributors to write code in; that is a key reason why we are continuing to maintain this plugin.

However, if you can write code in C, please consider contributing new features to the [higher performance plugin](https://github.com/fluent/fluent-bit/tree/master/plugins/out_cloudwatch_logs).

### Fluent Bit Versions

This plugin has been tested with Fluent Bit 1.2.0+. It may not work with older Fluent Bit versions. We recommend using the latest version of Fluent Bit as it will contain the newest features and bug fixes.

### Example Fluent Bit Config File

```
[INPUT]
    Name        forward
    Listen      0.0.0.0
    Port        24224

[OUTPUT]
    Name cloudwatch
    Match   *
    region us-east-1
    log_group_name fluent-bit-cloudwatch
    log_stream_prefix from-fluent-bit-
    auto_create_group true
```

### AWS for Fluent Bit

We distribute a container image with Fluent Bit and these plugins.

##### GitHub

[github.com/aws/aws-for-fluent-bit](https://github.com/aws/aws-for-fluent-bit)

##### Amazon ECR Public Gallery

[aws-for-fluent-bit](https://gallery.ecr.aws/aws-observability/aws-for-fluent-bit)

Our images are available in the Amazon ECR Public Gallery. You can download images with different tags using the following command:

```
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:<tag>
```

For example, you can pull the latest version of the image by running:

```
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:latest
```

If you see errors due to image pull rate limits, try logging in to Amazon ECR Public with your AWS credentials:

```
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
```

See the [Amazon ECR Public official documentation](https://docs.aws.amazon.com/AmazonECR/latest/public/get-set-up-for-amazon-ecr.html) for more details.

##### Docker Hub

[amazon/aws-for-fluent-bit](https://hub.docker.com/r/amazon/aws-for-fluent-bit/tags)

##### Amazon ECR

You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:

```
aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
```

For more details, see [our docs](https://github.com/aws/aws-for-fluent-bit#public-images).

## License

This library is licensed under the Apache 2.0 License.


================================================
FILE: THIRD-PARTY
================================================
** github.com/aws/amazon-kinesis-firehose-for-fluent-bit; version c41b42995068
-- https://github.com/aws/amazon-kinesis-firehose-for-fluent-bit
Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
** github.com/aws/aws-sdk-go; version v1.20.6 --
https://github.com/aws/aws-sdk-go
Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Copyright 2014-2015 Stripe, Inc.
** github.com/fluent/fluent-bit-go; version fc386d263885 --
https://github.com/fluent/fluent-bit-go
Copyright (C) 2015-2017 Treasure Data Inc.
** github.com/golang/mock; version 1.3.1 -- https://github.com/golang/mock
Copyright 2010 Google Inc.
** github.com/jmespath/go-jmespath; version c2b33e8439af --
https://github.com/jmespath/go-jmespath
Copyright 2015 James Saryerwinnie
** github.com/modern-go/concurrent; version bacd9c7ef1dd --
https://github.com/modern-go/concurrent
None
** github.com/modern-go/reflect2; version v1.0.1 --
https://github.com/modern-go/reflect2
None

Apache License

Version 2.0, January 2004

http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND
DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction, and
      distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by the
      copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all other
      entities that control, are controlled by, or are under common control
      with that entity. For the purposes of this definition, "control" means
      (i) the power, direct or indirect, to cause the direction or management
      of such entity, whether by contract or otherwise, or (ii) ownership of
      fifty percent (50%) or more of the outstanding shares, or (iii)
      beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity exercising
      permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation source,
      and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but not limited
      to compiled object code, generated documentation, and conversions to
      other media types.

      "Work" shall mean the work of authorship, whether in Source or Object
      form, made available under the License, as indicated by a copyright
      notice that is included in or attached to the work (an example is
      provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object form,
      that is based on (or derived from) the Work and for which the editorial
      revisions, annotations, elaborations, or other modifications represent,
      as a whole, an original work of authorship. For the purposes of this
      License, Derivative Works shall not include works that remain separable
      from, or merely link (or bind by name) to the interfaces of, the Work and
      Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including the original
      version of the Work and any modifications or additions to that Work or
      Derivative Works thereof, that is intentionally submitted to Licensor for
      inclusion in the Work by the copyright owner or by an individual or Legal
      Entity authorized to submit on behalf of the copyright owner. For the
      purposes of this definition, "submitted" means any form of electronic,
      verbal, or written communication sent to the Licensor or its
      representatives, including but not limited to communication on electronic
      mailing lists, source code control systems, and issue tracking systems
      that are managed by, or on behalf of, the Licensor for the purpose of
      discussing and improving the Work, but excluding communication that is
      conspicuously marked or otherwise designated in writing by the copyright
      owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity on
      behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of this
   License, each Contributor hereby grants to You a perpetual, worldwide,
   non-exclusive, no-charge, royalty-free, irrevocable copyright license to
   reproduce, prepare Derivative Works of, publicly display, publicly perform,
   sublicense, and distribute the Work and such Derivative Works in Source or
   Object form.

   3. Grant of Patent License. Subject to the terms and conditions of this
   License, each Contributor hereby grants to You a perpetual, worldwide,
   non-exclusive, no-charge, royalty-free, irrevocable (except as stated in
   this section) patent license to make, have made, use, offer to sell, sell,
   import, and otherwise transfer the Work, where such license applies only to
   those patent claims licensable by such Contributor that are necessarily
   infringed by their Contribution(s) alone or by combination of their
   Contribution(s) with the Work to which such Contribution(s) was submitted.
   If You institute patent litigation against any entity (including a
   cross-claim or counterclaim in a lawsuit) alleging that the Work or a
   Contribution incorporated within the Work constitutes direct or contributory
   patent infringement, then any patent licenses granted to You under this
   License for that Work shall terminate as of the date such litigation is
   filed.

   4. Redistribution. You may reproduce and distribute copies of the Work or
   Derivative Works thereof in any medium, with or without modifications, and
   in Source or Object form, provided that You meet the following conditions:

      (a) You must give any other recipients of the Work or Derivative Works a
      copy of this License; and

      (b) You must cause any modified files to carry prominent notices stating
      that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works that You
      distribute, all copyright, patent, trademark, and attribution notices
      from the Source form of the Work, excluding those notices that do not
      pertain to any part of the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
      distribution, then any Derivative Works that You distribute must include
      a readable copy of the attribution notices contained within such NOTICE
      file, excluding those notices that do not pertain to any part of the
      Derivative Works, in at least one of the following places: within a
      NOTICE text file distributed as part of the Derivative Works; within the
      Source form or documentation, if provided along with the Derivative
      Works; or, within a display generated by the Derivative Works, if and
      wherever such third-party notices normally appear. The contents of the
      NOTICE file are for informational purposes only and do not modify the
      License. You may add Your own attribution notices within Derivative Works
      that You distribute, alongside or as an addendum to the NOTICE text from
      the Work, provided that such additional attribution notices cannot be
      construed as modifying the License.

      You may add Your own copyright statement to Your modifications and may
      provide additional or different license terms and conditions for use,
      reproduction, or distribution of Your modifications, or for any such
      Derivative Works as a whole, provided Your use, reproduction, and
      distribution of the Work otherwise complies with the conditions stated in
      this License.

   5. Submission of Contributions. Unless You explicitly state otherwise, any
   Contribution intentionally submitted for inclusion in the Work by You to the
   Licensor shall be under the terms and conditions of this License, without
   any additional terms or conditions. Notwithstanding the above, nothing
   herein shall supersede or modify the terms of any separate license agreement
   you may have executed with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
   names, trademarks, service marks, or product names of the Licensor, except
   as required for reasonable and customary use in describing the origin of the
   Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or agreed to in
   writing, Licensor provides the Work (and each Contributor provides its
   Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   KIND, either express or implied, including, without limitation, any
   warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or
   FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining
   the appropriateness of using or redistributing the Work and assume any risks
   associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory, whether
   in tort (including negligence), contract, or otherwise, unless required by
   applicable law (such as deliberate and grossly negligent acts) or agreed to
   in writing, shall any Contributor be liable to You for damages, including
   any direct, indirect, special, incidental, or consequential damages of any
   character arising as a result of this License or out of the use or inability
   to use the Work (including but not limited to damages for loss of goodwill,
   work stoppage, computer failure or malfunction, or any and all other
   commercial damages or losses), even if such Contributor has been advised of
   the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing the Work
   or Derivative Works thereof, You may choose to offer, and charge a fee for,
   acceptance of support, warranty, indemnity, or other liability obligations
   and/or rights consistent with this License. However, in accepting such
   obligations, You may act only on Your own behalf and on Your sole
   responsibility, not on behalf of any other Contributor, and only if You
   agree to indemnify, defend, and hold each Contributor harmless for any
   liability incurred by, or claims asserted against, such Contributor by
   reason of your accepting any such warranty or additional liability. END OF
   TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included on
the same "printed page" as the copyright notice for easier identification
within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");

you may not use this file except in compliance with the License.

You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software

distributed under the License is distributed on an "AS IS" BASIS,

WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and

limitations under the License.

* For github.com/aws/amazon-kinesis-firehose-for-fluent-bit see also this
required NOTICE:
    Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
* For github.com/aws/aws-sdk-go see also this required NOTICE:
    Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    Copyright 2014-2015 Stripe, Inc.
* For github.com/fluent/fluent-bit-go see also this required NOTICE:
    Copyright (C) 2015-2017 Treasure Data Inc.
* For github.com/golang/mock see also this required NOTICE:
    Copyright 2010 Google Inc.
* For github.com/jmespath/go-jmespath see also this required NOTICE:
    Copyright 2015 James Saryerwinnie
* For github.com/modern-go/concurrent see also this required NOTICE:
    None
* For github.com/modern-go/reflect2 see also this required NOTICE:
    None

------

** golang.org; version go1.12 -- https://golang.org/
Copyright (c) 2009 The Go Authors. All rights reserved.

Copyright (c) 2009 The Go Authors. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

   * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
   * Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

------

** github.com/pmezard/go-difflib; version v1.0.0 --
https://github.com/pmezard/go-difflib
Copyright (c) 2013, Patrick Mezard

Copyright (c) 2013, Patrick Mezard
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

    Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
    Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
    The names of its contributors may not be used to endorse or promote
products derived from this software without specific prior written
permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

------

** github.com/davecgh/go-spew; version v1.1.1 --
https://github.com/davecgh/go-spew
Copyright (c) 2012-2016 Dave Collins <dave@davec.name>
** github.com/davecgh/go-spew; version v1.1.1 --
https://github.com/davecgh/go-spew
Copyright (c) 2012-2016 Dave Collins <dave@davec.name>

ISC License

Copyright (c) 2012-2016 Dave Collins <dave@davec.name>

Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

------

** github.com/stretchr/testify; version v1.3.0 --
https://github.com/stretchr/testify
Copyright (c) 2012-2018 Mat Ryer and Tyler Bunnell

MIT License

Copyright (c) 2012-2018 Mat Ryer and Tyler Bunnell

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

------

** github.com/sirupsen/logrus; version v1.4.2 --
https://github.com/sirupsen/logrus
Copyright (c) 2014 Simon Eskildsen

The MIT License (MIT)

Copyright (c) 2014 Simon Eskildsen

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

------

** github.com/konsorten/go-windows-terminal-sequences; version v1.0.1 --
https://github.com/konsorten/go-windows-terminal-sequences
Copyright (c) 2017 marvin + konsorten GmbH (open-source@konsorten.de)

(The MIT License)

Copyright (c) 2017 marvin + konsorten GmbH (open-source@konsorten.de)

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the 'Software'), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

------

** github.com/ugorji/go; version v1.1.4 -- https://github.com/ugorji/go
Copyright (c) 2012-2015 Ugorji Nwoke.

The MIT License (MIT)

Copyright (c) 2012-2015 Ugorji Nwoke.
All rights reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

------

** github.com/json-iterator/go; version v1.1.6 --
https://github.com/json-iterator/go
Copyright (c) 2016 json-iterator

MIT License

Copyright (c) 2016 json-iterator

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

------

** github.com/cenkalti/backoff; version v2.1.1 --
https://github.com/cenkalti/backoff
Copyright (c) 2014 Cenk Altı

The MIT License (MIT)

Copyright (c) 2014 Cenk Altı

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

------

** github.com/stretchr/objx; version v0.1.1 -- https://github.com/stretchr/objx
Copyright (c) 2014 Stretchr, Inc.
Copyright (c) 2017-2018 objx contributors

The MIT License

Copyright (c) 2014 Stretchr, Inc.
Copyright (c) 2017-2018 objx contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: VERSION
================================================
1.9.5


================================================
FILE: cloudwatch/cloudwatch.go
================================================
// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"). You may
// not use this file except in compliance with the License. A copy of the
// License is located at
//
//	http://aws.amazon.com/apache2.0/
//
// or in the "license" file accompanying this file. This file is distributed
// on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
// express or implied. See the License for the specific language governing
// permissions and limitations under the License.

package cloudwatch

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
	"runtime"
	"sort"
	"strings"
	"time"
	"unicode/utf8"

	"github.com/aws/amazon-kinesis-firehose-for-fluent-bit/plugins"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/arn"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/credentials/endpointcreds"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/endpoints"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
	fluentbit "github.com/fluent/fluent-bit-go/output"
	jsoniter "github.com/json-iterator/go"
	"github.com/segmentio/ksuid"
	"github.com/sirupsen/logrus"
	"github.com/valyala/bytebufferpool"
	"github.com/valyala/fasttemplate"
)

const (
	// See: http://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html
	perEventBytes          = 26
	maximumBytesPerPut     = 1048576
	maximumLogEventsPerPut = 10000
	maximumBytesPerEvent   = 1024 * 256 //256KB
	maximumTimeSpanPerPut  = time.Hour * 24
	truncatedSuffix        = "[Truncated...]"
	maxGroupStreamLength   = 512
)

const (
	// Log stream objects that are empty and inactive for longer than the timeout get cleaned up
	logStreamInactivityTimeout = time.Hour
	// Check for expired log streams every 10 minutes
	logStreamInactivityCheckInterval = 10 * time.Minute
	// linuxBaseUserAgent is the base user agent string used for Linux.
	linuxBaseUserAgent = "aws-fluent-bit-plugin"
	// windowsBaseUserAgent is the base user agent string used for Windows.
	windowsBaseUserAgent = "aws-fluent-bit-plugin-windows"
)

// LogsClient contains the CloudWatch API calls used by this plugin
type LogsClient interface {
	CreateLogGroup(input *cloudwatchlogs.CreateLogGroupInput) (*cloudwatchlogs.CreateLogGroupOutput, error)
	PutRetentionPolicy(input *cloudwatchlogs.PutRetentionPolicyInput) (*cloudwatchlogs.PutRetentionPolicyOutput, error)
	CreateLogStream(input *cloudwatchlogs.CreateLogStreamInput) (*cloudwatchlogs.CreateLogStreamOutput, error)
	DescribeLogStreams(input *cloudwatchlogs.DescribeLogStreamsInput) (*cloudwatchlogs.DescribeLogStreamsOutput, error)
	PutLogEvents(input *cloudwatchlogs.PutLogEventsInput) (*cloudwatchlogs.PutLogEventsOutput, error)
}

type logStream struct {
	logEvents         []*cloudwatchlogs.InputLogEvent
	currentByteLength int
	currentBatchStart *time.Time
	currentBatchEnd   *time.Time
	nextSequenceToken *string
	logStreamName     string
	logGroupName      string
	expiration        time.Time
}

// Event is the input data and contains a log entry.
// The group and stream are added during processing.
type Event struct {
	TS     time.Time
	Record map[interface{}]interface{}
	Tag    string
	group  string
	stream string
}

// TaskMetadata is the task metadata from the ECS V3 endpoint
type TaskMetadata struct {
	Cluster string `json:"Cluster,omitempty"`
	TaskARN string `json:"TaskARN,omitempty"`
	TaskID  string `json:"TaskID,omitempty"`
}

type streamDoesntExistError struct {
	streamName string
	groupName  string
}

func (stream *logStream) isExpired() bool {
	return len(stream.logEvents) == 0 && stream.expiration.Before(time.Now())
}

func (stream *logStream) updateExpiration() {
	stream.expiration = time.Now().Add(logStreamInactivityTimeout)
}

type fastTemplate struct {
	String string
	*fasttemplate.Template
}

// OutputPlugin is the CloudWatch Logs Fluent Bit output plugin
type OutputPlugin struct {
	logGroupName                  *fastTemplate
	defaultLogGroupName           string
	logStreamPrefix               string
	logStreamName                 *fastTemplate
	defaultLogStreamName          string
	logKey                        string
	client                        LogsClient
	streams                       map[string]*logStream
	groups                        map[string]struct{}
	timer                         *plugins.Timeout
	nextLogStreamCleanUpCheckTime time.Time
	PluginInstanceID              int
	logGroupTags                  map[string]*string
	logGroupRetention             int64
	autoCreateGroup               bool
	autoCreateStream              bool
	bufferPool                    bytebufferpool.Pool
	ecsMetadata                   TaskMetadata
	runningInECS                  bool
	uuid                          string
	extraUserAgent                string
}

// OutputPluginConfig is the input information used by NewOutputPlugin to create a new OutputPlugin
type OutputPluginConfig struct {
	Region               string
	LogGroupName         string
	DefaultLogGroupName  string
	LogStreamPrefix      string
	LogStreamName        string
	DefaultLogStreamName string
	LogKey               string
	RoleARN              string
	AutoCreateGroup      bool
	AutoCreateStream     bool
	NewLogGroupTags      string
	LogRetentionDays     int64
	CWEndpoint           string
	STSEndpoint          string
	ExternalID           string
	CredsEndpoint        string
	PluginInstanceID     int
	LogFormat            string
	ExtraUserAgent       string
}

// Validate checks the configuration input for an OutputPlugin instance
func (config OutputPluginConfig) Validate() error {
	errorStr := "%s is a required parameter"
	if config.Region == "" {
		return fmt.Errorf(errorStr, "region")
	}
	if config.LogGroupName == "" {
		return fmt.Errorf(errorStr, "log_group_name")
	}
	if config.LogStreamName == "" && config.LogStreamPrefix == "" {
		return fmt.Errorf("log_stream_name or log_stream_prefix is required")
	}

	if config.LogStreamName != "" && config.LogStreamPrefix != "" {
		return fmt.Errorf("either log_stream_name or log_stream_prefix can be configured. They cannot be provided together")
	}

	return nil
}

// NewOutputPlugin creates an OutputPlugin object
func NewOutputPlugin(config OutputPluginConfig) (*OutputPlugin, error) {
	logrus.Debugf("[cloudwatch %d] Initializing NewOutputPlugin", config.PluginInstanceID)

	client, err := newCloudWatchLogsClient(config)
	if err != nil {
		return nil, err
	}

	timer, err := plugins.NewTimeout(func(d time.Duration) {
		logrus.Errorf("[cloudwatch %d] timeout threshold reached: Failed to send logs for %s\n", config.PluginInstanceID, d.String())
		logrus.Fatalf("[cloudwatch %d] Quitting Fluent Bit", config.PluginInstanceID) // exit the plugin and kill Fluent Bit
	})
	if err != nil {
		return nil, err
	}

	logGroupTemplate, err := newTemplate(config.LogGroupName)
	if err != nil {
		return nil, err
	}

	logStreamTemplate, err := newTemplate(config.LogStreamName)
	if err != nil {
		return nil, err
	}

	// Check whether the plugin is running in ECS.
	runningInECS := os.Getenv("ECS_CONTAINER_METADATA_URI") != ""

	return &OutputPlugin{
		logGroupName:                  logGroupTemplate,
		logStreamName:                 logStreamTemplate,
		logStreamPrefix:               config.LogStreamPrefix,
		defaultLogGroupName:           config.DefaultLogGroupName,
		defaultLogStreamName:          config.DefaultLogStreamName,
		logKey:                        config.LogKey,
		client:                        client,
		timer:                         timer,
		streams:                       make(map[string]*logStream),
		nextLogStreamCleanUpCheckTime: time.Now().Add(logStreamInactivityCheckInterval),
		PluginInstanceID:              config.PluginInstanceID,
		logGroupTags:                  tagKeysToMap(config.NewLogGroupTags),
		logGroupRetention:             config.LogRetentionDays,
		autoCreateGroup:               config.AutoCreateGroup,
		autoCreateStream:              config.AutoCreateStream,
		groups:                        make(map[string]struct{}),
		ecsMetadata:                   TaskMetadata{},
		runningInECS:                  runningInECS,
		uuid:                          ksuid.New().String(),
		extraUserAgent:                config.ExtraUserAgent,
	}, nil
}

func newCloudWatchLogsClient(config OutputPluginConfig) (*cloudwatchlogs.CloudWatchLogs, error) {
	customResolverFn := func(service, region string, optFns ...func(*endpoints.Options)) (endpoints.ResolvedEndpoint, error) {
		if service == endpoints.LogsServiceID && config.CWEndpoint != "" {
			return endpoints.ResolvedEndpoint{
				URL: config.CWEndpoint,
			}, nil
		} else if service == endpoints.StsServiceID && config.STSEndpoint != "" {
			return endpoints.ResolvedEndpoint{
				URL: config.STSEndpoint,
			}, nil
		}
		return endpoints.DefaultResolver().EndpointFor(service, region, optFns...)
	}

	// Fetch base credentials
	baseConfig := &aws.Config{
		Region:                        aws.String(config.Region),
		EndpointResolver:              endpoints.ResolverFunc(customResolverFn),
		CredentialsChainVerboseErrors: aws.Bool(true),
	}

	if config.CredsEndpoint != "" {
		creds := endpointcreds.NewCredentialsClient(*baseConfig, request.Handlers{}, config.CredsEndpoint,
			func(provider *endpointcreds.Provider) {
				provider.ExpiryWindow = 5 * time.Minute
			})
		baseConfig.Credentials = creds
	}

	sess, err := session.NewSession(baseConfig)
	if err != nil {
		return nil, err
	}

	var svcSess = sess
	var svcConfig = baseConfig
	eksRole := os.Getenv("EKS_POD_EXECUTION_ROLE")
	if eksRole != "" {
		logrus.Debugf("[cloudwatch %d] Fetching EKS pod credentials.\n", config.PluginInstanceID)
		eksConfig := &aws.Config{}
		creds := stscreds.NewCredentials(svcSess, eksRole)
		eksConfig.Credentials = creds
		eksConfig.Region = aws.String(config.Region)
		svcConfig = eksConfig

		svcSess, err = session.NewSession(svcConfig)
		if err != nil {
			return nil, err
		}
	}

	if config.RoleARN != "" {
		logrus.Debugf("[cloudwatch %d] Fetching credentials for %s\n", config.PluginInstanceID, config.RoleARN)
		stsConfig := &aws.Config{}
		creds := stscreds.NewCredentials(svcSess, config.RoleARN, func(p *stscreds.AssumeRoleProvider) {
			if config.ExternalID != "" {
				p.ExternalID = aws.String(config.ExternalID)
			}
		})
		stsConfig.Credentials = creds
		stsConfig.Region = aws.String(config.Region)
		svcConfig = stsConfig

		svcSess, err = session.NewSession(svcConfig)
		if err != nil {
			return nil, err
		}
	}

	client := cloudwatchlogs.New(svcSess, svcConfig)
	client.Handlers.Build.PushBackNamed(customUserAgentHandler(config))
	if config.LogFormat != "" {
		client.Handlers.Build.PushBackNamed(LogFormatHandler(config.LogFormat))
	}
	return client, nil
}

// customUserAgentHandler returns an HTTP request handler that sets a custom user agent on all AWS requests
func customUserAgentHandler(config OutputPluginConfig) request.NamedHandler {
	const userAgentHeader = "User-Agent"

	baseUserAgent := linuxBaseUserAgent
	if runtime.GOOS == "windows" {
		baseUserAgent = windowsBaseUserAgent
	}

	return request.NamedHandler{
		Name: "ECSLocalEndpointsAgentHandler",
		Fn: func(r *request.Request) {
			currentAgent := r.HTTPRequest.Header.Get(userAgentHeader)
			if config.ExtraUserAgent != "" {
				r.HTTPRequest.Header.Set(userAgentHeader,
					fmt.Sprintf("%s-%s (%s) %s", baseUserAgent, config.ExtraUserAgent, runtime.GOOS, currentAgent))
			} else {
				r.HTTPRequest.Header.Set(userAgentHeader,
					fmt.Sprintf("%s (%s) %s", baseUserAgent, runtime.GOOS, currentAgent))
			}
		},
	}
}

// AddEvent accepts a record and adds it to the buffer for its stream, flushing the buffer if it is full.
// The return value is one of: FLB_OK, FLB_RETRY.
// API errors lead to FLB_RETRY; all other errors are logged, the record is discarded, and FLB_OK is returned.
func (output *OutputPlugin) AddEvent(e *Event) int {
	// Step 1: convert the Event data to strings, and check for a log key.
	data, err := output.processRecord(e)
	if err != nil {
		logrus.Errorf("[cloudwatch %d] %v\n", output.PluginInstanceID, err)
		// discard this single bad record and let the batch continue
		return fluentbit.FLB_OK
	}

	// Step 2. Make sure the Event data isn't empty.
	eventString := logString(data)
	if len(eventString) == 0 {
		logrus.Debugf("[cloudwatch %d] Discarding an event from publishing as it is empty\n", output.PluginInstanceID)
		// discard this single empty record and let the batch continue
		return fluentbit.FLB_OK
	}

	// Step 3. Extract the Task Metadata if applicable.
	if output.runningInECS && output.ecsMetadata.TaskID == "" {
		err := output.getECSMetadata()
		if err != nil {
			logrus.Errorf("[cloudwatch %d] Failed to get ECS Task Metadata with error: %v\n", output.PluginInstanceID, err)
			return fluentbit.FLB_RETRY
		}
	}

	// Step 4. Assign a log group and log stream name to the Event.
	output.setGroupStreamNames(e)

	// Step 5. Create a missing log group for this Event.
	if _, ok := output.groups[e.group]; !ok {
		logrus.Debugf("[cloudwatch %d] Finding log group: %s", output.PluginInstanceID, e.group)

		if err := output.createLogGroup(e); err != nil {
			logrus.Error(err)
			return fluentbit.FLB_RETRY
		}

		output.groups[e.group] = struct{}{}
	}

	// Step 6. Create or retrieve an existing log stream for this Event.
	stream, err := output.getLogStream(e)
	if err != nil {
		logrus.Errorf("[cloudwatch %d] %v\n", output.PluginInstanceID, err)
		// an error means that the log stream was not created; this is retryable
		return fluentbit.FLB_RETRY
	}

	// Step 7. Check batch limits and flush the buffer if any of them would be exceeded by this log entry.
	countLimit := len(stream.logEvents) == maximumLogEventsPerPut
	sizeLimit := (stream.currentByteLength + cloudwatchLen(eventString)) >= maximumBytesPerPut
	spanLimit := stream.logBatchSpan(e.TS) >= maximumTimeSpanPerPut
	if countLimit || sizeLimit || spanLimit {
		err = output.putLogEvents(stream)
		if err != nil {
			logrus.Errorf("[cloudwatch %d] %v\n", output.PluginInstanceID, err)
			// send failures are retryable
			return fluentbit.FLB_RETRY
		}
	}

	// Step 8. Add this event to the running tally.
	stream.logEvents = append(stream.logEvents, &cloudwatchlogs.InputLogEvent{
		Message:   aws.String(eventString),
		Timestamp: aws.Int64(e.TS.UnixNano() / 1e6), // CloudWatch uses milliseconds since epoch
	})
	stream.currentByteLength += cloudwatchLen(eventString)
	if stream.currentBatchStart == nil || stream.currentBatchStart.After(e.TS) {
		stream.currentBatchStart = &e.TS
	}
	if stream.currentBatchEnd == nil || stream.currentBatchEnd.Before(e.TS) {
		stream.currentBatchEnd = &e.TS
	}

	return fluentbit.FLB_OK
}

// This plugin tracks CW log streams.
// We periodically delete any streams that haven't been written to in a while,
// because each stream incurs some memory for its buffer of log events
// (which would be empty for an unused stream).
func (output *OutputPlugin) cleanUpExpiredLogStreams() {
	if output.nextLogStreamCleanUpCheckTime.Before(time.Now()) {
		logrus.Debugf("[cloudwatch %d] Checking for expired log streams", output.PluginInstanceID)

		for name, stream := range output.streams {
			if stream.isExpired() {
				logrus.Debugf("[cloudwatch %d] Removing internal buffer for log stream %s in group %s; the stream has not been written to for %s",
					output.PluginInstanceID, stream.logStreamName, stream.logGroupName, logStreamInactivityTimeout.String())
				delete(output.streams, name)
			}
		}
		output.nextLogStreamCleanUpCheckTime = time.Now().Add(logStreamInactivityCheckInterval)
	}
}

func (err *streamDoesntExistError) Error() string {
	return fmt.Sprintf("error: stream %s doesn't exist in log group %s", err.streamName, err.groupName)
}

func (output *OutputPlugin) getLogStream(e *Event) (*logStream, error) {
	stream, ok := output.streams[e.group+e.stream]
	if !ok {
		// assume the stream exists
		stream, err := output.existingLogStream(e)
		if err != nil {
			// if it doesn't then create it
			if _, ok := err.(*streamDoesntExistError); ok {
				return output.createStream(e)
			}
		}
		return stream, err
	}
	return stream, nil
}

func (output *OutputPlugin) existingLogStream(e *Event) (*logStream, error) {
	var nextToken *string
	var stream *logStream

	for stream == nil {
		resp, err := output.describeLogStreams(e, nextToken)
		if err != nil {
			return nil, err
		}

		for _, result := range resp.LogStreams {
			if aws.StringValue(result.LogStreamName) == e.stream {
				stream = &logStream{
					logGroupName:      e.group,
					logStreamName:     e.stream,
					logEvents:         make([]*cloudwatchlogs.InputLogEvent, 0, maximumLogEventsPerPut),
					nextSequenceToken: result.UploadSequenceToken,
				}
				output.streams[e.group+e.stream] = stream

				logrus.Debugf("[cloudwatch %d] Initializing internal buffer for existing log stream %s\n", output.PluginInstanceID, e.stream)
				stream.updateExpiration() // initialize

				break
			}
		}

		if stream == nil && resp.NextToken == nil {
			logrus.Infof("[cloudwatch %d] Log stream %s does not exist in log group %s", output.PluginInstanceID, e.stream, e.group)
			return nil, &streamDoesntExistError{
				streamName: e.stream,
				groupName:  e.group,
			}
		}

		nextToken = resp.NextToken
	}
	return stream, nil
}

func (output *OutputPlugin) describeLogStreams(e *Event, nextToken *string) (*cloudwatchlogs.DescribeLogStreamsOutput, error) {
	output.timer.Check()
	resp, err := output.client.DescribeLogStreams(&cloudwatchlogs.DescribeLogStreamsInput{
		LogGroupName:        aws.String(e.group),
		LogStreamNamePrefix: aws.String(e.stream),
		NextToken:           nextToken,
	})

	if err != nil {
		output.timer.Start()
		return nil, err
	}
	output.timer.Reset()

	return resp, err
}

// setGroupStreamNames adds the log group and log stream names to the event struct.
// This happens by parsing (any) template data in either configured name.
func (output *OutputPlugin) setGroupStreamNames(e *Event) {
	// This happens here to avoid running Split more than once per log Event.
	logTagSplit := strings.SplitN(e.Tag, ".", 10)
	s := &sanitizer{sanitize: sanitizeGroup, buf: output.bufferPool.Get()}

	if _, err := parseDataMapTags(e, logTagSplit, output.logGroupName, output.ecsMetadata, output.uuid, s); err != nil {
		e.group = output.defaultLogGroupName
		logrus.Errorf("[cloudwatch %d] parsing log_group_name template '%s' "+
			"(using value of default_log_group_name instead): %v",
			output.PluginInstanceID, output.logGroupName.String, err)
	} else if e.group = s.buf.String(); len(e.group) == 0 {
		e.group = output.defaultLogGroupName
	} else if len(e.group) > maxGroupStreamLength {
		e.group = e.group[:maxGroupStreamLength]
	}

	if output.logStreamPrefix != "" {
		e.stream = output.logStreamPrefix + e.Tag
		output.bufferPool.Put(s.buf)

		return
	}

	s.sanitize = sanitizeStream
	s.buf.Reset()

	if _, err := parseDataMapTags(e, logTagSplit, output.logStreamName, output.ecsMetadata, output.uuid, s); err != nil {
		e.stream = output.defaultLogStreamName
		logrus.Errorf("[cloudwatch %d] parsing log_stream_name template '%s': %v",
			output.PluginInstanceID, output.logStreamName.String, err)
	} else if e.stream = s.buf.String(); len(e.stream) == 0 {
		e.stream = output.defaultLogStreamName
	} else if len(e.stream) > maxGroupStreamLength {
		e.stream = e.stream[:maxGroupStreamLength]
	}

	output.bufferPool.Put(s.buf)
}

func (output *OutputPlugin) createStream(e *Event) (*logStream, error) {
	if !output.autoCreateStream {
		return nil, fmt.Errorf("error: attempted to create log stream %s in log group %s, but auto_create_stream is disabled", e.stream, e.group)
	}
	output.timer.Check()
	_, err := output.client.CreateLogStream(&cloudwatchlogs.CreateLogStreamInput{
		LogGroupName:  aws.String(e.group),
		LogStreamName: aws.String(e.stream),
	})

	if err != nil {
		output.timer.Start()
		return nil, err
	}
	output.timer.Reset()

	stream := &logStream{
		logStreamName:     e.stream,
		logGroupName:      e.group,
		logEvents:         make([]*cloudwatchlogs.InputLogEvent, 0, maximumLogEventsPerPut),
		nextSequenceToken: nil, // sequence token not required for a new log stream
	}
	output.streams[e.group+e.stream] = stream
	stream.updateExpiration() // initialize
	logrus.Infof("[cloudwatch %d] Created log stream %s in group %s", output.PluginInstanceID, e.stream, e.group)

	return stream, nil
}

func (output *OutputPlugin) createLogGroup(e *Event) error {
	if !output.autoCreateGroup {
		return nil
	}

	_, err := output.client.CreateLogGroup(&cloudwatchlogs.CreateLogGroupInput{
		LogGroupName: aws.String(e.group),
		Tags:         output.logGroupTags,
	})
	if err == nil {
		logrus.Infof("[cloudwatch %d] Created log group %s\n", output.PluginInstanceID, e.group)
		return output.setLogGroupRetention(e.group)
	}

	if awsErr, ok := err.(awserr.Error); !ok ||
		awsErr.Code() != cloudwatchlogs.ErrCodeResourceAlreadyExistsException {
		return err
	}

	logrus.Infof("[cloudwatch %d] Log group %s already exists\n", output.PluginInstanceID, e.group)
	return output.setLogGroupRetention(e.group)
}

func (output *OutputPlugin) setLogGroupRetention(name string) error {
	if output.logGroupRetention < 1 {
		return nil
	}

	_, err := output.client.PutRetentionPolicy(&cloudwatchlogs.PutRetentionPolicyInput{
		LogGroupName:    aws.String(name),
		RetentionInDays: aws.Int64(output.logGroupRetention),
	})
	if err != nil {
		return err
	}

	logrus.Infof("[cloudwatch %d] Set retention policy on log group %s to %dd\n", output.PluginInstanceID, name, output.logGroupRetention)

	return nil
}

// logString converts the byte slice to a string,
// trimming leading and trailing whitespace.
func logString(record []byte) string {
	return strings.TrimSpace(string(record))
}

func (output *OutputPlugin) processRecord(e *Event) ([]byte, error) {
	var err error
	e.Record, err = plugins.DecodeMap(e.Record)
	if err != nil {
		logrus.Debugf("[cloudwatch %d] Failed to decode record: %v\n", output.PluginInstanceID, e.Record)
		return nil, err
	}

	var json = jsoniter.ConfigCompatibleWithStandardLibrary
	var data []byte

	if output.logKey != "" {
		// Use distinct names here: a `log, err :=` declaration would shadow
		// the outer err, silently dropping EncodeLogKey failures.
		logVal, logErr := plugins.LogKey(e.Record, output.logKey)
		if logErr != nil {
			return nil, logErr
		}

		data, err = plugins.EncodeLogKey(logVal)
	} else {
		data, err = json.Marshal(e.Record)
	}

	if err != nil {
		logrus.Debugf("[cloudwatch %d] Failed to marshal record: %v\nLog Key: %s\n", output.PluginInstanceID, e.Record, output.logKey)
		return nil, err
	}

	// append newline
	data = append(data, []byte("\n")...)

	if (len(data) + perEventBytes) > maximumBytesPerEvent {
		logrus.Warnf("[cloudwatch %d] Found record with %d bytes, truncating to 256KB, logGroup=%s, stream=%s\n",
			output.PluginInstanceID, len(data)+perEventBytes, e.group, e.stream)

		/*
		 * Find last byte of trailing unicode character via efficient byte scanning
		 * Avoids corrupting rune
		 *
		 * A unicode character may be composed of 1 - 4 bytes
		 *   bytes [11, 01, 00]xx xxxx: represent the first byte in a unicode character
		 *   byte 10xx xxxx: represent all bytes following the first byte.
		 *
		 * nextByte is the first byte that is truncated,
		 * so nextByte should be the start of a new unicode character in first byte format.
		 */
		nextByte := (maximumBytesPerEvent - len(truncatedSuffix) - perEventBytes)
		for (data[nextByte]&0xc0 == 0x80) && nextByte > 0 {
			nextByte--
		}

		data = data[:nextByte]
		data = append(data, []byte(truncatedSuffix)...)
	}

	return data, nil
}

func (output *OutputPlugin) getECSMetadata() error {
	ecsTaskMetadataEndpointV3 := os.Getenv("ECS_CONTAINER_METADATA_URI")
	var metadata TaskMetadata
	res, err := http.Get(fmt.Sprintf("%s/task", ecsTaskMetadataEndpointV3))
	if err != nil {
		return fmt.Errorf("Failed to get endpoint response: %w", err)
	}
	defer res.Body.Close() // close the body even if reading it fails
	response, err := ioutil.ReadAll(res.Body)
	if err != nil {
		return fmt.Errorf("Failed to read response '%v' from URL: %w", res, err)
	}

	err = json.Unmarshal(response, &metadata)
	if err != nil {
		return fmt.Errorf("Failed to unmarshal ECS metadata '%+v': %w", metadata, err)
	}

	arnInfo, err := arn.Parse(metadata.TaskARN)
	if err != nil {
		return fmt.Errorf("Failed to parse ECS TaskARN '%s': %w", metadata.TaskARN, err)
	}
	resourceID := strings.Split(arnInfo.Resource, "/")
	taskID := resourceID[len(resourceID)-1]
	metadata.TaskID = taskID

	output.ecsMetadata = metadata
	return nil
}

// Flush sends the current buffer of records.
func (output *OutputPlugin) Flush() error {
	logrus.Debugf("[cloudwatch %d] Flush() Called", output.PluginInstanceID)

	for _, stream := range output.streams {
		if err := output.flushStream(stream); err != nil {
			return err
		}
	}

	return nil
}

func (output *OutputPlugin) flushStream(stream *logStream) error {
	output.cleanUpExpiredLogStreams() // will periodically clean up, otherwise is no-op
	return output.putLogEvents(stream)
}

func (output *OutputPlugin) putLogEvents(stream *logStream) error {
	// return in case of empty logEvents
	if len(stream.logEvents) == 0 {
		return nil
	}

	output.timer.Check()
	stream.updateExpiration()

	// Log events in a single PutLogEvents request must be in chronological order.
	sort.SliceStable(stream.logEvents, func(i, j int) bool {
		return aws.Int64Value(stream.logEvents[i].Timestamp) < aws.Int64Value(stream.logEvents[j].Timestamp)
	})
	response, err := output.client.PutLogEvents(&cloudwatchlogs.PutLogEventsInput{
		LogEvents:     stream.logEvents,
		LogGroupName:  aws.String(stream.logGroupName),
		LogStreamName: aws.String(stream.logStreamName),
		SequenceToken: stream.nextSequenceToken,
	})
	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok {
			if awsErr.Code() == cloudwatchlogs.ErrCodeDataAlreadyAcceptedException {
				// already submitted, just grab the correct sequence token
				parts := strings.Split(awsErr.Message(), " ")
				stream.nextSequenceToken = &parts[len(parts)-1]
				stream.logEvents = stream.logEvents[:0]
				stream.currentByteLength = 0
				stream.currentBatchStart = nil
				stream.currentBatchEnd = nil
				logrus.Infof("[cloudwatch %d] Encountered error %v; data already accepted, ignoring error\n", output.PluginInstanceID, awsErr)
				return nil
			} else if awsErr.Code() == cloudwatchlogs.ErrCodeInvalidSequenceTokenException {
				// sequence code is bad, grab the correct one and retry
				parts := strings.Split(awsErr.Message(), " ")
				nextSequenceToken := &parts[len(parts)-1]
				// If this is a new stream then the error will end like "The next expected sequenceToken is: null" and sequenceToken should be nil
				if strings.HasPrefix(*nextSequenceToken, "null") {
					nextSequenceToken = nil
				}
				stream.nextSequenceToken = nextSequenceToken

				return output.putLogEvents(stream)
			} else if awsErr.Code() == cloudwatchlogs.ErrCodeResourceNotFoundException {
				// a log group or a log stream should be re-created after it is deleted and then retry
				logrus.Errorf("[cloudwatch %d] Encountered error %v; detailed information: %s\n", output.PluginInstanceID, awsErr, awsErr.Message())
				if strings.Contains(awsErr.Message(), "group") {
					if err := output.createLogGroup(&Event{group: stream.logGroupName}); err != nil {
						logrus.Errorf("[cloudwatch %d] Encountered error %v\n", output.PluginInstanceID, err)
						return err
					}
				} else if strings.Contains(awsErr.Message(), "stream") {
					if _, err := output.createStream(&Event{group: stream.logGroupName, stream: stream.logStreamName}); err != nil {
						logrus.Errorf("[cloudwatch %d] Encountered error %v\n", output.PluginInstanceID, err)
						return err
					}
				}

				return fmt.Errorf("a log group/stream did not exist and was re-created; will retry PutLogEvents on next flush")
			} else {
				output.timer.Start()
				return err
			}
		} else {
			return err
		}
	}
	output.processRejectedEventsInfo(response)
	output.timer.Reset()
	logrus.Debugf("[cloudwatch %d] Sent %d events to CloudWatch for stream '%s' in group '%s'",
		output.PluginInstanceID, len(stream.logEvents), stream.logStreamName, stream.logGroupName)

	stream.nextSequenceToken = response.NextSequenceToken
	stream.logEvents = stream.logEvents[:0]
	stream.currentByteLength = 0
	stream.currentBatchStart = nil
	stream.currentBatchEnd = nil

	return nil
}

func (output *OutputPlugin) processRejectedEventsInfo(response *cloudwatchlogs.PutLogEventsOutput) {
	if response.RejectedLogEventsInfo != nil {
		if response.RejectedLogEventsInfo.ExpiredLogEventEndIndex != nil {
			logrus.Warnf("[cloudwatch %d] %d log events were marked as expired by CloudWatch\n", output.PluginInstanceID, aws.Int64Value(response.RejectedLogEventsInfo.ExpiredLogEventEndIndex))
		}
		if response.RejectedLogEventsInfo.TooNewLogEventStartIndex != nil {
			logrus.Warnf("[cloudwatch %d] %d log events were marked as too new by CloudWatch\n", output.PluginInstanceID, aws.Int64Value(response.RejectedLogEventsInfo.TooNewLogEventStartIndex))
		}
		if response.RejectedLogEventsInfo.TooOldLogEventEndIndex != nil {
			logrus.Warnf("[cloudwatch %d] %d log events were marked as too old by CloudWatch\n", output.PluginInstanceID, aws.Int64Value(response.RejectedLogEventsInfo.TooOldLogEventEndIndex))
		}
	}
}

// cloudwatchLen counts the effective number of bytes in the string after
// UTF-8 normalization, plus the per-event overhead. UTF-8 normalization
// replaces bytes that do not constitute valid UTF-8 encoded Unicode
// codepoints with the Unicode replacement codepoint U+FFFD (a 3-byte UTF-8
// sequence, represented in Go as utf8.RuneError).
// This works because a Go range loop parses the string as UTF-8 runes.
// Copied from the awslogs driver: https://github.com/moby/moby/commit/1e8ef386279e2e28aff199047e798fad660efbdd
func cloudwatchLen(event string) int {
	effectiveBytes := perEventBytes
	for _, r := range event {
		effectiveBytes += utf8.RuneLen(r)
	}
	return effectiveBytes
}

func (stream *logStream) logBatchSpan(timestamp time.Time) time.Duration {
	if stream.currentBatchStart == nil || stream.currentBatchEnd == nil {
		return 0
	}

	if stream.currentBatchStart.After(timestamp) {
		return stream.currentBatchEnd.Sub(timestamp)
	} else if stream.currentBatchEnd.Before(timestamp) {
		return timestamp.Sub(*stream.currentBatchStart)
	}

	return stream.currentBatchEnd.Sub(*stream.currentBatchStart)
}


================================================
FILE: cloudwatch/cloudwatch_test.go
================================================
// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"). You may
// not use this file except in compliance with the License. A copy of the
// License is located at
//
//	http://aws.amazon.com/apache2.0/
//
// or in the "license" file accompanying this file. This file is distributed
// on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
// express or implied. See the License for the specific language governing
// permissions and limitations under the License.

package cloudwatch

import (
	"bytes"
	"errors"
	"fmt"
	"os"
	"strings"
	"testing"
	"time"

	"github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/cloudwatch/mock_cloudwatch"
	"github.com/aws/amazon-kinesis-firehose-for-fluent-bit/plugins"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
	fluentbit "github.com/fluent/fluent-bit-go/output"
	"github.com/golang/mock/gomock"
	"github.com/sirupsen/logrus"
	"github.com/stretchr/testify/assert"
)

const (
	testRegion          = "us-west-2"
	testLogGroup        = "my-logs"
	testLogStreamPrefix = "my-prefix"
	testTag             = "tag"
	testNextToken       = "next-token"
	testSequenceToken   = "sequence-token"
)

type configTest struct {
	name          string
	config        OutputPluginConfig
	isValidConfig bool
	expectedError string
}

var (
	configValidationTestCases = []configTest{
		{
			name: "ValidConfiguration",
			config: OutputPluginConfig{
				Region:          testRegion,
				LogGroupName:    testLogGroup,
				LogStreamPrefix: testLogStreamPrefix,
			},
			isValidConfig: true,
			expectedError: "",
		},
		{
			name: "MissingRegion",
			config: OutputPluginConfig{
				LogGroupName:    testLogGroup,
				LogStreamPrefix: testLogStreamPrefix,
			},
			isValidConfig: false,
			expectedError: "region is a required parameter",
		},
		{
			name: "MissingLogGroup",
			config: OutputPluginConfig{
				Region:          testRegion,
				LogStreamPrefix: testLogStreamPrefix,
			},
			isValidConfig: false,
			expectedError: "log_group_name is a required parameter",
		},
		{
			name: "OnlyLogStreamNameProvided",
			config: OutputPluginConfig{
				Region:        testRegion,
				LogGroupName:  testLogGroup,
				LogStreamName: "testLogStream",
			},
			isValidConfig: true,
		},
		{
			name: "OnlyLogStreamPrefixProvided",
			config: OutputPluginConfig{
				Region:          testRegion,
				LogGroupName:    testLogGroup,
				LogStreamPrefix: testLogStreamPrefix,
			},
			isValidConfig: true,
		},
		{
			name: "LogStreamAndPrefixBothProvided",
			config: OutputPluginConfig{
				Region:          testRegion,
				LogGroupName:    testLogGroup,
				LogStreamName:   "testLogStream",
				LogStreamPrefix: testLogStreamPrefix,
			},
			isValidConfig: false,
			expectedError: "either log_stream_name or log_stream_prefix can be configured. They cannot be provided together",
		},
		{
			name: "LogStreamAndPrefixBothMissing",
			config: OutputPluginConfig{
				Region:       testRegion,
				LogGroupName: testLogGroup,
			},
			isValidConfig: false,
			expectedError: "log_stream_name or log_stream_prefix is required",
		},
	}
)

// testTemplate is a test helper that builds a log stream/log group name template from a string.
func testTemplate(template string) *fastTemplate {
	t, _ := newTemplate(template)
	return t
}

func TestAddEvent(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
}

func TestTruncateLargeLogEvent(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	record := map[interface{}]interface{}{
		"somekey": make([]byte, 256*1024+100),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	actualData, err := output.processRecord(&Event{TS: time.Now(), Tag: testTag, Record: record})

	if err != nil {
		logrus.Debugf("[cloudwatch %d] Failed to process record: %v\n", output.PluginInstanceID, record)
	}

	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	assert.Len(t, actualData, 256*1024-26, "Expected length is 256*1024-26")
}

func TestTruncateLargeLogEventWithSpecialCharacterOneTrailingFragments(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	var b bytes.Buffer
	b.WriteString(strings.Repeat("x", 262095))
	b.WriteString("𒁈zrgchimqigtm")

	record := map[interface{}]interface{}{
		"key": b.String(),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	actualData, err := output.processRecord(&Event{TS: time.Now(), Tag: testTag, Record: record})

	if err != nil {
		logrus.Debugf("[cloudwatch %d] Failed to process record: %v\n", output.PluginInstanceID, record)
	}

	/* invalid characters will be expanded when sent as request */
	actualDataString := logString(actualData)
	actualDataString = fmt.Sprintf("%q", actualDataString) /* converts: <invalid> -> \x<hex> */

	exampleWorkingData := "{\"key\":\"x\"}"
	addedLength := len(fmt.Sprintf("%q", exampleWorkingData)) - len(exampleWorkingData)

	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	assert.LessOrEqual(t, len(actualDataString), 256*1024-26+addedLength, "Expected length to be at most 256*1024-26 plus the escaping overhead")
}

func TestTruncateLargeLogEventWithSpecialCharacterTwoTrailingFragments(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)
	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	var b bytes.Buffer
	b.WriteString(strings.Repeat("x", 262094))
	b.WriteString("𒁈zrgchimqigtm")

	record := map[interface{}]interface{}{
		"key": b.String(),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	actualData, err := output.processRecord(&Event{TS: time.Now(), Tag: testTag, Record: record})

	if err != nil {
		logrus.Debugf("[cloudwatch %d] Failed to process record: %v\n", output.PluginInstanceID, record)
	}

	/* invalid characters will be expanded when sent as request */
	actualDataString := logString(actualData)
	actualDataString = fmt.Sprintf("%q", actualDataString) /* converts: <invalid> -> \x<hex> */

	exampleWorkingData := "{\"key\":\"x\"}"
	addedLength := len(fmt.Sprintf("%q", exampleWorkingData)) - len(exampleWorkingData)

	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	assert.LessOrEqual(t, len(actualDataString), 256*1024-26+addedLength, "Expected length to be at most 256*1024-26 plus the escaping overhead")
}

func TestTruncateLargeLogEventWithSpecialCharacterThreeTrailingFragments(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)
	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	var b bytes.Buffer
	b.WriteString(strings.Repeat("x", 262093))
	b.WriteString("𒁈zrgchimqigtm")

	record := map[interface{}]interface{}{
		"key": b.String(),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	actualData, err := output.processRecord(&Event{TS: time.Now(), Tag: testTag, Record: record})

	if err != nil {
		logrus.Debugf("[cloudwatch %d] Failed to process record: %v\n", output.PluginInstanceID, record)
	}

	/* invalid characters will be expanded when sent as request */
	actualDataString := logString(actualData)
	actualDataString = fmt.Sprintf("%q", actualDataString) /* converts: <invalid> -> \x<hex> */

	exampleWorkingData := "{\"key\":\"x\"}"
	addedLength := len(fmt.Sprintf("%q", exampleWorkingData)) - len(exampleWorkingData)

	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	assert.LessOrEqual(t, len(actualDataString), 256*1024-26+addedLength, "Expected length to be at most 256*1024-26 plus the escaping overhead")
}

func TestAddEventCreateLogGroup(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().CreateLogGroup(gomock.Any()).Return(&cloudwatchlogs.CreateLogGroupOutput{}, nil),
		mockCloudWatch.EXPECT().PutRetentionPolicy(gomock.Any()).Return(&cloudwatchlogs.PutRetentionPolicyOutput{}, nil),
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
	)

	output := OutputPlugin{
		logGroupName:      testTemplate(testLogGroup),
		logStreamPrefix:   testLogStreamPrefix,
		client:            mockCloudWatch,
		timer:             setupTimeout(),
		streams:           make(map[string]*logStream),
		groups:            make(map[string]struct{}),
		logGroupRetention: 14,
		autoCreateGroup:   true,
		autoCreateStream:  true,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
}

// TestAddEventExistingStream covers an existing log stream that takes two DescribeLogStreams API calls to find.
func TestAddEventExistingStream(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamNamePrefix), testLogStreamPrefix+testTag, "Expected log stream name prefix to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{
			LogStreams: []*cloudwatchlogs.LogStream{
				{
					LogStreamName: aws.String("wrong stream"),
				},
			},
			NextToken: aws.String(testNextToken),
		}, nil),
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamNamePrefix), testLogStreamPrefix+testTag, "Expected log stream name prefix to match")
			assert.Equal(t, aws.StringValue(input.NextToken), testNextToken, "Expected next token to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{
			LogStreams: []*cloudwatchlogs.LogStream{
				{
					LogStreamName: aws.String(testLogStreamPrefix + testTag),
				},
			},
			NextToken: aws.String(testNextToken),
		}, nil),
	)

	output := OutputPlugin{
		logGroupName:    testTemplate(testLogGroup),
		logStreamPrefix: testLogStreamPrefix,
		client:          mockCloudWatch,
		timer:           setupTimeout(),
		streams:         make(map[string]*logStream),
		groups:          map[string]struct{}{testLogGroup: {}},
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
}

func TestAddEventDescribeStreamsException(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
	}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeResourceNotFoundException, "The specified log group does not exist.", fmt.Errorf("API Error")))

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_RETRY, "Expected return code to be FLB_RETRY")
}

func TestAddEventAutoCreateDisabled(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
		assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
	}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil)
	mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
	}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil).Times(0)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: false,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_RETRY, "Expected return code to be FLB_RETRY")
}

func TestAddEventExistingStreamNotFound(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamNamePrefix), testLogStreamPrefix+testTag, "Expected log stream name prefix to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{
			LogStreams: []*cloudwatchlogs.LogStream{
				{
					LogStreamName: aws.String("wrong stream"),
				},
			},
			NextToken: aws.String(testNextToken),
		}, nil),
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamNamePrefix), testLogStreamPrefix+testTag, "Expected log stream name prefix to match")
			assert.Equal(t, aws.StringValue(input.NextToken), testNextToken, "Expected next token to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{
			LogStreams: []*cloudwatchlogs.LogStream{
				{
					LogStreamName: aws.String("another wrong stream"),
				},
			},
		}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeResourceAlreadyExistsException, "Log Stream already exists", fmt.Errorf("API Error"))),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_RETRY, "Expected return code to be FLB_RETRY")
}

func TestAddEventEmptyRecord(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	output := OutputPlugin{
		logGroupName:    testTemplate(testLogGroup),
		logStreamPrefix: testLogStreamPrefix,
		client:          mockCloudWatch,
		timer:           setupTimeout(),
		streams:         make(map[string]*logStream),
		logKey:          "somekey",
		groups:          map[string]struct{}{testLogGroup: {}},
	}

	record := map[interface{}]interface{}{
		"somekey": []byte(""),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
}

func TestAddEventAndFlush(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
		mockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.PutLogEventsOutput{
			NextSequenceToken: aws.String("token"),
		}, nil),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	output.Flush()
}

func TestPutLogEvents(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	output := OutputPlugin{
		logGroupName:    testTemplate(testLogGroup),
		logStreamPrefix: testLogStreamPrefix,
		client:          mockCloudWatch,
		timer:           setupTimeout(),
		streams:         make(map[string]*logStream),
		logKey:          "somekey",
		groups:          map[string]struct{}{testLogGroup: {}},
	}

	stream := &logStream{}
	err := output.putLogEvents(stream)
	assert.Nil(t, err)
}

func TestSetGroupStreamNames(t *testing.T) {
	record := map[interface{}]interface{}{
		"ident": "cron",
		"msg":   "my cool log message",
		"details": map[interface{}]interface{}{
			"region": "us-west-2",
			"az":     "a",
		},
	}

	e := &Event{Tag: "syslog.0", Record: record}

	// Test against a non-template name.
	output := OutputPlugin{
		logStreamName:        testTemplate("/aws/ecs/test-stream-name"),
		logGroupName:         testTemplate(""),
		defaultLogGroupName:  "fluentbit-default",
		defaultLogStreamName: "/fluentbit-default",
	}

	output.setGroupStreamNames(e)
	assert.Equal(t, "/aws/ecs/test-stream-name", e.stream,
		"The provided stream name must be returned exactly, without modifications.")

	output.logStreamName = testTemplate("")
	output.setGroupStreamNames(e)
	assert.Equal(t, output.defaultLogStreamName, e.stream,
		"The default stream name must be set when no stream name is provided.")

	// Test against a simple log stream prefix.
	output.logStreamPrefix = "/aws/ecs/test-stream-prefix/"
	output.setGroupStreamNames(e)
	assert.Equal(t, output.logStreamPrefix+"syslog.0", e.stream,
		"The provided stream prefix must be prefixed to the provided tag name.")

	// Test replacing items from template variables.
	output.logStreamPrefix = ""
	output.logStreamName = testTemplate("/aws/ecs/$(tag[0])/$(tag[1])/$(details['region'])/$(details['az'])/$(ident)")
	output.setGroupStreamNames(e)
	assert.Equal(t, "/aws/ecs/syslog/0/us-west-2/a/cron", e.stream,
		"The stream name template was not correctly parsed.")
	assert.Equal(t, output.defaultLogGroupName, e.group,
		"The default log group name must be set when no log group is provided.")

	// Test another bad template with a missing closing bracket (]).
	output.logStreamName = testTemplate("/aws/ecs/$(details['region')")
	output.setGroupStreamNames(e)
	assert.Equal(t, "/aws/ecs/['region'", e.stream,
		"The provided stream name must match when the tag is incomplete.")

	// Make sure we get default group and stream names when their variables cannot be parsed.
	output.logStreamName = testTemplate("/aws/ecs/$(details['activity'])")
	output.logGroupName = testTemplate("$(details['activity'])")
	output.setGroupStreamNames(e)
	assert.Equal(t, output.defaultLogStreamName, e.stream,
		"The default stream name must return when elements are missing.")
	assert.Equal(t, output.defaultLogGroupName, e.group,
		"The default group name must return when elements are missing.")

	// Test that log stream and log group names get truncated to the maximum allowed.
	b := make([]byte, maxGroupStreamLength*2)
	for i := range b { // make a string twice the max
		b[i] = '_'
	}

	ident := string(b)
	assert.True(t, len(ident) > maxGroupStreamLength, "test string creation failed")

	e.Record = map[interface{}]interface{}{"ident": ident} // set the long string into our record.
	output.logStreamName = testTemplate("/aws/ecs/$(ident)")
	output.logGroupName = testTemplate("/aws/ecs/$(ident)")

	output.setGroupStreamNames(e)
	assert.Equal(t, maxGroupStreamLength, len(e.stream), "the stream name should be truncated to the maximum size")
	assert.Equal(t, maxGroupStreamLength, len(e.group), "the group name should be truncated to the maximum size")
	assert.Equal(t, "/aws/ecs/"+string(b[:maxGroupStreamLength-len("/aws/ecs/")]),
		e.stream, "the stream name was incorrectly truncated")
	assert.Equal(t, "/aws/ecs/"+string(b[:maxGroupStreamLength-len("/aws/ecs/")]),
		e.group, "the group name was incorrectly truncated")
}

func TestAddEventAndFlushDataAlreadyAcceptedException(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
		mockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeDataAlreadyAcceptedException, "Data already accepted; The next expected sequenceToken is: "+testSequenceToken, fmt.Errorf("API Error"))),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	output.Flush()
}

func TestAddEventAndFlushDataInvalidSequenceTokenException(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
		mockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeInvalidSequenceTokenException, "The given sequenceToken is invalid; The next expected sequenceToken is: "+testSequenceToken, fmt.Errorf("API Error"))),
		mockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
			assert.Equal(t, aws.StringValue(input.SequenceToken), testSequenceToken, "Expected sequence token to match response from previous error")
		}).Return(&cloudwatchlogs.PutLogEventsOutput{
			NextSequenceToken: aws.String("token"),
		}, nil),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	output.Flush()
}

func TestAddEventAndFlushDataInvalidSequenceTokenNextNullException(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
		mockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeInvalidSequenceTokenException, "The given sequenceToken is invalid; The next expected sequenceToken is: null", fmt.Errorf("API Error"))),
		mockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
			assert.Nil(t, input.SequenceToken, "Expected sequence token to be nil")
		}).Return(&cloudwatchlogs.PutLogEventsOutput{
			NextSequenceToken: aws.String("token"),
		}, nil),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	output.Flush()
}

func TestAddEventAndDataResourceNotFoundExceptionWithNoLogGroup(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
		mockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeResourceNotFoundException, "The specified log group does not exist.", fmt.Errorf("API Error"))),
		mockCloudWatch.EXPECT().CreateLogGroup(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogGroupInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.CreateLogGroupOutput{}, nil),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
		autoCreateGroup:  true,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	// Flush triggers PutLogEvents which will encounter the ResourceNotFoundException
	err := output.Flush()
	// After creating the missing log group, the plugin returns an error so the batch is retried on the next flush
	assert.Error(t, err, "Expected error after flush when log group is missing")
}

func TestAddEventAndDataResourceNotFoundExceptionWithNoLogStream(t *testing.T) {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
		mockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Do(func(input *cloudwatchlogs.PutLogEventsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(nil, awserr.New(cloudwatchlogs.ErrCodeResourceNotFoundException, "The specified log stream does not exist.", fmt.Errorf("API Error"))),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
	)

	output := OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	retCode := output.AddEvent(&Event{TS: time.Now(), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	// Flush triggers PutLogEvents which will encounter the ResourceNotFoundException
	err := output.Flush()
	// After creating the missing log stream, the plugin returns an error so the batch is retried on the next flush
	assert.Error(t, err, "Expected error after flush when log stream is missing")
}

func TestAddEventAndBatchSpanLimit(t *testing.T) {
	output := setupLimitTestOutput(t, 2)

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	before := time.Now()
	start := before.Add(time.Nanosecond)
	end := start.Add(time.Hour*24 - time.Nanosecond)
	after := start.Add(time.Hour * 24)

	retCode := output.AddEvent(&Event{TS: start, Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")

	retCode = output.AddEvent(&Event{TS: end, Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")

	retCode = output.AddEvent(&Event{TS: before, Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_RETRY, "Expected return code to be FLB_RETRY")

	retCode = output.AddEvent(&Event{TS: after, Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_RETRY, "Expected return code to be FLB_RETRY")
}

func TestAddEventAndBatchSpanLimitOnReverseOrder(t *testing.T) {
	output := setupLimitTestOutput(t, 2)

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	before := time.Now()
	start := before.Add(time.Nanosecond)
	end := start.Add(time.Hour*24 - time.Nanosecond)
	after := start.Add(time.Hour * 24)

	retCode := output.AddEvent(&Event{TS: end, Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")

	retCode = output.AddEvent(&Event{TS: start, Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")

	retCode = output.AddEvent(&Event{TS: before, Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_RETRY, "Expected return code to be FLB_RETRY")

	retCode = output.AddEvent(&Event{TS: after, Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_RETRY, "Expected return code to be FLB_RETRY")
}

func TestAddEventAndEventsCountLimit(t *testing.T) {
	output := setupLimitTestOutput(t, 1)

	record := map[interface{}]interface{}{
		"somekey": []byte("some value"),
	}

	now := time.Now()

	for i := 0; i < 10000; i++ {
		retCode := output.AddEvent(&Event{TS: now, Tag: testTag, Record: record})
		assert.Equal(t, retCode, fluentbit.FLB_OK, fmt.Sprintf("Expected return code to be FLB_OK on iteration %d", i))
	}
	retCode := output.AddEvent(&Event{TS: now, Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_RETRY, "Expected return code to be FLB_RETRY")
}

func TestAddEventAndBatchSizeLimit(t *testing.T) {
	output := setupLimitTestOutput(t, 1)

	record := map[interface{}]interface{}{
		"somekey": []byte(strings.Repeat("some value", 100)),
	}

	now := time.Now()

	for i := 0; i < 104; i++ { // 104 * 10_000 < 1_048_576
		retCode := output.AddEvent(&Event{TS: now, Tag: testTag, Record: record})
		assert.Equal(t, retCode, fluentbit.FLB_OK, "Expected return code to be FLB_OK")
	}

	// 105 * 10_000 > 1_048_576
	retCode := output.AddEvent(&Event{TS: now.Add(time.Hour*24 + time.Nanosecond), Tag: testTag, Record: record})
	assert.Equal(t, retCode, fluentbit.FLB_RETRY, "Expected return code to be FLB_RETRY")
}
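The count and size limits exercised by these tests mirror the documented CloudWatch Logs PutLogEvents quotas: at most 10,000 events per batch, at most 1,048,576 bytes per batch, with 26 bytes of overhead counted per event. A minimal sketch of the fit check, assuming those constants; the helper name `fits` is mine, not part of the plugin:

```go
package main

import "fmt"

const (
	maxEventsPerPut = 10000   // PutLogEvents batch event-count quota
	maxBytesPerPut  = 1048576 // PutLogEvents batch size quota, in bytes
	perEventBytes   = 26      // fixed per-event overhead counted by the API
)

// fits reports whether one more event of msgLen bytes can join a batch
// that already holds count events totalling batchBytes (overhead included).
func fits(batchBytes, count, msgLen int) bool {
	return count+1 <= maxEventsPerPut &&
		batchBytes+msgLen+perEventBytes <= maxBytesPerPut
}

func main() {
	fmt.Println(fits(0, 0, 1000))          // true: empty batch accepts the event
	fmt.Println(fits(1048000, 9999, 1000)) // false: would exceed the byte quota
	fmt.Println(fits(0, 10000, 1))         // false: event-count quota reached
}
```

The plugin additionally enforces the 24-hour batch time span seen in the tests above, which a real fit check would consider alongside these two quotas.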

func setupLimitTestOutput(t *testing.T, times int) OutputPlugin {
	ctrl := gomock.NewController(t)
	mockCloudWatch := mock_cloudwatch.NewMockLogsClient(ctrl)

	gomock.InOrder(
		mockCloudWatch.EXPECT().DescribeLogStreams(gomock.Any()).Do(func(input *cloudwatchlogs.DescribeLogStreamsInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
		}).Return(&cloudwatchlogs.DescribeLogStreamsOutput{}, nil),
		mockCloudWatch.EXPECT().CreateLogStream(gomock.Any()).AnyTimes().Do(func(input *cloudwatchlogs.CreateLogStreamInput) {
			assert.Equal(t, aws.StringValue(input.LogGroupName), testLogGroup, "Expected log group name to match")
			assert.Equal(t, aws.StringValue(input.LogStreamName), testLogStreamPrefix+testTag, "Expected log stream name to match")
		}).Return(&cloudwatchlogs.CreateLogStreamOutput{}, nil),
		mockCloudWatch.EXPECT().PutLogEvents(gomock.Any()).Times(times).Return(nil, errors.New("should fail")),
	)

	return OutputPlugin{
		logGroupName:     testTemplate(testLogGroup),
		logStreamPrefix:  testLogStreamPrefix,
		client:           mockCloudWatch,
		timer:            setupTimeout(),
		streams:          make(map[string]*logStream),
		groups:           map[string]struct{}{testLogGroup: {}},
		autoCreateStream: true,
	}
}

func setupTimeout() *plugins.Timeout {
	timer, _ := plugins.NewTimeout(func(d time.Duration) {
		logrus.Errorf("[cloudwatch] timeout threshold reached: Failed to send logs for %v\n", d)
		logrus.Error("[cloudwatch] Quitting Fluent Bit")
		os.Exit(1)
	})
	return timer
}

func TestValidate(t *testing.T) {
	for _, test := range configValidationTestCases {
		t.Run(test.name, func(t *testing.T) {
			err := test.config.Validate()

			if test.isValidConfig {
				assert.Nil(t, err)
			} else {
				assert.NotNil(t, err)
				assert.Equal(t, err.Error(), test.expectedError)
			}
		})
	}
}


================================================
FILE: cloudwatch/generate_mock.go
================================================
// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"). You may
// not use this file except in compliance with the License. A copy of the
// License is located at
//
//	http://aws.amazon.com/apache2.0/
//
// or in the "license" file accompanying this file. This file is distributed
// on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
// express or implied. See the License for the specific language governing
// permissions and limitations under the License.

package cloudwatch

//go:generate ../scripts/mockgen.sh github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/cloudwatch LogsClient mock_cloudwatch/mock.go


================================================
FILE: cloudwatch/handlers.go
================================================
// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"). You may
// not use this file except in compliance with the License. A copy of the
// License is located at
//
//	http://aws.amazon.com/apache2.0/
//
// or in the "license" file accompanying this file. This file is distributed
// on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
// express or implied. See the License for the specific language governing
// permissions and limitations under the License.

package cloudwatch

import "github.com/aws/aws-sdk-go/aws/request"

const logFormatHeader = "x-amzn-logs-format"

// LogFormatHandler returns an http request handler that sets an HTTP header.
// The header is used to indicate the format of the logs being sent.
func LogFormatHandler(format string) request.NamedHandler {
	return request.NamedHandler{
		Name: "LogFormatHandler",
		Fn: func(req *request.Request) {
			req.HTTPRequest.Header.Set(logFormatHeader, format)
		},
	}
}


================================================
FILE: cloudwatch/handlers_test.go
================================================
// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"). You may
// not use this file except in compliance with the License. A copy of the
// License is located at
//
//	http://aws.amazon.com/apache2.0/
//
// or in the "license" file accompanying this file. This file is distributed
// on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
// express or implied. See the License for the specific language governing
// permissions and limitations under the License.

package cloudwatch

import (
	"net/http"
	"testing"

	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/stretchr/testify/assert"
)

func TestLogFormatHandler(t *testing.T) {
	httpReq, _ := http.NewRequest("POST", "", nil)
	r := &request.Request{
		HTTPRequest: httpReq,
		Body:        nil,
	}
	r.SetBufferBody([]byte{})

	handler := LogFormatHandler("json/emf")
	handler.Fn(r)

	header := r.HTTPRequest.Header.Get(logFormatHeader)
	assert.Equal(t, "json/emf", header)
}


================================================
FILE: cloudwatch/helpers.go
================================================
package cloudwatch

import (
	"fmt"
	"io"
	"strconv"
	"strings"

	"github.com/valyala/bytebufferpool"
	"github.com/valyala/fasttemplate"
)

// Errors returned by the helper procedures.
var (
	ErrNoTagValue     = fmt.Errorf("not enough dots in the tag to satisfy the index position")
	ErrMissingTagName = fmt.Errorf("tag name not found")
	ErrMissingSubName = fmt.Errorf("sub-tag name not found")
)

// newTemplate is the only place you'll find the template start and end tags.
func newTemplate(template string) (*fastTemplate, error) {
	t, err := fasttemplate.NewTemplate(template, "$(", ")")

	return &fastTemplate{Template: t, String: template}, err
}

// tagKeysToMap converts a raw string into a go map.
// This is used by input data to create AWS tags applied to newly-created log groups.
//
// The input string should match this format: "key=value,key2=value2".
// Spaces are trimmed, empty values are permitted, empty keys are ignored.
// The final value in the input string wins in case of duplicate keys.
func tagKeysToMap(tags string) map[string]*string {
	output := make(map[string]*string)

	for _, tag := range strings.Split(strings.TrimSpace(tags), ",") {
		split := strings.SplitN(tag, "=", 2)
		key := strings.TrimSpace(split[0])
		value := ""

		if key == "" {
			continue
		}

		if len(split) > 1 {
			value = strings.TrimSpace(split[1])
		}

		output[key] = &value
	}

	if len(output) == 0 {
		return nil
	}

	return output
}
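The parsing rules above (whitespace trimmed, empty keys dropped, empty values kept, last duplicate wins) are easiest to see on a sample input. The function body is repeated verbatim below only so the sketch runs standalone:

```go
package main

import (
	"fmt"
	"strings"
)

// tagKeysToMap is reproduced from the plugin source so this sketch compiles
// on its own; the behavior is unchanged.
func tagKeysToMap(tags string) map[string]*string {
	output := make(map[string]*string)
	for _, tag := range strings.Split(strings.TrimSpace(tags), ",") {
		split := strings.SplitN(tag, "=", 2)
		key := strings.TrimSpace(split[0])
		value := ""
		if key == "" {
			continue // empty keys are ignored
		}
		if len(split) > 1 {
			value = strings.TrimSpace(split[1])
		}
		output[key] = &value // later duplicates overwrite earlier ones
	}
	if len(output) == 0 {
		return nil
	}
	return output
}

func main() {
	m := tagKeysToMap(" team = logging ,env=prod,,env=dev")
	fmt.Println(*m["team"]) // logging
	fmt.Println(*m["env"])  // dev: the final duplicate wins
}
```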

// parseKeysTemplate takes in an interface map and a list of nested keys. It returns
// the value of the final key, or the name of the first key not found in the chain.
// example keys := "['level1']['level2']['level3']"
// This is called by parseDataMapTags any time a nested value is found in a log Event.
// This procedure checks whether any of the nested values match variable identifiers in the logStream or logGroup names.
func parseKeysTemplate(data map[interface{}]interface{}, keys string, w io.Writer) (int64, error) {
	return fasttemplate.ExecuteFunc(keys, "['", "']", w, func(w io.Writer, tag string) (int, error) {
		switch val := data[tag].(type) {
		case []byte:
			return w.Write(val)
		case string:
			return w.Write([]byte(val))
		case map[interface{}]interface{}:
			data = val // drill down another level.
			return 0, nil
		default: // missing
			return 0, fmt.Errorf("%s: %w", tag, ErrMissingSubName)
		}
	})
}
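fasttemplate handles the token scanning; the interesting part is the drill-down through nested maps. A simplified stdlib-only sketch of that resolution, assuming the `['…']` key syntax shown in the comment (unlike the original it returns at the first string value instead of writing through an io.Writer, and `resolve` is my name for it):

```go
package main

import (
	"fmt"
	"strings"
)

// resolve walks a keys string such as "['level1']['level2']" through nested
// interface{} maps, mirroring the per-token drill-down of parseKeysTemplate.
func resolve(data map[interface{}]interface{}, keys string) (string, error) {
	for _, part := range strings.Split(keys, "']") {
		name := strings.TrimPrefix(part, "['")
		if name == "" {
			continue // trailing empty fragment after the last "']"
		}
		switch val := data[name].(type) {
		case string:
			return val, nil
		case map[interface{}]interface{}:
			data = val // drill down another level
		default: // missing
			return "", fmt.Errorf("%s: sub-tag name not found", name)
		}
	}
	return "", fmt.Errorf("no terminal value under %s", keys)
}

func main() {
	data := map[interface{}]interface{}{
		"level1": map[interface{}]interface{}{"level2": "found"},
	}
	v, _ := resolve(data, "['level1']['level2']")
	fmt.Println(v) // found
}
```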

// parseDataMapTags parses the provided tag values in template form,
// from an interface{} map (expected to contain strings or more interface{} maps).
// This runs once for every log line.
// Used to fill in any template variables that may exist in the logStream or logGroup names.
func parseDataMapTags(e *Event, logTags []string, t *fastTemplate, metadata TaskMetadata, uuid string, w io.Writer) (int64, error) {
	return t.ExecuteFunc(w, func(w io.Writer, tag string) (int, error) {
		switch tag {
		case "ecs_task_id":
			if metadata.TaskID != "" {
				return w.Write([]byte(metadata.TaskID))
			}

			return 0, fmt.Errorf("Failed to fetch ecs_task_id; The container is not running in ECS")
		case "ecs_cluster":
			if metadata.Cluster != "" {
				return w.Write([]byte(metadata.Cluster))
			}

			return 0, fmt.Errorf("Failed to fetch ecs_cluster; The container is not running in ECS")
		case "ecs_task_arn":
			if metadata.TaskARN != "" {
				return w.Write([]byte(metadata.TaskARN))
			}

			return 0, fmt.Errorf("Failed to fetch ecs_task_arn; The container is not running in ECS")
		case "uuid":
			return w.Write([]byte(uuid))
		}

		v := strings.Index(tag, "[")
		if v == -1 {
			v = len(tag)
		}

		if tag[:v] == "tag" {
			switch {
			default: // input string is either `tag` or `tag[`, so return the $tag.
				return w.Write([]byte(e.Tag))
			case len(tag) >= 5: // input string is at least "tag[x" where x is hopefully an integer 0-9.
				// The index value is always in the same position: 4:5 (this is why supporting more than 0-9 is rough)
				if v, _ = strconv.Atoi(tag[4:5]); len(logTags) <= v {
					return 0, fmt.Errorf("%s: %w", tag, ErrNoTagValue)
				}

				return w.Write([]byte(logTags[v]))
			}
		}

		switch val := e.Record[tag[:v]].(type) {
		case string:
			return w.Write([]byte(val))
		case map[interface{}]interface{}:
			i, err := parseKeysTemplate(val, tag[v:], w)

			return int(i), err
		case []byte:
			// we should never land here because the interface{} map should have already been converted to strings.
			return w.Write(val)
		default: // missing
			return 0, fmt.Errorf("%s: %w", tag, ErrMissingTagName)
		}
	})
}

// sanitizer implements io.Writer for fasttemplate usage.
// Instead of just writing bytes to a buffer, sanitize them first.
type sanitizer struct {
	sanitize func(b []byte) []byte
	buf      *bytebufferpool.ByteBuffer
}

// Write completes the io.Writer implementation.
func (s *sanitizer) Write(b []byte) (int, error) {
	return s.buf.Write(s.sanitize(b))
}

// sanitizeGroup removes special characters from the log group names bytes.
// https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html
func sanitizeGroup(b []byte) []byte {
	for i, r := range b {
		// 45-47 = / . -
		// 48-57 = 0-9
		// 65-90 = A-Z
		// 95 = _
		// 97-122 = a-z
		if r == 95 || (r > 44 && r < 58) ||
			(r > 64 && r < 91) || (r > 96 && r < 123) {
			continue
		}

		b[i] = '.'
	}

	return b
}

// sanitizeStream removes : and * from the log stream bytes.
// https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-logstream.html
func sanitizeStream(b []byte) []byte {
	for i, r := range b {
		if r == '*' || r == ':' {
			b[i] = '.'
		}
	}

	return b
}
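The two sanitizers apply different rules: group names allow only `[A-Za-z0-9._/-]`, while stream names forbid only `:` and `*`. They are restated below (with `Copy` suffixes, since the originals are unexported) to show the difference on the same input:

```go
package main

import "fmt"

// sanitizeGroupCopy mirrors sanitizeGroup above: any byte outside
// [A-Za-z0-9._/-] becomes a dot.
func sanitizeGroupCopy(b []byte) []byte {
	for i, r := range b {
		if r == '_' || (r > 44 && r < 58) || // / . - 0-9
			(r > 64 && r < 91) || (r > 96 && r < 123) { // A-Z a-z
			continue
		}
		b[i] = '.'
	}
	return b
}

// sanitizeStreamCopy mirrors sanitizeStream above: only '*' and ':' are replaced.
func sanitizeStreamCopy(b []byte) []byte {
	for i, r := range b {
		if r == '*' || r == ':' {
			b[i] = '.'
		}
	}
	return b
}

func main() {
	fmt.Println(string(sanitizeGroupCopy([]byte("a b:c"))))  // a.b.c (space and colon replaced)
	fmt.Println(string(sanitizeStreamCopy([]byte("a b:c")))) // a b.c (only the colon replaced)
}
```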


================================================
FILE: cloudwatch/helpers_test.go
================================================
package cloudwatch

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/valyala/bytebufferpool"
)

func TestTagKeysToMap(t *testing.T) {
	t.Parallel()

	// Testable values. Purposely "messed up" - they should all parse out OK.
	values := " key1 =value , key2=value2, key3= value3 ,key4=, key5  = v5,,key7==value7," +
		" k8, k9,key1=value1,space key = space value"
	// The values above should return a map like this.
	expect := map[string]string{"key1": "value1", "key2": "value2", "key3": "value3",
		"key4": "", "key5": "v5", "key7": "=value7", "k8": "", "k9": "", "space key": "space value"}

	for k, v := range tagKeysToMap(values) {
		assert.Equal(t, *v, expect[k], "Tag key or value failed parser.")
	}
}

func TestParseDataMapTags(t *testing.T) {
	t.Parallel()

	template := testTemplate("$(ecs_task_id).$(ecs_cluster).$(ecs_task_arn).$(uuid).$(tag).$(pam['item2']['subitem2']['more']).$(pam['item']).$(pam['item2'])." +
		"$(pam['item2']['subitem'])-$(pam['item2']['subitem2']['more'])-$(tag[1])")
	data := map[interface{}]interface{}{
		"pam": map[interface{}]interface{}{
			"item": "soup",
			"item2": map[interface{}]interface{}{"subitem": []byte("SubIt3m"),
				"subitem2": map[interface{}]interface{}{"more": "final"}},
		},
	}

	s := &sanitizer{buf: bytebufferpool.Get(), sanitize: sanitizeGroup}
	defer bytebufferpool.Put(s.buf)

	_, err := parseDataMapTags(&Event{Record: data, Tag: "syslog.0"}, []string{"syslog", "0"}, template, TaskMetadata{Cluster: "cluster", TaskARN: "taskARN", TaskID: "taskID"}, "123", s)

	assert.Nil(t, err, err)
	assert.Equal(t, "taskID.cluster.taskARN.123.syslog.0.final.soup..SubIt3m-final-0", s.buf.String(), "Rendered string is incorrect.")

	// Test missing variables. These should always return an error and an empty string.
	s.buf.Reset()
	template = testTemplate("$(missing-variable).stuff")
	_, err = parseDataMapTags(&Event{Record: data, Tag: "syslog.0"}, []string{"syslog", "0"}, template, TaskMetadata{Cluster: "cluster", TaskARN: "taskARN", TaskID: "taskID"}, "123", s)
	assert.EqualError(t, err, "missing-variable: "+ErrMissingTagName.Error(), "the wrong error was returned")
	assert.Empty(t, s.buf.String())

	s.buf.Reset()
	template = testTemplate("$(pam['item6']).stuff")
	_, err = parseDataMapTags(&Event{Record: data, Tag: "syslog.0"}, []string{"syslog", "0"}, template, TaskMetadata{}, "", s)
	assert.EqualError(t, err, "item6: "+ErrMissingSubName.Error(), "the wrong error was returned")
	assert.Empty(t, s.buf.String())

	s.buf.Reset()
	template = testTemplate("$(tag[9]).stuff")
	_, err = parseDataMapTags(&Event{Record: data, Tag: "syslog.0"}, []string{"syslog", "0"}, template, TaskMetadata{}, "", s)
	assert.EqualError(t, err, "tag[9]: "+ErrNoTagValue.Error(), "the wrong error was returned")
	assert.Empty(t, s.buf.String())
}

func TestSanitizeGroup(t *testing.T) {
	t.Parallel()

	tests := map[string]string{ // "send": "expect",
		"this.is.a.log.group.name":             "this.is.a.log.group.name",
		"1234567890abcdefghijklmnopqrstuvwxyz": "1234567890abcdefghijklmnopqrstuvwxyz",
		"ABCDEFGHIJKLMNOPQRSTUVWXYZ":           "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
		`!@#$%^&*()_+}{][=-';":/.?>,<~"']}`:    ".........._......-..../..........",
		"":                                     "",
	}

	for send, expect := range tests {
		actual := sanitizeGroup([]byte(send))
		assert.Equal(t, expect, string(actual), "the wrong characters were modified in sanitizeGroup")
	}
}

func TestSanitizeStream(t *testing.T) {
	t.Parallel()

	tests := map[string]string{ // "send": "expect",
		"this.is.a.log.group.name":             "this.is.a.log.group.name",
		"1234567890abcdefghijklmnopqrstuvwxyz": "1234567890abcdefghijklmnopqrstuvwxyz",
		"ABCDEFGHIJKLMNOPQRSTUVWXYZ":           "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
		`!@#$%^&*()_+}{][=-';":/.?>,<~"']}`:    `!@#$%^&.()_+}{][=-';"./.?>,<~"']}`,
		"":                                     "",
	}

	for send, expect := range tests {
		actual := sanitizeStream([]byte(send))
		assert.Equal(t, expect, string(actual), "the wrong characters were modified in sanitizeStream")
	}
}


================================================
FILE: cloudwatch/mock_cloudwatch/mock.go
================================================
// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"). You may
// not use this file except in compliance with the License. A copy of the
// License is located at
//
//     http://aws.amazon.com/apache2.0/
//
// or in the "license" file accompanying this file. This file is distributed
// on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
// express or implied. See the License for the specific language governing
// permissions and limitations under the License.

// Code generated by MockGen. DO NOT EDIT.
// Source: github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/cloudwatch (interfaces: LogsClient)

// Package mock_cloudwatch is a generated GoMock package.
package mock_cloudwatch

import (
	reflect "reflect"

	cloudwatchlogs "github.com/aws/aws-sdk-go/service/cloudwatchlogs"
	gomock "github.com/golang/mock/gomock"
)

// MockLogsClient is a mock of LogsClient interface
type MockLogsClient struct {
	ctrl     *gomock.Controller
	recorder *MockLogsClientMockRecorder
}

// MockLogsClientMockRecorder is the mock recorder for MockLogsClient
type MockLogsClientMockRecorder struct {
	mock *MockLogsClient
}

// NewMockLogsClient creates a new mock instance
func NewMockLogsClient(ctrl *gomock.Controller) *MockLogsClient {
	mock := &MockLogsClient{ctrl: ctrl}
	mock.recorder = &MockLogsClientMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use
func (m *MockLogsClient) EXPECT() *MockLogsClientMockRecorder {
	return m.recorder
}

// CreateLogGroup mocks base method
func (m *MockLogsClient) CreateLogGroup(arg0 *cloudwatchlogs.CreateLogGroupInput) (*cloudwatchlogs.CreateLogGroupOutput, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "CreateLogGroup", arg0)
	ret0, _ := ret[0].(*cloudwatchlogs.CreateLogGroupOutput)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// CreateLogGroup indicates an expected call of CreateLogGroup
func (mr *MockLogsClientMockRecorder) CreateLogGroup(arg0 interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateLogGroup", reflect.TypeOf((*MockLogsClient)(nil).CreateLogGroup), arg0)
}

// CreateLogStream mocks base method
func (m *MockLogsClient) CreateLogStream(arg0 *cloudwatchlogs.CreateLogStreamInput) (*cloudwatchlogs.CreateLogStreamOutput, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "CreateLogStream", arg0)
	ret0, _ := ret[0].(*cloudwatchlogs.CreateLogStreamOutput)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// CreateLogStream indicates an expected call of CreateLogStream
func (mr *MockLogsClientMockRecorder) CreateLogStream(arg0 interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateLogStream", reflect.TypeOf((*MockLogsClient)(nil).CreateLogStream), arg0)
}

// DescribeLogStreams mocks base method
func (m *MockLogsClient) DescribeLogStreams(arg0 *cloudwatchlogs.DescribeLogStreamsInput) (*cloudwatchlogs.DescribeLogStreamsOutput, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "DescribeLogStreams", arg0)
	ret0, _ := ret[0].(*cloudwatchlogs.DescribeLogStreamsOutput)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// DescribeLogStreams indicates an expected call of DescribeLogStreams
func (mr *MockLogsClientMockRecorder) DescribeLogStreams(arg0 interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DescribeLogStreams", reflect.TypeOf((*MockLogsClient)(nil).DescribeLogStreams), arg0)
}

// PutLogEvents mocks base method
func (m *MockLogsClient) PutLogEvents(arg0 *cloudwatchlogs.PutLogEventsInput) (*cloudwatchlogs.PutLogEventsOutput, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "PutLogEvents", arg0)
	ret0, _ := ret[0].(*cloudwatchlogs.PutLogEventsOutput)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// PutLogEvents indicates an expected call of PutLogEvents
func (mr *MockLogsClientMockRecorder) PutLogEvents(arg0 interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PutLogEvents", reflect.TypeOf((*MockLogsClient)(nil).PutLogEvents), arg0)
}

// PutRetentionPolicy mocks base method
func (m *MockLogsClient) PutRetentionPolicy(arg0 *cloudwatchlogs.PutRetentionPolicyInput) (*cloudwatchlogs.PutRetentionPolicyOutput, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "PutRetentionPolicy", arg0)
	ret0, _ := ret[0].(*cloudwatchlogs.PutRetentionPolicyOutput)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// PutRetentionPolicy indicates an expected call of PutRetentionPolicy
func (mr *MockLogsClientMockRecorder) PutRetentionPolicy(arg0 interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PutRetentionPolicy", reflect.TypeOf((*MockLogsClient)(nil).PutRetentionPolicy), arg0)
}


================================================
FILE: fluent-bit-cloudwatch.go
================================================
// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"). You may
// not use this file except in compliance with the License. A copy of the
// License is located at
//
//	http://aws.amazon.com/apache2.0/
//
// or in the "license" file accompanying this file. This file is distributed
// on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
// express or implied. See the License for the specific language governing
// permissions and limitations under the License.

package main

import (
	"C"
	"fmt"
	"strconv"
	"strings"
	"time"
	"unsafe"

	"github.com/aws/amazon-cloudwatch-logs-for-fluent-bit/cloudwatch"
	"github.com/aws/amazon-kinesis-firehose-for-fluent-bit/plugins"
	"github.com/fluent/fluent-bit-go/output"
	"github.com/sirupsen/logrus"
)

var (
	pluginInstances []*cloudwatch.OutputPlugin
)

func addPluginInstance(ctx unsafe.Pointer) error {
	pluginID := len(pluginInstances)

	config := getConfiguration(ctx, pluginID)
	err := config.Validate()
	if err != nil {
		return err
	}

	instance, err := cloudwatch.NewOutputPlugin(config)
	if err != nil {
		return err
	}

	output.FLBPluginSetContext(ctx, pluginID)
	pluginInstances = append(pluginInstances, instance)

	return nil
}

func getPluginInstance(ctx unsafe.Pointer) *cloudwatch.OutputPlugin {
	pluginID := output.FLBPluginGetContext(ctx).(int)
	return pluginInstances[pluginID]
}

//export FLBPluginRegister
func FLBPluginRegister(ctx unsafe.Pointer) int {
	return output.FLBPluginRegister(ctx, "cloudwatch", "AWS CloudWatch Fluent Bit Plugin!")
}

func getConfiguration(ctx unsafe.Pointer, pluginID int) cloudwatch.OutputPluginConfig {
	config := cloudwatch.OutputPluginConfig{}
	config.PluginInstanceID = pluginID

	config.LogGroupName = output.FLBPluginConfigKey(ctx, "log_group_name")
	logrus.Infof("[cloudwatch %d] plugin parameter log_group_name = '%s'", pluginID, config.LogGroupName)

	config.DefaultLogGroupName = output.FLBPluginConfigKey(ctx, "default_log_group_name")
	if config.DefaultLogGroupName == "" {
		config.DefaultLogGroupName = "fluentbit-default"
	}

	logrus.Infof("[cloudwatch %d] plugin parameter default_log_group_name = '%s'", pluginID, config.DefaultLogGroupName)

	config.LogStreamPrefix = output.FLBPluginConfigKey(ctx, "log_stream_prefix")
	logrus.Infof("[cloudwatch %d] plugin parameter log_stream_prefix = '%s'", pluginID, config.LogStreamPrefix)

	config.LogStreamName = output.FLBPluginConfigKey(ctx, "log_stream_name")
	logrus.Infof("[cloudwatch %d] plugin parameter log_stream_name = '%s'", pluginID, config.LogStreamName)

	config.DefaultLogStreamName = output.FLBPluginConfigKey(ctx, "default_log_stream_name")
	if config.DefaultLogStreamName == "" {
		config.DefaultLogStreamName = "/fluentbit-default"
	}

	logrus.Infof("[cloudwatch %d] plugin parameter default_log_stream_name = '%s'", pluginID, config.DefaultLogStreamName)

	config.Region = output.FLBPluginConfigKey(ctx, "region")
	logrus.Infof("[cloudwatch %d] plugin parameter region = '%s'", pluginID, config.Region)

	config.LogKey = output.FLBPluginConfigKey(ctx, "log_key")
	logrus.Infof("[cloudwatch %d] plugin parameter log_key = '%s'", pluginID, config.LogKey)

	config.RoleARN = output.FLBPluginConfigKey(ctx, "role_arn")
	logrus.Infof("[cloudwatch %d] plugin parameter role_arn = '%s'", pluginID, config.RoleARN)

	config.AutoCreateGroup = getBoolParam(ctx, "auto_create_group", false)
	logrus.Infof("[cloudwatch %d] plugin parameter auto_create_group = '%v'", pluginID, config.AutoCreateGroup)

	config.AutoCreateStream = getBoolParam(ctx, "auto_create_stream", true)
	logrus.Infof("[cloudwatch %d] plugin parameter auto_create_stream = '%v'", pluginID, config.AutoCreateStream)

	config.NewLogGroupTags = output.FLBPluginConfigKey(ctx, "new_log_group_tags")
	logrus.Infof("[cloudwatch %d] plugin parameter new_log_group_tags = '%s'", pluginID, config.NewLogGroupTags)

	// Invalid or empty values parse to 0; the ParseInt error is deliberately discarded.
	config.LogRetentionDays, _ = strconv.ParseInt(output.FLBPluginConfigKey(ctx, "log_retention_days"), 10, 64)
	logrus.Infof("[cloudwatch %d] plugin parameter log_retention_days = '%d'", pluginID, config.LogRetentionDays)

	config.CWEndpoint = output.FLBPluginConfigKey(ctx, "endpoint")
	logrus.Infof("[cloudwatch %d] plugin parameter endpoint = '%s'", pluginID, config.CWEndpoint)

	config.STSEndpoint = output.FLBPluginConfigKey(ctx, "sts_endpoint")
	logrus.Infof("[cloudwatch %d] plugin parameter sts_endpoint = '%s'", pluginID, config.STSEndpoint)

	config.ExternalID = output.FLBPluginConfigKey(ctx, "external_id")
	logrus.Infof("[cloudwatch %d] plugin parameter external_id = '%s'", pluginID, config.ExternalID)

	config.CredsEndpoint = output.FLBPluginConfigKey(ctx, "credentials_endpoint")
	logrus.Infof("[cloudwatch %d] plugin parameter credentials_endpoint = '%s'", pluginID, config.CredsEndpoint)

	config.LogFormat = output.FLBPluginConfigKey(ctx, "log_format")
	logrus.Infof("[cloudwatch %d] plugin parameter log_format = '%s'", pluginID, config.LogFormat)

	config.ExtraUserAgent = output.FLBPluginConfigKey(ctx, "extra_user_agent")

	return config
}

func getBoolParam(ctx unsafe.Pointer, param string, defaultVal bool) bool {
	val := strings.ToLower(output.FLBPluginConfigKey(ctx, param))
	if val == "true" {
		return true
	} else if val == "false" {
		return false
	} else {
		return defaultVal
	}
}

//export FLBPluginInit
func FLBPluginInit(ctx unsafe.Pointer) int {
	plugins.SetupLogger()

	logrus.Debug("A new higher performance CloudWatch Logs plugin has been released; " +
		"you are using the old plugin. Check out the new plugin's documentation and " +
		"determine if you can migrate.\n" +
		"https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch")

	err := addPluginInstance(ctx)
	if err != nil {
		logrus.Error(err)
		return output.FLB_ERROR
	}
	return output.FLB_OK
}

//export FLBPluginFlushCtx
func FLBPluginFlushCtx(ctx, data unsafe.Pointer, length C.int, tag *C.char) int {
	var count int
	var ret int
	var ts interface{}
	var record map[interface{}]interface{}

	// Create Fluent Bit decoder
	dec := output.NewDecoder(data, int(length))

	cloudwatchLogs := getPluginInstance(ctx)

	fluentTag := C.GoString(tag)
	logrus.Debugf("[cloudwatch %d] Found logs with tag: %s", cloudwatchLogs.PluginInstanceID, fluentTag)

	for {
		// Extract Record
		ret, ts, record = output.GetRecord(dec)
		if ret != 0 {
			break
		}

		var timestamp time.Time
		switch tts := ts.(type) {
		case output.FLBTime:
			timestamp = tts.Time
		case uint64:
			// When ts is a uint64, it appears to be the number of
			// seconds since the Unix epoch.
			timestamp = time.Unix(int64(tts), 0)
		default:
			timestamp = time.Now()
		}

		retCode := cloudwatchLogs.AddEvent(&cloudwatch.Event{Tag: fluentTag, Record: record, TS: timestamp})
		if retCode != output.FLB_OK {
			return retCode
		}
		count++
	}
	err := cloudwatchLogs.Flush()
	if err != nil {
		fmt.Println(err)
		// TODO: Better error handling
		return output.FLB_RETRY
	}

	logrus.Debugf("[cloudwatch %d] Processed %d events", cloudwatchLogs.PluginInstanceID, count)

	// Return options:
	//
	// output.FLB_OK    = data has been processed.
	// output.FLB_ERROR = unrecoverable error; do not retry. Never returned by this flush.
	// output.FLB_RETRY = retry the flush later.
	return output.FLB_OK
}

//export FLBPluginExit
func FLBPluginExit() int {
	return output.FLB_OK
}

func main() {
}
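Editor's note: the parameters read in getConfiguration above correspond directly to keys in a Fluent Bit `[OUTPUT]` stanza. A minimal configuration sketch, assuming this plugin is loaded as `cloudwatch` (region, group, and prefix values below are illustrative, not defaults shipped by this repo):

```ini
[OUTPUT]
    Name              cloudwatch
    Match             *
    region            us-east-1
    log_group_name    my-app-logs
    log_stream_prefix from-fluent-bit-
    auto_create_group true
```

Unrecognized boolean values for `auto_create_group`/`auto_create_stream` fall back to the defaults in getBoolParam (false and true respectively).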


================================================
FILE: go.mod
================================================
module github.com/aws/amazon-cloudwatch-logs-for-fluent-bit

go 1.24.11

require (
	github.com/aws/amazon-kinesis-firehose-for-fluent-bit v1.7.2
	github.com/aws/aws-sdk-go v1.55.8
	github.com/fluent/fluent-bit-go v0.0.0-20201210173045-3fd1e0486df2
	github.com/golang/mock v1.6.0
	github.com/json-iterator/go v1.1.12
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/segmentio/ksuid v1.0.4
	github.com/sirupsen/logrus v1.9.4
	github.com/stretchr/testify v1.11.1
	github.com/valyala/bytebufferpool v1.0.0
	github.com/valyala/fasttemplate v1.2.2
	golang.org/x/sys v0.40.0 // indirect
)

require (
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/jmespath/go-jmespath v0.4.0 // indirect
	github.com/modern-go/reflect2 v1.0.2 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/ugorji/go/codec v1.2.6 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
)


================================================
FILE: go.sum
================================================
github.com/aws/amazon-kinesis-firehose-for-fluent-bit v1.7.2 h1:uKb5MpgJ0aDa/TkAPWQIExnO/caFEpTyWTXiZz+hPkA=
github.com/aws/amazon-kinesis-firehose-for-fluent-bit v1.7.2/go.mod h1:bs9SAGlYjBbPqBpoopCAuYG/vHTgiucyb71mp8wyF88=
github.com/aws/aws-sdk-go v1.55.8 h1:JRmEUbU52aJQZ2AjX4q4Wu7t4uZjOu71uyNmaWlUkJQ=
github.com/aws/aws-sdk-go v1.55.8/go.mod h1:ZkViS9AqA6otK+JBBNH2++sx1sgxrPKcSzPPvQkUtXk=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fluent/fluent-bit-go v0.0.0-20201210173045-3fd1e0486df2 h1:G57WNyWS0FQf43hjRXLy5JT1V5LWVsSiEpkUcT67Ugk=
github.com/fluent/fluent-bit-go v0.0.0-20201210173045-3fd1e0486df2/go.mod h1:L92h+dgwElEyUuShEwjbiHjseW410WIcNz+Bjutc8YQ=
github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/segmentio/ksuid v1.0.4 h1:sBo2BdShXjmcugAMwjugoGUdUV0pcxY5mW4xKRn3v4c=
github.com/segmentio/ksuid v1.0.4/go.mod h1:/XUiZBD3kVx5SmUOl55voK5yeAbBNNIed+2O73XgrPE=
github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
github.com/ugorji/go v1.2.6/go.mod h1:anCg0y61KIhDlPZmnH+so+RQbysYVyDko0IMgJv0Nn0=
github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=
github.com/ugorji/go/codec v1.2.6 h1:7kbGefxLoDBuYXOms4yD7223OpNMMPNPZxXk5TvFcyQ=
github.com/ugorji/go/codec v1.2.6/go.mod h1:V6TCNZ4PHqoHGFZuSG1W8nrCzzdgA2DozYxWFFpvxTw=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo=
github.com/valyala/fasttemplate v1.2.2/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


================================================
FILE: scripts/mockgen.sh
================================================
#!/bin/bash
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the
# "License"). You may not use this file except in compliance
#  with the License. A copy of the License is located at
#
#     http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and
# limitations under the License.
#
# This script wraps the mockgen tool and inserts licensing information.

set -e
package=${1?Must provide package}
interfaces=${2?Must provide interface names}
outputfile=${3?Must provide an output file}

export PATH="${GOPATH//://bin:}/bin:$PATH"

data=$(
cat << EOF
// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"). You may
// not use this file except in compliance with the License. A copy of the
// License is located at
//
//     http://aws.amazon.com/apache2.0/
//
// or in the "license" file accompanying this file. This file is distributed
// on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
// express or implied. See the License for the specific language governing
// permissions and limitations under the License.

$(mockgen "$package" "$interfaces")
EOF
)

echo "$data" | goimports > "${outputfile}"