[
  {
    "path": ".gitignore",
    "content": "*.iml\n.idea\ntmp\nlogs\ndependency-reduced-pom.xml\n*~\ntarget\n!target/*.jar\n*.log\n.settings\n.classpath\n.project\n"
  },
  {
    "path": "LICENSE.txt",
    "content": "Copyright 2017-2021 Jonathan Cobb\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "s3s3mirror\n==========\n\nA utility for mirroring content from one S3 bucket to another.\n\nDesigned to be lightning-fast and highly concurrent, with modest CPU and memory requirements.\n\nAn object will be copied if and only if at least one of the following holds true:\n\n* The object does not exist in the destination bucket.\n* The \"sync strategy\" triggers (by default, the ETag sync strategy is used)\n    * ETag Strategy (Default): If the sizes or ETags don't match between the source and destination bucket.\n    * Size Strategy: If the sizes don't match between the source and destination bucket.\n    * Size and Last Modified Strategy: If the source and destination objects have a different size, or the source bucket object has a more recent last-modified date.\n\nWhen copying, the source metadata and ACL lists are also copied to the destination object.\n\nNote: [the 2.1-stable branch](https://github.com/cobbzilla/s3s3mirror/tree/2.1-stable) supports copying to/from local directories.\n\n### Motivation\n\nI started with \"s3cmd sync\" but found that with buckets containing many thousands of objects, it was incredibly slow\nto start and consumed *massive* amounts of memory. So I designed s3s3mirror to start copying immediately with an intelligently\nchosen \"chunk size\" and to operate in a highly threaded, streaming fashion, so memory requirements are much lower.\n\nRunning with 100 threads, I found the gating factor to be *how fast I could list items from the source bucket* (!?!),\nwhich makes me wonder if there is any way to do this faster. I'm sure there must be, but this is pretty damn fast.\n\n### AWS Credentials\n\n* s3s3mirror will first look for credentials in your system environment. If variables named AWS\_ACCESS\_KEY\_ID and AWS\_SECRET\_ACCESS\_KEY are defined, then these will be used.\n* Next, it checks for a ~/.s3cfg file (which you might have for using s3cmd). If present, the access key and secret key are read from there.\n* IAM Roles can be used on EC2 instances by specifying the --iam flag.\n* If none of the above is found, it will error out and refuse to run.\n\n### System Requirements\n\n* Java 8 or higher\n\n### Building\n\n    mvn package\n\nNote that s3s3mirror now has a prebuilt jar checked into GitHub, so you'll only need to do this if you've been playing with the source code.\nThe above command requires that Maven 3 is installed.\n\n### License\n\ns3s3mirror is available under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).\n\n### Usage\n\n    s3s3mirror.sh [options] <source-bucket>[/src-prefix/path/...] <destination-bucket>[/dest-prefix/path/...]\n\n### Versions\n\nThe 1.x branch (currently master) has been used by the largest number of people and is the most battle-tested.\n\nThe 2.x branch supports copying between S3 and any local filesystem. It has seen heavy use and performs well, but is not as widely used as the 1.x branch.\n\n**In the near future, the 1.x branch will be split off from master, and the 2.x branch will be merged into master.** There are a handful of features\non the 1.x branch that have not yet been ported to 2.x. If you can live without them, I encourage you to use the 2.x branch. If you really need them,\nI encourage you to port them to the 2.x branch, if you have the ability.\n\n### Options\n\n    -c (--ctime) N           : Only copy objects whose Last-Modified date is younger than this many days\n                               For other time units, use these suffixes: y (years), M (months), d (days), w (weeks),\n                                                                         h (hours), m (minutes), s (seconds)\n    -i (--iam)                : Attempt to use an IAM Role if invoked on an EC2 instance\n    -P (--profile) VAL        : Use a specific profile from your credential file (~/.aws/config)\n    -m (--max-connections) N  : Maximum number of connections to S3 (default 100)\n    -n (--dry-run)            : Do not actually do anything, but show what would be done (default false)\n    -r (--max-retries) N      : Maximum number of retries for S3 requests (default 5)\n    -p (--prefix) VAL         : Only copy objects whose keys start with this prefix\n    -d (--dest-prefix) VAL    : Destination prefix (replacing the one specified in --prefix, if any)\n    -e (--endpoint) VAL       : AWS endpoint to use (or set AWS_ENDPOINT in your environment)\n    -X (--delete-removed)     : Delete objects from the destination bucket if they do not exist in the source bucket\n    -t (--max-threads) N      : Maximum number of threads (default 100)\n    -v (--verbose)            : Verbose output (default false)\n    -z (--proxy) VAL          : host:port of proxy server to use.\n                                Defaults to proxy_host and proxy_port defined in ~/.s3cfg,\n                                or no proxy if these values are not found in ~/.s3cfg\n    -u (--upload-part-size) N : The upload size (in bytes) of each part uploaded as part of a multipart request\n                                for files that are greater than the max allowed file size of 5368709120 bytes (5 GB)\n                                Defaults to 4294967296 bytes (4 GB)\n    -C (--cross-account-copy) : Copy across AWS accounts. Only Resource-based policies are supported (as\n                                specified by AWS documentation) for cross-account copying\n                                Default is false (copying within same account, preserving ACLs across copies)\n                                If this option is active, the owner of the destination bucket will receive full control\n\n    -s (--ssl)                    : Use SSL for all S3 API operations (default false)\n    -E (--server-side-encryption) : Enable AWS managed server-side encryption (default false)\n    -l (--storage-class)          : S3 storage class \"Standard\" or \"ReducedRedundancy\" (default Standard)\n    -S (--size-only)              : Only takes the size of objects into consideration when determining whether a copy is required\n    -L (--size-and-last-modified) : Uses size and last-modified time (like the AWS CLI) to determine whether files have changed, ignoring ETags.\n                                    If -S (--size-only) is also specified, that strategy takes precedence over this one.\n\n### Examples\n\nCopy everything from a bucket named \"source\" to another bucket named \"dest\"\n\n    s3s3mirror.sh source dest\n\nCopy everything from \"source\" to \"dest\", but only copy objects created or modified within the past week\n\n    s3s3mirror.sh -c 7 source dest\n    s3s3mirror.sh -c 7d source dest\n    s3s3mirror.sh -c 1w source dest\n    s3s3mirror.sh --ctime 1w source dest\n\nCopy everything from \"source/foo\" to \"dest/bar\"\n\n    s3s3mirror.sh source/foo dest/bar\n    s3s3mirror.sh -p foo -d bar source dest\n\nCopy everything from \"source/foo\" to \"dest/bar\" and delete anything in \"dest/bar\" that does not exist in \"source/foo\"\n\n    s3s3mirror.sh -X source/foo dest/bar\n    s3s3mirror.sh --delete-removed source/foo dest/bar\n    s3s3mirror.sh -p foo -d bar -X source dest\n    s3s3mirror.sh -p foo -d bar --delete-removed source dest\n\nCopy within a single bucket -- copy everything from \"source/foo\" to \"source/bar\"\n\n    s3s3mirror.sh source/foo source/bar\n    s3s3mirror.sh -p foo -d bar source source\n\nBAD IDEA: If copying within a single bucket, do *not* put the destination below the source\n\n    s3s3mirror.sh source/foo source/foo/subfolder\n    s3s3mirror.sh -p foo -d foo/subfolder source source\n\n*This might cause recursion and raise your AWS bill unnecessarily*\n\n###### If you've enjoyed using s3s3mirror and are looking for a warm-fuzzy feeling, consider dropping a little somethin' into my [tip jar](https://cobbzilla.org/tipjar.html)\n"
  },
  {
    "path": "pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--\n  (c) Copyright 2013-2021 Jonathan Cobb\n  This code is available under the Apache License, version 2: http://www.apache.org/licenses/LICENSE-2.0.html\n-->\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n\n    <modelVersion>4.0.0</modelVersion>\n\n    <groupId>org.cobbzilla</groupId>\n    <artifactId>s3s3mirror</artifactId>\n    <version>1.2.8-SNAPSHOT</version>\n    <packaging>jar</packaging>\n\n    <licenses>\n        <license>\n            <name>The Apache Software License, Version 2.0</name>\n            <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>\n            <distribution>repo</distribution>\n        </license>\n    </licenses>\n\n    <properties>\n        <org.slf4j.version>1.7.30</org.slf4j.version>\n        <junit.version>4.13.1</junit.version>\n    </properties>\n\n    <dependencies>\n\n        <!-- Logging -->\n        <dependency>\n            <groupId>org.slf4j</groupId>\n            <artifactId>slf4j-api</artifactId>\n            <version>${org.slf4j.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>org.slf4j</groupId>\n            <artifactId>jcl-over-slf4j</artifactId>\n            <version>${org.slf4j.version}</version>\n            <scope>runtime</scope>\n        </dependency>\n        <dependency>\n            <groupId>org.slf4j</groupId>\n            <artifactId>slf4j-log4j12</artifactId>\n            <version>${org.slf4j.version}</version>\n            <scope>runtime</scope>\n        </dependency>\n        <dependency>\n            <groupId>org.apache.logging.log4j</groupId>\n            <artifactId>log4j-core</artifactId>\n            <version>2.17.1</version>\n            <scope>runtime</scope>\n        </dependency>\n\n        <!-- auto-generate java 
boilerplate -->\n        <dependency>\n            <groupId>org.projectlombok</groupId>\n            <artifactId>lombok</artifactId>\n            <version>1.18.16</version>\n            <scope>compile</scope>\n        </dependency>\n\n        <!-- Testing -->\n        <dependency>\n            <groupId>junit</groupId>\n            <artifactId>junit</artifactId>\n            <version>${junit.version}</version>\n            <scope>test</scope>\n        </dependency>\n        <dependency>\n            <groupId>commons-io</groupId>\n            <artifactId>commons-io</artifactId>\n            <version>2.8.0</version>\n            <scope>test</scope>\n        </dependency>\n        <dependency>\n            <groupId>org.apache.commons</groupId>\n            <artifactId>commons-lang3</artifactId>\n            <version>3.11</version>\n            <scope>test</scope>\n        </dependency>\n        <dependency>\n            <groupId>org.mockito</groupId>\n            <artifactId>mockito-core</artifactId>\n            <version>3.7.7</version>\n            <scope>test</scope>\n        </dependency>\n\n\n        <!-- command line argument handling -->\n        <dependency>\n            <groupId>args4j</groupId>\n            <artifactId>args4j</artifactId>\n            <version>2.33</version>\n        </dependency>\n\n        <!-- for ctime argument -->\n        <dependency>\n            <groupId>joda-time</groupId>\n            <artifactId>joda-time</artifactId>\n            <version>2.10.9</version>\n        </dependency>\n\n        <!-- Amazon SDK -->\n        <dependency>\n            <groupId>com.amazonaws</groupId>\n            <artifactId>aws-java-sdk-s3</artifactId>\n            <version>1.12.261</version>\n        </dependency>\n    </dependencies>\n\n    <build>\n        <plugins>\n            <!-- Force Java 1.8 at all times -->\n            <plugin>\n                <groupId>org.apache.maven.plugins</groupId>\n                
<artifactId>maven-compiler-plugin</artifactId>\n                <version>2.3.2</version>\n                <configuration>\n                    <source>1.8</source>\n                    <target>1.8</target>\n                    <showWarnings>true</showWarnings>\n                </configuration>\n            </plugin>\n\n            <plugin>\n                <groupId>org.apache.maven.plugins</groupId>\n                <artifactId>maven-shade-plugin</artifactId>\n                <version>2.1</version>\n                <executions>\n                    <execution>\n                        <phase>package</phase>\n                        <goals>\n                            <goal>shade</goal>\n                        </goals>\n                        <configuration>\n                            <transformers>\n                                <transformer implementation=\"org.apache.maven.plugins.shade.resource.ManifestResourceTransformer\">\n                                    <mainClass>org.cobbzilla.s3s3mirror.MirrorMain</mainClass>\n                                </transformer>\n                            </transformers>\n                            <!-- Exclude signed jars to avoid errors\n                            see: http://stackoverflow.com/a/6743609/1251543\n                            -->\n                            <filters>\n                                <filter>\n                                    <artifact>*:*</artifact>\n                                    <excludes>\n                                        <exclude>META-INF/*.SF</exclude>\n                                        <exclude>META-INF/*.DSA</exclude>\n                                        <exclude>META-INF/*.RSA</exclude>\n                                    </excludes>\n                                </filter>\n                            </filters>\n                        </configuration>\n                    </execution>\n                </executions>\n            </plugin>\n\n  
      </plugins>\n    </build>\n\n</project>\n"
  },
  {
    "path": "s3s3mirror.bat",
    "content": "@echo off\njava -Dlog4j.configuration=file:target/classes/log4j.xml -Ds3s3mirror.version=1.2.8 -jar target/s3s3mirror-1.2.8-SNAPSHOT.jar %*\n"
  },
  {
    "path": "s3s3mirror.sh",
    "content": "#!/bin/bash\n\nTHISDIR=$(cd \"$(dirname \"$0\")\" && pwd)\n\nVERSION=1.2.8\nJARFILE=\"${THISDIR}/target/s3s3mirror-${VERSION}-SNAPSHOT.jar\"\nVERSION_ARG=\"-Ds3s3mirror.version=${VERSION}\"\n\nDEBUG=$1\nif [ \"${DEBUG}\" = \"--debug\" ] ; then\n  # Run in debug mode\n  shift   # remove --debug from options\n  java -Dlog4j.configuration=file:target/classes/log4j.xml -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005 ${VERSION_ARG} -jar \"${JARFILE}\" \"$@\"\n\nelse\n  # Run in regular mode\n  java ${VERSION_ARG} -Dlog4j.configuration=file:target/classes/log4j.xml -jar \"${JARFILE}\" \"$@\"\nfi\n\nexit $?\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/CopyMaster.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.model.S3ObjectSummary;\nimport org.cobbzilla.s3s3mirror.comparisonstrategies.ComparisonStrategy;\nimport org.cobbzilla.s3s3mirror.comparisonstrategies.ComparisonStrategyFactory;\nimport org.cobbzilla.s3s3mirror.comparisonstrategies.SizeOnlyComparisonStrategy;\n\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.ThreadPoolExecutor;\n\npublic class CopyMaster extends KeyMaster {\n    private final ComparisonStrategy comparisonStrategy;\n\n    public CopyMaster(AmazonS3Client client, MirrorContext context, BlockingQueue<Runnable> workQueue, ThreadPoolExecutor executorService) {\n        super(client, context, workQueue, executorService);\n        comparisonStrategy = ComparisonStrategyFactory.getStrategy(context.getOptions());\n    }\n\n    protected String getPrefix(MirrorOptions options) { return options.getPrefix(); }\n    protected String getBucket(MirrorOptions options) { return options.getSourceBucket(); }\n\n    protected KeyCopyJob getTask(S3ObjectSummary summary) {\n        if (summary.getSize() > MirrorOptions.MAX_SINGLE_REQUEST_UPLOAD_FILE_SIZE) {\n            return new MultipartKeyCopyJob(client, context, summary, notifyLock, new SizeOnlyComparisonStrategy());\n        }\n        return new KeyCopyJob(client, context, summary, notifyLock, comparisonStrategy);\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/DeleteMaster.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.model.S3ObjectSummary;\n\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.ThreadPoolExecutor;\n\npublic class DeleteMaster extends KeyMaster {\n\n    public DeleteMaster(AmazonS3Client client, MirrorContext context, BlockingQueue<Runnable> workQueue, ThreadPoolExecutor executorService) {\n        super(client, context, workQueue, executorService);\n    }\n\n    protected String getPrefix(MirrorOptions options) {\n        return options.hasDestPrefix() ? options.getDestPrefix() : options.getPrefix();\n    }\n\n    protected String getBucket(MirrorOptions options) { return options.getDestinationBucket(); }\n\n    @Override\n    protected KeyJob getTask(S3ObjectSummary summary) {\n        return new KeyDeleteJob(client, context, summary, notifyLock);\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/KeyCopyJob.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.model.*;\nimport lombok.extern.slf4j.Slf4j;\nimport org.cobbzilla.s3s3mirror.comparisonstrategies.ComparisonStrategy;\nimport org.slf4j.Logger;\n\nimport java.util.Date;\n\n/**\n * Handles a single key. Determines if it should be copied, and if so, performs the copy operation.\n */\n@Slf4j\npublic class KeyCopyJob extends KeyJob {\n\n    protected String keydest;\n    protected ComparisonStrategy comparisonStrategy;\n\n    public KeyCopyJob(AmazonS3Client client, MirrorContext context, S3ObjectSummary summary, Object notifyLock, ComparisonStrategy comparisonStrategy) {\n        super(client, context, summary, notifyLock);\n\n        keydest = summary.getKey();\n        final MirrorOptions options = context.getOptions();\n        if (options.hasDestPrefix()) {\n            keydest = keydest.substring(options.getPrefixLength());\n            keydest = options.getDestPrefix() + keydest;\n        }\n        this.comparisonStrategy = comparisonStrategy;\n    }\n\n    @Override public Logger getLog() { return log; }\n\n    @Override\n    public void run() {\n        final MirrorOptions options = context.getOptions();\n        final String key = summary.getKey();\n        try {\n            if (!shouldTransfer()) return;\n            final ObjectMetadata sourceMetadata = getObjectMetadata(options.getSourceBucket(), key, options);\n            final AccessControlList objectAcl = getAccessControlList(options, key);\n\n            if (options.isDryRun()) {\n                log.info(\"Would have copied \" + key + \" to destination: \" + keydest);\n            } else {\n                if (keyCopied(sourceMetadata, objectAcl)) {\n                    context.getStats().objectsCopied.incrementAndGet();\n                } else {\n                    context.getStats().copyErrors.incrementAndGet();\n                }\n            }\n        } 
catch (Exception e) {\n            log.error(\"error copying key: \" + key + \": \" + e);\n\n        } finally {\n            synchronized (notifyLock) {\n                notifyLock.notifyAll();\n            }\n            if (options.isVerbose()) log.info(\"done with \" + key);\n        }\n    }\n\n    boolean keyCopied(ObjectMetadata sourceMetadata, AccessControlList objectAcl) {\n        String key = summary.getKey();\n        MirrorOptions options = context.getOptions();\n        boolean verbose = options.isVerbose();\n        int maxRetries = options.getMaxRetries();\n        MirrorStats stats = context.getStats();\n        for (int tries = 0; tries < maxRetries; tries++) {\n            if (verbose) log.info(\"copying (try #\" + tries + \"): \" + key + \" to: \" + keydest);\n            final CopyObjectRequest request = new CopyObjectRequest(options.getSourceBucket(), key, options.getDestinationBucket(), keydest);\n\n            request.setStorageClass(StorageClass.valueOf(options.getStorageClass()));\n\n            if (options.isEncrypt()) {\n                request.putCustomRequestHeader(\"x-amz-server-side-encryption\", \"AES256\");\n            }\n\n            request.setNewObjectMetadata(sourceMetadata);\n            if (options.isCrossAccountCopy()) {\n                request.setAccessControlList(buildCrossAccountAcl(objectAcl));\n            } else {\n                request.setAccessControlList(objectAcl);\n            }\n            try {\n                stats.s3copyCount.incrementAndGet();\n                client.copyObject(request);\n                stats.bytesCopied.addAndGet(sourceMetadata.getContentLength());\n                if (verbose) log.info(\"successfully copied (on try #\" + tries + \"): \" + key + \" to: \" + keydest);\n                return true;\n            } catch (AmazonS3Exception s3e) {\n                log.error(\"s3 exception copying (try #\" + tries + \") \" + key + \" to: \" + keydest + \": \" + s3e);\n            } catch (Exception e) {\n                log.error(\"unexpected exception copying (try #\" + tries + \") \" + key + \" to: \" + keydest + \": \" + e);\n            }\n            try {\n                Thread.sleep(10);\n            } catch (InterruptedException e) {\n                log.error(\"interrupted while waiting to retry key: \" + key);\n                return false;\n            }\n        }\n        return false;\n    }\n\n    private boolean shouldTransfer() {\n        final MirrorOptions options = context.getOptions();\n        final String key = summary.getKey();\n        final boolean verbose = options.isVerbose();\n\n        if (options.hasCtime()) {\n            final Date lastModified = summary.getLastModified();\n            if (lastModified == null) {\n                if (verbose) log.info(\"No Last-Modified header for key: \" + key);\n\n            } else {\n                if (lastModified.getTime() < options.getMaxAge()) {\n                    if (verbose) log.info(\"key \"+key+\" (lastmod=\"+lastModified+\") is older than \"+options.getCtime()+\" (cutoff=\"+options.getMaxAgeDate()+\"), not copying\");\n                    return false;\n                }\n            }\n        }\n        final ObjectMetadata metadata;\n        try {\n            metadata = getObjectMetadata(options.getDestinationBucket(), keydest, options);\n        } catch (AmazonS3Exception e) {\n            if (e.getStatusCode() == 404) {\n                if (verbose) log.info(\"Key not found in destination bucket (will copy): \"+ keydest);\n                return true;\n            } else {\n                log.warn(\"Error getting metadata for \" + options.getDestinationBucket() + \"/\" + keydest + \" (not copying): \" + e);\n                return false;\n            }\n        } catch (Exception e) {\n            log.warn(\"Error getting metadata for \" + options.getDestinationBucket() + \"/\" + keydest + \" (not copying): \" + e);\n            return false;\n        }\n\n        if (summary.getSize() > MirrorOptions.MAX_SINGLE_REQUEST_UPLOAD_FILE_SIZE) {\n            return metadata.getContentLength() != summary.getSize();\n        }\n        final boolean objectChanged = comparisonStrategy.sourceDifferent(summary, metadata);\n        if (verbose && !objectChanged) log.info(\"Destination file is same as source, not copying: \"+ key);\n\n        return objectChanged;\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/KeyDeleteJob.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.model.*;\nimport lombok.extern.slf4j.Slf4j;\nimport org.slf4j.Logger;\n\n@Slf4j\npublic class KeyDeleteJob extends KeyJob {\n\n    private String keysrc;\n\n    public KeyDeleteJob (AmazonS3Client client, MirrorContext context, S3ObjectSummary summary, Object notifyLock) {\n        super(client, context, summary, notifyLock);\n\n        final MirrorOptions options = context.getOptions();\n        keysrc = summary.getKey(); // NOTE: summary.getKey is the key in the destination bucket\n        if (options.hasPrefix()) {\n            keysrc = keysrc.substring(options.getDestPrefixLength());\n            keysrc = options.getPrefix() + keysrc;\n        }\n    }\n\n    @Override public Logger getLog() { return log; }\n\n    @Override\n    public void run() {\n        final MirrorOptions options = context.getOptions();\n        final MirrorStats stats = context.getStats();\n        final boolean verbose = options.isVerbose();\n        final int maxRetries = options.getMaxRetries();\n        final String key = summary.getKey();\n        try {\n            if (!shouldDelete()) return;\n\n            final DeleteObjectRequest request = new DeleteObjectRequest(options.getDestinationBucket(), key);\n\n            if (options.isDryRun()) {\n                log.info(\"Would have deleted \"+key+\" from destination because \"+keysrc+\" does not exist in source\");\n            } else {\n                boolean deletedOK = false;\n                for (int tries=0; tries<maxRetries; tries++) {\n                    if (verbose) log.info(\"deleting (try #\"+tries+\"): \"+key);\n                    try {\n                        stats.s3deleteCount.incrementAndGet();\n                        client.deleteObject(request);\n                        deletedOK = true;\n                        if (verbose) log.info(\"successfully deleted (on try 
#\"+tries+\"): \"+key);\n                        break;\n\n                    } catch (AmazonS3Exception s3e) {\n                        log.error(\"s3 exception deleting (try #\"+tries+\") \"+key+\": \"+s3e);\n\n                    } catch (Exception e) {\n                        log.error(\"unexpected exception deleting (try #\"+tries+\") \"+key+\": \"+e);\n                    }\n                    try {\n                        Thread.sleep(10);\n                    } catch (InterruptedException e) {\n                        log.error(\"interrupted while waiting to retry key: \"+key);\n                        break;\n                    }\n                }\n                if (deletedOK) {\n                    context.getStats().objectsDeleted.incrementAndGet();\n                } else {\n                    context.getStats().deleteErrors.incrementAndGet();\n                }\n            }\n\n        } catch (Exception e) {\n            log.error(\"error deleting key: \"+key+\": \"+e);\n\n        } finally {\n            synchronized (notifyLock) {\n                notifyLock.notifyAll();\n            }\n            if (verbose) log.info(\"done with \"+key);\n        }\n    }\n\n    private boolean shouldDelete() {\n\n        final MirrorOptions options = context.getOptions();\n        final boolean verbose = options.isVerbose();\n\n        // Does it exist in the source bucket\n        try {\n            ObjectMetadata metadata = getObjectMetadata(options.getSourceBucket(), keysrc, options);\n            return false; // object exists in source bucket, don't delete it from destination bucket\n\n        } catch (AmazonS3Exception e) {\n            if (e.getStatusCode() == 404) {\n                if (verbose) log.info(\"Key not found in source bucket (will delete from destination): \"+ keysrc);\n                return true;\n            } else {\n                log.warn(\"Error getting metadata for \" + options.getSourceBucket() + \"/\" + keysrc + \" (not 
deleting): \" + e);\n                return false;\n            }\n        } catch (Exception e) {\n            log.warn(\"Error getting metadata for \" + options.getSourceBucket() + \"/\" + keysrc + \" (not deleting): \" + e);\n            return false;\n        }\n    }\n\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/KeyFingerprint.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport lombok.*;\n\n@EqualsAndHashCode(callSuper=false) @AllArgsConstructor\npublic class KeyFingerprint {\n\n    @Getter private final long size;\n    @Getter private final String etag;\n   \n    public KeyFingerprint(long size) {\n        this(size, null);\n    }\n\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/KeyJob.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.model.*;\nimport org.slf4j.Logger;\n\npublic abstract class KeyJob implements Runnable {\n\n    protected final AmazonS3Client client;\n    protected final MirrorContext context;\n    protected final S3ObjectSummary summary;\n    protected final Object notifyLock;\n\n    public KeyJob(AmazonS3Client client, MirrorContext context, S3ObjectSummary summary, Object notifyLock) {\n        this.client = client;\n        this.context = context;\n        this.summary = summary;\n        this.notifyLock = notifyLock;\n    }\n\n    public abstract Logger getLog();\n\n    @Override public String toString() { return summary.getKey(); }\n\n    protected ObjectMetadata getObjectMetadata(String bucket, String key, MirrorOptions options) throws Exception {\n        Exception ex = null;\n        for (int tries=0; tries<options.getMaxRetries(); tries++) {\n            try {\n                context.getStats().s3getCount.incrementAndGet();\n                return client.getObjectMetadata(bucket, key);\n\n            } catch (AmazonS3Exception e) {\n                if (e.getStatusCode() == 404) throw e;\n\n            } catch (Exception e) {\n                ex = e;\n                if (options.isVerbose()) {\n                    if (tries >= options.getMaxRetries()) {\n                        getLog().error(\"getObjectMetadata(\" + key + \") failed (try #\" + tries + \"), giving up\");\n                        break;\n                    } else {\n                        getLog().warn(\"getObjectMetadata(\"+key+\") failed (try #\"+tries+\"), retrying...\");\n                    }\n                }\n            }\n        }\n        throw ex;\n    }\n\n    protected AccessControlList getAccessControlList(MirrorOptions options, String key) throws Exception {\n        Exception ex = null;\n\n        for (int tries=0; tries<=options.getMaxRetries(); 
tries++) {\n            try {\n                context.getStats().s3getCount.incrementAndGet();\n                return client.getObjectAcl(options.getSourceBucket(), key);\n\n            } catch (Exception e) {\n                ex = e;\n\n                if (tries >= options.getMaxRetries()) {\n                    // Annoyingly there can be two reasons for this to fail. It will fail if the IAM account\n                    // permissions are wrong, but it will also fail if we are copying an item that we don't\n                    // own ourselves. This may seem unusual, but it occurs when copying AWS Detailed Billing\n                    // objects since although they live in your bucket, the object owner is AWS.\n                    getLog().warn(\"Unable to obtain object ACL, copying item without ACL data.\");\n                    return new AccessControlList();\n                }\n\n                if (options.isVerbose()) {\n                    getLog().warn(\"getObjectAcl(\"+key+\") failed (try #\"+tries+\"), retrying...\");\n                }\n            }\n        }\n        throw ex;\n    }\n\n    AccessControlList buildCrossAccountAcl(AccessControlList original) {\n        AccessControlList result = new AccessControlList();\n        for (Grant grant : original.getGrantsAsList()) {\n            // Covers all 3 group types: Everyone, Authenticated Users, Log Delivery\n            if (grant.getGrantee() instanceof GroupGrantee) {\n                result.grantPermission(grant.getGrantee(), grant.getPermission());\n            }\n        }\n\n        // Equal to the canned way: request.setCannedAccessControlList(CannedAccessControlList.BucketOwnerFullControl);\n        result.grantPermission(new CanonicalGrantee(context.getOwner().getId()), Permission.FullControl);\n        result.setOwner(context.getOwner());\n\n        return result;\n    }\n\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/KeyLister.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.model.AmazonS3Exception;\nimport com.amazonaws.services.s3.model.ListObjectsRequest;\nimport com.amazonaws.services.s3.model.ObjectListing;\nimport com.amazonaws.services.s3.model.S3ObjectSummary;\nimport lombok.extern.slf4j.Slf4j;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.concurrent.atomic.AtomicBoolean;\n\n@Slf4j\npublic class KeyLister implements Runnable {\n\n    private AmazonS3Client client;\n    private MirrorContext context;\n    private int maxQueueCapacity;\n\n    private final List<S3ObjectSummary> summaries;\n    private final AtomicBoolean done = new AtomicBoolean(false);\n    private ObjectListing listing;\n\n    public boolean isDone () { return done.get(); }\n\n    public KeyLister(AmazonS3Client client, MirrorContext context, int maxQueueCapacity, String bucket, String prefix) {\n        this.client = client;\n        this.context = context;\n        this.maxQueueCapacity = maxQueueCapacity;\n\n        final MirrorOptions options = context.getOptions();\n        int fetchSize = options.getMaxThreads();\n        this.summaries = new ArrayList<S3ObjectSummary>(10*fetchSize);\n\n        final ListObjectsRequest request = new ListObjectsRequest(bucket, prefix, null, null, fetchSize);\n        listing = s3getFirstBatch(client, request);\n        synchronized (summaries) {\n            final List<S3ObjectSummary> objectSummaries = listing.getObjectSummaries();\n            summaries.addAll(objectSummaries);\n            context.getStats().objectsRead.addAndGet(objectSummaries.size());\n            if (options.isVerbose()) log.info(\"added initial set of \"+objectSummaries.size()+\" keys\");\n        }\n    }\n\n    @Override\n    public void run() {\n        final MirrorOptions options = context.getOptions();\n        final boolean verbose = options.isVerbose();\n        int counter = 0;\n  
      log.info(\"starting...\");\n        try {\n            while (true) {\n                while (getSize() < maxQueueCapacity) {\n                    if (listing.isTruncated()) {\n                        listing = s3getNextBatch();\n                        if (++counter % 100 == 0) context.getStats().logStats();\n                        synchronized (summaries) {\n                            final List<S3ObjectSummary> objectSummaries = listing.getObjectSummaries();\n                            summaries.addAll(objectSummaries);\n                            context.getStats().objectsRead.addAndGet(objectSummaries.size());\n                            if (verbose) log.info(\"queued next set of \"+objectSummaries.size()+\" keys (total now=\"+getSize()+\")\");\n                        }\n\n                    } else {\n                        log.info(\"No more keys found in source bucket, exiting\");\n                        return;\n                    }\n                }\n                try {\n                    Thread.sleep(50);\n                } catch (InterruptedException e) {\n                    log.error(\"interrupted!\");\n                    return;\n                }\n            }\n        } catch (Exception e) {\n            log.error(\"Error in run loop, KeyLister thread now exiting: \"+e);\n\n        } finally {\n            if (verbose) log.info(\"KeyLister run loop finished\");\n            done.set(true);\n        }\n    }\n\n    private ObjectListing s3getFirstBatch(AmazonS3Client client, ListObjectsRequest request) {\n\n        final MirrorOptions options = context.getOptions();\n        final boolean verbose = options.isVerbose();\n        final int maxRetries = options.getMaxRetries();\n\n        Exception lastException = null;\n        for (int tries=0; tries<maxRetries; tries++) {\n            try {\n                context.getStats().s3getCount.incrementAndGet();\n                ObjectListing listing = client.listObjects(request);\n   
             if (verbose) log.info(\"successfully got first batch of objects (on try #\"+tries+\")\");\n                return listing;\n\n            } catch (Exception e) {\n                lastException = e;\n                log.warn(\"s3getFirstBatch: error listing (try #\"+tries+\"): \"+e);\n                if (Sleep.sleep(50)) {\n                    log.info(\"s3getFirstBatch: interrupted while waiting for next try\");\n                    break;\n                }\n            }\n        }\n        throw new IllegalStateException(\"s3getFirstBatch: error listing: \"+lastException, lastException);\n    }\n\n    private ObjectListing s3getNextBatch() {\n        final MirrorOptions options = context.getOptions();\n        final boolean verbose = options.isVerbose();\n        final int maxRetries = options.getMaxRetries();\n\n        for (int tries=0; tries<maxRetries; tries++) {\n            try {\n                context.getStats().s3getCount.incrementAndGet();\n                ObjectListing next = client.listNextBatchOfObjects(listing);\n                if (verbose) log.info(\"successfully got next batch of objects (on try #\"+tries+\")\");\n                return next;\n\n            } catch (AmazonS3Exception s3e) {\n                log.error(\"s3 exception listing objects (try #\"+tries+\"): \"+s3e);\n\n            } catch (Exception e) {\n                log.error(\"unexpected exception listing objects (try #\"+tries+\"): \"+e);\n            }\n            if (Sleep.sleep(50)) {\n                log.info(\"s3getNextBatch: interrupted while waiting for next try\");\n                break;\n            }\n        }\n        throw new IllegalStateException(\"Too many errors trying to list objects (maxRetries=\"+maxRetries+\")\");\n    }\n\n    private int getSize() {\n        synchronized (summaries) {\n            return summaries.size();\n        }\n    }\n\n    public List<S3ObjectSummary> getNextBatch() {\n        List<S3ObjectSummary> copy;\n        
synchronized (summaries) {\n            copy = new ArrayList<S3ObjectSummary>(summaries);\n            summaries.clear();\n        }\n        return copy;\n    }\n}"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/KeyMaster.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.model.S3ObjectSummary;\nimport lombok.extern.slf4j.Slf4j;\n\nimport java.util.List;\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.ThreadPoolExecutor;\nimport java.util.concurrent.TimeUnit;\nimport java.util.concurrent.atomic.AtomicBoolean;\n\n@Slf4j\npublic abstract class KeyMaster implements Runnable {\n\n    public static final int STOP_TIMEOUT_SECONDS = 10;\n    private static final long STOP_TIMEOUT = TimeUnit.SECONDS.toMillis(STOP_TIMEOUT_SECONDS);\n\n    protected AmazonS3Client client;\n    protected MirrorContext context;\n\n    private AtomicBoolean done = new AtomicBoolean(false);\n    public boolean isDone () { return done.get(); }\n\n    private BlockingQueue<Runnable> workQueue;\n    private ThreadPoolExecutor executorService;\n    protected final Object notifyLock = new Object();\n\n    private Thread thread;\n\n    public KeyMaster(AmazonS3Client client, MirrorContext context, BlockingQueue<Runnable> workQueue, ThreadPoolExecutor executorService) {\n        this.client = client;\n        this.context = context;\n        this.workQueue = workQueue;\n        this.executorService = executorService;\n    }\n\n    protected abstract String getPrefix(MirrorOptions options);\n    protected abstract String getBucket(MirrorOptions options);\n\n    protected abstract KeyJob getTask(S3ObjectSummary summary);\n\n    public void start () {\n        this.thread = new Thread(this);\n        this.thread.start();\n    }\n\n    public void stop () {\n        final String name = getClass().getSimpleName();\n        final long start = System.currentTimeMillis();\n        log.info(\"stopping \"+ name +\"...\");\n        try {\n            if (isDone()) return;\n            this.thread.interrupt();\n            while (!isDone() && System.currentTimeMillis() - start < STOP_TIMEOUT) {\n                if 
(Sleep.sleep(50)) return;\n            }\n        } finally {\n            if (!isDone()) {\n                try {\n                    log.warn(name+\" didn't stop within \"+STOP_TIMEOUT_SECONDS+\" seconds after interrupting it, forcibly killing the thread...\");\n                    this.thread.stop();\n                } catch (Exception e) {\n                    log.error(\"Error calling Thread.stop on \" + name + \": \" + e, e);\n                }\n            }\n            if (isDone()) log.info(name+\" stopped\");\n        }\n    }\n\n    public void run() {\n\n        final MirrorOptions options = context.getOptions();\n        final boolean verbose = options.isVerbose();\n\n        final int maxQueueCapacity = MirrorMaster.getMaxQueueCapacity(options);\n\n        int counter = 0;\n        try {\n            final KeyLister lister = new KeyLister(client, context, maxQueueCapacity, getBucket(options), getPrefix(options));\n            executorService.submit(lister);\n\n            List<S3ObjectSummary> summaries = lister.getNextBatch();\n            if (verbose) log.info(summaries.size()+\" keys found in first batch from source bucket -- processing...\");\n\n            while (true) {\n                for (S3ObjectSummary summary : summaries) {\n                    while (workQueue.size() >= maxQueueCapacity) {\n                        try {\n                            synchronized (notifyLock) {\n                                notifyLock.wait(50);\n                            }\n                            Thread.sleep(50);\n\n                        } catch (InterruptedException e) {\n                            log.error(\"interrupted!\");\n                            return;\n                        }\n                    }\n                    executorService.submit(getTask(summary));\n                    counter++;\n                }\n\n                summaries = lister.getNextBatch();\n                if (summaries.size() > 0) {\n                    if 
(verbose) log.info(summaries.size()+\" more keys found in source bucket -- continuing (queue size=\"+workQueue.size()+\", total processed=\"+counter+\")...\");\n\n                } else if (lister.isDone()) {\n                    if (verbose) log.info(\"No more keys found in source bucket -- ALL DONE\");\n                    return;\n\n                } else {\n                    if (verbose) log.info(\"Lister has no keys queued, but is not done, waiting and retrying\");\n                    if (Sleep.sleep(50)) return;\n                }\n            }\n\n        } catch (Exception e) {\n            log.error(\"Unexpected exception in MirrorMaster: \"+e, e);\n\n        } finally {\n            while (workQueue.size() > 0 || executorService.getActiveCount() > 0) {\n                // wait for the queue to be empty\n                if (Sleep.sleep(100)) break;\n            }\n            // this will wait for currently executing tasks to finish\n            executorService.shutdown();\n            done.set(true);\n        }\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/MirrorConstants.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\npublic class MirrorConstants {\n\n    public static final long KB = 1024L;\n    public static final long MB = KB * 1024L;\n    public static final long GB = MB * 1024L;\n    public static final long TB = GB * 1024L;\n    public static final long PB = TB * 1024L;\n    public static final long EB = PB * 1024L;\n\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/MirrorContext.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.model.Owner;\nimport lombok.AllArgsConstructor;\nimport lombok.Getter;\nimport lombok.Setter;\n\n@AllArgsConstructor\npublic class MirrorContext {\n\n    @Getter @Setter private MirrorOptions options;\n    @Getter @Setter private Owner owner;\n    @Getter private final MirrorStats stats = new MirrorStats();\n\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/MirrorMain.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.ClientConfiguration;\nimport com.amazonaws.Protocol;\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.auth.InstanceProfileCredentialsProvider;\nimport com.amazonaws.auth.BasicSessionCredentials;\nimport com.amazonaws.services.s3.model.AccessControlList;\nimport com.amazonaws.services.s3.model.Owner;\nimport lombok.Cleanup;\nimport lombok.Getter;\nimport lombok.Setter;\nimport lombok.extern.slf4j.Slf4j;\nimport org.kohsuke.args4j.CmdLineParser;\n\nimport java.io.BufferedReader;\nimport java.io.File;\nimport java.io.FileReader;\n\n/**\n * Provides the \"main\" method. Responsible for parsing options and setting up the MirrorMaster to manage the copy.\n */\n@Slf4j\npublic class MirrorMain {\n\n    @Getter @Setter private String[] args;\n\n    @Getter private final MirrorOptions options = new MirrorOptions();\n\n    private final CmdLineParser parser = new CmdLineParser(options);\n\n    private final Thread.UncaughtExceptionHandler uncaughtExceptionHandler = new Thread.UncaughtExceptionHandler() {\n        @Override public void uncaughtException(Thread t, Throwable e) {\n            log.error(\"Uncaught Exception (thread \"+t.getName()+\"): \"+e, e);\n        }\n    };\n\n    @Getter private AmazonS3Client client;\n    @Getter private MirrorContext context;\n    @Getter private MirrorMaster master;\n\n    public MirrorMain(String[] args) { this.args = args; }\n\n    public static void main (String[] args) {\n        MirrorMain main = new MirrorMain(args);\n        main.run();\n    }\n\n    public void run() {\n        init();\n        master.mirror();\n    }\n\n    public void init() {\n        if (client == null) {\n            try {\n                parseArguments();\n            } catch (Exception e) {\n                System.err.println(e.getMessage());\n                parser.printUsage(System.err);\n                System.exit(1);\n            }\n\n            client 
= getAmazonS3Client();\n            context = new MirrorContext(options, getTargetBucketOwner(client));\n            master = new MirrorMaster(client, context);\n\n            Runtime.getRuntime().addShutdownHook(context.getStats().getShutdownHook());\n            Thread.setDefaultUncaughtExceptionHandler(uncaughtExceptionHandler);\n        }\n    }\n\n    protected AmazonS3Client getAmazonS3Client() {\n        ClientConfiguration clientConfiguration = new ClientConfiguration().withProtocol((options.isSsl() ? Protocol.HTTPS : Protocol.HTTP))\n                .withMaxConnections(options.getMaxConnections());\n        if (options.getHasProxy()) {\n            clientConfiguration = clientConfiguration\n                    .withProxyHost(options.getProxyHost())\n                    .withProxyPort(options.getProxyPort());\n        }\n        AmazonS3Client client = null;\n        if (System.getenv(\"AWS_SECURITY_TOKEN\") != null) {\n            BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(System.getenv(\"AWS_ACCESS_KEY_ID\"), System.getenv(\"AWS_SECRET_ACCESS_KEY\"), System.getenv(\"AWS_SECURITY_TOKEN\"));\n            client = new AmazonS3Client(basicSessionCredentials, clientConfiguration);\n        } else if (options.hasAwsKeys()) {\n            client = new AmazonS3Client(options, clientConfiguration);\n        } else if (options.isUseIamRole()) {\n            client = new AmazonS3Client(new InstanceProfileCredentialsProvider(), clientConfiguration);\n        } else {\n            throw new IllegalStateException(\"No authentication method available, please specify IAM role usage or AWS key and secret\");\n        }\n        if (options.hasEndpoint()) client.setEndpoint(options.getEndpoint());\n        return client;\n    }\n\n    protected void parseArguments() throws Exception {\n        parser.parseArgument(args);\n\n        // for credentials, check for IAM role usage; if not, try the .aws/config file first when a\n        // profile is specified, otherwise defer to .s3cfg before using the default .aws/config credentials\n        // (this may attempt .aws/config twice for no reason, but maintains backward compatibility)\n        if (!options.isUseIamRole()) {\n            if (!options.hasAwsKeys() && options.getProfile() != null) loadAwsKeysFromAwsConfig();\n            if (!options.hasAwsKeys()) loadAwsKeysFromS3Config();\n            if (!options.hasAwsKeys()) loadAwsKeysFromAwsConfig();\n            if (!options.hasAwsKeys()) loadAwsKeysFromAwsCredentials();\n            if (!options.hasAwsKeys()) {\n                throw new IllegalStateException(\"Could not find credentials, IAM Role usage not specified and ENV vars not defined: \" + MirrorOptions.AWS_ACCESS_KEY + \" and/or \" + MirrorOptions.AWS_SECRET_KEY);\n            }\n        } else {\n            InstanceProfileCredentialsProvider credentialsProvider = new InstanceProfileCredentialsProvider();\n            if (credentialsProvider.getCredentials() == null) {\n                throw new IllegalStateException(\"Could not find IAM Instance Profile credentials from the AWS metadata service.\");\n            }\n        }\n        options.initDerivedFields();\n    }\n\n    private void loadAwsKeysFromS3Config() {\n        try {\n            // try to load from ~/.s3cfg\n            @Cleanup BufferedReader reader = new BufferedReader(new FileReader(System.getProperty(\"user.home\")+File.separator+\".s3cfg\"));\n            String line;\n            while ((line = reader.readLine()) != null) {\n                if (line.trim().startsWith(\"access_key\")) {\n                    options.setAWSAccessKeyId(line.substring(line.indexOf(\"=\") + 1).trim());\n                } else if (line.trim().startsWith(\"secret_key\")) {\n                    options.setAWSSecretKey(line.substring(line.indexOf(\"=\") + 1).trim());\n                } else if (!options.getHasProxy() && line.trim().startsWith(\"proxy_host\")) {\n                    options.setProxyHost(line.substring(line.indexOf(\"=\") + 1).trim());\n                } else if (!options.getHasProxy() && line.trim().startsWith(\"proxy_port\")) {\n                    options.setProxyPort(Integer.parseInt(line.substring(line.indexOf(\"=\") + 1).trim()));\n                }\n            }\n        } catch (Exception e) {\n            // ignore - let other credential-discovery processes have a crack\n        }\n    }\n\n    private void loadAwsKeysFromAwsConfig() {\n        try {\n            // try to load from ~/.aws/config\n            @Cleanup BufferedReader reader = new BufferedReader(new FileReader(\n                    System.getProperty(\"user.home\") + File.separator + \".aws\" + File.separator + \"config\"));\n            String line;\n            boolean skipSection = true;\n            while ((line = reader.readLine()) != null) {\n                line = line.trim();\n                if (line.startsWith(\"[\")) {\n                    // if no profile is defined, use '[default]'; otherwise use the profile with matching name\n                    if ((options.getProfile() == null && line.equals(\"[default]\"))\n                            || (options.getProfile() != null && line.equals(\"[profile \" + options.getProfile() + \"]\"))) {\n                        skipSection = false;\n                    } else {\n                        skipSection = true;\n                    }\n                    continue;\n                }\n                if (skipSection) continue;\n                if (line.startsWith(\"aws_access_key_id\")) {\n                    options.setAWSAccessKeyId(line.substring(line.indexOf(\"=\") + 1).trim());\n                } else if (line.startsWith(\"aws_secret_access_key\")) {\n                    options.setAWSSecretKey(line.substring(line.indexOf(\"=\") + 1).trim());\n                }\n            }\n        } catch (Exception e) {\n            // ignore - let other credential-discovery processes have a crack\n        }\n    }\n\n    private void loadAwsKeysFromAwsCredentials() {\n        try {\n            // try to load from ~/.aws/credentials\n            @Cleanup BufferedReader reader = new BufferedReader(new FileReader(\n                    System.getProperty(\"user.home\") + File.separator + \".aws\" + File.separator + \"credentials\"));\n            String line;\n            boolean skipSection = true;\n            while ((line = reader.readLine()) != null) {\n                line = line.trim();\n                if (line.startsWith(\"[\")) {\n                    // if no profile is defined, use '[default]'; otherwise use the profile with matching name\n                    if ((options.getProfile() == null && line.equals(\"[default]\"))\n                            || (options.getProfile() != null && line.equals(\"[\" + options.getProfile() + \"]\"))) {\n                        skipSection = false;\n                    } else {\n                        skipSection = true;\n                    }\n                    continue;\n                }\n                if (skipSection) continue;\n                if (line.startsWith(\"aws_access_key_id\")) {\n                    options.setAWSAccessKeyId(line.substring(line.indexOf(\"=\") + 1).trim());\n                } else if (line.startsWith(\"aws_secret_access_key\")) {\n                    options.setAWSSecretKey(line.substring(line.indexOf(\"=\") + 1).trim());\n                }\n            }\n        } catch (Exception e) {\n            // ignore - let other credential-discovery processes have a crack\n        }\n    }\n\n    private Owner getTargetBucketOwner(AmazonS3Client client) {\n        AccessControlList targetBucketAcl = client.getBucketAcl(options.getDestinationBucket());\n        return targetBucketAcl.getOwner();\n    }\n\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/MirrorMaster.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport lombok.extern.slf4j.Slf4j;\n\nimport java.util.concurrent.*;\n\n/**\n * Manages the mirror operation: starts a KeyLister and sends batches of keys to the ExecutorService for handling by KeyJobs.\n */\n@Slf4j\npublic class MirrorMaster {\n\n    public static final String VERSION = System.getProperty(\"s3s3mirror.version\");\n\n    private final AmazonS3Client client;\n    private final MirrorContext context;\n\n    public MirrorMaster(AmazonS3Client client, MirrorContext context) {\n        this.client = client;\n        this.context = context;\n    }\n\n    public void mirror() {\n\n        log.info(\"version \"+VERSION+\" starting\");\n\n        final MirrorOptions options = context.getOptions();\n\n        if (options.isVerbose() && options.hasCtime()) log.info(\"will not copy anything older than \"+options.getCtime()+\" (cutoff=\"+options.getMaxAgeDate()+\")\");\n\n        final int maxQueueCapacity = getMaxQueueCapacity(options);\n        final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>(maxQueueCapacity);\n        final RejectedExecutionHandler rejectedExecutionHandler = new RejectedExecutionHandler() {\n            @Override\n            public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {\n                log.error(\"Error submitting job: \"+r+\", possible queue overflow\");\n            }\n        };\n\n        final ThreadPoolExecutor executorService = new ThreadPoolExecutor(options.getMaxThreads(), options.getMaxThreads(), 1, TimeUnit.MINUTES, workQueue, rejectedExecutionHandler);\n\n        final KeyMaster copyMaster = new CopyMaster(client, context, workQueue, executorService);\n        KeyMaster deleteMaster = null;\n\n        try {\n            copyMaster.start();\n\n            if (context.getOptions().isDeleteRemoved()) {\n                deleteMaster = new DeleteMaster(client, context, workQueue, 
executorService);\n                deleteMaster.start();\n            }\n\n            while (true) {\n                if (copyMaster.isDone() && (deleteMaster == null || deleteMaster.isDone())) {\n                    log.info(\"mirror: completed\");\n                    break;\n                }\n                if (Sleep.sleep(100)) return;\n            }\n\n        } catch (Exception e) {\n            log.error(\"Unexpected exception in mirror: \"+e, e);\n\n        } finally {\n            try { copyMaster.stop();   } catch (Exception e) { log.error(\"Error stopping copyMaster: \"+e, e); }\n            if (deleteMaster != null) {\n                try { deleteMaster.stop(); } catch (Exception e) { log.error(\"Error stopping deleteMaster: \"+e, e); }\n            }\n        }\n    }\n\n    public static int getMaxQueueCapacity(MirrorOptions options) {\n        return 10 * options.getMaxThreads();\n    }\n\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/MirrorOptions.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.auth.AWSCredentials;\n\nimport lombok.Getter;\nimport lombok.Setter;\nimport org.joda.time.DateTime;\nimport org.kohsuke.args4j.Argument;\nimport org.kohsuke.args4j.Option;\n\nimport java.util.Date;\n\nimport static org.cobbzilla.s3s3mirror.MirrorConstants.*;\n\npublic class MirrorOptions implements AWSCredentials {\n\n    public static final String S3_PROTOCOL_PREFIX = \"s3://\";\n\n    public static final String AWS_ACCESS_KEY = \"AWS_ACCESS_KEY_ID\";\n    public static final String AWS_SECRET_KEY = \"AWS_SECRET_ACCESS_KEY\";\n    @Getter @Setter private String aWSAccessKeyId = System.getenv().get(AWS_ACCESS_KEY);\n    @Getter @Setter private String aWSSecretKey = System.getenv().get(AWS_SECRET_KEY);\n\n    public boolean hasAwsKeys() { return aWSAccessKeyId != null && aWSSecretKey != null; }\n\n    public static final String USAGE_USE_IAM_ROLE = \"Use IAM role from EC2 instance, can only be used in AWS\";\n    public static final String OPT_USE_IAM_ROLE = \"-i\";\n    public static final String LONGOPT_USE_IAM_ROLE = \"--iam\";\n    @Option(name=OPT_USE_IAM_ROLE, aliases=LONGOPT_USE_IAM_ROLE, usage=USAGE_USE_IAM_ROLE)\n    @Getter @Setter private boolean useIamRole = false;\n\n    public static final String USAGE_PROFILE= \"Use a specific profile from your credential file (~/.aws/config)\";\n    public static final String OPT_PROFILE= \"-P\";\n    public static final String LONGOPT_PROFILE = \"--profile\";\n    @Option(name=OPT_PROFILE, aliases=LONGOPT_PROFILE, usage=USAGE_PROFILE)\n    @Getter @Setter private String profile = null;\n\n    public static final String USAGE_DRY_RUN = \"Do not actually do anything, but show what would be done\";\n    public static final String OPT_DRY_RUN = \"-n\";\n    public static final String LONGOPT_DRY_RUN = \"--dry-run\";\n    @Option(name=OPT_DRY_RUN, aliases=LONGOPT_DRY_RUN, usage=USAGE_DRY_RUN)\n    @Getter @Setter private boolean dryRun = 
false;\n\n    public static final String USAGE_VERBOSE = \"Verbose output\";\n    public static final String OPT_VERBOSE = \"-v\";\n    public static final String LONGOPT_VERBOSE = \"--verbose\";\n    @Option(name=OPT_VERBOSE, aliases=LONGOPT_VERBOSE, usage=USAGE_VERBOSE)\n    @Getter @Setter private boolean verbose = false;\n    \n    public static final String USAGE_SSL = \"Use SSL for all S3 api operations\";\n    public static final String OPT_SSL = \"-s\";\n    public static final String LONGOPT_SSL = \"--ssl\";\n    @Option(name=OPT_SSL, aliases=LONGOPT_SSL, usage=USAGE_SSL)\n    @Getter @Setter private boolean ssl = false;\n    \n    public static final String USAGE_ENCRYPT = \"Enable AWS managed server-side encryption\";\n    public static final String OPT_ENCRYPT = \"-E\";\n    public static final String LONGOPT_ENCRYPT = \"--server-side-encryption\";\n    @Option(name=OPT_ENCRYPT, aliases=LONGOPT_ENCRYPT, usage=USAGE_ENCRYPT)\n    @Getter @Setter private boolean encrypt = false;\n    \n    public static final String USAGE_STORAGE_CLASS = \"Specify the S3 StorageClass (Standard | ReducedRedundancy)\";\n    public static final String OPT_STORAGE_CLASS = \"-l\";\n    public static final String LONGOPT_STORAGE_CLASS = \"--storage-class\";\n    @Option(name=OPT_STORAGE_CLASS, aliases=LONGOPT_STORAGE_CLASS, usage=USAGE_STORAGE_CLASS)\n    @Getter @Setter private String storageClass = \"Standard\"; \n\n    public static final String USAGE_PREFIX = \"Only copy objects whose keys start with this prefix\";\n    public static final String OPT_PREFIX = \"-p\";\n    public static final String LONGOPT_PREFIX = \"--prefix\";\n    @Option(name=OPT_PREFIX, aliases=LONGOPT_PREFIX, usage=USAGE_PREFIX)\n    @Getter @Setter private String prefix = null;\n\n    public boolean hasPrefix () { return prefix != null && prefix.length() > 0; }\n    public int getPrefixLength () { return prefix == null ? 
0 : prefix.length(); }\n\n    public static final String USAGE_DEST_PREFIX = \"Destination prefix (replacing the one specified in --prefix, if any)\";\n    public static final String OPT_DEST_PREFIX= \"-d\";\n    public static final String LONGOPT_DEST_PREFIX = \"--dest-prefix\";\n    @Option(name=OPT_DEST_PREFIX, aliases=LONGOPT_DEST_PREFIX, usage=USAGE_DEST_PREFIX)\n    @Getter @Setter private String destPrefix = null;\n\n    public boolean hasDestPrefix() { return destPrefix != null && destPrefix.length() > 0; }\n    public int getDestPrefixLength () { return destPrefix == null ? 0 : destPrefix.length(); }\n\n    public static final String AWS_ENDPOINT = \"AWS_ENDPOINT\";\n\n    public static final String USAGE_ENDPOINT = \"AWS endpoint to use (or set \"+AWS_ENDPOINT+\" in your environment)\";\n    public static final String OPT_ENDPOINT = \"-e\";\n    public static final String LONGOPT_ENDPOINT = \"--endpoint\";\n    @Option(name=OPT_ENDPOINT, aliases=LONGOPT_ENDPOINT, usage=USAGE_ENDPOINT)\n    @Getter @Setter private String endpoint = System.getenv().get(AWS_ENDPOINT);\n\n    public boolean hasEndpoint () { return endpoint != null && endpoint.trim().length() > 0; }\n\n    public static final String USAGE_MAX_CONNECTIONS = \"Maximum number of connections to S3 (default 100)\";\n    public static final String OPT_MAX_CONNECTIONS = \"-m\";\n    public static final String LONGOPT_MAX_CONNECTIONS = \"--max-connections\";\n    @Option(name=OPT_MAX_CONNECTIONS, aliases=LONGOPT_MAX_CONNECTIONS, usage=USAGE_MAX_CONNECTIONS)\n    @Getter @Setter private int maxConnections = 100;\n\n    public static final String USAGE_MAX_THREADS = \"Maximum number of threads (default 100)\";\n    public static final String OPT_MAX_THREADS = \"-t\";\n    public static final String LONGOPT_MAX_THREADS = \"--max-threads\";\n    @Option(name=OPT_MAX_THREADS, aliases=LONGOPT_MAX_THREADS, usage=USAGE_MAX_THREADS)\n    @Getter @Setter private int maxThreads = 100;\n\n    public static final 
String USAGE_MAX_RETRIES = \"Maximum number of retries for S3 requests (default 5)\";\n    public static final String OPT_MAX_RETRIES = \"-r\";\n    public static final String LONGOPT_MAX_RETRIES = \"--max-retries\";\n    @Option(name=OPT_MAX_RETRIES, aliases=LONGOPT_MAX_RETRIES, usage=USAGE_MAX_RETRIES)\n    @Getter @Setter private int maxRetries = 5;\n\n    public static final String USAGE_SIZE_ONLY = \"Only use object size when checking for equality and ignore etags\";\n    public static final String OPT_SIZE_ONLY = \"-S\";\n    public static final String LONGOPT_SIZE_ONLY = \"--size-only\";\n    @Option(name=OPT_SIZE_ONLY, aliases=LONGOPT_SIZE_ONLY, usage=USAGE_SIZE_ONLY)\n    @Getter @Setter private boolean sizeOnly = false;\n\n    public static final String USAGE_SIZE_LAST_MODIFIED = \"Use size and last-modified time to determine whether files have changed (as the AWS CLI does), ignoring etags. If --size-only is also specified, the size-only strategy is used instead.\";\n    public static final String OPT_SIZE_LAST_MODIFIED = \"-L\";\n    public static final String LONGOPT_SIZE_LAST_MODIFIED = \"--size-and-last-modified\";\n    @Option(name=OPT_SIZE_LAST_MODIFIED, aliases=LONGOPT_SIZE_LAST_MODIFIED, usage=USAGE_SIZE_LAST_MODIFIED)\n    @Getter @Setter private boolean sizeAndLastModified = false;\n\n    public static final String USAGE_CTIME = \"Only copy objects whose Last-Modified date is younger than this many days. \" +\n            \"For other time units, use these suffixes: y (years), M (months), d (days), w (weeks), h (hours), m (minutes), s (seconds)\";\n    public static final String OPT_CTIME = \"-c\";\n    public static final String LONGOPT_CTIME = \"--ctime\";\n    @Option(name=OPT_CTIME, aliases=LONGOPT_CTIME, usage=USAGE_CTIME)\n    @Getter @Setter private String ctime = null;\n    public boolean hasCtime() { return ctime != null; }\n\n    private static final String PROXY_USAGE = \"host:port of proxy server to use. 
\" +\n            \"Defaults to proxy_host and proxy_port defined in ~/.s3cfg, or no proxy if these values are not found in ~/.s3cfg\";\n    public static final String OPT_PROXY = \"-z\";\n    public static final String LONGOPT_PROXY = \"--proxy\";\n\n    @Option(name=OPT_PROXY, aliases=LONGOPT_PROXY, usage=PROXY_USAGE)\n    public void setProxy(String proxy) {\n        final String[] splits = proxy.split(\":\");\n        if (splits.length != 2) {\n            throw new IllegalArgumentException(\"Invalid proxy setting (\"+proxy+\"), please use host:port\");\n        }\n\n        proxyHost = splits[0];\n        if (proxyHost.trim().length() == 0) {\n            throw new IllegalArgumentException(\"Invalid proxy setting (\"+proxy+\"), please use host:port\");\n        }\n        try {\n            proxyPort = Integer.parseInt(splits[1]);\n        } catch (Exception e) {\n            throw new IllegalArgumentException(\"Invalid proxy setting (\"+proxy+\"), port could not be parsed as a number\");\n        }\n    }\n    @Getter @Setter public String proxyHost = null;\n    @Getter @Setter public int proxyPort = -1;\n\n    public boolean getHasProxy() {\n        boolean hasProxyHost = proxyHost != null && proxyHost.trim().length() > 0;\n        boolean hasProxyPort = proxyPort != -1;\n\n        return hasProxyHost && hasProxyPort;\n    }\n\n    private long initMaxAge() {\n\n        DateTime dateTime = new DateTime(nowTime);\n\n        // all digits -- assume \"days\"\n        if (ctime.matches(\"^[0-9]+$\")) return dateTime.minusDays(Integer.parseInt(ctime)).getMillis();\n\n        // ensure there is at least one digit, and exactly one character suffix, and the suffix is a legal option\n        if (!ctime.matches(\"^[0-9]+[yMwdhms]$\")) throw new IllegalArgumentException(\"Invalid option for ctime: \"+ctime);\n\n        if (ctime.endsWith(\"y\")) return dateTime.minusYears(getCtimeNumber(ctime)).getMillis();\n        if (ctime.endsWith(\"M\")) return 
dateTime.minusMonths(getCtimeNumber(ctime)).getMillis();\n        if (ctime.endsWith(\"w\")) return dateTime.minusWeeks(getCtimeNumber(ctime)).getMillis();\n        if (ctime.endsWith(\"d\")) return dateTime.minusDays(getCtimeNumber(ctime)).getMillis();\n        if (ctime.endsWith(\"h\")) return dateTime.minusHours(getCtimeNumber(ctime)).getMillis();\n        if (ctime.endsWith(\"m\")) return dateTime.minusMinutes(getCtimeNumber(ctime)).getMillis();\n        if (ctime.endsWith(\"s\")) return dateTime.minusSeconds(getCtimeNumber(ctime)).getMillis();\n        throw new IllegalArgumentException(\"Invalid option for ctime: \"+ctime);\n    }\n\n    private int getCtimeNumber(String ctime) {\n        return Integer.parseInt(ctime.substring(0, ctime.length() - 1));\n    }\n\n    @Getter private long nowTime = System.currentTimeMillis();\n    @Getter private long maxAge;\n    @Getter private String maxAgeDate;\n\n    public static final String USAGE_DELETE_REMOVED = \"Delete objects from the destination bucket if they do not exist in the source bucket\";\n    public static final String OPT_DELETE_REMOVED = \"-X\";\n    public static final String LONGOPT_DELETE_REMOVED = \"--delete-removed\";\n    @Option(name=OPT_DELETE_REMOVED, aliases=LONGOPT_DELETE_REMOVED, usage=USAGE_DELETE_REMOVED)\n    @Getter @Setter private boolean deleteRemoved = false;\n\n    @Argument(index=0, required=true, usage=\"source bucket[/source/prefix]\") @Getter @Setter private String source;\n    @Argument(index=1, required=true, usage=\"destination bucket[/dest/prefix]\") @Getter @Setter private String destination;\n\n    @Getter private String sourceBucket;\n    @Getter private String destinationBucket;\n\n    /**\n     * Current max file size allowed in amazon is 5 GB. 
We can try and provide this as an option too.\n     */\n    public static final long MAX_SINGLE_REQUEST_UPLOAD_FILE_SIZE = 5 * GB;\n    private static final long DEFAULT_PART_SIZE = 4 * GB;\n    private static final String MULTI_PART_UPLOAD_SIZE_USAGE = \"The upload size (in bytes) of each part uploaded as part of a multipart request \" +\n            \"for files that are greater than the max allowed file size of \" + MAX_SINGLE_REQUEST_UPLOAD_FILE_SIZE + \" bytes (\"+(MAX_SINGLE_REQUEST_UPLOAD_FILE_SIZE/GB)+\"GB). \" +\n            \"Defaults to \" + DEFAULT_PART_SIZE + \" bytes (\"+(DEFAULT_PART_SIZE/GB)+\"GB).\";\n    private static final String OPT_MULTI_PART_UPLOAD_SIZE = \"-u\";\n    private static final String LONGOPT_MULTI_PART_UPLOAD_SIZE = \"--upload-part-size\";\n    @Option(name=OPT_MULTI_PART_UPLOAD_SIZE, aliases=LONGOPT_MULTI_PART_UPLOAD_SIZE, usage=MULTI_PART_UPLOAD_SIZE_USAGE)\n    @Getter @Setter private long uploadPartSize = DEFAULT_PART_SIZE;\n\n    private static final String CROSS_ACCOUNT_USAGE =\"Copy across AWS accounts. Only Resource-based policies are supported (as \" +\n            \"specified by AWS documentation) for cross account copying. \" +\n            \"Default is false (copying within same account, preserving ACLs across copies). 
\" +\n            \"If this option is active, we give full access to the owner of the destination bucket.\";\n    private static final String OPT_CROSS_ACCOUNT_COPY = \"-C\";\n    private static final String LONGOPT_CROSS_ACCOUNT_COPY = \"--cross-account-copy\";\n    @Option(name=OPT_CROSS_ACCOUNT_COPY, aliases=LONGOPT_CROSS_ACCOUNT_COPY, usage=CROSS_ACCOUNT_USAGE)\n    @Getter @Setter private boolean crossAccountCopy = false;\n\n    public void initDerivedFields() {\n\n        if (hasCtime()) {\n            this.maxAge = initMaxAge();\n            this.maxAgeDate = new Date(maxAge).toString();\n        }\n\n        String scrubbed;\n        int slashPos;\n\n        scrubbed = scrubS3ProtocolPrefix(source);\n        slashPos = scrubbed.indexOf('/');\n        if (slashPos == -1) {\n            sourceBucket = scrubbed;\n        } else {\n            sourceBucket = scrubbed.substring(0, slashPos);\n            if (hasPrefix()) throw new IllegalArgumentException(\"Cannot use a \"+OPT_PREFIX+\"/\"+LONGOPT_PREFIX+\" argument and a source path that includes a prefix at the same time\");\n            prefix = scrubbed.substring(slashPos+1);\n        }\n\n        scrubbed = scrubS3ProtocolPrefix(destination);\n        slashPos = scrubbed.indexOf('/');\n        if (slashPos == -1) {\n            destinationBucket = scrubbed;\n        } else {\n            destinationBucket = scrubbed.substring(0, slashPos);\n            if (hasDestPrefix()) throw new IllegalArgumentException(\"Cannot use a \"+OPT_DEST_PREFIX+\"/\"+LONGOPT_DEST_PREFIX+\" argument and a destination path that includes a dest-prefix at the same time\");\n            destPrefix = scrubbed.substring(slashPos+1);\n        }\n    }\n\n    protected String scrubS3ProtocolPrefix(String bucket) {\n        bucket = bucket.trim();\n        if (bucket.startsWith(S3_PROTOCOL_PREFIX)) {\n            bucket = bucket.substring(S3_PROTOCOL_PREFIX.length());\n        }\n        return bucket;\n    }\n\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/MirrorStats.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport lombok.Getter;\nimport lombok.extern.slf4j.Slf4j;\n\nimport java.util.concurrent.TimeUnit;\nimport java.util.concurrent.atomic.AtomicLong;\n\nimport static org.cobbzilla.s3s3mirror.MirrorConstants.*;\n\n@Slf4j\npublic class MirrorStats {\n\n    @Getter private final Thread shutdownHook = new Thread() {\n        @Override public void run() { logStats(); }\n    };\n\n    private static final String BANNER = \"\\n--------------------------------------------------------------------\\n\";\n    public void logStats() {\n        log.info(BANNER + \"STATS BEGIN\\n\" + toString() + \"STATS END \" + BANNER);\n    }\n\n    private long start = System.currentTimeMillis();\n\n    public final AtomicLong objectsRead = new AtomicLong(0);\n    public final AtomicLong objectsCopied = new AtomicLong(0);\n    public final AtomicLong copyErrors = new AtomicLong(0);\n    public final AtomicLong objectsDeleted = new AtomicLong(0);\n    public final AtomicLong deleteErrors = new AtomicLong(0);\n\n    public final AtomicLong s3copyCount = new AtomicLong(0);\n    public final AtomicLong s3deleteCount = new AtomicLong(0);\n    public final AtomicLong s3getCount = new AtomicLong(0);\n    public final AtomicLong bytesCopied = new AtomicLong(0);\n\n    public static final long HOUR = TimeUnit.HOURS.toMillis(1);\n    public static final long MINUTE = TimeUnit.MINUTES.toMillis(1);\n    public static final long SECOND = TimeUnit.SECONDS.toMillis(1);\n\n    public String toString () {\n        final long durationMillis = System.currentTimeMillis() - start;\n        final double durationMinutes = durationMillis / 60000.0d;\n        final String duration = String.format(\"%d:%02d:%02d\", durationMillis / HOUR, (durationMillis % HOUR) / MINUTE, (durationMillis % MINUTE) / SECOND);\n        final double readRate = objectsRead.get() / durationMinutes;\n        final double copyRate = objectsCopied.get() / durationMinutes;\n        final double 
deleteRate = objectsDeleted.get() / durationMinutes;\n        return \"read: \"+objectsRead+ \"\\n\"\n                + \"copied: \"+objectsCopied+\"\\n\"\n                + \"copy errors: \"+copyErrors+\"\\n\"\n                + \"deleted: \"+objectsDeleted+\"\\n\"\n                + \"delete errors: \"+deleteErrors+\"\\n\"\n                + \"duration: \"+duration+\"\\n\"\n                + \"read rate: \"+readRate+\"/minute\\n\"\n                + \"copy rate: \"+copyRate+\"/minute\\n\"\n                + \"delete rate: \"+deleteRate+\"/minute\\n\"\n                + \"bytes copied: \"+formatBytes(bytesCopied.get())+\"\\n\"\n                + \"GET operations: \"+s3getCount+\"\\n\"\n                + \"COPY operations: \"+ s3copyCount+\"\\n\"\n                + \"DELETE operations: \"+ s3deleteCount+\"\\n\";\n    }\n\n    private String formatBytes(long bytesCopied) {\n        if (bytesCopied > EB) return ((double) bytesCopied) / ((double) EB) + \" EB (\"+bytesCopied+\" bytes)\";\n        if (bytesCopied > PB) return ((double) bytesCopied) / ((double) PB) + \" PB (\"+bytesCopied+\" bytes)\";\n        if (bytesCopied > TB) return ((double) bytesCopied) / ((double) TB) + \" TB (\"+bytesCopied+\" bytes)\";\n        if (bytesCopied > GB) return ((double) bytesCopied) / ((double) GB) + \" GB (\"+bytesCopied+\" bytes)\";\n        if (bytesCopied > MB) return ((double) bytesCopied) / ((double) MB) + \" MB (\"+bytesCopied+\" bytes)\";\n        if (bytesCopied > KB) return ((double) bytesCopied) / ((double) KB) + \" KB (\"+bytesCopied+\" bytes)\";\n        return bytesCopied + \" bytes\";\n    }\n\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/MultipartKeyCopyJob.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.model.*;\nimport lombok.extern.slf4j.Slf4j;\nimport org.cobbzilla.s3s3mirror.comparisonstrategies.ComparisonStrategy;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\n@Slf4j\npublic class MultipartKeyCopyJob extends KeyCopyJob {\n\n    public MultipartKeyCopyJob(AmazonS3Client client, MirrorContext context, S3ObjectSummary summary, Object notifyLock, ComparisonStrategy comparisonStrategy) {\n        super(client, context, summary, notifyLock, comparisonStrategy);\n    }\n\n    @Override\n    boolean keyCopied(ObjectMetadata sourceMetadata, AccessControlList objectAcl) {\n        long objectSize = summary.getSize();\n        MirrorOptions options = context.getOptions();\n        String sourceBucketName = options.getSourceBucket();\n        int maxPartRetries = options.getMaxRetries();\n        String targetBucketName = options.getDestinationBucket();\n        List<CopyPartResult> copyResponses = new ArrayList<CopyPartResult>();\n        if (options.isVerbose()) {\n            log.info(\"Initiating multipart upload request for \" + summary.getKey());\n        }\n        InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest(targetBucketName, keydest)\n                .withObjectMetadata(sourceMetadata);\n\n        if (options.isCrossAccountCopy()) {\n            initiateRequest.withAccessControlList(buildCrossAccountAcl(objectAcl));\n        } else {\n            initiateRequest.withAccessControlList(objectAcl);\n        }\n\n        InitiateMultipartUploadResult initResult = client.initiateMultipartUpload(initiateRequest);\n\n        long partSize = options.getUploadPartSize();\n        long bytePosition = 0;\n\n        for (int i = 1; bytePosition < objectSize; i++) {\n            long lastByte = bytePosition + partSize - 1 >= objectSize ? 
objectSize - 1 : bytePosition + partSize - 1;\n            String infoMessage = \"copying : \" + bytePosition + \" to \" + lastByte;\n            if (options.isVerbose()) {\n                log.info(infoMessage);\n            }\n            CopyPartRequest copyRequest = new CopyPartRequest()\n                    .withDestinationBucketName(targetBucketName)\n                    .withDestinationKey(keydest)\n                    .withSourceBucketName(sourceBucketName)\n                    .withSourceKey(summary.getKey())\n                    .withUploadId(initResult.getUploadId())\n                    .withFirstByte(bytePosition)\n                    .withLastByte(lastByte)\n                    .withPartNumber(i);\n\n            for (int tries = 1; tries <= maxPartRetries; tries++) {\n                try {\n                    if (options.isVerbose()) log.info(\"try :\" + tries);\n                    context.getStats().s3copyCount.incrementAndGet();\n                    CopyPartResult copyPartResult = client.copyPart(copyRequest);\n                    copyResponses.add(copyPartResult);\n                    if (options.isVerbose()) log.info(\"completed \" + infoMessage);\n                    break;\n                } catch (Exception e) {\n                    if (tries == maxPartRetries) {\n                        client.abortMultipartUpload(new AbortMultipartUploadRequest(\n                                targetBucketName, keydest, initResult.getUploadId()));\n                        log.error(\"Exception while doing multipart copy\", e);\n                        return false;\n                    }\n                }\n            }\n            bytePosition += partSize;\n        }\n        CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest(targetBucketName, keydest,\n                initResult.getUploadId(), getETags(copyResponses));\n        client.completeMultipartUpload(completeRequest);\n        if(options.isVerbose()) {\n         
   log.info(\"completed multipart request for : \" + summary.getKey());\n        }\n        context.getStats().bytesCopied.addAndGet(objectSize);\n        return true;\n    }\n\n    private List<PartETag> getETags(List<CopyPartResult> copyResponses) {\n        List<PartETag> eTags = new ArrayList<PartETag>();\n        for (CopyPartResult response : copyResponses) {\n            eTags.add(new PartETag(response.getPartNumber(), response.getETag()));\n        }\n        return eTags;\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/Sleep.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport lombok.extern.slf4j.Slf4j;\n\n@Slf4j\npublic class Sleep {\n\n    public static boolean sleep(int millis) {\n        try {\n            Thread.sleep(millis);\n        } catch (InterruptedException e) {\n            log.error(\"interrupted!\");\n            // restore the interrupt status so callers can still observe it\n            Thread.currentThread().interrupt();\n            return true;\n        }\n        return false;\n    }\n\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/comparisonstrategies/ComparisonStrategy.java",
    "content": "package org.cobbzilla.s3s3mirror.comparisonstrategies;\n\nimport com.amazonaws.services.s3.model.ObjectMetadata;\nimport com.amazonaws.services.s3.model.S3ObjectSummary;\n\npublic interface ComparisonStrategy {\n    boolean sourceDifferent(S3ObjectSummary source, ObjectMetadata destination);\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/comparisonstrategies/ComparisonStrategyFactory.java",
    "content": "package org.cobbzilla.s3s3mirror.comparisonstrategies;\n\nimport lombok.AccessLevel;\nimport lombok.NoArgsConstructor;\nimport org.cobbzilla.s3s3mirror.MirrorOptions;\n\n@NoArgsConstructor(access = AccessLevel.PRIVATE)\npublic class ComparisonStrategyFactory {\n    public static ComparisonStrategy getStrategy(MirrorOptions mirrorOptions) {\n        if (mirrorOptions.isSizeOnly()) {\n            return new SizeOnlyComparisonStrategy();\n        } else if (mirrorOptions.isSizeAndLastModified()) {\n            return new SizeAndLastModifiedComparisonStrategy();\n        } else {\n            return new EtagComparisonStrategy();\n        }\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/comparisonstrategies/EtagComparisonStrategy.java",
    "content": "package org.cobbzilla.s3s3mirror.comparisonstrategies;\n\nimport com.amazonaws.services.s3.model.ObjectMetadata;\nimport com.amazonaws.services.s3.model.S3ObjectSummary;\n\npublic class EtagComparisonStrategy extends SizeOnlyComparisonStrategy {\n    @Override\n    public boolean sourceDifferent(S3ObjectSummary source, ObjectMetadata destination) {\n        return super.sourceDifferent(source, destination) || !source.getETag().equals(destination.getETag());\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/comparisonstrategies/SizeAndLastModifiedComparisonStrategy.java",
    "content": "package org.cobbzilla.s3s3mirror.comparisonstrategies;\n\nimport com.amazonaws.services.s3.model.ObjectMetadata;\nimport com.amazonaws.services.s3.model.S3ObjectSummary;\n\npublic class SizeAndLastModifiedComparisonStrategy extends SizeOnlyComparisonStrategy {\n    @Override\n    public boolean sourceDifferent(S3ObjectSummary source, ObjectMetadata destination) {\n        return super.sourceDifferent(source, destination) || source.getLastModified().after(destination.getLastModified());\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/cobbzilla/s3s3mirror/comparisonstrategies/SizeOnlyComparisonStrategy.java",
    "content": "package org.cobbzilla.s3s3mirror.comparisonstrategies;\n\nimport com.amazonaws.services.s3.model.ObjectMetadata;\nimport com.amazonaws.services.s3.model.S3ObjectSummary;\n\npublic class SizeOnlyComparisonStrategy implements ComparisonStrategy {\n    @Override\n    public boolean sourceDifferent(S3ObjectSummary source, ObjectMetadata destination) {\n        return source.getSize() != destination.getContentLength();\n    }\n}\n"
  },
  {
    "path": "src/main/resources/log4j.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE log4j:configuration PUBLIC \"-//LOGGER\" \"log4j.dtd\">\n\n<log4j:configuration xmlns:log4j=\"http://jakarta.apache.org/log4j/\">\n\n    <!-- Appenders -->\n    <appender name=\"console\" class=\"org.apache.log4j.ConsoleAppender\">\n        <param name=\"Target\" value=\"System.err\" />\n        <layout class=\"org.apache.log4j.PatternLayout\">\n            <param name=\"ConversionPattern\" value=\"%t %-5p: %c - %m%n\" />\n        </layout>\n    </appender>\n\n    <!-- our loggers -->\n    <logger name=\"org.cobbzilla\">\n        <level value=\"info\" />\n    </logger>\n\n    <!-- Root Logger -->\n    <root>\n        <priority value=\"info\" />\n        <appender-ref ref=\"console\" />\n    </root>\n\n</log4j:configuration>\n"
  },
  {
    "path": "src/test/java/org/cobbzilla/s3s3mirror/MirrorMainTest.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport org.junit.Test;\n\nimport static org.junit.Assert.*;\n\npublic class MirrorMainTest {\n\n    public static final String SOURCE = \"s3://from-bucket\";\n    public static final String DESTINATION = \"s3://to-bucket\";\n\n    @Test\n    public void testBasicArgs() throws Exception {\n\n        final MirrorMain main = new MirrorMain(new String[]{SOURCE, DESTINATION});\n        main.parseArguments();\n\n        final MirrorOptions options = main.getOptions();\n        assertFalse(options.isDryRun());\n        assertEquals(SOURCE, options.getSource());\n        assertEquals(DESTINATION, options.getDestination());\n    }\n\n    @Test\n    public void testDryRunArgs() throws Exception {\n\n        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_DRY_RUN, SOURCE, DESTINATION});\n        main.parseArguments();\n\n        final MirrorOptions options = main.getOptions();\n        assertTrue(options.isDryRun());\n        assertEquals(SOURCE, options.getSource());\n        assertEquals(DESTINATION, options.getDestination());\n    }\n\n    @Test\n    public void testMaxConnectionsArgs() throws Exception {\n\n        int maxConns = 42;\n        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_MAX_CONNECTIONS, String.valueOf(maxConns), SOURCE, DESTINATION});\n        main.parseArguments();\n\n        final MirrorOptions options = main.getOptions();\n        assertFalse(options.isDryRun());\n        assertEquals(maxConns, options.getMaxConnections());\n        assertEquals(SOURCE, options.getSource());\n        assertEquals(DESTINATION, options.getDestination());\n    }\n\n    @Test\n    public void testInlinePrefix() throws Exception {\n        final String prefix = \"foo\";\n        final MirrorMain main = new MirrorMain(new String[]{SOURCE + \"/\" + prefix, DESTINATION});\n        main.parseArguments();\n\n        final MirrorOptions options = main.getOptions();\n        
assertEquals(prefix, options.getPrefix());\n        assertNull(options.getDestPrefix());\n    }\n\n    @Test\n    public void testInlineDestPrefix() throws Exception {\n        final String destPrefix = \"foo\";\n        final MirrorMain main = new MirrorMain(new String[]{SOURCE, DESTINATION + \"/\" + destPrefix});\n        main.parseArguments();\n\n        final MirrorOptions options = main.getOptions();\n        assertEquals(destPrefix, options.getDestPrefix());\n        assertNull(options.getPrefix());\n    }\n\n    @Test\n    public void testInlineSourceAndDestPrefix() throws Exception {\n        final String prefix = \"foo\";\n        final String destPrefix = \"bar\";\n        final MirrorMain main = new MirrorMain(new String[]{SOURCE + \"/\" + prefix, DESTINATION + \"/\" + destPrefix});\n        main.parseArguments();\n\n        final MirrorOptions options = main.getOptions();\n        assertEquals(prefix, options.getPrefix());\n        assertEquals(destPrefix, options.getDestPrefix());\n    }\n\n    @Test\n    public void testInlineSourcePrefixAndPrefixOption() throws Exception {\n        final String prefix = \"foo\";\n        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_PREFIX, prefix, SOURCE + \"/\" + prefix, DESTINATION});\n        try {\n            main.parseArguments();\n            fail(\"expected IllegalArgumentException\");\n        } catch (Exception e) {\n            assertTrue(e instanceof IllegalArgumentException);\n        }\n    }\n\n    @Test\n    public void testInlineDestinationPrefixAndPrefixOption() throws Exception {\n        final String prefix = \"foo\";\n        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_DEST_PREFIX, prefix, SOURCE, DESTINATION + \"/\" + prefix});\n        try {\n            main.parseArguments();\n            fail(\"expected IllegalArgumentException\");\n        } catch (Exception e) {\n            assertTrue(e instanceof IllegalArgumentException);\n        }\n    
}\n\n    /**\n     * When access keys are read from the environment, the --proxy setting is valid.\n     * If access keys are read from the ~/.s3cfg file, then proxy settings are picked up from there.\n     * @throws Exception\n     */\n    @Test\n    public void testProxyHostAndProxyPortOption() throws Exception {\n        final String proxy = \"localhost:8080\";\n        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_PROXY, proxy, SOURCE, DESTINATION});\n\n        main.getOptions().setAWSAccessKeyId(\"accessKey\");\n        main.getOptions().setAWSSecretKey(\"secretKey\");\n        main.parseArguments();\n        assertEquals(\"localhost\", main.getOptions().getProxyHost());\n        assertEquals(8080, main.getOptions().getProxyPort());\n    }\n\n    @Test\n    public void testInvalidProxyOption () throws Exception {\n        for (String proxy : new String[] {\"localhost\", \"localhost:\", \":1234\", \"localhost:invalid\", \":\", \"\"} ) {\n            testInvalidProxySetting(proxy);\n        }\n    }\n\n    private void testInvalidProxySetting(String proxy) throws Exception {\n        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_PROXY, proxy, SOURCE, DESTINATION});\n        main.getOptions().setAWSAccessKeyId(\"accessKey\");\n        main.getOptions().setAWSSecretKey(\"secretKey\");\n        try {\n            main.parseArguments();\n            fail(\"Invalid proxy setting (\"+proxy+\") should have thrown exception\");\n        } catch (IllegalArgumentException expected) {}\n    }\n}\n"
  },
  {
    "path": "src/test/java/org/cobbzilla/s3s3mirror/MirrorTest.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.model.AmazonS3Exception;\nimport com.amazonaws.services.s3.model.ObjectMetadata;\nimport lombok.extern.slf4j.Slf4j;\nimport org.apache.commons.lang3.RandomStringUtils;\nimport org.junit.After;\nimport org.junit.Test;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.cobbzilla.s3s3mirror.MirrorOptions.*;\nimport static org.cobbzilla.s3s3mirror.TestFile.Clean;\nimport static org.cobbzilla.s3s3mirror.TestFile.Copy;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.fail;\n\n@Slf4j\npublic class MirrorTest {\n\n    public static final String SOURCE_ENV_VAR = \"S3S3_TEST_SOURCE\";\n    public static final String DEST_ENV_VAR = \"S3S3_TEST_DEST\";\n\n    public static final String SOURCE = System.getenv(SOURCE_ENV_VAR);\n    public static final String DESTINATION = System.getenv(DEST_ENV_VAR);\n\n    private List<S3Asset> stuffToCleanup = new ArrayList<S3Asset>();\n\n    // Every individual test *must* initialize the \"main\" instance variable, otherwise NPE gets thrown here.\n    private MirrorMain main = null;\n\n    private TestFile createTestFile(String key, Copy copy, Clean clean) throws Exception {\n        return TestFile.create(key, main.getClient(), stuffToCleanup, copy, clean);\n    }\n\n    public static String random(int size) {\n        return RandomStringUtils.randomAlphanumeric(size) + \"_\" + System.currentTimeMillis();\n    }\n\n    private boolean checkEnvs() {\n        if (SOURCE == null || DESTINATION == null) {\n            log.warn(\"No \"+SOURCE_ENV_VAR+\" and/or no \"+DEST_ENV_VAR+\" found in environment, skipping test\");\n            return false;\n        }\n        return true;\n    }\n\n    @After\n    public void cleanupS3Assets () {\n        // Every individual test *must* initialize the \"main\" instance variable, otherwise NPE gets thrown here.\n        
if (checkEnvs()) {\n            AmazonS3Client client = main.getClient();\n            for (S3Asset asset : stuffToCleanup) {\n                try {\n                    log.info(\"cleanupS3Assets: deleting \"+asset);\n                    client.deleteObject(asset.bucket, asset.key);\n                } catch (Exception e) {\n                    log.error(\"Error cleaning up object: \"+asset+\": \"+e.getMessage());\n                }\n            }\n            main = null;\n        }\n    }\n\n    @Test\n    public void testSimpleCopy () throws Exception {\n        if (!checkEnvs()) return;\n        final String key = \"testSimpleCopy_\"+random(10);\n        final String[] args = {OPT_VERBOSE, OPT_PREFIX, key, SOURCE, DESTINATION};\n\n        testSimpleCopyInternal(key, args);\n    }\n\n    @Test\n    public void testSimpleCopyWithInlinePrefix () throws Exception {\n        if (!checkEnvs()) return;\n        final String key = \"testSimpleCopyWithInlinePrefix_\"+random(10);\n        final String[] args = {OPT_VERBOSE, SOURCE + \"/\" + key, DESTINATION};\n\n        testSimpleCopyInternal(key, args);\n    }\n\n    private void testSimpleCopyInternal(String key, String[] args) throws Exception {\n\n        main = new MirrorMain(args);\n        main.init();\n\n        final TestFile testFile = createTestFile(key, Copy.SOURCE, Clean.SOURCE_AND_DEST);\n\n        main.run();\n\n        assertEquals(1, main.getContext().getStats().objectsCopied.get());\n        assertEquals(testFile.data.length(), main.getContext().getStats().bytesCopied.get());\n\n        final ObjectMetadata metadata = main.getClient().getObjectMetadata(DESTINATION, key);\n        assertEquals(testFile.data.length(), metadata.getContentLength());\n    }\n\n    @Test\n    public void testSimpleCopyWithDestPrefix () throws Exception {\n        if (!checkEnvs()) return;\n        final String key = \"testSimpleCopyWithDestPrefix_\"+random(10);\n        final String destKey = 
\"dest_testSimpleCopyWithDestPrefix_\"+random(10);\n        final String[] args = {OPT_PREFIX, key, OPT_DEST_PREFIX, destKey, SOURCE, DESTINATION};\n        testSimpleCopyWithDestPrefixInternal(key, destKey, args);\n    }\n\n    @Test\n    public void testSimpleCopyWithInlineDestPrefix () throws Exception {\n        if (!checkEnvs()) return;\n        final String key = \"testSimpleCopyWithInlineDestPrefix_\"+random(10);\n        final String destKey = \"dest_testSimpleCopyWithInlineDestPrefix_\"+random(10);\n        final String[] args = {SOURCE+\"/\"+key, DESTINATION+\"/\"+destKey };\n        testSimpleCopyWithDestPrefixInternal(key, destKey, args);\n    }\n\n    private void testSimpleCopyWithDestPrefixInternal(String key, String destKey, String[] args) throws Exception {\n        main = new MirrorMain(args);\n        main.init();\n\n        final TestFile testFile = createTestFile(key, Copy.SOURCE, Clean.SOURCE);\n        stuffToCleanup.add(new S3Asset(DESTINATION, destKey));\n\n        main.run();\n\n        assertEquals(1, main.getContext().getStats().objectsCopied.get());\n        assertEquals(testFile.data.length(), main.getContext().getStats().bytesCopied.get());\n\n        final ObjectMetadata metadata = main.getClient().getObjectMetadata(DESTINATION, destKey);\n        assertEquals(testFile.data.length(), metadata.getContentLength());\n    }\n\n    @Test\n    public void testDeleteRemoved () throws Exception {\n        if (!checkEnvs()) return;\n\n        final String key = \"testDeleteRemoved_\"+random(10);\n\n        main = new MirrorMain(new String[]{OPT_VERBOSE, OPT_PREFIX, key,\n                                           OPT_DELETE_REMOVED, SOURCE, DESTINATION});\n        main.init();\n\n        // Write some files to dest\n        final int numDestFiles = 3;\n        final String[] destKeys = new String[numDestFiles];\n        final TestFile[] destFiles = new TestFile[numDestFiles];\n        for (int i=0; i<numDestFiles; i++) {\n            
destKeys[i] = key + \"-dest\" + i;\n            destFiles[i] = createTestFile(destKeys[i], Copy.DEST, Clean.DEST);\n        }\n\n        // Write 1 file to source\n        final String srcKey = key + \"-src\";\n        final TestFile srcFile = createTestFile(srcKey, Copy.SOURCE, Clean.SOURCE_AND_DEST);\n\n        // Initiate copy\n        main.run();\n\n        // Expect only 1 copy and numDestFiles deletes\n        assertEquals(1, main.getContext().getStats().objectsCopied.get());\n        assertEquals(numDestFiles, main.getContext().getStats().objectsDeleted.get());\n\n        // Expect none of the original dest files to be there anymore\n        for (int i=0; i<numDestFiles; i++) {\n            try {\n                main.getClient().getObjectMetadata(DESTINATION, destKeys[i]);\n                fail(\"testDeleteRemoved: expected \"+destKeys[i]+\" to be removed from destination bucket \"+DESTINATION);\n            } catch (AmazonS3Exception e) {\n                if (e.getStatusCode() != 404) {\n                    fail(\"testDeleteRemoved: unexpected exception (expected statusCode == 404): \"+e);\n                }\n            }\n        }\n\n        // Expect source file to now be present in both source and destination buckets\n        ObjectMetadata metadata;\n        metadata = main.getClient().getObjectMetadata(SOURCE, srcKey);\n        assertEquals(srcFile.data.length(), metadata.getContentLength());\n\n        metadata = main.getClient().getObjectMetadata(DESTINATION, srcKey);\n        assertEquals(srcFile.data.length(), metadata.getContentLength());\n    }\n\n}\n"
  },
  {
    "path": "src/test/java/org/cobbzilla/s3s3mirror/S3Asset.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport lombok.AllArgsConstructor;\nimport lombok.ToString;\n\n@AllArgsConstructor @ToString\nclass S3Asset {\n    public String bucket;\n    public String key;\n}\n"
  },
  {
    "path": "src/test/java/org/cobbzilla/s3s3mirror/SyncStrategiesTest.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.model.ObjectMetadata;\nimport com.amazonaws.services.s3.model.S3ObjectSummary;\nimport org.cobbzilla.s3s3mirror.comparisonstrategies.EtagComparisonStrategy;\nimport org.cobbzilla.s3s3mirror.comparisonstrategies.SizeAndLastModifiedComparisonStrategy;\nimport org.cobbzilla.s3s3mirror.comparisonstrategies.SizeOnlyComparisonStrategy;\nimport org.junit.Test;\n\nimport java.sql.Timestamp;\nimport java.time.LocalDateTime;\nimport java.util.Random;\n\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.mock;\n\npublic class SyncStrategiesTest {\n    private final EtagComparisonStrategy etagComparisonStrategy = new EtagComparisonStrategy();\n    private final SizeOnlyComparisonStrategy sizeOnlyComparisonStrategy = new SizeOnlyComparisonStrategy();\n    private final SizeAndLastModifiedComparisonStrategy sizeAndLastModifiedComparisonStrategy = new SizeAndLastModifiedComparisonStrategy();\n\n    private static final String ETAG_A = \"ETAG_A\";\n    private static final String ETAG_B = \"ETAG_B\";\n    private static final long SIZE_A = 0;\n    private static final long SIZE_B = 1;\n    private static final LocalDateTime TIME_EARLY = LocalDateTime.of(2020, 1, 1, 0, 0);\n    private static final LocalDateTime TIME_LATER = TIME_EARLY.plusDays(1);\n\n    @Test\n    public void testEtagStrategyEtagAndSizeMatch() {\n        S3ObjectSummary source = createTestS3ObjectSummary(ETAG_A, SIZE_A);\n        ObjectMetadata destination = createTestObjectMetadata(ETAG_A, SIZE_A);\n\n        assertFalse(etagComparisonStrategy.sourceDifferent(source, destination));\n    }\n\n    @Test\n    public void testEtagStrategySizeMatchEtagDoesNot() {\n        S3ObjectSummary source = createTestS3ObjectSummary(ETAG_A, SIZE_A);\n        ObjectMetadata destination = createTestObjectMetadata(ETAG_B, 
SIZE_A);\n\n        assertTrue(etagComparisonStrategy.sourceDifferent(source, destination));\n    }\n\n    @Test\n    public void testEtagStrategyEtagMatchSizeDoesNot() {\n        S3ObjectSummary source = createTestS3ObjectSummary(ETAG_A, SIZE_A);\n        ObjectMetadata destination = createTestObjectMetadata(ETAG_A, SIZE_B);\n\n        assertTrue(etagComparisonStrategy.sourceDifferent(source, destination));\n    }\n\n    @Test\n    public void testSizeStrategySizeMatches() {\n        S3ObjectSummary source = createTestS3ObjectSummary(SIZE_A);\n        ObjectMetadata destination = createTestObjectMetadata(SIZE_A);\n\n        assertFalse(sizeOnlyComparisonStrategy.sourceDifferent(source, destination));\n    }\n\n    @Test\n    public void testSizeStrategySizeDoesNotMatch() {\n        S3ObjectSummary source = createTestS3ObjectSummary(SIZE_A);\n        ObjectMetadata destination = createTestObjectMetadata(SIZE_B);\n\n        assertTrue(sizeOnlyComparisonStrategy.sourceDifferent(source, destination));\n    }\n\n    @Test\n    public void testSizeAndLastModifiedStrategySizeAndLastModifiedMatch() {\n        S3ObjectSummary source = createTestS3ObjectSummary(SIZE_A, TIME_EARLY);\n        ObjectMetadata destination = createTestObjectMetadata(SIZE_A, TIME_EARLY);\n\n        assertFalse(sizeAndLastModifiedComparisonStrategy.sourceDifferent(source, destination));\n    }\n\n    @Test\n    public void testSizeAndLastModifiedStrategyLastModifiedMatchSizeDoesNot() {\n        S3ObjectSummary source = createTestS3ObjectSummary(SIZE_A, TIME_EARLY);\n        ObjectMetadata destination = createTestObjectMetadata(SIZE_B, TIME_EARLY);\n\n        assertTrue(sizeAndLastModifiedComparisonStrategy.sourceDifferent(source, destination));\n    }\n\n    @Test\n    public void testSizeAndLastModifiedStrategySizeMatchDestinationAfterSource() {\n        S3ObjectSummary source = createTestS3ObjectSummary(SIZE_A, TIME_EARLY);\n        ObjectMetadata destination = createTestObjectMetadata(SIZE_A, 
TIME_LATER);\n\n        assertFalse(sizeAndLastModifiedComparisonStrategy.sourceDifferent(source, destination));\n    }\n\n    @Test\n    public void testSizeAndLastModifiedStrategySizeMatchSourceAfterDestination() {\n        S3ObjectSummary source = createTestS3ObjectSummary(SIZE_A, TIME_LATER);\n        ObjectMetadata destination = createTestObjectMetadata(SIZE_A, TIME_EARLY);\n\n        assertTrue(sizeAndLastModifiedComparisonStrategy.sourceDifferent(source, destination));\n    }\n\n    private S3ObjectSummary createTestS3ObjectSummary(long size) {\n        return createTestS3ObjectSummary(randomString(), size);\n    }\n\n    private S3ObjectSummary createTestS3ObjectSummary(String etag, long size) {\n        return createTestS3ObjectSummary(etag, size, LocalDateTime.now());\n    }\n\n    private S3ObjectSummary createTestS3ObjectSummary(long size, LocalDateTime lastModifiedDate) {\n        return createTestS3ObjectSummary(randomString(), size, lastModifiedDate);\n    }\n\n    private S3ObjectSummary createTestS3ObjectSummary(String etag, long size, LocalDateTime lastModified) {\n        S3ObjectSummary summary = new S3ObjectSummary();\n\n        summary.setETag(etag);\n        summary.setSize(size);\n        summary.setLastModified(Timestamp.valueOf(lastModified));\n\n        return summary;\n    }\n\n    private ObjectMetadata createTestObjectMetadata(long size) {\n        return createTestObjectMetadata(randomString(), size);\n    }\n\n    private ObjectMetadata createTestObjectMetadata(String etag, long size) {\n        return createTestObjectMetadata(etag, size, LocalDateTime.now());\n    }\n\n    private ObjectMetadata createTestObjectMetadata(long size, LocalDateTime lastModified) {\n        return createTestObjectMetadata(randomString(), size, lastModified);\n    }\n\n    private ObjectMetadata createTestObjectMetadata(String etag, long size, LocalDateTime lastModified) {\n        ObjectMetadata metadata = mock(ObjectMetadata.class);\n\n        
doReturn(etag).when(metadata).getETag();\n        doReturn(size).when(metadata).getContentLength();\n        doReturn(Timestamp.valueOf(lastModified)).when(metadata).getLastModified();\n\n        return metadata;\n    }\n\n    private String randomString() {\n        return Integer.toString(new Random().nextInt(1000));\n    }\n}\n"
  },
  {
    "path": "src/test/java/org/cobbzilla/s3s3mirror/TestFile.java",
    "content": "package org.cobbzilla.s3s3mirror;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport lombok.Cleanup;\nimport org.apache.commons.io.IOUtils;\nimport org.apache.commons.lang3.RandomUtils;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.File;\nimport java.io.FileOutputStream;\nimport java.util.List;\n\nclass TestFile {\n\n    public static final int TEST_FILE_SIZE = 1024;\n\n    enum Copy  { SOURCE, DEST, SOURCE_AND_DEST }\n    enum Clean { SOURCE, DEST, SOURCE_AND_DEST }\n\n    public File file;\n    public String data;\n\n    public TestFile() throws Exception{\n        file = File.createTempFile(getClass().getName(), \".tmp\");\n        data = MirrorTest.random(TEST_FILE_SIZE + (RandomUtils.nextInt() % 1024));\n        @Cleanup FileOutputStream out = new FileOutputStream(file);\n        IOUtils.copy(new ByteArrayInputStream(data.getBytes()), out);\n        file.deleteOnExit();\n    }\n\n    public static TestFile create(String key, AmazonS3Client client, List<S3Asset> stuffToCleanup, Copy copy, Clean clean) throws Exception {\n        TestFile testFile = new TestFile();\n        switch (clean) {\n            case SOURCE:\n                stuffToCleanup.add(new S3Asset(MirrorTest.SOURCE, key));\n                break;\n            case DEST:\n                stuffToCleanup.add(new S3Asset(MirrorTest.DESTINATION, key));\n                break;\n            case SOURCE_AND_DEST:\n                stuffToCleanup.add(new S3Asset(MirrorTest.SOURCE, key));\n                stuffToCleanup.add(new S3Asset(MirrorTest.DESTINATION, key));\n                break;\n        }\n        switch (copy) {\n            case SOURCE:\n                client.putObject(MirrorTest.SOURCE, key, testFile.file);\n                break;\n            case DEST:\n                client.putObject(MirrorTest.DESTINATION, key, testFile.file);\n                break;\n            case SOURCE_AND_DEST:\n                client.putObject(MirrorTest.SOURCE, key, 
testFile.file);\n                client.putObject(MirrorTest.DESTINATION, key, testFile.file);\n                break;\n        }\n        return testFile;\n    }\n}\n"
  }
]