Repository: cobbzilla/s3s3mirror Branch: master Commit: 17c94a28ae19 Files: 33 Total size: 99.6 KB Directory structure: gitextract_cx814f8m/ ├── .gitignore ├── LICENSE.txt ├── README.md ├── pom.xml ├── s3s3mirror.bat ├── s3s3mirror.sh └── src/ ├── main/ │ ├── java/ │ │ └── org/ │ │ └── cobbzilla/ │ │ └── s3s3mirror/ │ │ ├── CopyMaster.java │ │ ├── DeleteMaster.java │ │ ├── KeyCopyJob.java │ │ ├── KeyDeleteJob.java │ │ ├── KeyFingerprint.java │ │ ├── KeyJob.java │ │ ├── KeyLister.java │ │ ├── KeyMaster.java │ │ ├── MirrorConstants.java │ │ ├── MirrorContext.java │ │ ├── MirrorMain.java │ │ ├── MirrorMaster.java │ │ ├── MirrorOptions.java │ │ ├── MirrorStats.java │ │ ├── MultipartKeyCopyJob.java │ │ ├── Sleep.java │ │ └── comparisonstrategies/ │ │ ├── ComparisonStrategy.java │ │ ├── ComparisonStrategyFactory.java │ │ ├── EtagComparisonStrategy.java │ │ ├── SizeAndLastModifiedComparisonStrategy.java │ │ └── SizeOnlyComparisonStrategy.java │ └── resources/ │ └── log4j.xml └── test/ └── java/ └── org/ └── cobbzilla/ └── s3s3mirror/ ├── MirrorMainTest.java ├── MirrorTest.java ├── S3Asset.java ├── SyncStrategiesTest.java └── TestFile.java ================================================ FILE CONTENTS ================================================ ================================================ FILE: .gitignore ================================================ *.iml .idea tmp logs dependency-reduced-pom.xml *~ target !target/*.jar *.log .settings .classpath .project ================================================ FILE: LICENSE.txt ================================================ Copyright 2017-2021 Jonathan Cobb Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================================================ FILE: README.md ================================================ s3s3mirror ========== A utility for mirroring content from one S3 bucket to another. Designed to be lightning-fast and highly concurrent, with modest CPU and memory requirements. An object will be copied if and only if at least one of the following holds true: * The object does not exist in the destination bucket. * The "sync strategy" triggers (by default uses the Etag sync strategy) * Etag Strategy (Default): If the size or Etags don't match between the source and destination bucket. * Size Strategy: If the sizes don't match between the source and destination bucket. * Size and Last Modified Strategy: If the source and destination objects have a different size, or the source bucket object has a more recent last modified date. When copying, the source metadata and ACL lists are also copied to the destination object. Note: [the 2.1-stable branch](https://github.com/cobbzilla/s3s3mirror/tree/2.1-stable) supports copying to/from local directories. ### Motivation I started with "s3cmd sync" but found that with buckets containing many thousands of objects, it was incredibly slow to start and consumed *massive* amounts of memory. So I designed s3s3mirror to start copying immediately with an intelligently chosen "chunk size" and to operate in a highly-threaded, streaming fashion, so memory requirements are much lower. Running with 100 threads, I found the gating factor to be *how fast I could list items from the source bucket* (!?!) 
This makes me wonder if there is any way to do this faster. I'm sure there must be, but this is pretty damn fast.

### AWS Credentials

* s3s3mirror will first look for credentials in your system environment. If variables named AWS\_ACCESS\_KEY\_ID and AWS\_SECRET\_ACCESS\_KEY are defined, they will be used.
* Next, it checks for a ~/.s3cfg file (which you might have for using s3cmd). If present, the access key and secret key are read from there.
* IAM Roles can be used on EC2 instances by specifying the --iam flag.
* If none of the above is found, it will error out and refuse to run.

### System Requirements

* Java 8 or higher

### Building

    mvn package

Note that s3s3mirror now has a prebuilt jar checked in to GitHub, so you'll only need to do this if you've been playing with the source code. The above command requires that Maven 3 is installed.

### License

s3s3mirror is available under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

### Usage

    s3s3mirror.sh [options] <source-bucket>[/src-prefix/path/...] <destination-bucket>[/dest-prefix/path/...]

### Versions

The 1.x branch (currently master) has been used by the largest number of people and is the most battle-tested. The 2.x branch supports copying between S3 and any local filesystem. It has seen heavy use and performs well, but is not as widely used as the 1.x branch.

**In the near future, the 1.x branch will offshoot from master, and the 2.x branch will be merged into master.**

There are a handful of features on the 1.x branch that have not yet been ported to 2.x. If you can live without them, I encourage you to use the 2.x branch. If you really need them, I encourage you to port them to the 2.x branch, if you have the ability.
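The credential lookup order described above (environment variables first, then ~/.s3cfg key/value pairs, then IAM) can be sketched roughly as follows. This is an illustrative, SDK-free sketch; the names `CredentialSketch`, `parseS3cfg`, and `resolve` are made up for this example, and the real logic lives in `MirrorMain.parseArguments`:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of s3s3mirror's credential discovery order.
public class CredentialSketch {

    // Parse ~/.s3cfg-style "key = value" lines, keeping only the credential keys.
    static Map<String, String> parseS3cfg(List<String> lines) {
        Map<String, String> creds = new HashMap<>();
        for (String line : lines) {
            String t = line.trim();
            int eq = t.indexOf('=');
            if (eq < 0) continue;
            String k = t.substring(0, eq).trim();
            String v = t.substring(eq + 1).trim();
            if (k.equals("access_key") || k.equals("secret_key")) creds.put(k, v);
        }
        return creds;
    }

    // Returns {accessKeyId, secretKey}, or null if nothing was found.
    static String[] resolve(Map<String, String> env, List<String> s3cfgLines) {
        // 1. environment variables win
        String id = env.get("AWS_ACCESS_KEY_ID");
        String secret = env.get("AWS_SECRET_ACCESS_KEY");
        if (id != null && secret != null) return new String[]{id, secret};
        // 2. fall back to ~/.s3cfg contents
        Map<String, String> cfg = parseS3cfg(s3cfgLines);
        if (cfg.containsKey("access_key") && cfg.containsKey("secret_key")) {
            return new String[]{cfg.get("access_key"), cfg.get("secret_key")};
        }
        // 3. nothing found; the real tool would next consider --iam, then error out
        return null;
    }
}
```

Note the real tool also reads proxy settings from ~/.s3cfg and supports ~/.aws/config and ~/.aws/credentials profiles; this sketch covers only the two steps listed above.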
### Options -c (--ctime) N : Only copy objects whose Last-Modified date is younger than this many days For other time units, use these suffixes: y (years), M (months), d (days), w (weeks), h (hours), m (minutes), s (seconds) -i (--iam) : Attempt to use IAM Role if invoked on an EC2 instance -P (--profile) VAL : Use a specific profile from your credential file (~/.aws/config) -m (--max-connections) N : Maximum number of connections to S3 (default 100) -n (--dry-run) : Do not actually do anything, but show what would be done (default false) -r (--max-retries) N : Maximum number of retries for S3 requests (default 5) -p (--prefix) VAL : Only copy objects whose keys start with this prefix -d (--dest-prefix) VAL : Destination prefix (replacing the one specified in --prefix, if any) -e (--endpoint) VAL : AWS endpoint to use (or set AWS_ENDPOINT in your environment) -X (--delete-removed) : Delete objects from the destination bucket if they do not exist in the source bucket -t (--max-threads) N : Maximum number of threads (default 100) -v (--verbose) : Verbose output (default false) -z (--proxy) VAL : host:port of proxy server to use. Defaults to proxy_host and proxy_port defined in ~/.s3cfg, or no proxy if these values are not found in ~/.s3cfg -u (--upload-part-size) N : The upload size (in bytes) of each part uploaded as part of a multipart request for files that are greater than the max allowed file size of 5368709120 bytes (5 GB) Defaults to 4294967296 bytes (4 GB) -C (--cross-account-copy) : Copy across AWS accounts. 
Only Resource-based policies are supported (as specified by AWS documentation) for cross account copying. Default is false (copying within the same account, preserving ACLs across copies). If this option is active, the owner of the destination bucket will receive full control.

-s (--ssl) : Use SSL for all S3 API operations (default false)

-E (--server-side-encryption) : Enable AWS managed server-side encryption (default false)

-l (--storage-class) : S3 storage class "Standard" or "ReducedRedundancy" (default Standard)

-S (--size-only) : Only takes the size of objects into consideration when determining if a copy is required.

-L (--size-and-last-modified) : Uses size and last-modified time to determine if files have changed, like the AWS CLI does, and ignores ETags. If -S (--size-only) is also specified, that strategy is selected over this one.

### Examples

Copy everything from a bucket named "source" to another bucket named "dest"

    s3s3mirror.sh source dest

Copy everything from "source" to "dest", but only copy objects created or modified within the past week

    s3s3mirror.sh -c 7 source dest
    s3s3mirror.sh -c 7d source dest
    s3s3mirror.sh -c 1w source dest
    s3s3mirror.sh --ctime 1w source dest

Copy everything from "source/foo" to "dest/bar"

    s3s3mirror.sh source/foo dest/bar
    s3s3mirror.sh -p foo -d bar source dest

Copy everything from "source/foo" to "dest/bar" and delete anything in "dest/bar" that does not exist in "source/foo"

    s3s3mirror.sh -X source/foo dest/bar
    s3s3mirror.sh --delete-removed source/foo dest/bar
    s3s3mirror.sh -p foo -d bar -X source dest
    s3s3mirror.sh -p foo -d bar --delete-removed source dest

Copy within a single bucket -- copy everything from "source/foo" to "source/bar"

    s3s3mirror.sh source/foo source/bar
    s3s3mirror.sh -p foo -d bar source source

BAD IDEA: If copying within a single bucket, do *not* put the destination below the source

    s3s3mirror.sh source/foo source/foo/subfolder
    s3s3mirror.sh -p foo -d foo/subfolder source source

*This might cause recursion and raise
your AWS bill unnecessarily* ###### If you've enjoyed using s3s3mirror and are looking for a warm-fuzzy feeling, consider dropping a little somethin' into my [tip jar](https://cobbzilla.org/tipjar.html) ================================================ FILE: pom.xml ================================================ 4.0.0 org.cobbzilla s3s3mirror 1.2.8-SNAPSHOT jar The Apache Software License, Version 2.0 http://www.apache.org/licenses/LICENSE-2.0.txt repo 1.7.30 4.13.1 org.slf4j slf4j-api ${org.slf4j.version} org.slf4j jcl-over-slf4j ${org.slf4j.version} runtime org.slf4j slf4j-log4j12 ${org.slf4j.version} runtime org.apache.logging.log4j log4j-core 2.17.1 runtime org.projectlombok lombok 1.18.16 compile junit junit ${junit.version} test commons-io commons-io 2.8.0 test org.apache.commons commons-lang3 3.11 test org.mockito mockito-core 3.7.7 test args4j args4j 2.33 joda-time joda-time 2.10.9 com.amazonaws aws-java-sdk-s3 1.12.261 org.apache.maven.plugins maven-compiler-plugin 2.3.2 1.8 1.8 true org.apache.maven.plugins maven-shade-plugin 2.1 package shade org.cobbzilla.s3s3mirror.MirrorMain *:* META-INF/*.SF META-INF/*.DSA META-INF/*.RSA ================================================ FILE: s3s3mirror.bat ================================================ @echo off java -Dlog4j.configuration=file:target/classes/log4j.xml -Ds3s3mirror.version=1.2.8 -jar target/s3s3mirror-1.2.8-SNAPSHOT.jar %* ================================================ FILE: s3s3mirror.sh ================================================ #!/bin/bash THISDIR=$(cd "$(dirname $0)" && pwd) VERSION=1.2.8 JARFILE="${THISDIR}/target/s3s3mirror-${VERSION}-SNAPSHOT.jar" VERSION_ARG="-Ds3s3mirror.version=${VERSION}" DEBUG=$1 if [ "${DEBUG}" = "--debug" ] ; then # Run in debug mode shift # remove --debug from options java -Dlog4j.configuration=file:target/classes/log4j.xml -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005 ${VERSION_ARG} -jar "${JARFILE}" "$@" else # Run in regular mode
java ${VERSION_ARG} -Dlog4j.configuration=file:target/classes/log4j.xml -jar "${JARFILE}" "$@" fi exit $? ================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/CopyMaster.java ================================================ package org.cobbzilla.s3s3mirror; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.services.s3.model.S3ObjectSummary; import org.cobbzilla.s3s3mirror.comparisonstrategies.ComparisonStrategy; import org.cobbzilla.s3s3mirror.comparisonstrategies.ComparisonStrategyFactory; import org.cobbzilla.s3s3mirror.comparisonstrategies.SizeOnlyComparisonStrategy; import java.util.concurrent.BlockingQueue; import java.util.concurrent.ThreadPoolExecutor; public class CopyMaster extends KeyMaster { private final ComparisonStrategy comparisonStrategy; public CopyMaster(AmazonS3Client client, MirrorContext context, BlockingQueue workQueue, ThreadPoolExecutor executorService) { super(client, context, workQueue, executorService); comparisonStrategy = ComparisonStrategyFactory.getStrategy(context.getOptions()); } protected String getPrefix(MirrorOptions options) { return options.getPrefix(); } protected String getBucket(MirrorOptions options) { return options.getSourceBucket(); } protected KeyCopyJob getTask(S3ObjectSummary summary) { if (summary.getSize() > MirrorOptions.MAX_SINGLE_REQUEST_UPLOAD_FILE_SIZE) { return new MultipartKeyCopyJob(client, context, summary, notifyLock, new SizeOnlyComparisonStrategy()); } return new KeyCopyJob(client, context, summary, notifyLock, comparisonStrategy); } } ================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/DeleteMaster.java ================================================ package org.cobbzilla.s3s3mirror; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.services.s3.model.S3ObjectSummary; import java.util.concurrent.BlockingQueue; import java.util.concurrent.ThreadPoolExecutor; public class DeleteMaster 
extends KeyMaster { public DeleteMaster(AmazonS3Client client, MirrorContext context, BlockingQueue workQueue, ThreadPoolExecutor executorService) { super(client, context, workQueue, executorService); } protected String getPrefix(MirrorOptions options) { return options.hasDestPrefix() ? options.getDestPrefix() : options.getPrefix(); } protected String getBucket(MirrorOptions options) { return options.getDestinationBucket(); } @Override protected KeyJob getTask(S3ObjectSummary summary) { return new KeyDeleteJob(client, context, summary, notifyLock); } } ================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/KeyCopyJob.java ================================================ package org.cobbzilla.s3s3mirror; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.services.s3.model.*; import lombok.extern.slf4j.Slf4j; import org.cobbzilla.s3s3mirror.comparisonstrategies.ComparisonStrategy; import org.slf4j.Logger; import java.util.Date; /** * Handles a single key. Determines if it should be copied, and if so, performs the copy operation. 
*/ @Slf4j public class KeyCopyJob extends KeyJob { protected String keydest; protected ComparisonStrategy comparisonStrategy; public KeyCopyJob(AmazonS3Client client, MirrorContext context, S3ObjectSummary summary, Object notifyLock, ComparisonStrategy comparisonStrategy) { super(client, context, summary, notifyLock); keydest = summary.getKey(); final MirrorOptions options = context.getOptions(); if (options.hasDestPrefix()) { keydest = keydest.substring(options.getPrefixLength()); keydest = options.getDestPrefix() + keydest; } this.comparisonStrategy = comparisonStrategy; } @Override public Logger getLog() { return log; } @Override public void run() { final MirrorOptions options = context.getOptions(); final String key = summary.getKey(); try { if (!shouldTransfer()) return; final ObjectMetadata sourceMetadata = getObjectMetadata(options.getSourceBucket(), key, options); final AccessControlList objectAcl = getAccessControlList(options, key); if (options.isDryRun()) { log.info("Would have copied " + key + " to destination: " + keydest); } else { if (keyCopied(sourceMetadata, objectAcl)) { context.getStats().objectsCopied.incrementAndGet(); } else { context.getStats().copyErrors.incrementAndGet(); } } } catch (Exception e) { log.error("error copying key: " + key + ": " + e); } finally { synchronized (notifyLock) { notifyLock.notifyAll(); } if (options.isVerbose()) log.info("done with " + key); } } boolean keyCopied(ObjectMetadata sourceMetadata, AccessControlList objectAcl) { String key = summary.getKey(); MirrorOptions options = context.getOptions(); boolean verbose = options.isVerbose(); int maxRetries= options.getMaxRetries(); MirrorStats stats = context.getStats(); for (int tries = 0; tries < maxRetries; tries++) { if (verbose) log.info("copying (try #" + tries + "): " + key + " to: " + keydest); final CopyObjectRequest request = new CopyObjectRequest(options.getSourceBucket(), key, options.getDestinationBucket(), keydest); 
request.setStorageClass(StorageClass.valueOf(options.getStorageClass())); if (options.isEncrypt()) { request.putCustomRequestHeader("x-amz-server-side-encryption", "AES256"); } request.setNewObjectMetadata(sourceMetadata); if (options.isCrossAccountCopy()) { request.setAccessControlList(buildCrossAccountAcl(objectAcl)); } else { request.setAccessControlList(objectAcl); } try { stats.s3copyCount.incrementAndGet(); client.copyObject(request); stats.bytesCopied.addAndGet(sourceMetadata.getContentLength()); if (verbose) log.info("successfully copied (on try #" + tries + "): " + key + " to: " + keydest); return true; } catch (AmazonS3Exception s3e) { log.error("s3 exception copying (try #" + tries + ") " + key + " to: " + keydest + ": " + s3e); } catch (Exception e) { log.error("unexpected exception copying (try #" + tries + ") " + key + " to: " + keydest + ": " + e); } try { Thread.sleep(10); } catch (InterruptedException e) { log.error("interrupted while waiting to retry key: " + key); return false; } } return false; } private boolean shouldTransfer() { final MirrorOptions options = context.getOptions(); final String key = summary.getKey(); final boolean verbose = options.isVerbose(); if (options.hasCtime()) { final Date lastModified = summary.getLastModified(); if (lastModified == null) { if (verbose) log.info("No Last-Modified header for key: " + key); } else { if (lastModified.getTime() < options.getMaxAge()) { if (verbose) log.info("key "+key+" (lastmod="+lastModified+") is older than "+options.getCtime()+" (cutoff="+options.getMaxAgeDate()+"), not copying"); return false; } } } final ObjectMetadata metadata; try { metadata = getObjectMetadata(options.getDestinationBucket(), keydest, options); } catch (AmazonS3Exception e) { if (e.getStatusCode() == 404) { if (verbose) log.info("Key not found in destination bucket (will copy): "+ keydest); return true; } else { log.warn("Error getting metadata for " + options.getDestinationBucket() + "/" + keydest + " (not 
copying): " + e); return false; } } catch (Exception e) { log.warn("Error getting metadata for " + options.getDestinationBucket() + "/" + keydest + " (not copying): " + e); return false; } if (summary.getSize() > MirrorOptions.MAX_SINGLE_REQUEST_UPLOAD_FILE_SIZE) { return metadata.getContentLength() != summary.getSize(); } final boolean objectChanged = comparisonStrategy.sourceDifferent(summary, metadata); if (verbose && !objectChanged) log.info("Destination file is same as source, not copying: "+ key); return objectChanged; } } ================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/KeyDeleteJob.java ================================================ package org.cobbzilla.s3s3mirror; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.services.s3.model.*; import lombok.extern.slf4j.Slf4j; import org.slf4j.Logger; @Slf4j public class KeyDeleteJob extends KeyJob { private String keysrc; public KeyDeleteJob (AmazonS3Client client, MirrorContext context, S3ObjectSummary summary, Object notifyLock) { super(client, context, summary, notifyLock); final MirrorOptions options = context.getOptions(); keysrc = summary.getKey(); // NOTE: summary.getKey is the key in the destination bucket if (options.hasPrefix()) { keysrc = keysrc.substring(options.getDestPrefixLength()); keysrc = options.getPrefix() + keysrc; } } @Override public Logger getLog() { return log; } @Override public void run() { final MirrorOptions options = context.getOptions(); final MirrorStats stats = context.getStats(); final boolean verbose = options.isVerbose(); final int maxRetries = options.getMaxRetries(); final String key = summary.getKey(); try { if (!shouldDelete()) return; final DeleteObjectRequest request = new DeleteObjectRequest(options.getDestinationBucket(), key); if (options.isDryRun()) { log.info("Would have deleted "+key+" from destination because "+keysrc+" does not exist in source"); } else { boolean deletedOK = false; for (int tries=0; 
tries= options.getMaxRetries()) { getLog().error("getObjectMetadata(" + key + ") failed (try #" + tries + "), giving up"); break; } else { getLog().warn("getObjectMetadata("+key+") failed (try #"+tries+"), retrying..."); } } } } throw ex; } protected AccessControlList getAccessControlList(MirrorOptions options, String key) throws Exception { Exception ex = null; for (int tries=0; tries<=options.getMaxRetries(); tries++) { try { context.getStats().s3getCount.incrementAndGet(); return client.getObjectAcl(options.getSourceBucket(), key); } catch (Exception e) { ex = e; if (tries >= options.getMaxRetries()) { // Annoyingly there can be two reasons for this to fail. It will fail if the IAM account // permissions are wrong, but it will also fail if we are copying an item that we don't // own ourselves. This may seem unusual, but it occurs when copying AWS Detailed Billing // objects since although they live in your bucket, the object owner is AWS. getLog().warn("Unable to obtain object ACL, copying item without ACL data."); return new AccessControlList(); } if (options.isVerbose()) { if (tries >= options.getMaxRetries()) { getLog().warn("getObjectAcl(" + key + ") failed (try #" + tries + "), giving up."); break; } else { getLog().warn("getObjectAcl("+key+") failed (try #"+tries+"), retrying..."); } } } } throw ex; } AccessControlList buildCrossAccountAcl(AccessControlList original) { AccessControlList result = new AccessControlList(); for (Grant grant : original.getGrantsAsList()) { // Covers all 3 types: Everyone, Authenticate User, Log Delivery if (grant.getGrantee() instanceof GroupGrantee) { result.grantPermission(grant.getGrantee(), grant.getPermission()); } } // Equal to the canned way: request.setCannedAccessControlList(CannedAccessControlList.BucketOwnerFullControl); result.grantPermission(new CanonicalGrantee(context.getOwner().getId()), Permission.FullControl); result.setOwner(context.getOwner()); return result; } } 
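The cross-account rule implemented by `KeyJob.buildCrossAccountAcl` above can be summarized: keep only group grants (Everyone, Authenticated Users, Log Delivery), then grant the destination bucket's owner full control. Here is an SDK-free sketch of that filtering rule; `AclSketch` and its `Grant` class are simplified stand-ins for the AWS SDK's `AccessControlList`/`Grant`/`GroupGrantee` types, invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the cross-account ACL filtering done in KeyJob.
public class AclSketch {

    // Simplified stand-in for the SDK's Grant + Grantee pair.
    public static final class Grant {
        public final String grantee;      // canonical id or group name
        public final boolean group;       // true for GroupGrantee (Everyone, etc.)
        public final String permission;
        public Grant(String grantee, boolean group, String permission) {
            this.grantee = grantee;
            this.group = group;
            this.permission = permission;
        }
    }

    static List<Grant> crossAccountGrants(List<Grant> original, String destOwnerId) {
        List<Grant> result = new ArrayList<>();
        for (Grant g : original) {
            // only group grants survive a cross-account copy;
            // per-user grants from the source account are dropped
            if (g.group) result.add(g);
        }
        // equivalent to the canned BucketOwnerFullControl ACL:
        // the destination owner always gets full control
        result.add(new Grant(destOwnerId, false, "FULL_CONTROL"));
        return result;
    }
}
```

This mirrors why the README's -C option notes that the destination bucket owner "will receive full control" when cross-account copying is active.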
================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/KeyLister.java ================================================ package org.cobbzilla.s3s3mirror; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.services.s3.model.AmazonS3Exception; import com.amazonaws.services.s3.model.ListObjectsRequest; import com.amazonaws.services.s3.model.ObjectListing; import com.amazonaws.services.s3.model.S3ObjectSummary; import lombok.extern.slf4j.Slf4j; import java.util.ArrayList; import java.util.List; import java.util.concurrent.atomic.AtomicBoolean; @Slf4j public class KeyLister implements Runnable { private AmazonS3Client client; private MirrorContext context; private int maxQueueCapacity; private final List summaries; private final AtomicBoolean done = new AtomicBoolean(false); private ObjectListing listing; public boolean isDone () { return done.get(); } public KeyLister(AmazonS3Client client, MirrorContext context, int maxQueueCapacity, String bucket, String prefix) { this.client = client; this.context = context; this.maxQueueCapacity = maxQueueCapacity; final MirrorOptions options = context.getOptions(); int fetchSize = options.getMaxThreads(); this.summaries = new ArrayList(10*fetchSize); final ListObjectsRequest request = new ListObjectsRequest(bucket, prefix, null, null, fetchSize); listing = s3getFirstBatch(client, request); synchronized (summaries) { final List objectSummaries = listing.getObjectSummaries(); summaries.addAll(objectSummaries); context.getStats().objectsRead.addAndGet(objectSummaries.size()); if (options.isVerbose()) log.info("added initial set of "+objectSummaries.size()+" keys"); } } @Override public void run() { final MirrorOptions options = context.getOptions(); final boolean verbose = options.isVerbose(); int counter = 0; log.info("starting..."); try { while (true) { while (getSize() < maxQueueCapacity) { if (listing.isTruncated()) { listing = s3getNextBatch(); if (++counter % 100 == 0) 
context.getStats().logStats(); synchronized (summaries) { final List objectSummaries = listing.getObjectSummaries(); summaries.addAll(objectSummaries); context.getStats().objectsRead.addAndGet(objectSummaries.size()); if (verbose) log.info("queued next set of "+objectSummaries.size()+" keys (total now="+getSize()+")"); } } else { log.info("No more keys found in source bucket, exiting"); return; } } try { Thread.sleep(50); } catch (InterruptedException e) { log.error("interrupted!"); return; } } } catch (Exception e) { log.error("Error in run loop, KeyLister thread now exiting: "+e); } finally { if (verbose) log.info("KeyLister run loop finished"); done.set(true); } } private ObjectListing s3getFirstBatch(AmazonS3Client client, ListObjectsRequest request) { final MirrorOptions options = context.getOptions(); final boolean verbose = options.isVerbose(); final int maxRetries = options.getMaxRetries(); Exception lastException = null; for (int tries=0; tries getNextBatch() { List copy; synchronized (summaries) { copy = new ArrayList(summaries); summaries.clear(); } return copy; } } ================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/KeyMaster.java ================================================ package org.cobbzilla.s3s3mirror; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.services.s3.model.S3ObjectSummary; import lombok.extern.slf4j.Slf4j; import java.util.List; import java.util.concurrent.BlockingQueue; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; @Slf4j public abstract class KeyMaster implements Runnable { public static final int STOP_TIMEOUT_SECONDS = 10; private static final long STOP_TIMEOUT = TimeUnit.SECONDS.toMillis(STOP_TIMEOUT_SECONDS); protected AmazonS3Client client; protected MirrorContext context; private AtomicBoolean done = new AtomicBoolean(false); public boolean isDone () { return done.get(); } 
private BlockingQueue workQueue; private ThreadPoolExecutor executorService; protected final Object notifyLock = new Object(); private Thread thread; public KeyMaster(AmazonS3Client client, MirrorContext context, BlockingQueue workQueue, ThreadPoolExecutor executorService) { this.client = client; this.context = context; this.workQueue = workQueue; this.executorService = executorService; } protected abstract String getPrefix(MirrorOptions options); protected abstract String getBucket(MirrorOptions options); protected abstract KeyJob getTask(S3ObjectSummary summary); public void start () { this.thread = new Thread(this); this.thread.start(); } public void stop () { final String name = getClass().getSimpleName(); final long start = System.currentTimeMillis(); log.info("stopping "+ name +"..."); try { if (isDone()) return; this.thread.interrupt(); while (!isDone() && System.currentTimeMillis() - start < STOP_TIMEOUT) { if (Sleep.sleep(50)) return; } } finally { if (!isDone()) { try { log.warn(name+" didn't stop within "+STOP_TIMEOUT_SECONDS+" after interrupting it, forcibly killing the thread..."); this.thread.stop(); } catch (Exception e) { log.error("Error calling Thread.stop on " + name + ": " + e, e); } } if (isDone()) log.info(name+" stopped"); } } public void run() { final MirrorOptions options = context.getOptions(); final boolean verbose = options.isVerbose(); final int maxQueueCapacity = MirrorMaster.getMaxQueueCapacity(options); int counter = 0; try { final KeyLister lister = new KeyLister(client, context, maxQueueCapacity, getBucket(options), getPrefix(options)); executorService.submit(lister); List summaries = lister.getNextBatch(); if (verbose) log.info(summaries.size()+" keys found in first batch from source bucket -- processing..."); while (true) { for (S3ObjectSummary summary : summaries) { while (workQueue.size() >= maxQueueCapacity) { try { synchronized (notifyLock) { notifyLock.wait(50); } Thread.sleep(50); } catch (InterruptedException e) { 
log.error("interrupted!"); return; } } executorService.submit(getTask(summary)); counter++; } summaries = lister.getNextBatch(); if (summaries.size() > 0) { if (verbose) log.info(summaries.size()+" more keys found in source bucket -- continuing (queue size="+workQueue.size()+", total processed="+counter+")..."); } else if (lister.isDone()) { if (verbose) log.info("No more keys found in source bucket -- ALL DONE"); return; } else { if (verbose) log.info("Lister has no keys queued, but is not done, waiting and retrying"); if (Sleep.sleep(50)) return; } } } catch (Exception e) { log.error("Unexpected exception in MirrorMaster: "+e, e); } finally { while (workQueue.size() > 0 || executorService.getActiveCount() > 0) { // wait for the queue to be empty if (Sleep.sleep(100)) break; } // this will wait for currently executing tasks to finish executorService.shutdown(); done.set(true); } } } ================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/MirrorConstants.java ================================================ package org.cobbzilla.s3s3mirror; public class MirrorConstants { public static final long KB = 1024L; public static final long MB = KB * 1024L; public static final long GB = MB * 1024L; public static final long TB = GB * 1024L; public static final long PB = TB * 1024L; public static final long EB = PB * 1024L; } ================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/MirrorContext.java ================================================ package org.cobbzilla.s3s3mirror; import com.amazonaws.services.s3.model.Owner; import lombok.AllArgsConstructor; import lombok.Getter; import lombok.Setter; @AllArgsConstructor public class MirrorContext { @Getter @Setter private MirrorOptions options; @Getter @Setter private Owner owner; @Getter private final MirrorStats stats = new MirrorStats(); } ================================================ FILE: 
src/main/java/org/cobbzilla/s3s3mirror/MirrorMain.java ================================================ package org.cobbzilla.s3s3mirror; import com.amazonaws.ClientConfiguration; import com.amazonaws.Protocol; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.auth.InstanceProfileCredentialsProvider; import com.amazonaws.auth.BasicSessionCredentials; import com.amazonaws.services.s3.model.AccessControlList; import com.amazonaws.services.s3.model.Owner; import lombok.Cleanup; import lombok.Getter; import lombok.Setter; import lombok.extern.slf4j.Slf4j; import org.kohsuke.args4j.CmdLineParser; import java.io.BufferedReader; import java.io.File; import java.io.FileReader; /** * Provides the "main" method. Responsible for parsing options and setting up the MirrorMaster to manage the copy. */ @Slf4j public class MirrorMain { @Getter @Setter private String[] args; @Getter private final MirrorOptions options = new MirrorOptions(); private final CmdLineParser parser = new CmdLineParser(options); private final Thread.UncaughtExceptionHandler uncaughtExceptionHandler = new Thread.UncaughtExceptionHandler() { @Override public void uncaughtException(Thread t, Throwable e) { log.error("Uncaught Exception (thread "+t.getName()+"): "+e, e); } }; @Getter private AmazonS3Client client; @Getter private MirrorContext context; @Getter private MirrorMaster master; public MirrorMain(String[] args) { this.args = args; } public static void main (String[] args) { MirrorMain main = new MirrorMain(args); main.run(); } public void run() { init(); master.mirror(); } public void init() { if (client == null) { try { parseArguments(); } catch (Exception e) { System.err.println(e.getMessage()); parser.printUsage(System.err); System.exit(1); } client = getAmazonS3Client(); context = new MirrorContext(options, getTargetBucketOwner(client)); master = new MirrorMaster(client, context); Runtime.getRuntime().addShutdownHook(context.getStats().getShutdownHook()); 
Thread.setDefaultUncaughtExceptionHandler(uncaughtExceptionHandler); } } protected AmazonS3Client getAmazonS3Client() { ClientConfiguration clientConfiguration = new ClientConfiguration().withProtocol((options.isSsl() ? Protocol.HTTPS : Protocol.HTTP)) .withMaxConnections(options.getMaxConnections()); if (options.getHasProxy()) { clientConfiguration = clientConfiguration .withProxyHost(options.getProxyHost()) .withProxyPort(options.getProxyPort()); } AmazonS3Client client = null; if(System.getenv("AWS_SECURITY_TOKEN") != null) { BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(System.getenv("AWS_ACCESS_KEY_ID"), System.getenv("AWS_SECRET_ACCESS_KEY"), System.getenv("AWS_SECURITY_TOKEN")); client = new AmazonS3Client(basicSessionCredentials, clientConfiguration); } else if (options.hasAwsKeys()) { client = new AmazonS3Client(options, clientConfiguration); } else if (options.isUseIamRole()) { client = new AmazonS3Client(new InstanceProfileCredentialsProvider(), clientConfiguration); } else { throw new IllegalStateException("No authentication method available, please specify IAM Role usage or AWS key and secret"); } if (options.hasEndpoint()) client.setEndpoint(options.getEndpoint()); return client; } protected void parseArguments() throws Exception { parser.parseArgument(args); // for credentials, check for IAM role usage if not then...
// try the .aws/config file first if there is a profile specified, otherwise defer to // .s3cfg before using the default .aws/config credentials // (this may attempt .aws/config twice for no reason, but maintains backward compatibility) if (options.isUseIamRole() == false) { if (!options.hasAwsKeys() && options.getProfile() != null) loadAwsKeysFromAwsConfig(); if (!options.hasAwsKeys()) loadAwsKeysFromS3Config(); if (!options.hasAwsKeys()) loadAwsKeysFromAwsConfig(); if (!options.hasAwsKeys()) loadAwsKeysFromAwsCredentials(); if (!options.hasAwsKeys()) { throw new IllegalStateException("Could not find credentials, IAM Role usage not specified and ENV vars not defined: " + MirrorOptions.AWS_ACCESS_KEY + " and/or " + MirrorOptions.AWS_SECRET_KEY); } } else { InstanceProfileCredentialsProvider client = new InstanceProfileCredentialsProvider(); if (client.getCredentials() == null) { throw new IllegalStateException("Could not find IAM Instance Profile credentials from the AWS metadata service."); } } options.initDerivedFields(); } private void loadAwsKeysFromS3Config() { try { // try to load from ~/.s3cfg @Cleanup BufferedReader reader = new BufferedReader(new FileReader(System.getProperty("user.home")+File.separator+".s3cfg")); String line; while ((line = reader.readLine()) != null) { if (line.trim().startsWith("access_key")) { options.setAWSAccessKeyId(line.substring(line.indexOf("=") + 1).trim()); } else if (line.trim().startsWith("secret_key")) { options.setAWSSecretKey(line.substring(line.indexOf("=") + 1).trim()); } else if (!options.getHasProxy() && line.trim().startsWith("proxy_host")) { options.setProxyHost(line.substring(line.indexOf("=") + 1).trim()); } else if (!options.getHasProxy() && line.trim().startsWith("proxy_port")){ options.setProxyPort(Integer.parseInt(line.substring(line.indexOf("=") + 1).trim())); } } } catch (Exception e) { // ignore - let other credential-discovery processes have a crack } } private void loadAwsKeysFromAwsConfig() { try { // 
try to load from ~/.aws/config
            @Cleanup BufferedReader reader = new BufferedReader(new FileReader(
                    System.getProperty("user.home") + File.separator + ".aws" + File.separator + "config"));
            String line;
            boolean skipSection = true;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                if (line.startsWith("[")) {
                    // if no defined profile, use '[default]' otherwise use profile with matching name
                    if ((options.getProfile() == null && line.equals("[default]"))
                            || (options.getProfile() != null && line.equals("[profile " + options.getProfile() + "]"))) {
                        skipSection = false;
                    } else {
                        skipSection = true;
                    }
                    continue;
                }
                if (skipSection) continue;
                if (line.startsWith("aws_access_key_id")) {
                    options.setAWSAccessKeyId(line.substring(line.indexOf("=") + 1).trim());
                } else if (line.startsWith("aws_secret_access_key")) {
                    options.setAWSSecretKey(line.substring(line.indexOf("=") + 1).trim());
                }
            }
        } catch (Exception e) {
            // ignore - let other credential-discovery processes have a crack
        }
    }

    private void loadAwsKeysFromAwsCredentials() {
        try {
            // try to load from ~/.aws/credentials
            @Cleanup BufferedReader reader = new BufferedReader(new FileReader(
                    System.getProperty("user.home") + File.separator + ".aws" + File.separator + "credentials"));
            String line;
            boolean skipSection = true;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                if (line.startsWith("[")) {
                    // if no defined profile, use '[default]' otherwise use profile with matching name
                    if ((options.getProfile() == null && line.equals("[default]"))
                            || (options.getProfile() != null && line.equals("[" + options.getProfile() + "]"))) {
                        skipSection = false;
                    } else {
                        skipSection = true;
                    }
                    continue;
                }
                if (skipSection) continue;
                if (line.startsWith("aws_access_key_id")) {
                    options.setAWSAccessKeyId(line.substring(line.indexOf("=") + 1).trim());
                } else if (line.startsWith("aws_secret_access_key")) {
                    options.setAWSSecretKey(line.substring(line.indexOf("=") + 1).trim());
                }
            }
        } catch (Exception e) {
            // ignore - let other
credential-discovery processes have a crack
        }
    }

    private Owner getTargetBucketOwner(AmazonS3Client client) {
        AccessControlList targetBucketAcl = client.getBucketAcl(options.getDestinationBucket());
        return targetBucketAcl.getOwner();
    }
}

================================================
FILE: src/main/java/org/cobbzilla/s3s3mirror/MirrorMaster.java
================================================
package org.cobbzilla.s3s3mirror;

import com.amazonaws.services.s3.AmazonS3Client;
import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.*;

/**
 * Starts a KeyLister and sends batches of keys to the ExecutorService for handling by KeyJobs
 */
@Slf4j
public class MirrorMaster {

    public static final String VERSION = System.getProperty("s3s3mirror.version");

    private final AmazonS3Client client;
    private final MirrorContext context;

    public MirrorMaster(AmazonS3Client client, MirrorContext context) {
        this.client = client;
        this.context = context;
    }

    public void mirror() {
        log.info("version "+VERSION+" starting");

        final MirrorOptions options = context.getOptions();

        if (options.isVerbose() && options.hasCtime()) log.info("will not copy anything older than "+options.getCtime()+" (cutoff="+options.getMaxAgeDate()+")");

        final int maxQueueCapacity = getMaxQueueCapacity(options);
        final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>(maxQueueCapacity);
        final RejectedExecutionHandler rejectedExecutionHandler = new RejectedExecutionHandler() {
            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                log.error("Error submitting job: "+r+", possible queue overflow");
            }
        };
        final ThreadPoolExecutor executorService = new ThreadPoolExecutor(options.getMaxThreads(), options.getMaxThreads(), 1, TimeUnit.MINUTES, workQueue, rejectedExecutionHandler);

        final KeyMaster copyMaster = new CopyMaster(client, context, workQueue, executorService);
        KeyMaster deleteMaster = null;

        try {
            copyMaster.start();

            if (context.getOptions().isDeleteRemoved()) {
                deleteMaster = new
DeleteMaster(client, context, workQueue, executorService); deleteMaster.start(); } while (true) { if (copyMaster.isDone() && (deleteMaster == null || deleteMaster.isDone())) { log.info("mirror: completed"); break; } if (Sleep.sleep(100)) return; } } catch (Exception e) { log.error("Unexpected exception in mirror: "+e, e); } finally { try { copyMaster.stop(); } catch (Exception e) { log.error("Error stopping copyMaster: "+e, e); } if (deleteMaster != null) { try { deleteMaster.stop(); } catch (Exception e) { log.error("Error stopping deleteMaster: "+e, e); } } } } public static int getMaxQueueCapacity(MirrorOptions options) { return 10 * options.getMaxThreads(); } } ================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/MirrorOptions.java ================================================ package org.cobbzilla.s3s3mirror; import com.amazonaws.auth.AWSCredentials; import lombok.Getter; import lombok.Setter; import org.joda.time.DateTime; import org.kohsuke.args4j.Argument; import org.kohsuke.args4j.Option; import java.util.Date; import static org.cobbzilla.s3s3mirror.MirrorConstants.*; public class MirrorOptions implements AWSCredentials { public static final String S3_PROTOCOL_PREFIX = "s3://"; public static final String AWS_ACCESS_KEY = "AWS_ACCESS_KEY_ID"; public static final String AWS_SECRET_KEY = "AWS_SECRET_ACCESS_KEY"; @Getter @Setter private String aWSAccessKeyId = System.getenv().get(AWS_ACCESS_KEY); @Getter @Setter private String aWSSecretKey = System.getenv().get(AWS_SECRET_KEY); public boolean hasAwsKeys() { return aWSAccessKeyId != null && aWSSecretKey != null; } public static final String USAGE_USE_IAM_ROLE = "Use IAM role from EC2 instance, can only be used in AWS"; public static final String OPT_USE_IAM_ROLE = "-i"; public static final String LONGOPT_USE_IAM_ROLE = "--iam"; @Option(name=OPT_USE_IAM_ROLE, aliases=LONGOPT_USE_IAM_ROLE, usage=USAGE_USE_IAM_ROLE) @Getter @Setter private boolean useIamRole = false; 
public static final String USAGE_PROFILE= "Use a specific profile from your credential file (~/.aws/config)"; public static final String OPT_PROFILE= "-P"; public static final String LONGOPT_PROFILE = "--profile"; @Option(name=OPT_PROFILE, aliases=LONGOPT_PROFILE, usage=USAGE_PROFILE) @Getter @Setter private String profile = null; public static final String USAGE_DRY_RUN = "Do not actually do anything, but show what would be done"; public static final String OPT_DRY_RUN = "-n"; public static final String LONGOPT_DRY_RUN = "--dry-run"; @Option(name=OPT_DRY_RUN, aliases=LONGOPT_DRY_RUN, usage=USAGE_DRY_RUN) @Getter @Setter private boolean dryRun = false; public static final String USAGE_VERBOSE = "Verbose output"; public static final String OPT_VERBOSE = "-v"; public static final String LONGOPT_VERBOSE = "--verbose"; @Option(name=OPT_VERBOSE, aliases=LONGOPT_VERBOSE, usage=USAGE_VERBOSE) @Getter @Setter private boolean verbose = false; public static final String USAGE_SSL = "Use SSL for all S3 api operations"; public static final String OPT_SSL = "-s"; public static final String LONGOPT_SSL = "--ssl"; @Option(name=OPT_SSL, aliases=LONGOPT_SSL, usage=USAGE_SSL) @Getter @Setter private boolean ssl = false; public static final String USAGE_ENCRYPT = "Enable AWS managed server-side encryption"; public static final String OPT_ENCRYPT = "-E"; public static final String LONGOPT_ENCRYPT = "--server-side-encryption"; @Option(name=OPT_ENCRYPT, aliases=LONGOPT_ENCRYPT, usage=USAGE_ENCRYPT) @Getter @Setter private boolean encrypt = false; public static final String USAGE_STORAGE_CLASS = "Specify the S3 StorageClass (Standard | ReducedRedundancy)"; public static final String OPT_STORAGE_CLASS = "-l"; public static final String LONGOPT_STORAGE_CLASS = "--storage-class"; @Option(name=OPT_STORAGE_CLASS, aliases=LONGOPT_STORAGE_CLASS, usage=USAGE_STORAGE_CLASS) @Getter @Setter private String storageClass = "Standard"; public static final String USAGE_PREFIX = "Only copy objects whose 
keys start with this prefix"; public static final String OPT_PREFIX = "-p"; public static final String LONGOPT_PREFIX = "--prefix"; @Option(name=OPT_PREFIX, aliases=LONGOPT_PREFIX, usage=USAGE_PREFIX) @Getter @Setter private String prefix = null; public boolean hasPrefix () { return prefix != null && prefix.length() > 0; } public int getPrefixLength () { return prefix == null ? 0 : prefix.length(); } public static final String USAGE_DEST_PREFIX = "Destination prefix (replacing the one specified in --prefix, if any)"; public static final String OPT_DEST_PREFIX= "-d"; public static final String LONGOPT_DEST_PREFIX = "--dest-prefix"; @Option(name=OPT_DEST_PREFIX, aliases=LONGOPT_DEST_PREFIX, usage=USAGE_DEST_PREFIX) @Getter @Setter private String destPrefix = null; public boolean hasDestPrefix() { return destPrefix != null && destPrefix.length() > 0; } public int getDestPrefixLength () { return destPrefix == null ? 0 : destPrefix.length(); } public static final String AWS_ENDPOINT = "AWS_ENDPOINT"; public static final String USAGE_ENDPOINT = "AWS endpoint to use (or set "+AWS_ENDPOINT+" in your environment)"; public static final String OPT_ENDPOINT = "-e"; public static final String LONGOPT_ENDPOINT = "--endpoint"; @Option(name=OPT_ENDPOINT, aliases=LONGOPT_ENDPOINT, usage=USAGE_ENDPOINT) @Getter @Setter private String endpoint = System.getenv().get(AWS_ENDPOINT); public boolean hasEndpoint () { return endpoint != null && endpoint.trim().length() > 0; } public static final String USAGE_MAX_CONNECTIONS = "Maximum number of connections to S3 (default 100)"; public static final String OPT_MAX_CONNECTIONS = "-m"; public static final String LONGOPT_MAX_CONNECTIONS = "--max-connections"; @Option(name=OPT_MAX_CONNECTIONS, aliases=LONGOPT_MAX_CONNECTIONS, usage=USAGE_MAX_CONNECTIONS) @Getter @Setter private int maxConnections = 100; public static final String USAGE_MAX_THREADS = "Maximum number of threads (default 100)"; public static final String OPT_MAX_THREADS = "-t"; 
public static final String LONGOPT_MAX_THREADS = "--max-threads";
    @Option(name=OPT_MAX_THREADS, aliases=LONGOPT_MAX_THREADS, usage=USAGE_MAX_THREADS)
    @Getter @Setter private int maxThreads = 100;

    public static final String USAGE_MAX_RETRIES = "Maximum number of retries for S3 requests (default 5)";
    public static final String OPT_MAX_RETRIES = "-r";
    public static final String LONGOPT_MAX_RETRIES = "--max-retries";
    @Option(name=OPT_MAX_RETRIES, aliases=LONGOPT_MAX_RETRIES, usage=USAGE_MAX_RETRIES)
    @Getter @Setter private int maxRetries = 5;

    public static final String USAGE_SIZE_ONLY = "Only use object size when checking for equality and ignore etags";
    public static final String OPT_SIZE_ONLY = "-S";
    public static final String LONGOPT_SIZE_ONLY = "--size-only";
    @Option(name=OPT_SIZE_ONLY, aliases=LONGOPT_SIZE_ONLY, usage=USAGE_SIZE_ONLY)
    @Getter @Setter private boolean sizeOnly = false;

    public static final String USAGE_SIZE_LAST_MODIFIED = "Uses size and last modified to determine if files have changed, like the AWS CLI, and ignores etags. If size-only is also specified, that strategy is selected.";
    public static final String OPT_SIZE_LAST_MODIFIED = "-L";
    public static final String LONGOPT_SIZE_LAST_MODIFIED = "--size-and-last-modified";
    @Option(name=OPT_SIZE_LAST_MODIFIED, aliases=LONGOPT_SIZE_LAST_MODIFIED, usage=USAGE_SIZE_LAST_MODIFIED)
    @Getter @Setter private boolean sizeAndLastModified = false;

    public static final String USAGE_CTIME = "Only copy objects whose Last-Modified date is younger than this many days. 
" + "For other time units, use these suffixes: y (years), M (months), d (days), w (weeks), h (hours), m (minutes), s (seconds)"; public static final String OPT_CTIME = "-c"; public static final String LONGOPT_CTIME = "--ctime"; @Option(name=OPT_CTIME, aliases=LONGOPT_CTIME, usage=USAGE_CTIME) @Getter @Setter private String ctime = null; public boolean hasCtime() { return ctime != null; } private static final String PROXY_USAGE = "host:port of proxy server to use. " + "Defaults to proxy_host and proxy_port defined in ~/.s3cfg, or no proxy if these values are not found in ~/.s3cfg"; public static final String OPT_PROXY = "-z"; public static final String LONGOPT_PROXY = "--proxy"; @Option(name=OPT_PROXY, aliases=LONGOPT_PROXY, usage=PROXY_USAGE) public void setProxy(String proxy) { final String[] splits = proxy.split(":"); if (splits.length != 2) { throw new IllegalArgumentException("Invalid proxy setting ("+proxy+"), please use host:port"); } proxyHost = splits[0]; if (proxyHost.trim().length() == 0) { throw new IllegalArgumentException("Invalid proxy setting ("+proxy+"), please use host:port"); } try { proxyPort = Integer.parseInt(splits[1]); } catch (Exception e) { throw new IllegalArgumentException("Invalid proxy setting ("+proxy+"), port could not be parsed as a number"); } } @Getter @Setter public String proxyHost = null; @Getter @Setter public int proxyPort = -1; public boolean getHasProxy() { boolean hasProxyHost = proxyHost != null && proxyHost.trim().length() > 0; boolean hasProxyPort = proxyPort != -1; return hasProxyHost && hasProxyPort; } private long initMaxAge() { DateTime dateTime = new DateTime(nowTime); // all digits -- assume "days" if (ctime.matches("^[0-9]+$")) return dateTime.minusDays(Integer.parseInt(ctime)).getMillis(); // ensure there is at least one digit, and exactly one character suffix, and the suffix is a legal option if (!ctime.matches("^[0-9]+[yMwdhms]$")) throw new IllegalArgumentException("Invalid option for ctime: "+ctime); if 
(ctime.endsWith("y")) return dateTime.minusYears(getCtimeNumber(ctime)).getMillis(); if (ctime.endsWith("M")) return dateTime.minusMonths(getCtimeNumber(ctime)).getMillis(); if (ctime.endsWith("w")) return dateTime.minusWeeks(getCtimeNumber(ctime)).getMillis(); if (ctime.endsWith("d")) return dateTime.minusDays(getCtimeNumber(ctime)).getMillis(); if (ctime.endsWith("h")) return dateTime.minusHours(getCtimeNumber(ctime)).getMillis(); if (ctime.endsWith("m")) return dateTime.minusMinutes(getCtimeNumber(ctime)).getMillis(); if (ctime.endsWith("s")) return dateTime.minusSeconds(getCtimeNumber(ctime)).getMillis(); throw new IllegalArgumentException("Invalid option for ctime: "+ctime); } private int getCtimeNumber(String ctime) { return Integer.parseInt(ctime.substring(0, ctime.length() - 1)); } @Getter private long nowTime = System.currentTimeMillis(); @Getter private long maxAge; @Getter private String maxAgeDate; public static final String USAGE_DELETE_REMOVED = "Delete objects from the destination bucket if they do not exist in the source bucket"; public static final String OPT_DELETE_REMOVED = "-X"; public static final String LONGOPT_DELETE_REMOVED = "--delete-removed"; @Option(name=OPT_DELETE_REMOVED, aliases=LONGOPT_DELETE_REMOVED, usage=USAGE_DELETE_REMOVED) @Getter @Setter private boolean deleteRemoved = false; @Argument(index=0, required=true, usage="source bucket[/source/prefix]") @Getter @Setter private String source; @Argument(index=1, required=true, usage="destination bucket[/dest/prefix]") @Getter @Setter private String destination; @Getter private String sourceBucket; @Getter private String destinationBucket; /** * Current max file size allowed in amazon is 5 GB. We can try and provide this as an option too. 
*/ public static final long MAX_SINGLE_REQUEST_UPLOAD_FILE_SIZE = 5 * GB; private static final long DEFAULT_PART_SIZE = 4 * GB; private static final String MULTI_PART_UPLOAD_SIZE_USAGE = "The upload size (in bytes) of each part uploaded as part of a multipart request " + "for files that are greater than the max allowed file size of " + MAX_SINGLE_REQUEST_UPLOAD_FILE_SIZE + " bytes ("+(MAX_SINGLE_REQUEST_UPLOAD_FILE_SIZE/GB)+"GB). " + "Defaults to " + DEFAULT_PART_SIZE + " bytes ("+(DEFAULT_PART_SIZE/GB)+"GB)."; private static final String OPT_MULTI_PART_UPLOAD_SIZE = "-u"; private static final String LONGOPT_MULTI_PART_UPLOAD_SIZE = "--upload-part-size"; @Option(name=OPT_MULTI_PART_UPLOAD_SIZE, aliases=LONGOPT_MULTI_PART_UPLOAD_SIZE, usage=MULTI_PART_UPLOAD_SIZE_USAGE) @Getter @Setter private long uploadPartSize = DEFAULT_PART_SIZE; private static final String CROSS_ACCOUNT_USAGE ="Copy across AWS accounts. Only Resource-based policies are supported (as " + "specified by AWS documentation) for cross account copying. " + "Default is false (copying within same account, preserving ACLs across copies). 
" + "If this option is active, we give full access to owner of the destination bucket."; private static final String OPT_CROSS_ACCOUNT_COPY = "-C"; private static final String LONGOPT_CROSS_ACCOUNT_COPY = "--cross-account-copy"; @Option(name=OPT_CROSS_ACCOUNT_COPY, aliases=LONGOPT_CROSS_ACCOUNT_COPY, usage=CROSS_ACCOUNT_USAGE) @Getter @Setter private boolean crossAccountCopy = false; public void initDerivedFields() { if (hasCtime()) { this.maxAge = initMaxAge(); this.maxAgeDate = new Date(maxAge).toString(); } String scrubbed; int slashPos; scrubbed = scrubS3ProtocolPrefix(source); slashPos = scrubbed.indexOf('/'); if (slashPos == -1) { sourceBucket = scrubbed; } else { sourceBucket = scrubbed.substring(0, slashPos); if (hasPrefix()) throw new IllegalArgumentException("Cannot use a "+OPT_PREFIX+"/"+LONGOPT_PREFIX+" argument and source path that includes a prefix at the same time"); prefix = scrubbed.substring(slashPos+1); } scrubbed = scrubS3ProtocolPrefix(destination); slashPos = scrubbed.indexOf('/'); if (slashPos == -1) { destinationBucket = scrubbed; } else { destinationBucket = scrubbed.substring(0, slashPos); if (hasDestPrefix()) throw new IllegalArgumentException("Cannot use a "+OPT_DEST_PREFIX+"/"+LONGOPT_DEST_PREFIX+" argument and destination path that includes a dest-prefix at the same time"); destPrefix = scrubbed.substring(slashPos+1); } } protected String scrubS3ProtocolPrefix(String bucket) { bucket = bucket.trim(); if (bucket.startsWith(S3_PROTOCOL_PREFIX)) { bucket = bucket.substring(S3_PROTOCOL_PREFIX.length()); } return bucket; } } ================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/MirrorStats.java ================================================ package org.cobbzilla.s3s3mirror; import lombok.Getter; import lombok.extern.slf4j.Slf4j; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import static org.cobbzilla.s3s3mirror.MirrorConstants.*; @Slf4j public class 
MirrorStats { @Getter private final Thread shutdownHook = new Thread() { @Override public void run() { logStats(); } }; private static final String BANNER = "\n--------------------------------------------------------------------\n"; public void logStats() { log.info(BANNER + "STATS BEGIN\n" + toString() + "STATS END " + BANNER); } private long start = System.currentTimeMillis(); public final AtomicLong objectsRead = new AtomicLong(0); public final AtomicLong objectsCopied = new AtomicLong(0); public final AtomicLong copyErrors = new AtomicLong(0); public final AtomicLong objectsDeleted = new AtomicLong(0); public final AtomicLong deleteErrors = new AtomicLong(0); public final AtomicLong s3copyCount = new AtomicLong(0); public final AtomicLong s3deleteCount = new AtomicLong(0); public final AtomicLong s3getCount = new AtomicLong(0); public final AtomicLong bytesCopied = new AtomicLong(0); public static final long HOUR = TimeUnit.HOURS.toMillis(1); public static final long MINUTE = TimeUnit.MINUTES.toMillis(1); public static final long SECOND = TimeUnit.SECONDS.toMillis(1); public String toString () { final long durationMillis = System.currentTimeMillis() - start; final double durationMinutes = durationMillis / 60000.0d; final String duration = String.format("%d:%02d:%02d", durationMillis / HOUR, (durationMillis % HOUR) / MINUTE, (durationMillis % MINUTE) / SECOND); final double readRate = objectsRead.get() / durationMinutes; final double copyRate = objectsCopied.get() / durationMinutes; final double deleteRate = objectsDeleted.get() / durationMinutes; return "read: "+objectsRead+ "\n" + "copied: "+objectsCopied+"\n" + "copy errors: "+copyErrors+"\n" + "deleted: "+objectsDeleted+"\n" + "delete errors: "+deleteErrors+"\n" + "duration: "+duration+"\n" + "read rate: "+readRate+"/minute\n" + "copy rate: "+copyRate+"/minute\n" + "delete rate: "+deleteRate+"/minute\n" + "bytes copied: "+formatBytes(bytesCopied.get())+"\n" + "GET operations: "+s3getCount+"\n" + "COPY 
operations: "+ s3copyCount+"\n" + "DELETE operations: "+ s3deleteCount+"\n"; } private String formatBytes(long bytesCopied) { if (bytesCopied > EB) return ((double) bytesCopied) / ((double) EB) + " EB ("+bytesCopied+" bytes)"; if (bytesCopied > PB) return ((double) bytesCopied) / ((double) PB) + " PB ("+bytesCopied+" bytes)"; if (bytesCopied > TB) return ((double) bytesCopied) / ((double) TB) + " TB ("+bytesCopied+" bytes)"; if (bytesCopied > GB) return ((double) bytesCopied) / ((double) GB) + " GB ("+bytesCopied+" bytes)"; if (bytesCopied > MB) return ((double) bytesCopied) / ((double) MB) + " MB ("+bytesCopied+" bytes)"; if (bytesCopied > KB) return ((double) bytesCopied) / ((double) KB) + " KB ("+bytesCopied+" bytes)"; return bytesCopied + " bytes"; } } ================================================ FILE: src/main/java/org/cobbzilla/s3s3mirror/MultipartKeyCopyJob.java ================================================ package org.cobbzilla.s3s3mirror; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.services.s3.model.*; import lombok.extern.slf4j.Slf4j; import org.cobbzilla.s3s3mirror.comparisonstrategies.ComparisonStrategy; import java.util.ArrayList; import java.util.List; @Slf4j public class MultipartKeyCopyJob extends KeyCopyJob { public MultipartKeyCopyJob(AmazonS3Client client, MirrorContext context, S3ObjectSummary summary, Object notifyLock, ComparisonStrategy comparisonStrategy) { super(client, context, summary, notifyLock, comparisonStrategy); } @Override boolean keyCopied(ObjectMetadata sourceMetadata, AccessControlList objectAcl) { long objectSize = summary.getSize(); MirrorOptions options = context.getOptions(); String sourceBucketName = options.getSourceBucket(); int maxPartRetries = options.getMaxRetries(); String targetBucketName = options.getDestinationBucket(); List copyResponses = new ArrayList(); if (options.isVerbose()) { log.info("Initiating multipart upload request for " + summary.getKey()); } 
InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest(targetBucketName, keydest) .withObjectMetadata(sourceMetadata); if (options.isCrossAccountCopy()) { initiateRequest.withAccessControlList(buildCrossAccountAcl(objectAcl)); } else { initiateRequest.withAccessControlList(objectAcl); } InitiateMultipartUploadResult initResult = client.initiateMultipartUpload(initiateRequest); long partSize = options.getUploadPartSize(); long bytePosition = 0; for (int i = 1; bytePosition < objectSize; i++) { long lastByte = bytePosition + partSize - 1 >= objectSize ? objectSize - 1 : bytePosition + partSize - 1; String infoMessage = "copying : " + bytePosition + " to " + lastByte; if (options.isVerbose()) { log.info(infoMessage); } CopyPartRequest copyRequest = new CopyPartRequest() .withDestinationBucketName(targetBucketName) .withDestinationKey(keydest) .withSourceBucketName(sourceBucketName) .withSourceKey(summary.getKey()) .withUploadId(initResult.getUploadId()) .withFirstByte(bytePosition) .withLastByte(lastByte) .withPartNumber(i); for (int tries = 1; tries <= maxPartRetries; tries++) { try { if (options.isVerbose()) log.info("try :" + tries); context.getStats().s3copyCount.incrementAndGet(); CopyPartResult copyPartResult = client.copyPart(copyRequest); copyResponses.add(copyPartResult); if (options.isVerbose()) log.info("completed " + infoMessage); break; } catch (Exception e) { if (tries == maxPartRetries) { client.abortMultipartUpload(new AbortMultipartUploadRequest( targetBucketName, keydest, initResult.getUploadId())); log.error("Exception while doing multipart copy", e); return false; } } } bytePosition += partSize; } CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest(targetBucketName, keydest, initResult.getUploadId(), getETags(copyResponses)); client.completeMultipartUpload(completeRequest); if(options.isVerbose()) { log.info("completed multipart request for : " + summary.getKey()); } 
context.getStats().bytesCopied.addAndGet(objectSize);
        return true;
    }

    private List<PartETag> getETags(List<CopyPartResult> copyResponses) {
        List<PartETag> eTags = new ArrayList<PartETag>();
        for (CopyPartResult response : copyResponses) {
            eTags.add(new PartETag(response.getPartNumber(), response.getETag()));
        }
        return eTags;
    }
}

================================================
FILE: src/main/java/org/cobbzilla/s3s3mirror/Sleep.java
================================================
package org.cobbzilla.s3s3mirror;

import lombok.extern.slf4j.Slf4j;

@Slf4j
public class Sleep {

    public static boolean sleep(int millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            log.error("interrupted!");
            return true;
        }
        return false;
    }
}

================================================
FILE: src/main/java/org/cobbzilla/s3s3mirror/comparisonstrategies/ComparisonStrategy.java
================================================
package org.cobbzilla.s3s3mirror.comparisonstrategies;

import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public interface ComparisonStrategy {
    boolean sourceDifferent(S3ObjectSummary source, ObjectMetadata destination);
}

================================================
FILE: src/main/java/org/cobbzilla/s3s3mirror/comparisonstrategies/ComparisonStrategyFactory.java
================================================
package org.cobbzilla.s3s3mirror.comparisonstrategies;

import lombok.AccessLevel;
import lombok.NoArgsConstructor;
import org.cobbzilla.s3s3mirror.MirrorOptions;

@NoArgsConstructor(access = AccessLevel.PRIVATE)
public class ComparisonStrategyFactory {
    public static ComparisonStrategy getStrategy(MirrorOptions mirrorOptions) {
        if (mirrorOptions.isSizeOnly()) {
            return new SizeOnlyComparisonStrategy();
        } else if (mirrorOptions.isSizeAndLastModified()) {
            return new SizeAndLastModifiedComparisonStrategy();
        } else {
            return new EtagComparisonStrategy();
        }
    }
}

================================================
FILE:
src/main/java/org/cobbzilla/s3s3mirror/comparisonstrategies/EtagComparisonStrategy.java
================================================
package org.cobbzilla.s3s3mirror.comparisonstrategies;

import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class EtagComparisonStrategy extends SizeOnlyComparisonStrategy {
    @Override
    public boolean sourceDifferent(S3ObjectSummary source, ObjectMetadata destination) {
        return super.sourceDifferent(source, destination) || !source.getETag().equals(destination.getETag());
    }
}

================================================
FILE: src/main/java/org/cobbzilla/s3s3mirror/comparisonstrategies/SizeAndLastModifiedComparisonStrategy.java
================================================
package org.cobbzilla.s3s3mirror.comparisonstrategies;

import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class SizeAndLastModifiedComparisonStrategy extends SizeOnlyComparisonStrategy {
    @Override
    public boolean sourceDifferent(S3ObjectSummary source, ObjectMetadata destination) {
        return super.sourceDifferent(source, destination) || source.getLastModified().after(destination.getLastModified());
    }
}

================================================
FILE: src/main/java/org/cobbzilla/s3s3mirror/comparisonstrategies/SizeOnlyComparisonStrategy.java
================================================
package org.cobbzilla.s3s3mirror.comparisonstrategies;

import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class SizeOnlyComparisonStrategy implements ComparisonStrategy {
    @Override
    public boolean sourceDifferent(S3ObjectSummary source, ObjectMetadata destination) {
        return source.getSize() != destination.getContentLength();
    }
}

================================================
FILE: src/main/resources/log4j.xml
================================================
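Taken together, the three comparison strategies are nested predicates: size-only is the base check, and the etag and last-modified variants each add one condition on top of it. A minimal stand-alone sketch of that decision logic (hypothetical plain-Java helpers for illustration; the real classes operate on the AWS SDK's `S3ObjectSummary` and `ObjectMetadata` types):

```java
// Simplified stand-ins for the comparison strategies; names and
// parameters here are illustrative, not the project's actual API.
public class ComparisonSketch {

    // SizeOnlyComparisonStrategy: copy when sizes differ
    static boolean sizeDifferent(long srcSize, long dstSize) {
        return srcSize != dstSize;
    }

    // EtagComparisonStrategy: size check, plus an etag mismatch
    static boolean etagDifferent(long srcSize, String srcEtag, long dstSize, String dstEtag) {
        return sizeDifferent(srcSize, dstSize) || !srcEtag.equals(dstEtag);
    }

    // SizeAndLastModifiedComparisonStrategy: size check, plus a newer source mtime
    static boolean sizeAndMtimeDifferent(long srcSize, long srcMtime, long dstSize, long dstMtime) {
        return sizeDifferent(srcSize, dstSize) || srcMtime > dstMtime;
    }

    public static void main(String[] args) {
        // identical size and etag: no copy needed
        System.out.println(etagDifferent(10, "abc", 10, "abc")); // prints false
        // same size, but the source was modified more recently: copy
        System.out.println(sizeAndMtimeDifferent(10, 2000, 10, 1000)); // prints true
    }
}
```

Note that `sourceDifferent` returning false is what lets a re-run of the mirror skip already-synced objects, which is why the etag strategy is the default: it is the strictest check that needs no extra GET per object.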
================================================ FILE: src/test/java/org/cobbzilla/s3s3mirror/MirrorMainTest.java ================================================

package org.cobbzilla.s3s3mirror;

import org.junit.Test;

import static org.junit.Assert.*;

public class MirrorMainTest {

    public static final String SOURCE = "s3://from-bucket";
    public static final String DESTINATION = "s3://to-bucket";

    @Test
    public void testBasicArgs() throws Exception {
        final MirrorMain main = new MirrorMain(new String[]{SOURCE, DESTINATION});
        main.parseArguments();

        final MirrorOptions options = main.getOptions();
        assertFalse(options.isDryRun());
        assertEquals(SOURCE, options.getSource());
        assertEquals(DESTINATION, options.getDestination());
    }

    @Test
    public void testDryRunArgs() throws Exception {
        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_DRY_RUN, SOURCE, DESTINATION});
        main.parseArguments();

        final MirrorOptions options = main.getOptions();
        assertTrue(options.isDryRun());
        assertEquals(SOURCE, options.getSource());
        assertEquals(DESTINATION, options.getDestination());
    }

    @Test
    public void testMaxConnectionsArgs() throws Exception {
        int maxConns = 42;
        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_MAX_CONNECTIONS, String.valueOf(maxConns), SOURCE, DESTINATION});
        main.parseArguments();

        final MirrorOptions options = main.getOptions();
        assertFalse(options.isDryRun());
        assertEquals(maxConns, options.getMaxConnections());
        assertEquals(SOURCE, options.getSource());
        assertEquals(DESTINATION, options.getDestination());
    }

    @Test
    public void testInlinePrefix() throws Exception {
        final String prefix = "foo";
        final MirrorMain main = new MirrorMain(new String[]{SOURCE + "/" + prefix, DESTINATION});
        main.parseArguments();

        final MirrorOptions options = main.getOptions();
        assertEquals(prefix, options.getPrefix());
        assertNull(options.getDestPrefix());
    }

    @Test
    public void testInlineDestPrefix() throws Exception {
        final String destPrefix = "foo";
        final MirrorMain main = new MirrorMain(new String[]{SOURCE, DESTINATION + "/" + destPrefix});
        main.parseArguments();

        final MirrorOptions options = main.getOptions();
        assertEquals(destPrefix, options.getDestPrefix());
        assertNull(options.getPrefix());
    }

    @Test
    public void testInlineSourceAndDestPrefix() throws Exception {
        final String prefix = "foo";
        final String destPrefix = "bar";
        final MirrorMain main = new MirrorMain(new String[]{SOURCE + "/" + prefix, DESTINATION + "/" + destPrefix});
        main.parseArguments();

        final MirrorOptions options = main.getOptions();
        assertEquals(prefix, options.getPrefix());
        assertEquals(destPrefix, options.getDestPrefix());
    }

    @Test
    public void testInlineSourcePrefixAndPrefixOption() throws Exception {
        final String prefix = "foo";
        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_PREFIX, prefix, SOURCE + "/" + prefix, DESTINATION});
        try {
            main.parseArguments();
            fail("expected IllegalArgumentException");
        } catch (Exception e) {
            assertTrue(e instanceof IllegalArgumentException);
        }
    }

    @Test
    public void testInlineDestinationPrefixAndPrefixOption() throws Exception {
        final String prefix = "foo";
        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_DEST_PREFIX, prefix, SOURCE, DESTINATION + "/" + prefix});
        try {
            main.parseArguments();
            fail("expected IllegalArgumentException");
        } catch (Exception e) {
            assertTrue(e instanceof IllegalArgumentException);
        }
    }

    /**
     * When access keys are read from the environment, the --proxy setting is valid.
     * If access keys are read from the s3cfg file, proxy settings are picked up from there.
     * @throws Exception
     */
    @Test
    public void testProxyHostAndProxyPortOption() throws Exception {
        final String proxy = "localhost:8080";
        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_PROXY, proxy, SOURCE, DESTINATION});
        main.getOptions().setAWSAccessKeyId("accessKey");
        main.getOptions().setAWSSecretKey("secretKey");
        main.parseArguments();

        assertEquals("localhost", main.getOptions().getProxyHost());
        assertEquals(8080, main.getOptions().getProxyPort());
    }

    @Test
    public void testInvalidProxyOption() throws Exception {
        for (String proxy : new String[] {"localhost", "localhost:", ":1234", "localhost:invalid", ":", ""}) {
            testInvalidProxySetting(proxy);
        }
    }

    private void testInvalidProxySetting(String proxy) throws Exception {
        final MirrorMain main = new MirrorMain(new String[]{MirrorOptions.OPT_PROXY, proxy, SOURCE, DESTINATION});
        main.getOptions().setAWSAccessKeyId("accessKey");
        main.getOptions().setAWSSecretKey("secretKey");
        try {
            main.parseArguments();
            fail("Invalid proxy setting (" + proxy + ") should have thrown exception");
        } catch (IllegalArgumentException expected) {}
    }
}

================================================ FILE: src/test/java/org/cobbzilla/s3s3mirror/MirrorTest.java ================================================

package org.cobbzilla.s3s3mirror;

import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.ObjectMetadata;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.RandomStringUtils;
import org.junit.After;
import org.junit.Test;

import java.util.ArrayList;
import java.util.List;

import static org.cobbzilla.s3s3mirror.MirrorOptions.*;
import static org.cobbzilla.s3s3mirror.TestFile.Clean;
import static org.cobbzilla.s3s3mirror.TestFile.Copy;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

@Slf4j
public class MirrorTest {

    public static final String SOURCE_ENV_VAR = "S3S3_TEST_SOURCE";
    public static final String DEST_ENV_VAR = "S3S3_TEST_DEST";
    public static final String SOURCE = System.getenv(SOURCE_ENV_VAR);
    public static final String DESTINATION = System.getenv(DEST_ENV_VAR);

    private List<S3Asset> stuffToCleanup = new ArrayList<S3Asset>();

    // Every individual test *must* initialize the "main" instance variable, otherwise NPE gets thrown here.
    private MirrorMain main = null;

    private TestFile createTestFile(String key, Copy copy, Clean clean) throws Exception {
        return TestFile.create(key, main.getClient(), stuffToCleanup, copy, clean);
    }

    public static String random(int size) {
        return RandomStringUtils.randomAlphanumeric(size) + "_" + System.currentTimeMillis();
    }

    private boolean checkEnvs() {
        if (SOURCE == null || DESTINATION == null) {
            log.warn("No " + SOURCE_ENV_VAR + " and/or no " + DEST_ENV_VAR + " found in environment, skipping test");
            return false;
        }
        return true;
    }

    @After
    public void cleanupS3Assets() {
        // Every individual test *must* initialize the "main" instance variable, otherwise NPE gets thrown here.
        if (checkEnvs()) {
            AmazonS3Client client = main.getClient();
            for (S3Asset asset : stuffToCleanup) {
                try {
                    log.info("cleanupS3Assets: deleting " + asset);
                    client.deleteObject(asset.bucket, asset.key);
                } catch (Exception e) {
                    log.error("Error cleaning up object: " + asset + ": " + e.getMessage());
                }
            }
            main = null;
        }
    }

    @Test
    public void testSimpleCopy() throws Exception {
        if (!checkEnvs()) return;
        final String key = "testSimpleCopy_" + random(10);
        final String[] args = {OPT_VERBOSE, OPT_PREFIX, key, SOURCE, DESTINATION};
        testSimpleCopyInternal(key, args);
    }

    @Test
    public void testSimpleCopyWithInlinePrefix() throws Exception {
        if (!checkEnvs()) return;
        final String key = "testSimpleCopyWithInlinePrefix_" + random(10);
        final String[] args = {OPT_VERBOSE, SOURCE + "/" + key, DESTINATION};
        testSimpleCopyInternal(key, args);
    }

    private void testSimpleCopyInternal(String key, String[] args) throws Exception {
        main = new MirrorMain(args);
        main.init();

        final TestFile testFile = createTestFile(key, Copy.SOURCE, Clean.SOURCE_AND_DEST);

        main.run();

        assertEquals(1, main.getContext().getStats().objectsCopied.get());
        assertEquals(testFile.data.length(), main.getContext().getStats().bytesCopied.get());

        final ObjectMetadata metadata = main.getClient().getObjectMetadata(DESTINATION, key);
        assertEquals(testFile.data.length(), metadata.getContentLength());
    }

    @Test
    public void testSimpleCopyWithDestPrefix() throws Exception {
        if (!checkEnvs()) return;
        final String key = "testSimpleCopyWithDestPrefix_" + random(10);
        final String destKey = "dest_testSimpleCopyWithDestPrefix_" + random(10);
        final String[] args = {OPT_PREFIX, key, OPT_DEST_PREFIX, destKey, SOURCE, DESTINATION};
        testSimpleCopyWithDestPrefixInternal(key, destKey, args);
    }

    @Test
    public void testSimpleCopyWithInlineDestPrefix() throws Exception {
        if (!checkEnvs()) return;
        final String key = "testSimpleCopyWithInlineDestPrefix_" + random(10);
        final String destKey = "dest_testSimpleCopyWithInlineDestPrefix_" + random(10);
        final String[] args = {SOURCE + "/" + key, DESTINATION + "/" + destKey};
        testSimpleCopyWithDestPrefixInternal(key, destKey, args);
    }

    private void testSimpleCopyWithDestPrefixInternal(String key, String destKey, String[] args) throws Exception {
        main = new MirrorMain(args);
        main.init();

        final TestFile testFile = createTestFile(key, Copy.SOURCE, Clean.SOURCE);
        stuffToCleanup.add(new S3Asset(DESTINATION, destKey));

        main.run();

        assertEquals(1, main.getContext().getStats().objectsCopied.get());
        assertEquals(testFile.data.length(), main.getContext().getStats().bytesCopied.get());

        final ObjectMetadata metadata = main.getClient().getObjectMetadata(DESTINATION, destKey);
        assertEquals(testFile.data.length(), metadata.getContentLength());
    }

    @Test
    public void testDeleteRemoved() throws Exception {
        if (!checkEnvs()) return;
        final String key = "testDeleteRemoved_" + random(10);

        main = new MirrorMain(new String[]{OPT_VERBOSE, OPT_PREFIX, key, OPT_DELETE_REMOVED, SOURCE, DESTINATION});
        main.init();

        // Write some files to dest
        final int numDestFiles = 3;
        final String[] destKeys = new String[numDestFiles];
        final TestFile[] destFiles = new TestFile[numDestFiles];
        for (int i = 0; i < numDestFiles; i++) {
            // ... (remainder of testDeleteRemoved and the rest of MirrorTest.java truncated in extract)

================================================ FILE: src/test/java/org/cobbzilla/s3s3mirror/TestFile.java ================================================

    // ... (beginning of TestFile.java truncated in extract; create(...) signature inferred from its call site in MirrorTest)
    public static TestFile create(String key, AmazonS3Client client, List<S3Asset> stuffToCleanup, Copy copy, Clean clean) throws Exception {
        TestFile testFile = new TestFile();
        switch (clean) {
            case SOURCE:
                stuffToCleanup.add(new S3Asset(MirrorTest.SOURCE, key));
                break;
            case DEST:
                stuffToCleanup.add(new S3Asset(MirrorTest.DESTINATION, key));
                break;
            case SOURCE_AND_DEST:
                stuffToCleanup.add(new S3Asset(MirrorTest.SOURCE, key));
                stuffToCleanup.add(new S3Asset(MirrorTest.DESTINATION, key));
                break;
        }
        switch (copy) {
            case SOURCE:
                client.putObject(MirrorTest.SOURCE, key, testFile.file);
                break;
            case DEST:
                client.putObject(MirrorTest.DESTINATION, key, testFile.file);
                break;
            case SOURCE_AND_DEST:
                client.putObject(MirrorTest.SOURCE, key, testFile.file);
                client.putObject(MirrorTest.DESTINATION, key, testFile.file);
                break;
        }
        return testFile;
    }
}
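The proxy tests in MirrorMainTest above expect `--proxy localhost:8080` to split into a host and a numeric port, and expect strings like `"localhost:"`, `":1234"`, and `"localhost:invalid"` to be rejected with an `IllegalArgumentException`. A minimal self-contained sketch of that kind of validation follows; it is an illustration of the behavior the tests assert, not the actual `MirrorOptions` implementation, and the class name `ProxyParseSketch` is made up for this example:

```java
public class ProxyParseSketch {

    // Parse "host:port" into {host, port}; reject a missing host,
    // a missing port, or a non-numeric port.
    static String[] parse(String proxy) {
        final int idx = proxy == null ? -1 : proxy.indexOf(':');
        // idx <= 0 covers no colon and empty host; idx == length-1 covers empty port
        if (idx <= 0 || idx == proxy.length() - 1) {
            throw new IllegalArgumentException("Invalid proxy setting: " + proxy);
        }
        final String host = proxy.substring(0, idx);
        final int port;
        try {
            port = Integer.parseInt(proxy.substring(idx + 1));
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Invalid proxy port: " + proxy);
        }
        return new String[]{host, String.valueOf(port)};
    }

    public static void main(String[] args) {
        System.out.println(String.join(",", parse("localhost:8080"))); // localhost,8080
        // The same invalid inputs exercised by testInvalidProxyOption:
        for (String bad : new String[]{"localhost", "localhost:", ":1234", "localhost:invalid", ":", ""}) {
            try {
                parse(bad);
                System.out.println("accepted " + bad);
            } catch (IllegalArgumentException expected) {
                System.out.println("rejected " + bad);
            }
        }
    }
}
```

Rejecting on `idx <= 0` rather than splitting on every colon keeps the six invalid cases from the test table in one code path, which is why the tests can catch a single `IllegalArgumentException` type for all of them.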