Full Code of choonkeat/attache for AI

Repository: choonkeat/attache
Branch: master
Commit: eaee2aca211b
Files: 46
Total size: 103.5 KB

Directory structure:
attache/

├── .gitignore
├── .rspec
├── .travis.yml
├── CONTRIBUTING.md
├── Gemfile
├── Guardfile
├── LICENSE
├── Procfile
├── README.md
├── Rakefile
├── app.json
├── attache.gemspec
├── config/
│   ├── puma.rb
│   └── vhost.example.yml
├── config.ru
├── docker/
│   ├── Dockerfile
│   └── bundler_geminstaller_install_with_timeout.rb
├── exe/
│   └── attache
├── lib/
│   ├── attache/
│   │   ├── backup.rb
│   │   ├── base.rb
│   │   ├── delete.rb
│   │   ├── download.rb
│   │   ├── file_response_body.rb
│   │   ├── job.rb
│   │   ├── resize_job.rb
│   │   ├── tasks.rb
│   │   ├── tus/
│   │   │   └── upload.rb
│   │   ├── tus.rb
│   │   ├── upload.rb
│   │   ├── upload_url.rb
│   │   ├── version.rb
│   │   └── vhost.rb
│   └── attache.rb
├── public/
│   ├── index.html
│   └── vendor/
│       └── roboto/
│           └── Apache License.txt
└── spec/
    ├── fixtures/
    │   └── sample.txt
    ├── lib/
    │   └── attache/
    │       ├── backup_spec.rb
    │       ├── delete_spec.rb
    │       ├── download_spec.rb
    │       ├── resize_job_spec.rb
    │       ├── tus/
    │       │   └── upload_spec.rb
    │       ├── tus_spec.rb
    │       ├── upload_spec.rb
    │       ├── upload_url_spec.rb
    │       └── vhost_spec.rb
    └── spec_helper.rb

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
vhost.yml


================================================
FILE: .rspec
================================================
--color
--require spec_helper
--format documentation



================================================
FILE: .travis.yml
================================================
language: ruby
bundler_args: --retry=3 --jobs=8 --no-deployment
cache: bundler
sudo: false
rvm:
  - 2.2.3
matrix:
  fast_finish: true


================================================
FILE: CONTRIBUTING.md
================================================
# Contributing

1. Fork it ( http://github.com/choonkeat/attache/fork )
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create new Pull Request


================================================
FILE: Gemfile
================================================
source 'https://rubygems.org'

ruby "2.2.3"

gemspec


================================================
FILE: Guardfile
================================================
guard :rspec, cmd: "bundle exec rspec" do
  watch(%r{^spec/.+_spec\.rb$})
  watch(%r{^lib/(.+)\.rb$})     { |m| "spec/lib/#{m[1]}_spec.rb" }
end


================================================
FILE: LICENSE
================================================
The MIT License (MIT)

Copyright (c) 2015 Chew Choon Keat

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.



================================================
FILE: Procfile
================================================
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -e production -q attache_vhost_jobs -r ./lib/attache.rb


================================================
FILE: README.md
================================================
# attache

[![Gem Version](https://badge.fury.io/rb/attache.svg)](https://badge.fury.io/rb/attache)
[![Build Status](https://travis-ci.org/choonkeat/attache.svg?branch=master)](https://travis-ci.org/choonkeat/attache)

## But why?

If you're interested in the "why", check out [my slides](http://www.slideshare.net/choonkeat/file-upload-2015) and [the blog post](http://blog.choonkeat.com/weblog/2015/10/file-uploads-2015.html).

Your app can easily support
- dynamic resizing of images (no predefined styles in your app)
- all file types, since attache lets apps [display non-image files as icons through `<img src...>`](https://github.com/choonkeat/attache/pull/28)
- resumable upload over unreliable (mobile) networks [using the TUS protocol](https://github.com/choonkeat/attache/pull/10)

## Run an instance

#### Heroku

You can run your own instance on your own Heroku server

[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy)

#### Docker

```
docker run -it -p 9292:5000 --rm attache/attache
```

Also, see [Deploying Attache on Digital Ocean using Docker](https://github.com/choonkeat/attache/wiki/Deploying-Attache-on-Digital-Ocean-using-Docker)

#### RubyGem

You can install the gem and then execute `attache` command

```
gem install attache
attache start -c web=1 -p 9292
```

NOTE: some config files will be written into your current directory

```
.
├── Procfile
├── config
│   ├── puma.rb
│   └── vhost.yml
└── config.ru
```

#### Bundler

You can also use bundler to manage the gem; add this into your `Gemfile`

```
gem 'attache'
```

then execute

```
bundle install
bundle exec attache start -c web=1 -p 9292
```

NOTE: some config files will be written into your current directory (see RubyGems above)

#### Source code

You can check out the source code and run it like a regular [Procfile-based app](https://ddollar.github.io/foreman/):

```
git clone https://github.com/choonkeat/attache.git
cd attache
bundle install
foreman start -c web=1 -p 9292
```

See [foreman](https://github.com/ddollar/foreman) for more details.

## Configuration

`LOCAL_DIR` is where your local disk cache will be. By default, attache will use a system-assigned temporary directory, which may not be the same every time you run attache.

`CACHE_SIZE_BYTES` determines how much disk space will be used for the local disk cache. If the cache grows beyond this size, the least recently used files will be evicted after `CACHE_EVICTION_INTERVAL_SECONDS` seconds.

#### Asynchronous delete

By default `attache` will delete files from cloud storage using the lightweight, async processing library [sucker_punch](https://github.com/brandonhilkert/sucker_punch). This requires no additional setup (read: 1x free dyno).

However, if you prefer a more durable queue, configuring `REDIS_PROVIDER` or `REDIS_URL` will switch `attache` to a `redis`-backed queue via `sidekiq` instead. [Read Sidekiq's documentation](https://github.com/mperham/sidekiq/wiki/Using-Redis#using-an-env-variable) for details on these variables.

If for some reason you'd want the cloud storage delete to be synchronous, set `INLINE_JOB=1` instead.

#### Virtual Host Cloud Storage

`attache` uses a different config (and backs up files to a different cloud service) depending on the hostname by which the request was made.

This means a single attache server can be the workhorse for different apps. Refer to `config/vhost.example.yml` file for configuration details.

At boot time, the `attache` server will first look at the `VHOST` environment variable. If that is missing, it will load the contents of `config/vhost.yml`. If neither exists, the `attache` server runs in development mode: uploaded files are only stored locally and may be evicted to free up disk space.

If you do not want to write sensitive information like AWS access keys and secrets into a `config/vhost.yml` file, you can convert the entire content into `json` format and assign it to the `VHOST` environment variable instead.

```
# bash
export VHOST=$(bundle exec rake attache:vhost)

# heroku
heroku config:set VHOST=$(bundle exec rake attache:vhost)
```

#### Virtual Host Authorization

By default `attache` will accept uploads and delete requests from any client. Set `SECRET_KEY` to ensure attache only accepts upload and delete commands from your own app.

For most app developers *using* attache in their Rails app through a library like the [attache-rails gem](https://github.com/choonkeat/attache-rails), how this works may not matter. But if you are developing attache itself, or writing a client library for attache, read on.

#### Virtual Host Authorization (Developer)

When `SECRET_KEY` is set, `attache` will require a valid `hmac` parameter in the upload request. Upload and delete requests will be refused with an `HTTP 401` error unless the `hmac` is correct.

The additional parameters required for authorized request are:

* `uuid` is a UUID string
* `expiration` is a unix timestamp of a future time; if the timestamp has passed, the upload will be regarded as invalid
* `hmac` is the `HMAC-SHA1` of the `SECRET_KEY` and the concatenated value of `uuid` and `expiration`

i.e.

``` ruby
hmac = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), SECRET_KEY, uuid + expiration)
```
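Putting the three parameters together, a client could construct the authorization query string like this. A minimal sketch: the variable names and the 10-minute expiry window are illustrative; only the `uuid`, `expiration`, and `hmac` parameter names come from the protocol above.

``` ruby
require 'securerandom'
require 'openssl'
require 'uri'

secret_key = 'CHANGEME'                   # must match the server's SECRET_KEY
uuid       = SecureRandom.uuid            # any uuid string
expiration = (Time.now + 600).to_i.to_s   # unix timestamp 10 minutes from now

# hmac covers uuid and expiration, concatenated as strings
hmac = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), secret_key, uuid + expiration)

# append these to the upload request, e.g. "/upload?file=image123.jpg&#{query}"
query = URI.encode_www_form(uuid: uuid, expiration: expiration, hmac: hmac)
```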

## APIs

The attache server is a reference implementation of these interfaces. If you write your own server, [compatibility can be verified by running a test suite](https://github.com/choonkeat/attache_api#testing-against-an-attache-compatible-server).

#### Upload

Users will upload files directly into the `attache` server from their browser, bypassing the main app.

> ```
> PUT /upload?file=image123.jpg
> ```
> file content is the http request body

The main app front end will receive a unique `path` for each uploaded file - the only information to store in the main app database.

> ```
> {"path":"pre/fix/image123.jpg","content_type":"image/jpeg","geometry":"1920x1080"}
> ```
> json response from attache after upload.
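As a sketch of the client side, a direct upload with Ruby's stdlib `Net::HTTP` could look like this. The hostname and filename are placeholders, not part of the API.

``` ruby
require 'net/http'
require 'uri'

# placeholder host and filename; file content goes in the request body
uri = URI("http://localhost:9292/upload?file=image123.jpg")
request = Net::HTTP::Put.new(uri)
request.body = "...file bytes..."   # in practice: File.binread("image123.jpg")

# To actually send (requires a running attache server):
#   response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
#   JSON.parse(response.body)   # the json response shown above; store its "path"
```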

##### Upload by url

> ```
> GET /upload_url?url=https://example.com/logo.png
> ```

Attache will download the file from the supplied `url` and upload it through the regular `/upload` handler, so expect the same json response after upload. Works with `GET`, `POST`, and `PUT`.

Data URIs (aka base64 encoded file binaries) can also be uploaded to the same `/upload_url` endpoint through the same `url` parameter.
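For instance, a data URI can be built with stdlib `Base64` before being passed as the same `url` parameter. A sketch; the content and media type here are illustrative.

``` ruby
require 'base64'
require 'uri'

content  = "hello world"   # in practice, File.binread(...) of the file to upload
data_uri = "data:text/plain;base64,#{Base64.strict_encode64(content)}"

# the data URI goes into the same `url` parameter as a regular http(s) url,
# e.g. GET/POST/PUT "/upload_url?#{query}"
query = URI.encode_www_form(url: data_uri)
```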

#### Download

Whenever the main app wants to display the uploaded file, constrained to a particular size, it will use a helper method provided by the `attache` lib. e.g. `embed_attache(path)` which will generate the necessary, barebones markup.

> ```
> <img src="https://example.com/view/pre/fix/100x100/image123.jpg" />
> ```
> use [the imagemagick resize syntax](http://www.imagemagick.org/Usage/resize/) to specify the desired output.
>
> make sure to `escape` the geometry string.
> e.g. for a hard crop of `50x50#`, the url should be `50x50%23`
>
> ```
> <img src="https://example.com/view/pre/fix/50x50%23/image123.jpg" />
> ```
> requesting a geometry of `original` will return the original uploaded file. this works well for non-image file uploads.
> requesting a geometry of `remote` will skip the local cache and serve from cloud storage.
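Escaping the geometry segment can be done with stdlib `CGI.escape`; a minimal illustration (the host and path are placeholders):

``` ruby
require 'cgi'

geometry = CGI.escape("50x50#")   # "#" becomes "%23"
src = "https://example.com/view/pre/fix/#{geometry}/image123.jpg"
```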

* Attache keeps the uploaded file on the local hard disk (a temp directory)
* Attache will also upload the file into cloud storage if `FOG_CONFIG` is set
* If the local file does not exist for some reason (e.g. cleared cache), attache will download it from cloud storage and store it locally
* When a specific size is requested, attache will generate the resized file based on the local file and serve it in the http response
* If cloud storage is defined, the local disk cache will store up to a maximum of `CACHE_SIZE_BYTES` bytes. By default `CACHE_SIZE_BYTES` will be 80% of available disk space

#### Delete

> ```
> DELETE /delete
> paths=image1.jpg%0Aprefix2%2Fimage2.jpg%0Aimage3.jpg
> ```

Removing one or more files from the local cache and remote storage can be done via an http `POST` or `DELETE` request to `/delete`, with a `paths` parameter in the request body.

The `paths` value should be delimited by the newline character, aka `\n`. In the example above, 3 files will be requested for deletion: `image1.jpg`, `prefix2/image2.jpg`, and `image3.jpg`.
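The encoded `paths` parameter in the example above can be produced with stdlib form encoding; a sketch:

``` ruby
require 'uri'

paths = ["image1.jpg", "prefix2/image2.jpg", "image3.jpg"]
body  = URI.encode_www_form(paths: paths.join("\n"))
# newlines become %0A and slashes %2F, matching the request body shown above;
# send `body` as the request body of a POST or DELETE to /delete
```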

#### Backup

> ```
> POST /backup
> paths=image1.jpg%0Aprefix2%2Fimage2.jpg%0Aimage3.jpg
> ```

This feature might be known as `promote` in other file upload solutions. `attache` allows the client app to `backup` uploaded images to another bucket for longer-term storage.

Copying one or more files from the default remote storage to the backup remote storage can be done via an http `POST` request to `/backup`, with a `paths` parameter in the request body.

The `paths` value should be delimited by the newline character, aka `\n`. In the example above, 3 files will be requested for backup: `image1.jpg`, `prefix2/image2.jpg`, and `image3.jpg`.

If backup remote storage is not configured, this API call will be a noop. If configured, the backup storage must be accessible with the same credentials as the default cloud storage. Please refer to the `BACKUP_CONFIG` configuration illustrated in the `config/vhost.example.yml` file in this repository.

By default, the `backup` operation is performed synchronously. Set the `BACKUP_ASYNC` environment variable to make it follow the same asynchronous behavior as `delete`.

The main reason to configure a backup storage is so the default cloud storage can auto-expire files, mitigating [abuse](https://github.com/choonkeat/attache/issues/13). Consult the documentation of your cloud storage provider on how to set up auto expiry, e.g. [here](https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/) or [here](https://cloud.google.com/storage/docs/lifecycle).

## License

MIT


================================================
FILE: Rakefile
================================================
if ENV['RACK_ENV'] == 'production'
  # Heroku
  # https://gist.github.com/Geesu/d0b58488cfae51f361c6
  namespace :assets do
    task 'precompile' do
      puts "Not applicable"
    end
  end
else
  require "bundler/gem_tasks"
  require 'rspec/core/rake_task'
  require 'attache/tasks'

  RSpec::Core::RakeTask.new(:spec)
  task :default => :spec
end


================================================
FILE: app.json
================================================
{
  "name": "attache server",
  "description": "Image server",
  "repository": "https://github.com/choonkeat/attache",
  "keywords": ["ruby", "rack", "image", "resize", "direct", "upload"],
  "env": {
    "REMOTE_DIR": "attache"
  }
}


================================================
FILE: attache.gemspec
================================================
$:.push File.expand_path("../lib", __FILE__)

# Maintain your gem's version:
require "attache/version"

# Describe your gem and declare its dependencies:
Gem::Specification.new do |s|
  s.name        = "attache"
  s.version     = Attache::VERSION
  s.authors     = ["choonkeat"]
  s.email       = ["choonkeat@gmail.com"]
  s.homepage    = "https://github.com/choonkeat/attache"
  s.summary     = "Image server for everybody"
  s.description = "Standalone rack app to manage files on behalf of your app"
  s.license     = "MIT"

  s.files       = Dir["{app,lib}/**/*", "MIT-LICENSE", "Rakefile", "README.md", 'exe/**/*',
                      "config/vhost.example.yml", "config/puma.rb", "config.ru", 'public/**/*']
  s.bindir      = 'exe'
  s.executables = ['attache']

  s.add_runtime_dependency 'rack', '~> 1.6'
  s.add_runtime_dependency 'activesupport'
  s.add_runtime_dependency 'paperclip', '~> 4.3'
  s.add_runtime_dependency 'puma', '~> 2.14'
  s.add_runtime_dependency 'net-ssh'
  s.add_runtime_dependency 'fog', '~> 1.34'
  s.add_runtime_dependency 'excon', '~> 0.45'
  s.add_runtime_dependency 'sys-filesystem', '~> 0'
  s.add_runtime_dependency 'disk_store', '~> 0'
  s.add_runtime_dependency 'celluloid', '< 0.17' # 0.17 has compatibility issues with disk_store
  s.add_runtime_dependency 'foreman', '~> 0'
  s.add_runtime_dependency 'connection_pool', '~> 2.2'
  s.add_runtime_dependency 'sidekiq', '~> 3.4'
  s.add_runtime_dependency 'sucker_punch', '~> 1.5' # single-process Ruby asynchronous processing library

  s.add_development_dependency 'rspec', '~> 3.2'
  s.add_development_dependency 'shoulda', '~> 3.5'
  s.add_development_dependency 'guard-rspec', '~> 4.6'
end


================================================
FILE: config/puma.rb
================================================
workers Integer(ENV['PUMA_WORKERS'] || 1)
threads Integer(ENV['MIN_THREADS']  || 1), Integer(ENV['MAX_THREADS'] || 16)

preload_app!

rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'


================================================
FILE: config/vhost.example.yml
================================================
# This is an example file. You can copy this file as `vhost.yml` and edit
# the content with the correct values.

# This section will only take effect if a request is made to `google.lvh.me:9292`
"google.lvh.me:9292":
  "SECRET_KEY": CHANGEME                         # this is the shared secret between your app and this attache server
  "REMOTE_DIR": CHANGEME                         # this is the root directory to use in the `bucket`; omit to use root
  "GEOMETRY_WHITELIST":                          # this limits the type of `geometry` we resize to; optional
    - "100x100"
    - "1024>"
  "FOG_CONFIG":                                  #
    "provider": Google                           # refer to `fog.io/storage` documentation
    "google_storage_access_key_id": CHANGEME     #
    "google_storage_secret_access_key": CHANGEME #
    "bucket": CHANGEME                           # This `bucket` key is not standard Fog config. BUT attache server needs it

# This section will only take effect if a request is made to `aws.example.com`
"aws.example.com":
  "SECRET_KEY": CHANGEME
  "FOG_CONFIG":
    "provider": AWS
    "aws_access_key_id": CHANGEME
    "aws_secret_access_key": CHANGEME
    "bucket": CHANGEME
    "region": us-west-1
  "BACKUP_CONFIG":
    "bucket": CHANGEME_BAK
    # only supports 1 key: `bucket`

# This section will only take effect if a request is made to `localhost:9292`
"localhost:9292":


# This section will apply if a request did not match anything else
"0.0.0.0":


================================================
FILE: config.ru
================================================
require 'attache'

use Attache::Delete
use Attache::UploadUrl
use Attache::Upload
use Attache::Download
use Attache::Tus::Upload
use Attache::Backup
use Rack::Static, urls: ["/"], root: Attache.publicdir, index: "index.html"

run proc {|env| [200, {}, []] }


================================================
FILE: docker/Dockerfile
================================================
FROM ruby:2.2

RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y imagemagick ghostscript
RUN curl -sSL https://raw.githubusercontent.com/choonkeat/attache/master/docker/bundler_geminstaller_install_with_timeout.rb | ruby

RUN useradd -d /app -m app && \
    chown -R app /usr/local/bundle
USER app
RUN mkdir -p /app/src
WORKDIR /app/src

RUN curl -sSL http://johnvansickle.com/ffmpeg/releases/ffmpeg-release-32bit-static.tar.xz | tar -xJv
ENV PATH "$PATH:/app/src/ffmpeg-2.8.3-32bit-static"

RUN echo 'source "https://rubygems.org"' > Gemfile && \
    echo 'gem "attache", ">= 2.3.0"'     >> Gemfile && bundle && \
    gem install --no-ri --no-rdoc attache --version '>= 2.3.0'

EXPOSE 5000
CMD ["attache", "start", "-c", "web=1"]


================================================
FILE: docker/bundler_geminstaller_install_with_timeout.rb
================================================
# Usage:
#   ruby bundler_geminstaller_install_with_timeout.rb

target = `which bundle`.chomp
*old_lines, last_line = IO.read(target).split(/[\r\n]+/)
if (old_lines.grep(/install_with_timeout/)).empty?
  new_line = DATA.read.strip
  combined = (old_lines + [new_line, last_line]).join($/)
  open(target, "w") {|f| f.write(combined) }
  puts "installed."
else
  puts "already installed."
end

__END__

require "timeout"

require "rubygems/installer"
Gem::Installer.class_eval do
  def install_with_timeout
    puts "Gem install_with_timeout..."
    Timeout.timeout(Integer(ENV.fetch("GEM_INSTALL_TIMEOUT", 60))) {
      install_without_timeout
    }
  rescue Timeout::Error
    @tries = @tries.to_i + 1
    raise unless @tries < 5
    STDERR.puts "Gem timed out #{$!} (#{@tries})..."
    retry
  end

  alias :install_without_timeout :install
  alias :install :install_with_timeout
end

require "bundler/installer/gem_installer"
Bundler::GemInstaller.class_eval do
  def install_with_timeout
    puts "Bundler install_with_timeout..."
    Timeout.timeout(Integer(ENV.fetch("GEM_INSTALL_TIMEOUT", 60))) {
      install_without_timeout
    }
  rescue Timeout::Error
    @tries = @tries.to_i + 1
    raise unless @tries < 5
    STDERR.puts "Bundler timed out #{$!} (#{@tries})..."
    retry
  end

  alias :install_without_timeout :install
  alias :install :install_with_timeout
end


================================================
FILE: exe/attache
================================================
#!/usr/bin/env ruby

require 'fileutils'

# attache config
if ENV['VHOST']
  puts "Using VHOST env"
elsif File.exists?("config/vhost.yml")
  puts "Using config/vhost.yml"
else
  FileUtils.mkdir_p 'config'
  FileUtils.copy File.expand_path("../config/vhost.example.yml", File.dirname(__FILE__)), 'config/vhost.yml'
  puts "Initialized config/vhost.yml"
end

# puma config
if File.exists?("config/puma.rb")
  puts "Using config/puma.rb"
else
  FileUtils.mkdir_p 'config'
  FileUtils.copy File.expand_path("../config/puma.rb", File.dirname(__FILE__)), 'config/puma.rb'
  puts "Initialized config/puma.rb"
end

# procfile
if File.exists?("Procfile")
  puts "Using Procfile"
else
  open("Procfile", "w") do |f|
    f.write <<-EOM.gsub(/^\s+/, '')
      web: bundle exec puma -C config/puma.rb
      worker: bundle exec sidekiq -e production -q attache_vhost_jobs -r #{File.expand_path("../lib/attache.rb", File.dirname(__FILE__))}
    EOM
  end
  puts "Initialized Procfile"
end

# rakefile
if File.exists?("Rakefile")
  puts "Using Rakefile"
else
  open("Rakefile", "w") do |f|
    f.write <<-EOM.gsub(/^\s+/, '')
      require 'attache/tasks'
    EOM
  end
  puts "Initialized Rakefile"
end

# rack config
if File.exists?("config.ru")
  puts "Using config.ru"
else
  FileUtils.copy File.expand_path("../config.ru", File.dirname(__FILE__)), 'config.ru'
  puts "Initialized config.ru"
end

case ARGV.first
when 'start'
  require "foreman/cli"
  Foreman::CLI.start
else
  puts ""
  puts "Setup complete: run `foreman start` to begin"
  puts ""
end


================================================
FILE: lib/attache/backup.rb
================================================
class Attache::Backup < Attache::Base
  def initialize(app)
    @app = app
  end

  def _call(env, config)
    case env['PATH_INFO']
    when '/backup'
      request  = Rack::Request.new(env)
      params   = request.params
      return config.unauthorized unless config.authorized?(params)

      if config.storage && config.bucket
        sync_method = (ENV['BACKUP_ASYNC'] ? :async : :send)
        threads = []
        params['paths'].to_s.split("\n").each do |relpath|
          threads << Thread.new do
            Attache.logger.info "BACKUP remote #{relpath}"
            config.send(sync_method, :backup_file, relpath: relpath)
          end
        end
        threads.each(&:join)
      end
      [200, config.headers_with_cors, []]
    else
      @app.call(env)
    end
  end
end


================================================
FILE: lib/attache/base.rb
================================================
class Attache::Base
  def call(env)
    if vhost = vhost_for(request_hostname(env))
      dup._call(env, vhost)
    else
      @app.call(env)
    end
  rescue Timeout::Error
    Attache.logger.error $@
    Attache.logger.error $!
    Attache.logger.error "ERROR 503 #{env['PATH_INFO']} REFERER #{env['HTTP_REFERER'].inspect}"
    [503, { 'X-Exception' => $!.to_s }, []]
  rescue Exception
    Attache.logger.error $@
    Attache.logger.error $!
    Attache.logger.error "ERROR 500 #{env['PATH_INFO']} REFERER #{env['HTTP_REFERER'].inspect}"
    [500, { 'X-Exception' => $!.to_s }, []]
  end

  def vhost_for(host)
    Attache::VHost.new(Attache.vhost[host] || Attache.vhost['0.0.0.0'])
  end

  def request_hostname(env)
    env['HTTP_X_FORWARDED_HOST'] || env['HTTP_HOST'] || "unknown.host"
  end

  def content_type_of(fullpath)
    Paperclip::ContentTypeDetector.new(fullpath).detect
  rescue Paperclip::Errors::NotIdentifiedByImageMagickError
    # best effort only
  end

  def geometry_of(fullpath)
    Paperclip::Geometry.from_file(fullpath).tap(&:auto_orient).to_s
  rescue Paperclip::Errors::NotIdentifiedByImageMagickError
    # best effort only
  end

  def filesize_of(fullpath)
    File.stat(fullpath).size
  end

  def params_of(env)
    env['QUERY_STRING'].to_s.split('&').inject({}) do |sum, pair|
      k, v = pair.split('=').collect {|s| CGI.unescape(s) }
      sum.merge(k => v)
    end
  end

  def path_of(cachekey)
    Attache.cache.send(:key_file_path, cachekey)
  end

  def rack_response_body_for(file)
    Attache::FileResponseBody.new(file)
  end

  def generate_relpath(basename)
    File.join(*SecureRandom.hex.scan(/\w\w/), basename)
  end

  def json_of(relpath, cachekey, vhost)
    filepath = path_of(cachekey)
    json = {
      path:         relpath,
      content_type: content_type_of(filepath),
      geometry:     geometry_of(filepath),
      bytes:        filesize_of(filepath),
    }
    if vhost && vhost.secret_key
      content = json.sort.collect {|k,v| "#{k}=#{v}" }.join('&')
      json['signature'] = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), vhost.secret_key, content)
    end
    json.to_json
  end

end


================================================
FILE: lib/attache/delete.rb
================================================
class Attache::Delete < Attache::Base
  def initialize(app)
    @app = app
  end

  def _call(env, config)
    case env['PATH_INFO']
    when '/delete'
      request  = Rack::Request.new(env)
      params   = request.params
      return config.unauthorized unless config.authorized?(params)

      threads = []
      params['paths'].to_s.split("\n").each do |relpath|
        if Attache.cache
          threads << Thread.new do
            Attache.logger.info "DELETING local #{relpath}"
            cachekey = File.join(request_hostname(env), relpath)
            Attache.cache.delete(cachekey)
          end
        end
        if config.storage && config.bucket
          threads << Thread.new do
            Attache.logger.info "DELETING remote #{relpath}"
            config.async(:storage_destroy, relpath: relpath)
          end
        end
        if config.backup
          threads << Thread.new do
            Attache.logger.info "DELETING backup #{relpath}"
            config.backup.async(:storage_destroy, relpath: relpath)
          end
        end
      end
      threads.each(&:join)
      [200, config.headers_with_cors, []]
    else
      @app.call(env)
    end
  end
end


================================================
FILE: lib/attache/download.rb
================================================
require 'connection_pool'

class Attache::Download < Attache::Base
  RESIZE_JOB_POOL = ConnectionPool.new(JSON.parse(ENV.fetch('RESIZE_POOL') { '{ "size": 2, "timeout": 60 }' }).symbolize_keys) { Attache::ResizeJob.new }

  def initialize(app)
    @app = app
    @mutexes = {}
  end

  def _call(env, config)
    case env['PATH_INFO']
    when %r{\A/view/}
      vhosts = {}
      vhosts[ENV.fetch('REMOTE_GEOMETRY') { 'remote' }] = config.storage && config.bucket && config
      vhosts[ENV.fetch('BACKUP_GEOMETRY') { 'backup' }] = config.backup

      parse_path_info(env['PATH_INFO']['/view/'.length..-1]) do |dirname, geometry, basename, relpath|
        unless config.try(:geometry_whitelist).blank? || config.geometry_whitelist.include?(geometry)
          return [415, config.download_headers, ["#{geometry} is not supported"]]
        end

        if vhost = vhosts[geometry]
          headers = vhost.download_headers.merge({
                      'Location' => vhost.storage_url(relpath: relpath),
                      'Cache-Control' => 'private, no-cache',
                    })
          return [302, headers, []]
        end

        thumbnail = case geometry
          when 'original', *vhosts.keys
            get_original_file(relpath, vhosts, env)
          else
            get_thumbnail_file(geometry, basename, relpath, vhosts, env)
          end

        return [404, config.download_headers, []] if thumbnail.try(:size).to_i == 0

        headers = {
          'Content-Type' => content_type_of(thumbnail.path),
        }.merge(config.download_headers)

        [200, headers, rack_response_body_for(thumbnail)]
      end
    else
      @app.call(env)
    end
  end

  private

    def parse_path_info(geometrypath)
      parts = geometrypath.split('/')
      basename = CGI.unescape parts.pop
      geometry = CGI.unescape parts.pop
      dirname  = parts.join('/')
      relpath  = File.join(dirname, basename)
      yield dirname, geometry, basename, relpath
    end

    def synchronize(key, &block)
      mutex = @mutexes[key] ||= Mutex.new
      mutex.synchronize(&block)
    ensure
      @mutexes.delete(key)
    end

    def get_thumbnail_file(geometry, basename, relpath, vhosts, env)
      cachekey = File.join(request_hostname(env), relpath, geometry)
      synchronize(cachekey) do
        tempfile = nil
        Attache.cache.fetch(cachekey) do
          Attache.logger.info "[POOL] new job"
          tempfile = RESIZE_JOB_POOL.with do |job|
            job.perform(geometry, basename, relpath, vhosts, env) do
              # opens up possibility that job implementation
              # does not require we download original file prior
              get_original_file(relpath, vhosts, env)
            end
          end
        end.tap { File.unlink(tempfile.path) if tempfile.try(:path) }
      end
    end

    def get_original_file(relpath, vhosts, env)
      cachekey = File.join(request_hostname(env), relpath)
      synchronize(cachekey) do
        Attache.cache.fetch(cachekey) do
          name_with_vhost_pairs = vhosts.inject({}) { |sum,(k,v)| (v ? sum.merge(k => v) : sum) }
          get_first_result_present_async(name_with_vhost_pairs.collect {|name, vhost|
            lambda { Thread.handle_interrupt(BasicObject => :on_blocking) {
              begin
                Attache.logger.info "[POOL] looking for #{name} #{relpath}..."
                vhost.storage_get(relpath: relpath).tap do |v|
                  Attache.logger.info "[POOL] found #{name} #{relpath} = #{v.inspect}"
                end
              rescue Exception
                Attache.logger.error $!
                Attache.logger.error $@
                Attache.logger.info "[POOL] not found #{name} #{relpath}"
                nil
              end
            } }
          })
        end
      end
    rescue Exception # Errno::ECONNREFUSED, OpenURI::HTTPError, Excon::Errors, Fog::Errors::Error
      Attache.logger.error "ERROR REFERER #{env['HTTP_REFERER'].inspect}"
      nil
    end

    # Ref https://gist.github.com/sferik/39831f34eb87686b639c#gistcomment-1652888
    # a bit more complicated than the gist because we *want* to skip falsey results
    def get_first_result_present_async(lambdas)
      return if lambdas.empty? # queue.pop will never happen
      queue = Queue.new
      threads = lambdas.shuffle.collect { |code| Thread.new { queue << [Thread.current, code.call] } }
      until (item = queue.pop).last do
        thread, _ = item
        thread.join # we may have popped `queue` before the thread exited
        break unless threads.any?(&:alive?) || queue.size > 0
      end
      threads.each(&:kill)
      _, result = item
      result
    end
end
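The concurrency pattern in `get_first_result_present_async` above can be condensed into a standalone sketch (the helper name `first_present` is hypothetical, not part of attache): run each lambda in its own thread, push results onto a `Queue`, and return the first truthy result, killing the stragglers.

```ruby
# Minimal sketch of the "first truthy result wins" pattern:
# every lambda runs in its own thread; the main thread pops results
# off a shared queue and stops at the first truthy one.
require 'thread'

def first_present(lambdas)
  return if lambdas.empty? # queue.pop would block forever
  queue = Queue.new
  threads = lambdas.map { |code| Thread.new { queue << code.call } }
  result = nil
  lambdas.size.times do
    result = queue.pop
    break if result # first truthy result wins
  end
  threads.each(&:kill) # remaining lookups are no longer needed
  result
end
```

For example, `first_present([-> { nil }, -> { :found }])` returns `:found` regardless of which thread finishes first, and returns `nil` only when every lambda does.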


================================================
FILE: lib/attache/file_response_body.rb
================================================
class Attache::FileResponseBody
  def initialize(file, range_start = nil, range_end = nil)
    @file        = file
    @range_start = range_start || 0
    @range_end   = range_end || File.size(@file.path)
  end

  # adapted from rack/file.rb
  def each
    @file.seek(@range_start)
    remaining_len = @range_end
    while remaining_len > 0
      part = @file.read([8192, remaining_len].min)
      break unless part
      remaining_len -= part.length

      yield part
    end
  end
end
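The `each` loop above streams a file in 8 KB chunks. A self-contained sketch of the same loop, using a `StringIO` instead of a real `File` (the helper name `each_chunk` is illustrative only):

```ruby
# Chunked-read loop as in Attache::FileResponseBody#each:
# seek to the range start, then read at most `chunk_size` bytes per
# iteration until the requested byte count is exhausted.
require 'stringio'

def each_chunk(io, start_pos, length, chunk_size = 8192)
  io.seek(start_pos)
  remaining = length
  while remaining > 0
    part = io.read([chunk_size, remaining].min)
    break unless part # EOF reached before `length` bytes
    remaining -= part.length
    yield part
  end
end

io = StringIO.new("a" * 10_000)
chunks = []
each_chunk(io, 100, 9000, 4096) { |p| chunks << p.length }
# chunks is [4096, 4096, 808], totalling the requested 9000 bytes
```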


================================================
FILE: lib/attache/job.rb
================================================
class Attache::Job
  RETRY_DURATION = ENV.fetch('CACHE_EVICTION_INTERVAL_SECONDS') { 60 }.to_i / 3

  def perform(method, env, args)
    config = Attache::VHost.new(env)
    config.send(method, args.symbolize_keys)
  rescue Exception
    Attache.logger.error $@
    Attache.logger.error $!
    Attache.logger.error [method, args].inspect
    self.class.perform_in(RETRY_DURATION, method, env, args)
  end

  # Background processing setup

  if defined?(::SuckerPunch::Job)
    include ::SuckerPunch::Job
    def later(sec, *args)
      after(sec) { perform(*args) }
    end
    def self.perform_async(*args)
      self.new.async.perform(*args)
    end
    def self.perform_in(duration, *args)
      self.new.async.later(duration, *args)
    end
  else
    include Sidekiq::Worker
    sidekiq_options :queue => :attache_vhost_jobs
    sidekiq_retry_in {|count| RETRY_DURATION} # uncaught exception, retry after RETRY_DURATION
  end
end


================================================
FILE: lib/attache/resize_job.rb
================================================
require 'digest/sha1'
require 'stringio'

class Attache::ResizeJob
  def perform(target_geometry_string, basename, relpath, vhosts, env, t = Time.now)
    closed_file = yield
    return StringIO.new if closed_file.try(:size).to_i == 0

    extension = basename.split(/\W+/).last
    Attache.logger.info "[POOL] start"
    return make_nonimage_preview(closed_file, basename) if ['pdf', 'txt'].include?(extension.to_s.downcase)

    thumbnail = thumbnail_for(closed_file: closed_file, target_geometry_string: target_geometry_string, extension: extension)
    thumbnail.instance_variable_set('@basename', make_safe_filename(thumbnail.instance_variable_get('@basename')))
    thumbnail.make
  rescue Paperclip::Errors::NotIdentifiedByImageMagickError
    make_nonimage_preview(closed_file, basename)
  ensure
    Attache.logger.info "[POOL] done in #{Time.now - t}s"
  end

  private

    BOLD_FONT_FILE = ENV.fetch('FONT_FILE', File.join(Attache.publicdir, "vendor/roboto/Roboto-Medium.ttf"))
    THIN_FONT_FILE = ENV.fetch('FONT_FILE', File.join(Attache.publicdir, "vendor/roboto/Roboto-Light.ttf"))
    BORDER_SIZE = ENV.fetch('BORDER_SIZE', "3")
    FG_COLOR = ENV.fetch('FG_COLOR', "#ffffff")
    BG_COLOR = ENV.fetch('BG_COLOR', "#dddddd")
    EXT_COLOR = ENV.fetch('EXT_COLOR', "#333333")
    TXT_SIZE = ENV.fetch('TXT_SIZE', "12")
    PREVIEW_SIZE = ENV.fetch('PREVIEW_SIZE', '96x')

    def make_nonimage_preview(closed_file, basename)
      t = Time.now
      Attache.logger.info "[POOL] start nonimage preview"
      output_file = Tempfile.new(["preview", ".png"]).tap(&:close)
      cmd = case basename
      when /\.pdf$/i
        "convert -size #{PREVIEW_SIZE.inspect} #{closed_file.path.inspect}[0] -thumbnail #{PREVIEW_SIZE.inspect} -font #{BOLD_FONT_FILE.inspect}"
      else
        "convert -size #{PREVIEW_SIZE.inspect} \\( -gravity center -font #{BOLD_FONT_FILE.inspect} -fill #{EXT_COLOR.inspect} label:'#{make_safe_filename(basename).split(/\W+/).last}' \\)"
      end + " -bordercolor #{FG_COLOR.inspect} -border #{BORDER_SIZE} -background #{BG_COLOR.inspect} -gravity center -font #{THIN_FONT_FILE.inspect} -pointsize 12 -set caption #{basename.inspect} -polaroid 0 #{output_file.path.inspect}"
      Attache.logger.info cmd
      system cmd
      File.new(output_file.path)
    ensure
      Attache.logger.info "[POOL] done nonimage preview in #{Time.now - t}s"
    end

    def make_safe_filename(str)
      str.to_s.gsub(/[^\w\.]/, '_')
    end

    def thumbnail_for(closed_file:, target_geometry_string:, extension:, max: 2048)
      convert_options = '-interlace Plane' if %w(jpg jpeg).include?(extension.to_s.downcase)
      thumbnail = Paperclip::Thumbnail.new(closed_file, geometry: target_geometry_string, format: extension, convert_options: convert_options)
      current_geometry = current_geometry_for(thumbnail)
      target_geometry = Paperclip::GeometryParser.new(target_geometry_string).make
      if target_geometry.larger <= max && current_geometry.larger > max
        # optimization:
        #  when users upload "super big files", we can speed things up
        #  by working from a "reasonably large 2048x2048 thumbnail" (<2 seconds)
        #  instead of operating on the original (>10 seconds)
        #  we store this reusably in Attache.cache so it persists across reboots,
        #  but it is not uploaded to the cloud
        working_geometry = "#{max}x#{max}>"
        working_file = Attache.cache.fetch(Digest::SHA1.hexdigest(working_geometry + closed_file.path)) do
          Attache.logger.info "[POOL] generate working_file"
          Paperclip::Thumbnail.new(closed_file, geometry: working_geometry, format: extension).make
        end
        Attache.logger.info "[POOL] use working_file #{working_file.path}"
        thumbnail = Paperclip::Thumbnail.new(working_file.tap(&:close), geometry: target_geometry_string, format: extension, convert_options: convert_options)
      end
      thumbnail
    end

    # allow stub in spec
    def current_geometry_for(thumbnail)
      thumbnail.current_geometry.tap(&:auto_orient)
    end

end
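The condition guarding the working-file optimization above can be read in isolation. A hedged sketch (the helper name `use_working_file?` is hypothetical): resize via an intermediate 2048x2048 thumbnail only when the requested size fits within `max` but the original exceeds it.

```ruby
# Decision behind Attache::ResizeJob#thumbnail_for's optimization:
# `current_larger` / `target_larger` stand for Paperclip::Geometry#larger
# of the source image and of the requested geometry respectively.
def use_working_file?(current_larger, target_larger, max: 2048)
  target_larger <= max && current_larger > max
end

# ImageMagick geometry used for the intermediate: shrink to fit,
# never enlarge (the trailing ">" flag)
working_geometry = "2048x2048>"
```

So a 6000px original requested at 640px goes through the intermediate, while a 1200px original (already under `max`) or a 4000px target (larger than `max`) does not.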


================================================
FILE: lib/attache/tasks.rb
================================================
require "rake"

namespace :attache do

  desc "Convert content of FILE to a JSON string; default FILE=config/vhost.yml"
  task :vhost do
    require 'yaml'
    require 'json'

    file = ENV.fetch("FILE") { "config/vhost.yml" }
    puts YAML.load(IO.read(file)).to_json
  end

end
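Inlined, the rake task above is just a YAML-to-JSON conversion, which is handy for producing the value of the `VHOST` environment variable. A self-contained sketch with a made-up config:

```ruby
# What `rake attache:vhost` does: load a YAML document and emit
# compact JSON suitable for the VHOST env var. The hostname and
# SECRET_KEY below are illustrative only.
require 'yaml'
require 'json'

yaml = <<~YAML
  example.com:
    SECRET_KEY: topsecret
YAML

json = YAML.load(yaml).to_json
puts json
```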


================================================
FILE: lib/attache/tus/upload.rb
================================================
class Attache::Tus::Upload < Attache::Base
  def initialize(app)
    @app = app
  end

  def _call(env, config)
    case env['PATH_INFO']
    when '/tus/files'
      tus = ::Attache::Tus.new(env, config)
      params = params_of(env) # avoid unnecessary `invalid byte sequence in UTF-8` on `request.params`
      return config.unauthorized unless config.authorized?(params)

      case env['REQUEST_METHOD']
      when 'POST'
        if positive_number?(tus.upload_length)
          relpath = generate_relpath(Attache::Upload.sanitize(tus.upload_metadata['filename'] || params['file']))
          cachekey = File.join(request_hostname(env), relpath)

          Attache.cache.write(cachekey, StringIO.new) # create an empty placeholder entry
          uri = URI.parse(Rack::Request.new(env).url)
          uri.query = (uri.query ? "#{uri.query}&" : '') + "relpath=#{CGI.escape relpath}"
          [201, tus.headers_with_cors('Location' => uri.to_s), []]
        else
          [400, tus.headers_with_cors('X-Exception' => "Bad upload length"), []]
        end

      when 'PATCH'
        relpath = params['relpath']
        cachekey = File.join(request_hostname(env), relpath)
        http_offset = tus.upload_offset
        if positive_number?(env['CONTENT_LENGTH']) &&
           positive_number?(http_offset) &&
           (env['CONTENT_TYPE'] == 'application/offset+octet-stream') &&
           tus.resumable_version.to_s == '1.0.0' &&
           current_offset(cachekey, relpath, config) >= http_offset.to_i

          append_to(cachekey, http_offset, env['rack.input'])
          config.storage_create(relpath: relpath, cachekey: cachekey) if config.storage && config.bucket

          [200,
            tus.headers_with_cors({'Content-Type' => 'text/json'}, offset: current_offset(cachekey, relpath, config)),
            [json_of(relpath, cachekey, config)],
          ]
        else
          [400, tus.headers_with_cors('X-Exception' => 'Bad headers'), []]
        end

      when 'OPTIONS'
        [201, tus.headers_with_cors, []]

      when 'HEAD'
        relpath = params['relpath']
        cachekey = File.join(request_hostname(env), relpath)
        [200,
          tus.headers_with_cors({'Content-Type' => 'text/json'}, offset: current_offset(cachekey, relpath, config)),
          [json_of(relpath, cachekey, config)],
        ]

      when 'GET'
        relpath = params['relpath']
        uri = URI.parse(Rack::Request.new(env).url)
        uri.query = nil
        uri.path = File.join('/view', File.dirname(relpath), 'original', CGI.escape(File.basename(relpath)))
        [302, tus.headers_with_cors('Location' => uri.to_s), []]
      end
    else
      @app.call(env)
    end
  end

  private

    def current_offset(cachekey, relpath, config)
      file = Attache.cache.fetch(cachekey) do
        config.storage_get(relpath: relpath) if config.storage && config.bucket
      end
      file.size
    rescue
      Attache.cache.write(cachekey, StringIO.new)
    ensure
      file.try(:close) # `file` is nil when the rescue branch ran
    end

    def append_to(cachekey, offset, io)
      f = File.open(path_of(cachekey), 'r+b')
      f.sync = true
      f.seek(offset.to_i)
      f.write(io.read)
    ensure
      f.close
    end

    def positive_number?(value)
      # NOTE: also accepts "0"; strictly speaking this tests "non-negative",
      # since tus permits zero-length uploads
      (value.to_s == "0" || value.to_i > 0)
    end

end


================================================
FILE: lib/attache/tus.rb
================================================
class Attache::Tus
  LENGTH_KEYS   = %w[Upload-Length   Entity-Length]
  OFFSET_KEYS   = %w[Upload-Offset   Offset]
  METADATA_KEYS = %w[Upload-Metadata Metadata]

  attr_accessor :env, :config

  def initialize(env, config)
    @env = env
    @config = config
  end

  def header_value(keys)
    value = nil
    keys.find {|k| value = env["HTTP_#{k.gsub('-', '_').upcase}"]}
    value
  end

  def upload_length
    header_value LENGTH_KEYS
  end

  def upload_offset
    header_value OFFSET_KEYS
  end

  def upload_metadata
    value = header_value METADATA_KEYS
    Hash[*value.to_s.split(/[, ]/)].inject({}) do |h, (k, v)| # tolerate a missing header
      h.merge(k => Base64.decode64(v))
    end
  end

  def resumable_version
    header_value ["Tus-Resumable"]
  end

  def headers_with_cors(headers = {}, offset: nil)
    tus_headers = {
      "Access-Control-Allow-Methods" => "PATCH",
      "Access-Control-Allow-Headers" => "Tus-Resumable, #{LENGTH_KEYS.join(', ')}, #{METADATA_KEYS.join(', ')}, #{OFFSET_KEYS.join(', ')}",
      "Access-Control-Expose-Headers" => "Location, #{OFFSET_KEYS.join(', ')}",
    }
    OFFSET_KEYS.each do |k|
      tus_headers[k] = offset
    end if offset

    # merge into the CORS headers, joining values when a key already exists
    tus_headers.inject(config.headers_with_cors.merge(headers)) do |sum, (k, v)|
      sum.merge(k => [*sum[k], v].join(', '))
    end
  end
end
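The `Upload-Metadata` decoding in `#upload_metadata` above follows the tus 1.0 convention: comma- or space-separated pairs of `key base64(value)`. A standalone sketch (the helper name `decode_upload_metadata` is hypothetical):

```ruby
# Decode a tus 1.0 Upload-Metadata header value, as Attache::Tus does:
# pairs of "key base64value" separated by commas or spaces.
require 'base64'

def decode_upload_metadata(value)
  Hash[*value.to_s.split(/[, ]/)].each_with_object({}) do |(k, v), h|
    h[k] = Base64.decode64(v)
  end
end

header = "filename #{Base64.strict_encode64('cat.gif')},type #{Base64.strict_encode64('image/gif')}"
meta = decode_upload_metadata(header)
# meta is {"filename" => "cat.gif", "type" => "image/gif"}
```

Base64 output never contains commas or spaces, so the simple split is safe for well-formed headers.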


================================================
FILE: lib/attache/upload.rb
================================================
class Attache::Upload < Attache::Base
  def initialize(app)
    @app = app
  end

  def _call(env, config)
    case env['PATH_INFO']
    when '/upload'
      case env['REQUEST_METHOD']
      when 'POST', 'PUT', 'PATCH'
        request  = Rack::Request.new(env)
        params   = request.GET # stay away from parsing body
        return config.unauthorized unless config.authorized?(params)

        relpath = generate_relpath(Attache::Upload.sanitize params['file'])
        cachekey = File.join(request_hostname(env), relpath)

        bytes_written = Attache.cache.write(cachekey, request.body)
        if bytes_written == 0
          return [500, config.headers_with_cors.merge('X-Exception' => 'Local file failed'), []]
        else
          Attache.logger.info "[Upload] received #{bytes_written} #{cachekey}"
        end
        end

        config.storage_create(relpath: relpath, cachekey: cachekey) if config.storage && config.bucket

        [200, config.headers_with_cors.merge('Content-Type' => 'text/json'), [json_of(relpath, cachekey, config)]]
      when 'OPTIONS'
        [200, config.headers_with_cors, []]
      else
        [400, config.headers_with_cors, []]
      end
    else
      @app.call(env)
    end
  end

  def self.sanitize(filename)
    filename.to_s.gsub(/\%/, '_')
  end
end


================================================
FILE: lib/attache/upload_url.rb
================================================
class Attache::UploadUrl < Attache::Base
  def initialize(app)
    @app = app
  end

  def _call(env, config)
    case env['PATH_INFO']
    when '/upload_url'

      # always pretend to be `POST /upload`
      env['PATH_INFO'] = '/upload'
      env['REQUEST_METHOD'] = 'POST'

      request  = Rack::Request.new(env)
      params   = request.params
      return config.unauthorized unless config.authorized?(params)

      if params['url']
        file, filename, content_type = download_file(params['url'])
        filename = "index" if filename == '/'

        env['CONTENT_TYPE'] = content_type || content_type_of(file.path)
        env['rack.request.query_hash'] = (env['rack.request.query_hash'] || {}).merge('file' => filename)
        env['rack.input'] = file
      end
    end
    @app.call(env)
  end

  MAX_DEPTH = 30
  def download_file(url, depth = 0)
    raise Net::HTTPError.new("Too many redirects", nil) if depth > MAX_DEPTH # Net::HTTPError#initialize takes (msg, response)
    Attache.logger.info "Upload GET #{url}"

    if url.match(/\Adata:([^;,]+|)(;base64|),/)
      # data:[<mediatype>][;base64],<data>
      # http://tools.ietf.org/html/rfc2397

      data = URI.decode(url[url.index(',')+1..-1])
      data = Base64.decode64(data) if $2 == ';base64'
      content_type = ($1 == '' ? "text/plain" : $1)
      filename = "data.#{content_type.gsub(/\W+/, '.')}"
      return [StringIO.new(data), filename, content_type]
    end

    uri = url.kind_of?(URI::Generic) ? url : URI.parse(url)
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true if uri.scheme == 'https'
    req = Net::HTTP::Get.new(uri.request_uri)
    req.initialize_http_header({"User-Agent" => ENV['USER_AGENT']}) if ENV['USER_AGENT']
    req.basic_auth(uri.user, uri.password) if uri.user || uri.password
    res = http.request(req)
    case res.code
    when /\A30[12]\z/ # NB: `[1,2]` would also have matched a literal comma
      download_file URI.join(url, res['Location']).to_s, depth + 1

    when /\A2\d\d\z/
      f = Tempfile.new(["upload_url", File.extname(uri.path)])
      f.write(res.body)
      f.close
      [f.tap(&:open), File.basename(uri.path), res.content_type]

    else
      raise Net::HTTPError.new("Failed #{res.code}", res)
    end
  end
end
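The RFC 2397 branch of `#download_file` above can be sketched on its own. Note two deliberate substitutions in this sketch: the helper name `parse_data_uri` is hypothetical, and `CGI.unescape` stands in for the deprecated `URI.decode` (it additionally turns `+` into a space, which plain percent-decoding does not).

```ruby
# Parse a data URI of the form data:[<mediatype>][;base64],<data>
# (http://tools.ietf.org/html/rfc2397), as in Attache::UploadUrl.
require 'base64'
require 'cgi'

def parse_data_uri(url)
  return unless url =~ /\Adata:([^;,]+|)(;base64|),/
  content_type = ($1 == '' ? 'text/plain' : $1) # mediatype defaults to text/plain
  data = url[url.index(',') + 1..-1]
  data = $2 == ';base64' ? Base64.decode64(data) : CGI.unescape(data)
  [data, content_type]
end
```

For example, `parse_data_uri("data:;base64,#{Base64.strict_encode64('hi')}")` yields `["hi", "text/plain"]`, while a string that is not a data URI (such as the `spec/fixtures/sample.txt` content) yields `nil`.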


================================================
FILE: lib/attache/version.rb
================================================
module Attache
  VERSION = "3.0.0"
end


================================================
FILE: lib/attache/vhost.rb
================================================
class Attache::VHost
  attr_accessor :remotedir,
                :secret_key,
                :backup,
                :bucket,
                :storage,
                :download_headers,
                :headers_with_cors,
                :geometry_whitelist,
                :env

  def initialize(hash)
    self.env = hash || {}
    self.remotedir  = env['REMOTE_DIR'] # nil means no fixed top level remote directory, and that's fine.
    self.secret_key = env['SECRET_KEY'] # nil means no auth check; anyone can upload a file
    self.geometry_whitelist = env['GEOMETRY_WHITELIST'] # nil means everything is acceptable

    if env['FOG_CONFIG']
      self.bucket       = env['FOG_CONFIG'].fetch('bucket')
      self.storage      = Fog::Storage.new(env['FOG_CONFIG'].except('bucket').symbolize_keys)

      if env['BACKUP_CONFIG']
        backup_fog = env['FOG_CONFIG'].merge(env['BACKUP_CONFIG'])
        self.backup = Attache::VHost.new(env.except('BACKUP_CONFIG').merge('FOG_CONFIG' => backup_fog))
      end
    end
    self.download_headers = {
      "Cache-Control" => "public, max-age=31536000"
    }.merge(env['DOWNLOAD_HEADERS'] || {})
    self.headers_with_cors = {
      'Access-Control-Allow-Origin' => '*',
      'Access-Control-Allow-Methods' => 'POST, PUT',
      'Access-Control-Allow-Headers' => 'Content-Type',
    }.merge(env['UPLOAD_HEADERS'] || {})
  end

  def hmac_for(content)
    OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), secret_key, content)
  end

  def hmac_valid?(params)
    params['uuid'] &&
    params['hmac']  &&
    params['expiration'] &&
    Time.at(params['expiration'].to_i) > Time.now &&
    Rack::Utils.secure_compare(params['hmac'], hmac_for("#{params['uuid']}#{params['expiration']}"))
  end

  def storage_url(args)
    object = remote_api.new({
      key: File.join(*remotedir, args[:relpath]),
    })
    result = if object.respond_to?(:url)
      object.url(Time.now + 600)
    else
      object.public_url
    end
  ensure
    Attache.logger.info "storage_url: #{result}"
  end

  def storage_get(args)
    open storage_url(args)
  end

  def storage_create(args)
    Attache.logger.info "[JOB] uploading #{args[:cachekey].inspect}"
    body = begin
      Attache.cache.read(args[:cachekey])
    rescue Errno::ENOENT
      :no_entry # uploaded file no longer exists; likely deleted immediately after upload
    end
    unless body == :no_entry
      remote_api.create({
        key: File.join(*remotedir, args[:relpath]),
        body: body,
      })
      Attache.logger.info "[JOB] uploaded #{args[:cachekey]}"
    end
  end

  def storage_destroy(args)
    Attache.logger.info "[JOB] deleting #{args[:relpath]}"
    remote_api.new({
      key: File.join(*remotedir, args[:relpath]),
    }).destroy
    Attache.logger.info "[JOB] deleted #{args[:relpath]}"
  end

  def remote_api
    storage.directories.new(key: bucket).files
  end

  def async(method, args)
    ::Attache::Job.perform_async(method, env, args)
  end

  def authorized?(params)
    secret_key.blank? || hmac_valid?(params)
  end

  def unauthorized
    [401, headers_with_cors.merge('X-Exception' => 'Authorization failed'), []]
  end

  def backup_file(args)
    if backup
      key = File.join(*remotedir, args[:relpath])
      storage.copy_object(bucket, key, backup.bucket, key)
    end
  end
end
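Given `#hmac_for` and `#hmac_valid?` above, a client authorizes an upload by sending `uuid`, `expiration`, and an HMAC-SHA1 of their concatenation under the shared `SECRET_KEY`. A sketch of the client side (the helper name `signed_params` and the key `'topsecret'` are illustrative only):

```ruby
# Produce upload params that satisfy Attache::VHost#hmac_valid?:
# hmac = HMAC-SHA1(secret_key, "#{uuid}#{expiration}")
require 'openssl'
require 'securerandom'

def signed_params(secret_key, expires_in: 300)
  uuid = SecureRandom.uuid
  expiration = (Time.now + expires_in).to_i
  hmac = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), secret_key, "#{uuid}#{expiration}")
  { 'uuid' => uuid, 'expiration' => expiration.to_s, 'hmac' => hmac }
end

params = signed_params('topsecret')
# the server recomputes the digest the same way and compares with
# Rack::Utils.secure_compare
expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), 'topsecret',
                                   "#{params['uuid']}#{params['expiration']}")
```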


================================================
FILE: lib/attache.rb
================================================
require 'active_support/all'
require 'sys/filesystem'
require 'securerandom'
require 'disk_store'
require 'fileutils'
require 'paperclip'
require 'net/http'
require 'tempfile'
require 'sidekiq'
require 'tmpdir'
require 'logger'
require 'base64'
require 'rack'
require 'json'
require 'uri'
require 'cgi'
require 'fog'

if ENV['REDIS_PROVIDER'] || ENV['REDIS_URL']
  # default sidekiq
elsif ENV['INLINE_JOB']
  require 'sidekiq/testing/inline'
else
  require 'sucker_punch'
end

module Attache
  class << self
    attr_accessor :localdir,
                  :vhost,
                  :cache,
                  :logger,
                  :publicdir
  end
end

Attache.logger     = Logger.new(STDOUT)
Attache.localdir   = File.expand_path(ENV.fetch('LOCAL_DIR') { Dir.tmpdir })
Attache.vhost      = JSON.parse(ENV.fetch('VHOST') { YAML.load(IO.read('config/vhost.yml')).to_json rescue '{}' })
Attache.cache      = DiskStore.new(Attache.localdir, {
  cache_size: ENV.fetch('CACHE_SIZE_BYTES') {
    stat = Sys::Filesystem.stat("/")
    available = stat.block_size * stat.blocks_available
    (available * 0.8).floor # use 80% free disk by default
  }.to_i,
  reaper_interval:   ENV.fetch('CACHE_EVICTION_INTERVAL_SECONDS') { 60 }.to_i,
  eviction_strategy: (Attache.vhost.empty? ? nil : :LRU), # lru eviction only when there is remote storage
})
Attache.publicdir = ENV.fetch("PUBLIC_DIR") { File.expand_path("../public", File.dirname(__FILE__)) }

require 'attache/job'
require 'attache/resize_job'
require 'attache/base'
require 'attache/vhost'
require 'attache/upload_url'
require 'attache/upload'
require 'attache/delete'
require 'attache/backup'
require 'attache/download'
require 'attache/file_response_body'

require 'attache/tus'
require 'attache/tus/upload'


================================================
FILE: public/index.html
================================================
It works!


================================================
FILE: public/vendor/roboto/Apache License.txt
================================================
Font data copyright Google 2012

                                Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

================================================
FILE: spec/fixtures/sample.txt
================================================
data:not a data uri


================================================
FILE: spec/lib/attache/backup_spec.rb
================================================
require 'spec_helper'

describe Attache::Backup do
  let(:app) { ->(env) { [200, env, "app"] } }
  let(:middleware) { Attache::Backup.new(app) }
  let(:params) { {} }
  let(:filename) { "hello#{rand}.gif" }
  let(:reldirname) { "path#{rand}" }
  let(:file) { StringIO.new(IO.binread("spec/fixtures/transparent.gif"), 'rb') }

  before do
    allow(Attache).to receive(:localdir).and_return(Dir.tmpdir) # forced, for safety
    allow_any_instance_of(Attache::VHost).to receive(:secret_key).and_return(nil)
    allow_any_instance_of(Attache::VHost).to receive(:storage).and_return(nil)
    allow_any_instance_of(Attache::VHost).to receive(:bucket).and_return(nil)
  end

  after do
    FileUtils.rm_rf(Attache.localdir)
  end

  it "should passthrough irrelevant request" do
    code, env = middleware.call Rack::MockRequest.env_for('http://example.com', {})
    expect(code).to eq 200
  end

  context "backup file" do
    let(:params) { Hash(paths: ['image1.jpg', filename].join("\n")) }

    subject { proc { middleware.call Rack::MockRequest.env_for('http://example.com/backup?' + params.collect {|k,v| "#{CGI.escape k.to_s}=#{CGI.escape v.to_s}"}.join('&'), method: 'DELETE', "HTTP_HOST" => "example.com") } }

    it 'should respond with json' do
    end

    it 'should not touch local file' do
      expect(Attache).not_to receive(:cache)
      code, headers, body = subject.call
      expect(code).to eq(200)
    end

    context 'storage configured' do
      before do
        allow_any_instance_of(Attache::VHost).to receive(:storage).and_return(double(:storage))
        allow_any_instance_of(Attache::VHost).to receive(:bucket).and_return(double(:bucket))
      end

      it 'should backup file' do
        expect_any_instance_of(Attache::VHost).to receive(:backup_file).exactly(2).times
        subject.call
      end
    end

    context 'storage NOT configured' do
      it 'should NOT backup file' do
        expect_any_instance_of(Attache::VHost).not_to receive(:backup_file)
        subject.call
      end
    end

    context 'with secret_key' do
      let(:secret_key) { "topsecret#{rand}" }

      before do
        allow_any_instance_of(Attache::VHost).to receive(:secret_key).and_return(secret_key)
      end

      it 'should respond with error' do
        code, headers, body = subject.call
        expect(code).to eq(401)
        expect(headers['X-Exception']).to eq('Authorization failed')
      end

      context 'invalid auth' do
        let(:expiration) { (Time.now + 10).to_i }
        let(:uuid) { "hi#{rand}" }
        let(:digest) { OpenSSL::Digest.new('sha1') }
        let(:params) { Hash(file: filename, expiration: expiration, uuid: uuid, hmac: OpenSSL::HMAC.hexdigest(digest, "wrong#{secret_key}", "#{uuid}#{expiration}")) }

        it 'should respond with error' do
          code, headers, body = subject.call
          expect(code).to eq(401)
          expect(headers['X-Exception']).to eq('Authorization failed')
        end
      end

      context 'valid auth' do
        let(:expiration) { (Time.now + 10).to_i }
        let(:uuid) { "hi#{rand}" }
        let(:digest) { OpenSSL::Digest.new('sha1') }
        let(:params) { Hash(file: filename, expiration: expiration, uuid: uuid, hmac: OpenSSL::HMAC.hexdigest(digest, secret_key, "#{uuid}#{expiration}")) }

        it 'should respond with success' do
          code, headers, body = subject.call
          expect(code).to eq(200)
        end

        context 'expired' do
          let(:expiration) { (Time.now - 1).to_i } # the past

          it 'should respond with error' do
            code, headers, body = subject.call
            expect(code).to eq(401)
            expect(headers['X-Exception']).to eq('Authorization failed')
          end
        end
      end
    end
  end
end
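The `secret_key` contexts above all build their `hmac` param the same way: an HMAC-SHA1 of `"#{uuid}#{expiration}"` keyed by the vhost's secret. A minimal sketch of that scheme, using only what the specs show (the helper names here are illustrative, not attache's API):

```ruby
require 'openssl'

# Sign the uuid + expiration pair the way the specs do:
# OpenSSL::HMAC.hexdigest(sha1, secret_key, "#{uuid}#{expiration}")
def compute_hmac(secret_key, uuid, expiration)
  OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), secret_key, "#{uuid}#{expiration}")
end

# A request is authorized only if the signature matches AND the
# expiration timestamp is still in the future; otherwise attache
# responds 401 with X-Exception: "Authorization failed".
def auth_valid?(secret_key, uuid, expiration, hmac, now: Time.now)
  return false if expiration.to_i <= now.to_i # expired tokens are rejected
  expected = compute_hmac(secret_key, uuid, expiration)
  expected == hmac # production code should use a constant-time comparison
end
```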


================================================
FILE: spec/lib/attache/delete_spec.rb
================================================
require 'spec_helper'

describe Attache::Delete do
  let(:app) { ->(env) { [200, env, "app"] } }
  let(:middleware) { Attache::Delete.new(app) }
  let(:params) { {} }
  let(:filename) { "hello#{rand}.gif" }
  let(:reldirname) { "path#{rand}" }
  let(:file) { StringIO.new(IO.binread("spec/fixtures/transparent.gif"), 'rb') }

  before do
    allow(Attache).to receive(:localdir).and_return(Dir.tmpdir) # forced, for safety
    allow_any_instance_of(Attache::VHost).to receive(:secret_key).and_return(nil)
    allow_any_instance_of(Attache::VHost).to receive(:storage).and_return(nil)
    allow_any_instance_of(Attache::VHost).to receive(:bucket).and_return(nil)
  end

  after do
    FileUtils.rm_rf(Attache.localdir)
  end

  it "should passthrough irrelevant request" do
    code, env = middleware.call Rack::MockRequest.env_for('http://example.com', {})
    expect(code).to eq 200
  end

  context "deleting" do
    let(:params) { Hash(paths: ['image1.jpg', filename].join("\n")) }

    subject { proc { middleware.call Rack::MockRequest.env_for('http://example.com/delete?' + params.collect {|k,v| "#{CGI.escape k.to_s}=#{CGI.escape v.to_s}"}.join('&'), method: 'DELETE', "HTTP_HOST" => "example.com") } }

    it 'should respond with json' do
    end

    it 'should delete file locally' do
      expect(Attache.cache).to receive(:delete) do |path|
        expect(path).to start_with('example.com')
      end.exactly(2).times
      code, headers, body = subject.call
      expect(code).to eq(200)
    end

    context 'delete fail locally' do
      before do
        expect(Attache.cache).to receive(:delete) do
          raise Exception.new
        end.at_least(1).times
      end

      it 'should respond with error' do
        code, headers, body = subject.call
        expect(code).to eq(500)
      end
    end

    context 'storage configured' do
      before do
        allow_any_instance_of(Attache::VHost).to receive(:storage).and_return(double(:storage))
        allow_any_instance_of(Attache::VHost).to receive(:bucket).and_return(double(:bucket))
      end

      it 'should delete file remotely' do
        expect_any_instance_of(Attache::VHost).to receive(:async) do |instance, method, path|
          expect(method).to eq(:storage_destroy)
        end.exactly(2).times
        subject.call
      end
    end

    context 'storage NOT configured' do
      it 'should NOT delete file remotely' do
        expect_any_instance_of(Attache::VHost).not_to receive(:async)
        subject.call
      end
    end

    context 'backup configured' do
      let(:backup) { double(:backup) }

      before do
        allow_any_instance_of(Attache::VHost).to receive(:backup).and_return(backup)
      end

      it 'should delete file in backup' do
        expect(backup).to receive(:async) do |method, path|
          expect(method).to eq(:storage_destroy)
        end.exactly(2).times
        subject.call
      end
    end

    context 'backup NOT configured' do
      it 'should NOT delete file in backup' do
        expect_any_instance_of(Attache::VHost).not_to receive(:async)
        subject.call
      end
    end

    context 'with secret_key' do
      let(:secret_key) { "topsecret#{rand}" }

      before do
        allow_any_instance_of(Attache::VHost).to receive(:secret_key).and_return(secret_key)
      end

      it 'should respond with error' do
        code, headers, body = subject.call
        expect(code).to eq(401)
        expect(headers['X-Exception']).to eq('Authorization failed')
      end

      context 'invalid auth' do
        let(:expiration) { (Time.now + 10).to_i }
        let(:uuid) { "hi#{rand}" }
        let(:digest) { OpenSSL::Digest.new('sha1') }
        let(:params) { Hash(file: filename, expiration: expiration, uuid: uuid, hmac: OpenSSL::HMAC.hexdigest(digest, "wrong#{secret_key}", "#{uuid}#{expiration}")) }

        it 'should respond with error' do
          code, headers, body = subject.call
          expect(code).to eq(401)
          expect(headers['X-Exception']).to eq('Authorization failed')
        end
      end

      context 'valid auth' do
        let(:expiration) { (Time.now + 10).to_i }
        let(:uuid) { "hi#{rand}" }
        let(:digest) { OpenSSL::Digest.new('sha1') }
        let(:params) { Hash(file: filename, expiration: expiration, uuid: uuid, hmac: OpenSSL::HMAC.hexdigest(digest, secret_key, "#{uuid}#{expiration}")) }

        it 'should respond with success' do
          code, headers, body = subject.call
          expect(code).to eq(200)
        end

        context 'expired' do
          let(:expiration) { (Time.now - 1).to_i } # the past

          it 'should respond with error' do
            code, headers, body = subject.call
            expect(code).to eq(401)
            expect(headers['X-Exception']).to eq('Authorization failed')
          end
        end
      end
    end
  end
end


================================================
FILE: spec/lib/attache/download_spec.rb
================================================
require 'spec_helper'

describe Attache::Download do
  let(:app) { ->(env) { [200, env, "app"] } }
  let(:middleware) { Attache::Download.new(app) }
  let(:params) { {} }
  let(:filename) { "hello#{rand}.gif" }
  let(:reldirname) { "path#{rand}" }
  let(:geometry) { CGI.escape('2x2#') }
  let(:file) { StringIO.new(IO.binread("spec/fixtures/transparent.gif"), 'rb') }
  let(:remote_url) { "http://example.com/image.jpg" }

  before do
    allow(Attache).to receive(:localdir).and_return(Dir.tmpdir) # forced, for safety
  end

  after do
    FileUtils.rm_rf(Attache.localdir)
  end

  it "should passthrough irrelevant request" do
    code, env = middleware.call Rack::MockRequest.env_for('http://example.com', "HTTP_HOST" => "example.com")
    expect(code).to eq 200
  end

  context 'downloading' do
    subject { proc { middleware.call Rack::MockRequest.env_for("http://example.com/view/#{reldirname}/#{geometry}/#{filename}", "HTTP_HOST" => "example.com") } }

    context 'not in local cache' do
      before do
        Attache.cache.delete("example.com/#{reldirname}/#{filename}")
      end

      context 'no cloud storage configured' do
        before do
          allow_any_instance_of(Attache::VHost).to receive(:storage).and_return(nil)
          allow_any_instance_of(Attache::VHost).to receive(:bucket).and_return(nil)
        end

        it 'should respond not found' do
          code, headers, body = subject.call
          expect(code).to eq(404)
        end

        it 'should continue to respond not found' do
          code, headers, body = subject.call
          expect(code).to eq(404)
          code, headers, body = subject.call
          expect(code).to eq(404)
        end
      end

      context 'with cloud storage configured' do
        before do
          allow_any_instance_of(Attache::VHost).to receive(:storage).and_return(double(:storage, directories: Struct.new(:key, :files)))
          allow_any_instance_of(Attache::VHost).to receive(:bucket).and_return(double(:bucket))
        end

        it 'should respond not found' do
          code, headers, body = subject.call
          expect(code).to eq(404)
        end

        context 'with backup configured' do
          it 'should respond not found' do
            allow_any_instance_of(Attache::VHost).to receive(:backup).and_return(double(:backup, storage_get: nil))
            code, headers, body = subject.call
            expect(code).to eq(404)
          end

          it 'should respond found if in backup' do
            allow_any_instance_of(Attache::VHost).to receive(:backup).and_return(double(:backup, storage_get: file))
            code, headers, body = subject.call
            expect(code).to eq(200)
          end
        end

        context 'available remotely' do
          before do
            allow_any_instance_of(Attache::VHost).to receive(:storage_get).and_return(file)
            allow_any_instance_of(Attache::VHost).to receive(:storage_url).and_return(remote_url)
          end

          it 'should proceed normally' do
            code, headers, body = subject.call
            expect(code).to eq(200)
          end

          context 'geometry is "remote"' do

            let(:geometry) { CGI.escape('remote') }

            it 'should send remote file' do
              expect(Attache.cache).not_to receive(:fetch)
              expect_any_instance_of(Attache::VHost).to receive(:storage_url)
              code, headers, body = subject.call
              response_content = ''
              body.each {|p| response_content += p }
              expect(response_content).to eq('')
              expect(code).to eq(302)
              expect(headers['Location']).to eq(remote_url)
              expect(headers['Cache-Control']).to eq("private, no-cache")
            end
          end
        end
      end
    end

    context 'in local cache' do
      before do
        Attache.cache.write("example.com/#{reldirname}/#{filename}", file)
      end

      context 'geometry is "original"' do
        let(:geometry) { CGI.escape('original') }

        it 'should send original file' do
          expect_any_instance_of(middleware.class).not_to receive(:get_thumbnail_file)
          code, headers, body = subject.call
          response_content = ''
          body.each {|p| response_content += p }
          original_content = file.tap(&:rewind).read
          expect(response_content).to eq(original_content)
        end
      end

      context 'geometry_whitelist is present' do
        let(:geometry_whitelist) { ['100x100'] }

        before do
          allow(middleware).to receive(:vhost_for).and_return(double(:vhost,
            geometry_whitelist: geometry_whitelist,
            storage: nil,
            backup: nil,
            download_headers: {}))
        end

        context 'geometry is whitelisted' do
          let(:geometry) { geometry_whitelist.sample }

          it 'should be allowed' do
            code, headers, body = subject.call
            expect(code).to eq(200)
          end
        end

        context 'geometry is NOT whitelisted' do
          let(:geometry) { '999x999' }

          it 'should NOT be allowed' do
            code, headers, body = subject.call
            expect(code).to eq(415)
            expect(body).to eq(["#{geometry} is not supported"])
          end
        end
      end

      context 'rendering' do
        context 'non image' do
          let(:file) { StringIO.new(IO.binread("spec/fixtures/sample.txt"), 'rb') }
          let(:filename) { "hello#{rand}.txt" }

          it 'should output as png' do
            expect_any_instance_of(Attache::ResizeJob).to receive(:make_nonimage_preview).exactly(1).times.and_call_original
            code, headers, body = subject.call
            expect(code).to eq(200)
            expect(headers['Content-Type']).to eq("image/png")
          end
        end

        context 'image' do
          it 'should output as gif' do
            expect_any_instance_of(Attache::ResizeJob).not_to receive(:make_nonimage_preview)
            code, headers, body = subject.call
            expect(code).to eq(200)
            expect(headers['Content-Type']).to eq("image/gif")
          end
        end
      end
    end
  end
end
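The download specs all hit the same URL shape: `GET /view/:reldirname/:geometry/:filename`, with the geometry segment CGI-escaped (e.g. `2x2#`, or the special values `original` and `remote`). A small sketch of building such a path (hypothetical client-side helper, not part of attache):

```ruby
require 'cgi'

# Build the /view/... request path the download specs exercise.
# geometry "original" skips resizing; "remote" 302-redirects to
# the cloud storage URL; anything else is a resize geometry.
def view_path(reldirname, geometry, filename)
  "/view/#{reldirname}/#{CGI.escape geometry}/#{CGI.escape filename}"
end
```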


================================================
FILE: spec/lib/attache/resize_job_spec.rb
================================================
require 'spec_helper'

describe Attache::ResizeJob do
  describe '#thumbnail_for' do
    let(:max) { 2048 }
    let(:current) { [1, 1] }
    let(:current_w) { current.shift }
    let(:current_h) { current.shift }
    let(:job) { Attache::ResizeJob.new }
    let(:original_path) { "spec/fixtures/transparent.gif" }

    before {
      allow(job).to receive(:current_geometry_for).and_return(Paperclip::Geometry.new(current_w, current_h))
      Attache.cache.delete(Digest::SHA1.hexdigest("#{max}x#{max}>" + original_path))
    }

    subject {
      job.send(:thumbnail_for, closed_file: File.new(original_path),
                               target_geometry_string: target,
                               extension: "gif").file.path
    }

    context 'target > max' do
      let(:target) { ["#{max+1}x1>", "1x#{max+1}>"].sample }

      it {
        expect_any_instance_of(Paperclip::Thumbnail).not_to receive(:make)
        is_expected.to eq(original_path)
      }
    end

    context 'target <= max' do
      let(:target) { ["#{max}x1>", "1x#{max}>"].sample }

      context 'current > max' do
        let(:current) { [max+1, 1].shuffle }

        it {
          expect_any_instance_of(Paperclip::Thumbnail).to receive(:make) do |instance|
            expect(instance.target_geometry.to_s).to eq("#{max}x#{max}>")
            File.new(original_path)
          end
          is_expected.not_to eq(original_path)
        }
      end

      context 'current <= max' do
        let(:current) { [max-1, 1].shuffle }

        it {
          expect_any_instance_of(Paperclip::Thumbnail).not_to receive(:make)
          is_expected.to eq(original_path)
        }
      end
    end

    context 'convert_options for jpg, jpeg' do
      let(:jpg_path) { "spec/fixtures/landscape.jpg" }
      let(:thumbnail) {
        job.send(:thumbnail_for, closed_file: File.new(jpg_path),
                               target_geometry_string: '1x1>',
                               extension: %w(jpg jpeg).sample)
      }

      it { expect(thumbnail.convert_options).to eq(%w(-interlace Plane)) }
    end

    context 'convert_options for other file extensions' do
      let(:thumbnail) {
        job.send(:thumbnail_for, closed_file: File.new(original_path),
                               target_geometry_string: '1x1>',
                               extension: %w(png gif tiff bmp).sample)
      }

      it { expect(thumbnail.convert_options).to be_nil }
    end
  end
end


================================================
FILE: spec/lib/attache/tus/upload_spec.rb
================================================
require 'spec_helper'

describe Attache::Tus::Upload do
  let(:app) { ->(env) { [200, env, "app"] } }
  let(:middleware) { Attache::Tus::Upload.new(app) }
  let(:params) { Hash(file: filename) }
  let(:filename) { "Exãmple %#{rand} %20.gif" }
  let(:file) { StringIO.new(IO.binread("spec/fixtures/landscape.jpg"), 'rb') }
  let(:filesize) { File.size "spec/fixtures/landscape.jpg" }
  let(:hostname) { "example.com" }
  let(:create_path) { '/tus/files?' + params.collect {|k,v| "#{CGI.escape k.to_s}=#{CGI.escape v.to_s}"}.join('&') }
  let(:resume_path) { @location }

  before do
    allow(Attache).to receive(:localdir).and_return(Dir.tmpdir) # forced, for safety
    allow_any_instance_of(Attache::VHost).to receive(:secret_key).and_return(nil)
    allow_any_instance_of(Attache::VHost).to receive(:storage).and_return(nil)
    allow_any_instance_of(Attache::VHost).to receive(:bucket).and_return(nil)
  end

  after do
    FileUtils.rm_rf(Attache.localdir)
  end

  it "should passthrough irrelevant request" do
    code, headers, body = middleware.call Rack::MockRequest.env_for('http://' + hostname, "HTTP_HOST" => hostname)
    expect(code).to eq 200
  end


  def make_request_to(request_uri, headers)
    middleware.call Rack::MockRequest.env_for('http://' + hostname + request_uri, Hash("HTTP_HOST" => hostname, 'HTTP_UPLOAD_METADATA' => "key #{Base64.encode64('value')},filename #{Base64.encode64(filename)}").merge(headers))
  end

  context "tus creation" do
    it "must reject missing HTTP_ENTITY_LENGTH" do
      code, headers, body = make_request_to(create_path, method: 'POST', input: file)
      expect(code).to eq(400)
    end

    it "must reject invalid HTTP_ENTITY_LENGTH" do
      code, headers, body = make_request_to(create_path, method: 'POST', input: file, 'HTTP_ENTITY_LENGTH' => [-1, 'abc'].sample)
      expect(code).to eq(400)
    end

    it "must respond successfully with HTTP 201 + Location header" do
      code, headers, body = make_request_to(create_path, method: 'POST', input: file, 'HTTP_ENTITY_LENGTH' => filesize)
      expect(code).to eq(201)
      expect(headers['Location']).to be_present
    end
  end

  context "with uploaded file" do
    let(:relpath)  { CGI.unescape @location.match(/relpath=([^&]+)/)[1] }
    let(:cachekey) { File.join(hostname, relpath) }
    let(:current_offset) { 3 + rand(10) }

    before do
      code, headers, body = make_request_to(create_path, method: 'POST', input: file, 'HTTP_ENTITY_LENGTH' => filesize)
      expect(code).to eq(201)
      @location = URI.parse(headers['Location']).request_uri
      open(middleware.path_of(cachekey), "a") {|f| f.write('a' * current_offset) }
    end

    context "tus patch" do
      it "must reject invalid HTTP_OFFSET" do
        code, headers, body = make_request_to(resume_path, method: 'PATCH', input: file)
        expect(code).to eq(400)
      end

      it "must reject invalid HTTP_CONTENT_LENGTH" do
        code, headers, body = make_request_to(resume_path, method: 'PATCH', input: file, "HTTP_OFFSET" => 0)
        expect(code).to eq(400)
      end

      it "must reject invalid Content-Type: application/offset+octet-stream" do
        code, headers, body = make_request_to(resume_path, method: 'PATCH', input: file, "HTTP_OFFSET" => 0, "HTTP_CONTENT_LENGTH" => filesize, "CONTENT_TYPE" => ["application/octet-stream", nil].sample)
        expect(code).to eq(400)
      end

      it "must reject invalid Tus-Resumable version" do
        code, headers, body = make_request_to(resume_path, method: 'PATCH', input: file, "HTTP_OFFSET" => 0, "HTTP_CONTENT_LENGTH" => filesize, "CONTENT_TYPE" => "application/offset+octet-stream", "HTTP_TUS_RESUMABLE" => ["0.9.9", "2.0.0"].sample)
        expect(code).to eq(400)
      end

      it "must respond successfully with HTTP 200" do
        code, headers, body = make_request_to(resume_path, method: 'PATCH', input: file, "HTTP_OFFSET" => 0, "HTTP_CONTENT_LENGTH" => filesize, "CONTENT_TYPE" => "application/offset+octet-stream", "HTTP_TUS_RESUMABLE" => "1.0.0")
        expect(headers['X-Exception']).to be_nil
        expect(code).to eq(200)
      end

      it "must accept `offset` smaller or equal to current offset" do
        code, headers, body = make_request_to(resume_path, method: 'PATCH', input: file, "HTTP_OFFSET" => current_offset - [0, 1].sample, "HTTP_CONTENT_LENGTH" => filesize, "CONTENT_TYPE" => "application/offset+octet-stream", "HTTP_TUS_RESUMABLE" => "1.0.0")
        expect(headers['X-Exception']).to be_nil
        expect(code).to eq(200)
      end

      it "must reject `offset` larger than current offset" do
        code, headers, body = make_request_to(resume_path, method: 'PATCH', input: file, "HTTP_OFFSET" => current_offset + 1, "HTTP_CONTENT_LENGTH" => filesize, "CONTENT_TYPE" => "application/offset+octet-stream", "HTTP_TUS_RESUMABLE" => "1.0.0")
        expect(code).to eq(400)
      end

      it "new current offset must be Offset + Content-Length" do
        expect {
          make_request_to(resume_path, method: 'PATCH', input: file, "HTTP_OFFSET" => current_offset, "HTTP_CONTENT_LENGTH" => filesize, "CONTENT_TYPE" => "application/offset+octet-stream", "HTTP_TUS_RESUMABLE" => "1.0.0")
        }.to change {
          middleware.send(:current_offset, cachekey, relpath, config = {})
        }.by(filesize)
      end
    end

    context "tus head" do
      it "must respond successfully with HTTP 200 + Offset header" do
        code, headers, body = make_request_to(resume_path, method: 'HEAD')
        expect(code).to eq(200)
        expect(headers).to eq({
          "Access-Control-Allow-Origin"   => "*",
          "Access-Control-Allow-Methods"  => "POST, PUT, PATCH",
          "Access-Control-Allow-Headers"  => "Content-Type, Tus-Resumable, Upload-Length, Entity-Length, Upload-Metadata, Metadata, Upload-Offset, Offset",
          "Content-Type"                  => "text/json",
          "Access-Control-Expose-Headers" => "Location, Upload-Offset, Offset",
          "Offset"                        => current_offset.to_s,
          "Upload-Offset"                 => current_offset.to_s,
        })
      end
    end

    context "tus get" do
      it "must redirect to attache download url for original geometry" do
        code, headers, body = make_request_to(resume_path, method: 'GET')
        expect(code).to eq(302)
        expect(File.basename headers['Location']).to eq(CGI.escape(Attache::Upload.sanitize filename))
      end
    end
  end
end
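The "tus patch" context above enumerates the preconditions attache checks before appending bytes: a resumable offset no larger than the current one, a positive Content-Length, the `application/offset+octet-stream` content type, and tus version 1.0.0. A sketch of those checks against a Rack env (hypothetical helper; attache itself answers 400 when any check fails):

```ruby
# Validate a tus PATCH request's headers, as asserted in the specs.
# Upload-Offset is preferred over the older Offset header.
def valid_tus_patch?(env, current_offset)
  offset = env['HTTP_UPLOAD_OFFSET'] || env['HTTP_OFFSET']
  return false if offset.nil? || offset.to_i > current_offset        # resume at or before current offset
  return false unless env['HTTP_CONTENT_LENGTH'].to_i > 0            # body length required
  return false unless env['CONTENT_TYPE'] == 'application/offset+octet-stream'
  env['HTTP_TUS_RESUMABLE'] == '1.0.0'                               # only tus 1.0.0 is accepted
end
```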


================================================
FILE: spec/lib/attache/tus_spec.rb
================================================
require 'spec_helper'

describe Attache::Tus do
  let(:env) { @env }
  let(:config) { @config }
  let(:tus) { Attache::Tus.new(env, config) }

  it "should return Entity-Length for upload_length" do
    @env = { 'HTTP_ENTITY_LENGTH' => rand }
    expect(tus.upload_length).to eq(@env['HTTP_ENTITY_LENGTH'])
  end

  it "should prefer Upload-Length for upload_length" do
    @env = { 'HTTP_ENTITY_LENGTH' => rand, 'HTTP_UPLOAD_LENGTH' => rand }
    expect(tus.upload_length).to eq(@env['HTTP_UPLOAD_LENGTH'])
  end

  it "should return Offset for upload_offset" do
    @env = { 'HTTP_OFFSET' => rand }
    expect(tus.upload_offset).to eq(@env['HTTP_OFFSET'])
  end

  it "should prefer Upload-Offset for upload_offset" do
    @env = { 'HTTP_OFFSET' => rand, 'HTTP_UPLOAD_OFFSET' => rand }
    expect(tus.upload_offset).to eq(@env['HTTP_UPLOAD_OFFSET'])
  end

  it "should parse upload_metadata" do
    @env = { "HTTP_UPLOAD_METADATA" => "key dmFsdWU=,randkey0.87393016369783 cmFuZHZhbHVlMC44MzYxNjcyOTk3OTQyMTU2" }
    expect(tus.upload_metadata).to eq({
      "key" => "value",
      "randkey0.87393016369783" => "randvalue0.8361672997942156",
    })
  end
end
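The last example above shows the tus `Upload-Metadata` wire format: a comma-separated list of `key base64(value)` pairs. A minimal parser matching what the spec asserts (helper name is illustrative):

```ruby
require 'base64'

# Parse "key dmFsdWU=,name aGVsbG8=" into { "key" => "value", ... }.
# Base64.decode64 tolerates the trailing newline Base64.encode64 emits.
def parse_upload_metadata(header)
  header.to_s.split(',').inject({}) do |hash, pair|
    key, encoded = pair.split(' ', 2)
    hash.merge(key => Base64.decode64(encoded.to_s))
  end
end
```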


================================================
FILE: spec/lib/attache/upload_spec.rb
================================================
require 'spec_helper'

describe Attache::Upload do
  let(:app) { ->(env) { [200, env, "app"] } }
  let(:middleware) { Attache::Upload.new(app) }
  let(:params) { {} }
  let(:filename) { "Exãmple %#{rand} %20.gif" }
  let(:file) { StringIO.new(IO.binread("spec/fixtures/landscape.jpg"), 'rb') }
  let(:request_input) { file }
  let(:base64_data) { "data:image/gif;base64," + Base64.encode64(file.read) }
  let(:hostname) { "example.com" }

  before do
    allow(Attache).to receive(:localdir).and_return(Dir.tmpdir) # forced, for safety
    allow_any_instance_of(Attache::VHost).to receive(:secret_key).and_return(nil)
  end

  after do
    FileUtils.rm_rf(Attache.localdir)
  end

  it "should passthrough irrelevant request" do
    code, headers, body = middleware.call Rack::MockRequest.env_for('http://' + hostname, "HTTP_HOST" => hostname)
    expect(code).to eq 200
  end

  context "uploading" do
    let(:params) { Hash(file: filename) }

    subject { proc { middleware.call Rack::MockRequest.env_for('http://' + hostname + '/upload?' + params.collect {|k,v| "#{CGI.escape k.to_s}=#{CGI.escape v.to_s}"}.join('&'), method: 'PUT', input: request_input, "HTTP_HOST" => hostname) } }

    it 'should respond successfully with json' do
      code, headers, body = subject.call
      expect(code).to eq(200)
      expect(headers['Content-Type']).to eq('text/json')
      JSON.parse(body.join('')).tap do |json|
        expect(json).to be_has_key('path')
        expect(json['geometry']).to eq('4x3')
        expect(json['bytes']).to eq(425)
        expect(json['signature']).to eq(nil)
      end
    end

    it 'should write to cache with Attache::Upload.sanitize(params[:file]) as filename' do
      code, headers, body = subject.call
      json = JSON.parse(body.join(''))
      relpath = json['path']
      expect(relpath).to end_with(Attache::Upload.sanitize params[:file])
      expect(Attache.cache.read(hostname + '/' + relpath).tap(&:close)).to be_kind_of(File)
    end

    # does not support base64/data uri here
    # see upload_url.rb
    context 'base64' do
      context 'base64-encoded image' do
        let!(:request_input) { StringIO.new("data:image/gif;base64," + Base64.encode64(file.read)) }

        it 'should respond identically as when uploading binary' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            expect(json).to be_has_key('path')
            expect(json['geometry']).to eq(nil)
            expect(json['content_type']).to eq('text/plain')
          end
        end
      end

      # various Data URI permutations
      # https://developer.mozilla.org/en-US/docs/Web/HTTP/data_URIs
      context 'simple text/plain data' do
        let!(:request_input) { StringIO.new "data:,Hello%2C%20World!" }

        it 'should decode' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            expect(json).to be_has_key('path')
            expect(json['content_type']).to eq('text/plain')
            expect(json['bytes']).to eq(23)
          end
        end
      end

      context "base64-encoded version of the above" do
        let!(:request_input) { StringIO.new "data:text/plain;base64,SGVsbG8sIFdvcmxkIQ%3D%3D" }

        it 'should decode' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            expect(json).to be_has_key('path')
            expect(json['content_type']).to eq('text/plain')
            expect(json['bytes']).to eq(47)
          end
        end
      end

      context "An HTML document with <html><body><h1>Hello, World!</h1></body></html>" do
        let!(:request_input) { StringIO.new "data:text/html,%3Chtml%3E%3Cbody%3E%3Ch1%3EHello,%20World!%3C/h1%3E%3C/body%3E%3C/html%3E" }

        it 'should decode' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            expect(json).to be_has_key('path')
            expect(json['content_type']).to eq('text/plain')
            expect(json['bytes']).to eq(89)
          end
        end
      end

      context "An HTML document that executes a JavaScript alert" do
        let!(:request_input) { StringIO.new "data:text/html,<script>alert('hi');</script>" }

        it 'should decode' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            expect(json).to be_has_key('path')
            expect(json['content_type']).to eq('text/html')
            expect(json['bytes']).to eq(44)
          end
        end
      end
    end

    context 'plain text with data: prefix' do
      let!(:file) { StringIO.new(IO.binread("spec/fixtures/sample.txt"), 'rb') }

      it 'should not be mangled by Base64 decoding' do
        code, headers, body = subject.call
        expect(code).to eq(200)
        expect(headers['Content-Type']).to eq('text/json')
        JSON.parse(body.join('')).tap do |json|
          expect(json).to be_has_key('path')
          expect(json['content_type']).to eq('text/plain')
          expect(json['bytes']).to eq(20)
        end
      end
    end

    context 'save fail locally' do
      before do
        allow(Attache.cache).to receive(:write).and_return(0)
      end

      it 'should respond with error' do
        code, headers, body = subject.call
        expect(code).to eq(500)
        expect(headers['X-Exception']).to eq('Local file failed')
      end
    end

    context 'storage not configured' do
      before do
        allow_any_instance_of(Attache::VHost).to receive(:storage).and_return(nil)
        allow_any_instance_of(Attache::VHost).to receive(:bucket).and_return(nil)
      end

      it 'should NOT save file remotely' do
        expect_any_instance_of(Attache::VHost).not_to receive(:storage_create)
        subject.call
      end
    end

    context 'storage configured' do
      before do
        allow_any_instance_of(Attache::VHost).to receive(:storage).and_return(double(:storage))
        allow_any_instance_of(Attache::VHost).to receive(:bucket).and_return(double(:bucket))
      end

      it 'should save file remotely' do
        expect_any_instance_of(Attache::VHost).to receive(:storage_create).and_return(anything)
        subject.call
      end
    end

    context 'with secret_key' do
      let(:secret_key) { "topsecret#{rand}" }

      before do
        allow_any_instance_of(Attache::VHost).to receive(:secret_key).and_return(secret_key)
      end

      it 'should respond with error' do
        code, headers, body = subject.call
        expect(code).to eq(401)
        expect(headers['X-Exception']).to eq('Authorization failed')
      end

      context 'invalid auth' do
        let(:expiration) { (Time.now + 10).to_i }
        let(:uuid) { "hi#{rand}" }
        let(:digest) { OpenSSL::Digest.new('sha1') }
        let(:params) { Hash(file: filename, expiration: expiration, uuid: uuid, hmac: OpenSSL::HMAC.hexdigest(digest, "wrong#{secret_key}", "#{uuid}#{expiration}")) }

        it 'should respond with error' do
          code, headers, body = subject.call
          expect(code).to eq(401)
          expect(headers['X-Exception']).to eq('Authorization failed')
        end
      end

      context 'valid auth' do
        let(:expiration) { (Time.now + 10).to_i }
        let(:uuid) { "hi#{rand}" }
        let(:digest) { OpenSSL::Digest.new('sha1') }
        let(:params) { Hash(file: filename, expiration: expiration, uuid: uuid, hmac: OpenSSL::HMAC.hexdigest(digest, secret_key, "#{uuid}#{expiration}")) }

        it 'should respond with success' do
          code, headers, body = subject.call
          expect(code).to eq(200)
        end

        it 'should respond successfully with json with signature' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            json_without_signature = json.reject {|k,v| k == 'signature' }
            generated_signature = OpenSSL::HMAC.hexdigest(digest, secret_key, json_without_signature.sort.collect {|k,v| "#{k}=#{v}" }.join('&'))
            expect(json['signature']).to eq(generated_signature)
          end
        end

        context 'expired' do
          let(:expiration) { (Time.now - 1).to_i } # the past

          it 'should respond with error' do
            code, headers, body = subject.call
            expect(code).to eq(401)
            expect(headers['X-Exception']).to eq('Authorization failed')
          end
        end
      end
    end
  end
end
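
# NOTE: standalone sketch, not part of the repository.
# The `with secret_key` examples above exercise both directions of Attache's
# HMAC scheme: the client signs "#{uuid}#{expiration}" with HMAC-SHA1, and the
# server signs its JSON response over the sorted "k=v" pairs minus the
# 'signature' field. The helper names below are hypothetical; only the HMAC
# inputs mirror what the spec feeds to OpenSSL::HMAC.

```ruby
require 'openssl'
require 'securerandom'

# Request auth: hmac = HMAC-SHA1(secret_key, "#{uuid}#{expiration}"),
# sent alongside `uuid` and `expiration` as query params.
def signed_upload_params(secret_key, filename, lifetime: 600)
  expiration = (Time.now + lifetime).to_i
  uuid       = SecureRandom.uuid
  hmac       = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'),
                                       secret_key, "#{uuid}#{expiration}")
  { file: filename, expiration: expiration, uuid: uuid, hmac: hmac }
end

# Response check: HMAC-SHA1 over sorted "k=v" pairs, excluding 'signature',
# joined by '&' -- the same payload the spec rebuilds at L2805-2806 above.
def response_signature_valid?(secret_key, json)
  payload  = json.reject { |k, _| k == 'signature' }
                 .sort.map { |k, v| "#{k}=#{v}" }.join('&')
  expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'),
                                     secret_key, payload)
  expected == json['signature']
end
```

# An expired `expiration` (a past timestamp) fails server-side even when the
# hmac itself is well-formed, which is what the 'expired' context asserts.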


================================================
FILE: spec/lib/attache/upload_url_spec.rb
================================================
require 'spec_helper'

describe Attache::UploadUrl do
  let(:app) { ->(env) { [200, env, "app"] } }
  let(:uploader) { Attache::Upload.new(app) }
  let(:middleware) { Attache::UploadUrl.new(uploader) }
  let(:hostname) { "example.com" }

  before do
    allow(Attache).to receive(:localdir).and_return(Dir.tmpdir) # forced, for safety
    allow_any_instance_of(Attache::VHost).to receive(:secret_key).and_return(nil)
  end

  after do
    FileUtils.rm_rf(Attache.localdir)
  end

  it "should passthrough irrelevant request" do
    code, headers, body = middleware.call Rack::MockRequest.env_for('http://' + hostname, "HTTP_HOST" => hostname)
    expect(code).to eq 200
  end

  context 'upload as url' do
    subject { proc { middleware.call Rack::MockRequest.env_for('http://' + hostname + '/upload_url?' + params.collect {|k,v| "#{CGI.escape k.to_s}=#{CGI.escape v.to_s}"}.join('&'), method: 'PUT', "HTTP_HOST" => hostname) } }

    context 'to image' do
      let(:params) { Hash(url: "https://raw.githubusercontent.com/choonkeat/attache/master/spec/fixtures/landscape.jpg") }

      it 'should respond successfully with json' do
        code, headers, body = subject.call
        expect(code).to eq(200)
        expect(headers['Content-Type']).to eq('text/json')
        JSON.parse(body.join('')).tap do |json|
          expect(json).to be_has_key('path')
          expect(json['geometry']).to eq('4x3')
        end
      end
    end

    context 'follow redirect; works with non image too' do
      let(:params) { Hash(url: "http://google.com") }

      it 'should respond successfully with json' do
        code, headers, body = subject.call
        expect(code).to eq(200)
        expect(headers['Content-Type']).to eq('text/json')
        JSON.parse(body.join('')).tap do |json|
          expect(json).to be_has_key('path')
          expect(json['path']).not_to end_with('/')
          expect(json['content_type']).to eq('text/html')
        end
      end
    end

    context 'data uri' do
      context 'base64-encoded image' do
        let(:file) { StringIO.new(IO.binread("spec/fixtures/landscape.jpg"), 'rb') }
        let(:params) { Hash(url: "data:image/gif;base64," + Base64.encode64(file.read)) }

        it 'should respond identically as when uploading binary' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            expect(json).to be_has_key('path')
            expect(json['geometry']).to eq('4x3')
            expect(json['bytes']).to eq(425)
          end
        end
      end

      # various Data URI permutations
      # https://developer.mozilla.org/en-US/docs/Web/HTTP/data_URIs
      context 'simple text/plain data' do
        let(:params) { Hash(url: "data:,Hello%2C%20World!") }

        it 'should decode' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            expect(json).to be_has_key('path')
            expect(json['content_type']).to eq('text/plain')
            expect(json['bytes']).to eq(13)
          end
        end
      end

      context "base64-encoded version of the above" do
        let(:params) { Hash(url: "data:text/plain;base64,SGVsbG8sIFdvcmxkIQ%3D%3D") }

        it 'should decode' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            expect(json).to be_has_key('path')
            expect(json['content_type']).to eq('text/plain')
            expect(json['bytes']).to eq(13)
          end
        end
      end

      context "An HTML document with <html><body><h1>Hello, World!</h1></body></html>" do
        let(:params) { Hash(url: "data:text/html,%3Chtml%3E%3Cbody%3E%3Ch1%3EHello,%20World!%3C/h1%3E%3C/body%3E%3C/html%3E") }

        it 'should decode' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            expect(json).to be_has_key('path')
            expect(json['content_type']).to eq('text/html')
            expect(json['bytes']).to eq(48)
          end
        end
      end

      context "An HTML document that executes a JavaScript alert" do
        let(:params) { Hash(url: "data:text/html,<script>alert('hi');</script>") }

        it 'should decode' do
          code, headers, body = subject.call
          expect(code).to eq(200)
          expect(headers['Content-Type']).to eq('text/json')
          JSON.parse(body.join('')).tap do |json|
            expect(json).to be_has_key('path')
            expect(json['content_type']).to eq('text/html')
            expect(json['bytes']).to eq(29)
          end
        end
      end
    end
  end
end
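
# NOTE: standalone sketch, not part of the repository.
# Unlike Attache::Upload, the data-URI cases above expect decoded payload
# sizes (13 bytes for "Hello, World!" in both the plain and base64 forms).
# A minimal decoder for the RFC 2397 shape `data:[<mediatype>][;base64],<data>`
# -- not Attache's actual implementation, which lives in
# Attache::UploadUrl#download_file.

```ruby
require 'base64'
require 'cgi'

# Returns [body, mediatype]. An empty mediatype means the RFC 2397
# default (text/plain;charset=US-ASCII) applies.
def decode_data_uri(uri)
  header, payload = uri.sub(/\Adata:/, '').split(',', 2)
  if header.end_with?(';base64')
    [Base64.decode64(payload), header.sub(/;base64\z/, '')]
  else
    # non-base64 payloads are percent-encoded
    [CGI.unescape(payload), header]
  end
end
```

# decode_data_uri("data:,Hello%2C%20World!") and the base64 variant both
# yield the 13-byte "Hello, World!" the specs above count.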


================================================
FILE: spec/lib/attache/vhost_spec.rb
================================================
require 'spec_helper'
require 'fog/storage/local/models/file'
require 'fog/aws/models/storage/file'

describe Attache::VHost do
  let(:config) { { 'REMOTE_DIR' => remotedir } }
  let(:config_with_backup) { YAML.load_file('config/vhost.example.yml').fetch("aws.example.com").merge('REMOTE_DIR' => remotedir) }
  let(:vhost) { Attache::VHost.new(config) }
  let(:remote_api) { double(:remote_api) }
  let(:file_io) { StringIO.new("") }
  let(:relpath) { 'relpath' }
  let(:cachekey) { 'hostname/relpath' }
  let(:remotedir) { 'remote_directory' }

  before do
    allow(vhost).to receive(:remote_api).and_return(remote_api)
  end

  describe '#storage_url' do
    let(:url) { 'http://example.com/a/b/c' }

    before do
      allow(remote_api).to receive(:new).and_return(files)
      allow_any_instance_of(Fog::Storage::Local::File).to receive(:public_url).and_return(url)
      allow_any_instance_of(Fog::Storage::AWS::File).to receive(:url).and_return(url)
    end

    context 'fog local storage' do
      let(:files) { Fog::Storage::Local::File.new }

      it 'should return' do
        expect(vhost.storage_url(relpath: relpath)).to eq(url)
      end
    end

    context 'fog s3 storage' do
      let(:files) { Fog::Storage::AWS::File.new }

      it 'should return' do
        expect(vhost.storage_url(relpath: relpath)).to eq(url)
      end
    end
  end

  describe '#storage_create' do
    it 'should read with cachekey, write with remotedir prefix' do
      expect(Attache.cache).to receive(:read).with(cachekey).and_return(file_io)
      expect(remote_api).to receive(:create).with(key: "#{remotedir}/#{relpath}", body: file_io)
      vhost.storage_create(relpath: relpath, cachekey: cachekey)
    end

    it 'should raise on other errors' do
      allow(Attache.cache).to receive(:read) { raise Exception.new }
      expect(remote_api).not_to receive(:create)

      expect { vhost.storage_create(relpath: relpath, cachekey: cachekey) }.to raise_error(Exception)
    end
  end

  describe '#storage' do
    it { expect(vhost.storage).to be_nil }

    context 'configured' do
      let(:config) { config_with_backup }

      it { expect(vhost.storage).to be_kind_of(Fog::Storage::AWS::Real) }
      it { expect(vhost.storage.region).to eq('us-west-1') }
    end
  end

  describe '#bucket' do
    it { expect(vhost.bucket).to be_nil }

    context 'configured' do
      let(:config) { config_with_backup }

      it { expect(vhost.bucket).to eq("CHANGEME") }
    end
  end

  describe '#backup' do
    it { expect(vhost.backup).to be_nil }

    describe '#backup_file' do
      it 'should not do anything' do
        allow_message_expectations_on_nil
        expect(vhost.storage).not_to receive(:copy_object)
        vhost.backup_file(relpath: relpath)
      end
    end

    context 'configured' do
      let(:config) { config_with_backup }

      it { expect(vhost.backup).to be_kind_of(Attache::VHost) }
      it { expect(vhost.backup.storage).to be_kind_of(Fog::Storage::AWS::Real) }
      it { expect(vhost.backup.storage.region).to eq('us-west-1') }
      it { expect(vhost.backup.bucket).to eq("CHANGEME_BAK") }

      describe '#backup_file' do
        it 'should copy the file to the backup bucket' do
          expect(vhost.storage).to receive(:copy_object).with(
            vhost.bucket, "#{remotedir}/#{relpath}",
            vhost.backup.bucket, "#{remotedir}/#{relpath}"
          )
          vhost.backup_file(relpath: relpath)
        end
      end
    end
  end
end
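
# NOTE: standalone sketch, not part of the repository.
# The configured `#backup_file` example above asserts a server-side copy with
# fog's `copy_object(source_bucket, source_key, target_bucket, target_key)`
# argument order. An in-memory stand-in (hypothetical class, no fog
# dependency) showing that call shape:

```ruby
# Fake storage keyed by bucket name, then object key.
class FakeStorage
  def initialize
    @buckets = Hash.new { |h, k| h[k] = {} }
  end

  def put(bucket, key, body)
    @buckets[bucket][key] = body
  end

  def get(bucket, key)
    @buckets[bucket][key]
  end

  # Same argument order the spec asserts: source pair, then target pair.
  def copy_object(src_bucket, src_key, dst_bucket, dst_key)
    @buckets[dst_bucket][dst_key] = @buckets[src_bucket][src_key]
  end
end
```

# Note the key is unchanged across buckets ("#{remotedir}/#{relpath}" on both
# sides), so a backup bucket mirrors the primary layout exactly.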


================================================
FILE: spec/spec_helper.rb
================================================
ENV['VHOST'] = '{"0.0.0.0":{}}'

require 'attache.rb'
require 'sucker_punch/testing/inline'

# This file was generated by the `rspec --init` command. Conventionally, all
# specs live under a `spec` directory, which RSpec adds to the `$LOAD_PATH`.
# The generated `.rspec` file contains `--require spec_helper` which will cause
# this file to always be loaded, without a need to explicitly require it in any
# files.
#
# Given that it is always loaded, you are encouraged to keep this file as
# light-weight as possible. Requiring heavyweight dependencies from this file
# will add to the boot time of your test suite on EVERY test run, even for an
# individual file that may not need all of that loaded. Instead, consider making
# a separate helper file that requires the additional dependencies and performs
# the additional setup, and require it from the spec files that actually need
# it.
#
# The `.rspec` file also contains a few flags that are not defaults but that
# users commonly want.
#
# See http://rubydoc.info/gems/rspec-core/RSpec/Core/Configuration
RSpec.configure do |config|
  # rspec-expectations config goes here. You can use an alternate
  # assertion/expectation library such as wrong or the stdlib/minitest
  # assertions if you prefer.
  config.expect_with :rspec do |expectations|
    # This option will default to `true` in RSpec 4. It makes the `description`
    # and `failure_message` of custom matchers include text for helper methods
    # defined using `chain`, e.g.:
    #     be_bigger_than(2).and_smaller_than(4).description
    #     # => "be bigger than 2 and smaller than 4"
    # ...rather than:
    #     # => "be bigger than 2"
    expectations.include_chain_clauses_in_custom_matcher_descriptions = true
  end

  # rspec-mocks config goes here. You can use an alternate test double
  # library (such as bogus or mocha) by changing the `mock_with` option here.
  config.mock_with :rspec do |mocks|
    # Prevents you from mocking or stubbing a method that does not exist on
    # a real object. This is generally recommended, and will default to
    # `true` in RSpec 4.
    mocks.verify_partial_doubles = true
  end

# The settings below are suggested to provide a good initial experience
# with RSpec, but feel free to customize to your heart's content.
  # These two settings work together to allow you to limit a spec run
  # to individual examples or groups you care about by tagging them with
  # `:focus` metadata. When nothing is tagged with `:focus`, all examples
  # get run.
  config.filter_run :focus
  config.run_all_when_everything_filtered = true

  # Run specs in random order to surface order dependencies. If you find an
  # order dependency and want to debug it, you can fix the order by providing
  # the seed, which is printed after each run.
  #     --seed 1234
  config.order = :random

=begin
  # Limits the available syntax to the non-monkey patched syntax that is
  # recommended. For more details, see:
  #   - http://myronmars.to/n/dev-blog/2012/06/rspecs-new-expectation-syntax
  #   - http://teaisaweso.me/blog/2013/05/27/rspecs-new-message-expectation-syntax/
  #   - http://myronmars.to/n/dev-blog/2014/05/notable-changes-in-rspec-3#new__config_option_to_disable_rspeccore_monkey_patching
  config.disable_monkey_patching!

  # This setting enables warnings. It's recommended, but in some cases may
  # be too noisy due to issues in dependencies.
  config.warnings = true

  # Many RSpec users commonly either run the entire suite or an individual
  # file, and it's useful to allow more verbose output when running an
  # individual spec file.
  if config.files_to_run.one?
    # Use the documentation formatter for detailed output,
    # unless a formatter has already been configured
    # (e.g. via a command-line flag).
    config.default_formatter = 'doc'
  end

  # Print the 10 slowest examples and example groups at the
  # end of the spec run, to help surface which specs are running
  # particularly slow.
  config.profile_examples = 10

  # Seed global randomization in this process using the `--seed` CLI option.
  # Setting this allows you to use `--seed` to deterministically reproduce
  # test failures related to randomization by passing the same `--seed` value
  # as the one that triggered the failure.
  Kernel.srand config.seed
=end
end

Attache.logger = Logger.new("/dev/null")
Paperclip.options[:log] = false
Sidekiq::Logging.logger = nil
SuckerPunch.logger = nil
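
# NOTE: standalone sketch, not part of the repository.
# spec_helper boots the suite with ENV['VHOST'] = '{"0.0.0.0":{}}': a JSON
# object mapping hostnames to per-host config hashes, the same shape
# config/vhost.example.yml expresses in YAML. A sketch of reading one host's
# config from that JSON (the 'aws.example.com' entry here is illustrative,
# not taken from the specs' env):

```ruby
require 'json'

# '0.0.0.0' with an empty hash means "serve this hostname with defaults",
# which is all the specs rely on.
vhosts = JSON.parse('{"0.0.0.0":{},"aws.example.com":{"REMOTE_DIR":"uploads"}}')

default_config = vhosts['0.0.0.0']               # empty hash: all defaults
aws_config     = vhosts['aws.example.com'] || {} # per-hostname overrides
```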
SYMBOL INDEX (78 symbols across 15 files)

FILE: lib/attache.rb
  type Attache (line 27) | module Attache

FILE: lib/attache/backup.rb
  class Attache::Backup (line 1) | class Attache::Backup < Attache::Base
    method initialize (line 2) | def initialize(app)
    method _call (line 6) | def _call(env, config)

FILE: lib/attache/base.rb
  class Attache::Base (line 1) | class Attache::Base
    method call (line 2) | def call(env)
    method vhost_for (line 20) | def vhost_for(host)
    method request_hostname (line 24) | def request_hostname(env)
    method content_type_of (line 28) | def content_type_of(fullpath)
    method geometry_of (line 34) | def geometry_of(fullpath)
    method filesize_of (line 40) | def filesize_of(fullpath)
    method params_of (line 44) | def params_of(env)
    method path_of (line 51) | def path_of(cachekey)
    method rack_response_body_for (line 55) | def rack_response_body_for(file)
    method generate_relpath (line 59) | def generate_relpath(basename)
    method json_of (line 63) | def json_of(relpath, cachekey, vhost)

FILE: lib/attache/delete.rb
  class Attache::Delete (line 1) | class Attache::Delete < Attache::Base
    method initialize (line 2) | def initialize(app)
    method _call (line 6) | def _call(env, config)

FILE: lib/attache/download.rb
  class Attache::Download (line 3) | class Attache::Download < Attache::Base
    method initialize (line 6) | def initialize(app)
    method _call (line 11) | def _call(env, config)
    method parse_path_info (line 53) | def parse_path_info(geometrypath)
    method synchronize (line 62) | def synchronize(key, &block)
    method get_thumbnail_file (line 69) | def get_thumbnail_file(geometry, basename, relpath, vhosts, env)
    method get_original_file (line 86) | def get_original_file(relpath, vhosts, env)
    method get_first_result_present_async (line 115) | def get_first_result_present_async(lambdas)

FILE: lib/attache/file_response_body.rb
  class Attache::FileResponseBody (line 1) | class Attache::FileResponseBody
    method initialize (line 2) | def initialize(file, range_start = nil, range_end = nil)
    method each (line 9) | def each

FILE: lib/attache/job.rb
  class Attache::Job (line 1) | class Attache::Job
    method perform (line 4) | def perform(method, env, args)
    method later (line 18) | def later(sec, *args)
    method perform_async (line 21) | def self.perform_async(*args)
    method perform_in (line 24) | def self.perform_in(duration, *args)

FILE: lib/attache/resize_job.rb
  class Attache::ResizeJob (line 4) | class Attache::ResizeJob
    method perform (line 5) | def perform(target_geometry_string, basename, relpath, vhosts, env, t ...
    method make_nonimage_preview (line 33) | def make_nonimage_preview(closed_file, basename)
    method make_safe_filename (line 50) | def make_safe_filename(str)
    method thumbnail_for (line 54) | def thumbnail_for(closed_file:, target_geometry_string:, extension:, m...
    method current_geometry_for (line 77) | def current_geometry_for(thumbnail)

FILE: lib/attache/tus.rb
  class Attache::Tus (line 1) | class Attache::Tus
    method initialize (line 8) | def initialize(env, config)
    method header_value (line 13) | def header_value(keys)
    method upload_length (line 19) | def upload_length
    method upload_offset (line 23) | def upload_offset
    method upload_metadata (line 27) | def upload_metadata
    method resumable_version (line 34) | def resumable_version
    method headers_with_cors (line 38) | def headers_with_cors(headers = {}, offset: nil)

FILE: lib/attache/tus/upload.rb
  class Attache::Tus::Upload (line 1) | class Attache::Tus::Upload < Attache::Base
    method initialize (line 2) | def initialize(app)
    method _call (line 6) | def _call(env, config)
    method current_offset (line 73) | def current_offset(cachekey, relpath, config)
    method append_to (line 84) | def append_to(cachekey, offset, io)
    method positive_number? (line 93) | def positive_number?(value)

FILE: lib/attache/upload.rb
  class Attache::Upload (line 1) | class Attache::Upload < Attache::Base
    method initialize (line 2) | def initialize(app)
    method _call (line 6) | def _call(env, config)
    method sanitize (line 38) | def self.sanitize(filename)

FILE: lib/attache/upload_url.rb
  class Attache::UploadUrl (line 1) | class Attache::UploadUrl < Attache::Base
    method initialize (line 2) | def initialize(app)
    method _call (line 6) | def _call(env, config)
    method download_file (line 31) | def download_file(url, depth = 0)

FILE: lib/attache/version.rb
  type Attache (line 1) | module Attache

FILE: lib/attache/vhost.rb
  class Attache::VHost (line 1) | class Attache::VHost
    method initialize (line 12) | def initialize(hash)
    method hmac_for (line 37) | def hmac_for(content)
    method hmac_valid? (line 41) | def hmac_valid?(params)
    method storage_url (line 49) | def storage_url(args)
    method storage_get (line 62) | def storage_get(args)
    method storage_create (line 66) | def storage_create(args)
    method storage_destroy (line 82) | def storage_destroy(args)
    method remote_api (line 90) | def remote_api
    method async (line 94) | def async(method, args)
    method authorized? (line 98) | def authorized?(params)
    method unauthorized (line 102) | def unauthorized
    method backup_file (line 106) | def backup_file(args)

FILE: spec/lib/attache/tus/upload_spec.rb
  function make_request_to (line 31) | def make_request_to(request_uri, headers)

About this extraction

This page contains the full source code of the choonkeat/attache GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 46 files (103.5 KB), approximately 28.2k tokens, and a symbol index with 78 extracted functions, classes, methods, constants, and types.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
