[
  {
    "path": ".circleci/config.yml",
    "content": "version: 2.1\n\n# https://circleci.com/developer/orbs/orb/circleci/ruby\norbs:\n  ruby: circleci/ruby@2.1.0\n\njobs:\n  build:\n    docker:\n      - image: cimg/ruby:3.2.2\n    steps:\n      - checkout\n      - ruby/install-deps\n\n  test:\n    parallelism: 3\n    docker:\n      - image: cimg/ruby:3.2.2\n      - image: cimg/postgres:16.0\n        environment:\n          POSTGRES_USER: postgres\n          POSTGRES_DB: rideshare_test\n          POSTGRES_PASSWORD: postgres\n    environment:\n      BUNDLE_JOBS: \"3\"\n      BUNDLE_RETRY: \"3\"\n      PGHOST: 127.0.0.1\n      RAILS_ENV: test\n      PGSLICE_URL: \"postgres://postgres:postgres@localhost:5432/rideshare_test\"\n    steps:\n      - checkout\n      - ruby/install-deps\n      - run:\n          name: Wait for DB\n          command: dockerize -wait tcp://localhost:5432 -timeout 1m\n      - run:\n          name: Test Database setup\n          command: sh db/setup_test_database.sh\n      - run:\n          name: Database schema load\n          command: bundle exec rails db:schema:load --trace\n      - run:\n          name: Partition conversion\n          command: sh bin/partition_conversion.sh\n      - run:\n          name: run tests\n          command: bin/rails test\n\nworkflows:\n  version: 2\n  build_and_test:\n    jobs:\n      - build\n      - test:\n          requires:\n            - build\n"
  },
  {
    "path": ".erdconfig",
    "content": "attributes:\n  - content\n  - primary_keys\n  - foreign_keys\n  - inheritance\n  - timestamps\ndisconnected: false\nfilename: erd\nfiletype: pdf\nindirect: true\ninheritance: false\nmarkup: true\nnotation: simple\norientation: horizontal\npolymorphism: true\nsort: true\nwarn: true\ntitle: Rideshare\nexclude: null\nonly: null\nonly_recursion_depth: null\nprepend_primary: false\ncluster: false\nsplines: spline\nfonts:\n  normal: \"Arial\"\n  bold: \"Arial Bold\"\n  italic: \"Arial Italic\"\n"
  },
  {
    "path": ".git-blame-ignore-revs",
    "content": ""
  },
  {
    "path": ".gitignore",
    "content": "# See https://help.github.com/articles/ignoring-files for more about ignoring files.\n#\n# If you find yourself ignoring temporary files generated by your text editor\n# or operating system, you probably want to add a global ignore instead:\n#   git config --global core.excludesfile '~/.gitignore_global'\n\n# Ignore bundler config.\n/.bundle\n\n# Ignore the default SQLite database.\n/db/*.sqlite3\n/db/*.sqlite3-journal\n\n# Ignore all logfiles and tempfiles.\n/log/*\n/tmp/*\n!/log/.keep\n!/tmp/.keep\n\n# Ignore uploaded files in development.\n/storage/*\n!/storage/.keep\n\n/public/assets\n.byebug_history\n\n# Ignore master key for decrypting credentials and more.\n/config/master.key\n\n/public/packs\n/public/packs-test\n/node_modules\n/yarn-error.log\nyarn-debug.log*\n.yarn-integrity\n\n.sql\n\n\n# Docker volume directory\ndocker/postgres-docker/*\npostgres-docker/\n\n# Ignore backup files\ndocker/pg_hba.backup.conf\ndocker/postgresql.backup.conf\ndocker/pg_hba.conf\ndocker/postgresql.conf\ndocker/replication_user.sql\ndocker/.pgpass\n\noutput.log\n\n.pgpass\n"
  },
  {
    "path": ".rubocop.yml",
    "content": "AllCops:\n  NewCops: enable\n  # Disable suggestions for rubocop-rails for now\n  SuggestExtensions: false\n  Exclude:\n    - \"db/schema.rb\"\n    - \"db/structure.sql\"\n    - \"Gemfile\"\n    - \"lib/tasks/*.rake\"\n    - \"bin/*\"\n    - \"config/puma.rb\"\n    - \"config/spring.rb\"\n    - \"config/environments/development.rb\"\n    - \"config/environments/production.rb\"\n    - \"spec/spec_helper.rb\"\nStyle/Documentation:\n  Enabled: false\n"
  },
  {
    "path": ".ruby-version",
    "content": "3.2.2\n"
  },
  {
    "path": "GUIDES.md",
    "content": "# Guides\n\n## Set Up Databases\n```sh\nsh db/setup.sh\n\nsh db/setup_test_database.sh\n```\n\n## Set Database Connection\nUse the read/write `owner` role for schema modifications.\n\nThis is not a superuser role, although it does have write capabilities.\n\nThe password is supplied from `~/.pgpass`.\n\n```sh\nexport DATABASE_URL=\"postgres://owner:@localhost:5432/rideshare_development\"\n```\n\nThe `app` user cannot `TRUNCATE` tables.\n\n## Teardown Databases\nThis is mostly useful for testing the \"setup\" automation. You wouldn't want to do this normally because you'd lose your data.\n\n```sh\nsh db/teardown.sh\n```\n\n## Generate Data\n```sh\nbin/rails db:reset\n\nbin/rails data_generators:generate_all\n```\n\n## Simulate App Activity\nStart up the server in one terminal:\n\n```sh\nbin/rails server\n```\n\nIn another terminal, run the script:\n```sh\nbin/rails simulate:app_activity\n```\n\nOr run it with an iteration count, for example 2 or more:\n```sh\nbin/rails simulate:app_activity[2]\n```\n\n## Local Circle CI\nInspired by [Issue #99](https://github.com/andyatkinson/rideshare/issues/99) from @momer.\n\nWith this configuration, CircleCI can run its jobs locally.\n\nCurrently, the `test` job fails with the error \"invalid UTS mode\":\n\n```sh\nbrew install circleci\n\ncircleci local execute -c process.yml build # works\n\ncircleci local execute -c process.yml test # error\n```\n\n## Scrub Database\n```sh\ncd db\n\nsh scrubbing/scrubber.sh\n```\n\n## PgBouncer Prepared Statements\n* Configure `pool_mode` to be `statement` in the PgBouncer config file\n* Disable Query Logs (unfortunately) (`config/application.rb`)\n* Make sure Prepared Statements aren't disabled in `config/database.yml`\n* Connect through port 6432 and confirm prepared statements work correctly\n"
  },
  {
    "path": "Gemfile",
    "content": "source 'https://rubygems.org'\n\ngem 'activerecord-import', '~> 1.5'\ngem 'bcrypt', '~> 3.1' # Use ActiveModel has_secure_password\ngem 'fast_jsonapi', '~> 1.5'\ngem 'geocoder', '~> 1.8'\ngem 'jwt', '~> 2.7'\ngem 'pg', '~> 1.5'\ngem 'pg_query', '~> 6.1'\ngem 'pg_search', '~> 2.3'\ngem 'prosopite', '~> 1.4' # identify N+1 queries\ngem 'puma', '~> 6.4'\ngem 'rails', '>= 7.2', '~> 7.2' # , git: 'https://github.com/rails/rails.git'\ngem 'whenever', '~> 1.0', require: false # manage scheduled jobs\ngem 'fast_count', '~> 0.3'\n\n# assets gems default Rails 7 app\ngem 'importmap-rails', '~> 1.2'\ngem 'sprockets-rails', '~> 3.4'\n\n# Forks\ngem 'pghero', git: 'https://github.com/andyatkinson/pghero.git'\ngem 'pgslice', git: 'https://github.com/andyatkinson/pgslice.git'\n\n# Keep these updated\ngem 'fx', '~> 0.9' # manage DB functions, triggers\ngem 'scenic', '~> 1.9' # manage DB views, materialized views\ngem 'strong_migrations', '~> 2.4' # Use safe Migration patterns\n\ngem 'rubocop', '~> 1.77'\n\ngroup :development, :test do\n  gem 'active_record_doctor', '~> 1.15'\n  gem 'benchmark-ips', '~> 2.14'\n  gem 'benchmark-memory', '~> 0.2'\n  gem 'database_consistency', '~> 2.0'\n  gem 'dotenv-rails', '~> 3.1' # Manage .env\n  gem 'faker', '~> 3.5', require: false\n  gem 'faraday', '~> 2.13'\n  gem 'json', '~> 2.1'\n  gem 'pry', '~> 0.15'\n  gem 'rails_best_practices', '~> 1.23'\n  gem 'rails-erd', '~> 1.7'\n  gem 'rails-pg-extras', '~> 5.6'\nend\n"
  },
  {
    "path": "README.md",
    "content": "[![CircleCI](https://circleci.com/gh/andyatkinson/rideshare.svg?style=svg)](https://circleci.com/gh/andyatkinson/rideshare)\n\n# 📚 High Performance PostgreSQL for Rails\nRideshare is the Rails application supporting the book \"High Performance PostgreSQL for Rails\" <http://pragprog.com/titles/aapsql>, published by Pragmatic Programmers in 2024.\n\n# Installation\nPrepare your development machine.\n\n<details>\n<summary>🎥 Installation - Rideshare on a Mac, Ruby, PostgreSQL, Gems</summary>\n<div>\n  <a href=\"https://www.loom.com/share/8bfc4e79758a42d39cead8f6637aa314\">\n    <img style=\"max-width:300px;\" src=\"https://cdn.loom.com/sessions/thumbnails/8bfc4e79758a42d39cead8f6637aa314-1714771702452-with-play.gif\">\n  </a>\n</div>\n</details>\n\n## Homebrew Packages\nFirst, install [Homebrew](https://brew.sh).\n\n### Graphviz\n```sh\nbrew install graphviz\n```\n\n## Ruby Version Manager\nBefore installing Ruby, install a *Ruby version manager*. The recommended one is [rbenv](https://github.com/rbenv/rbenv). Run:\n\n```sh\nbrew install rbenv\n```\n\n## PostgreSQL\nPostgreSQL 16 or greater is required. Installation may be via Homebrew, although the recommended method is [Postgres.app](https://postgresapp.com).\n\n### Postgres.app\n- Once installed, from the Menu Bar app, choose \"Open Postgres\", then click the \"+\" icon to create a new PostgreSQL 16 server\n\n\n## Ruby\nRun `cat .ruby-version` from the Rideshare directory to find the needed version of Ruby.\n\nFor example, if `3.2.2` is listed, run:\n\n```sh\nrbenv install 3.2.2\n```\n\nRun `rbenv versions` to confirm the correct version is active. The current version has an asterisk.\n\n```sh\n  system\n* 3.2.2 (set by /Users/andy/Projects/rideshare/.ruby-version)\n```\n\nRunning into rbenv trouble? Review *Learn how to load rbenv in your shell* using [`rbenv init`](https://github.com/rbenv/rbenv).\n\n## Bundler and Gems\nBundler is included when you install Ruby using rbenv. 
You're ready to install the Ruby gems for Rideshare.\n\nRun the following command from the Rideshare directory:\n\n```sh\nbundle install\n```\n\n## Rideshare Development Database\n⚠️  This script expects PostgreSQL version 16. If you see syntax errors with underscore numbers like `10_000`, it's probably from using an older version that doesn't support that number style.\n\n⚠️  Normally in Ruby on Rails applications, you'd run `bin/rails db:create` to create the development and test databases. Don't do that here. Rideshare uses a custom script.\n\nThe script is called [`db/setup.sh`](db/setup.sh). Don't run it yet. The video below shows common issues for this section.\n\n<details>\n<summary>🎥 Rideshare DB setup. Common issues running db/setup.sh</summary>\n<div>\n<a href=\"https://www.loom.com/share/fc919520089c4e0abb2c0a02b68bbd91\">\n  <img style=\"max-width:300px;\" src=\"https://cdn.loom.com/sessions/thumbnails/fc919520089c4e0abb2c0a02b68bbd91-with-play.gif\">\n</a>\n</div>\n</details>\n\nBefore you run it, let's set some environment variables. Open the file `db/setup.sh` and read the comments at the top for more info about these env vars:\n\n- `RIDESHARE_DB_PASSWORD`\n- `DB_URL`\n\n⚠️  The script generates a password value using `openssl`, assuming it's installed and available.\n\nOnce you've set values, before running the script, run `echo $RIDESHARE_DB_PASSWORD` (and `echo $DB_URL`) to make sure they're set.\n\nOnce both are set, you're ready to run the script.\n\nLet's capture the output of the script. Use the command below to do that. 
The script output goes into the `output.log` file so we can more easily review it for errors.\n\n```sh\nsh db/setup.sh 2>&1 | tee -a output.log\n```\n\nSince you set `RIDESHARE_DB_PASSWORD` earlier, create or update the special `~/.pgpass` file with the password you generated.\nThis allows us to put the PostgreSQL user in the connection string, without needing to also supply the password.\n\nRefer to `postgresql/.pgpass.sample` for an example, and copy the example into your own `~/.pgpass` file, replacing the password with your generated one.\n\nWhen you've updated `~/.pgpass`, it should look like the line below. The last segment (`2C6uw3LprgUMwSLQ` below) is the password you generated.\n\n```sh\nlocalhost:5432:rideshare_development:owner:2C6uw3LprgUMwSLQ\n```\n\nRun `chmod 0600 ~/.pgpass` to change the file mode (permissions).\n\nFinally, set the `DATABASE_URL` environment variable by running `export DATABASE_URL=<value>`, using the value from the `.env` file in this project.\n\nConfirm it's set to a non-empty value by running `echo $DATABASE_URL`.\n\nOnce `DATABASE_URL` is set, we'll use it as an argument to `psql` to connect to the database. Run `psql $DATABASE_URL` to do that.\n\nOnce connected, you're good to go. 
If you'd like to do more checks, expand the checks and run through them below.\n\n<details open>\n\n<summary>Installation Checks</summary>\n\nFrom within psql, run this:\n\n```sql\nSELECT current_user;\n```\n\nConfirm user `owner` is displayed.\n\n```sql\nowner@localhost:5432 rideshare_development# select current_user;\n current_user\n --------------\n  owner\n```\n\nFrom psql, run the *describe namespace* meta-command:\n\n```sql\n\\dn\n```\n\nVerify the `rideshare` schema is displayed.\n\n```sql\nowner@localhost:5432 rideshare_development# \\dn\n  List of schemas\n   Name    | Owner\n-----------+-------\n rideshare | owner\n```\n\nNow that you've confirmed the `owner` user and the `rideshare` schema have been set up correctly, you can run the migrations to create Rideshare's tables.\n</details>\n\n\n## Run Migrations\nRun migrations the standard way:\n\n```sh\nbin/rails db:migrate\n```\n\nRun the *describe table* meta command next: `\\dt`. Rideshare tables like `users`, `trips` are listed.\n\nNote that migrations are preceded by the command `SET role = owner`, so they're run with `owner` as the owner of database objects.\n\nSee `lib/tasks/migration_hooks.rake` for more details.\n\nIf migrations ran successfully, you're good to go!\n\n## Data Loads\nTo load some sample data, check out: [db/README.md](db/README.md)\n\n\n# Development Guides and Documentation\n\n## Troubleshooting\nThe Rideshare repository has many `README.md` files within subdirectories. Run `find . 
-name 'README.md'` to see them all.\n\n- For expanded installation and troubleshooting, visit: [Development Guides](https://github.com/andyatkinson/development_guides)\n- For DB things: [db/README.md](db/README.md)\n- For database scripts: [db/scripts/README.md](db/scripts/README.md)\n- For PostgreSQL things: [postgresql/README.md](postgresql/README.md)\n- For Docker things: [docker/README.md](docker/README.md)\n- For DB scrubbing: [db/scrubbing/README.md](db/scrubbing/README.md)\n- For test environment details in Rideshare, check out: [TESTING.md](TESTING.md)\n- For Guides and Tasks in this repo, check out: [Guides](GUIDES.md)\n\n# User Interfaces\nAlthough Rideshare is an *API-only* app, there are some UI elements.\n\nRideshare runs [PgHero](https://github.com/ankane/pghero) which has a UI.\n\nConnect to it:\n\n```sh\nbin/rails server\n```\n\nOnce that's running, visit <http://localhost:3000/pghero> in your browser to see it.\n\n![Screenshot of PgHero for Rideshare](https://i.imgur.com/VduvxSK.png)\n"
  },
  {
    "path": "Rakefile",
    "content": "# Add your own tasks in files placed in lib/tasks ending in .rake,\n# for example lib/tasks/capistrano.rake, and they will automatically be available to Rake.\n\nrequire_relative 'config/application'\n\nRails.application.load_tasks\n\nRake::Task['db:reset'].clear\n\nnamespace :db do\n  desc 'Custom database tasks'\n  task :reset do\n    Rake::Task['custom:db_reset'].invoke\n  end\nend\n"
  },
  {
    "path": "TESTING.md",
    "content": "# Test Environment Installation\n\nIn the development database, you use good practices like a custom schema and a user with reduced privileges.\n\nFor the test database, we'll keep things simpler. The `postgres` superuser is used along with the `public` schema.\n\nThis configuration is also used for CircleCI.\n\nFrom the Rideshare directory, run:\n\n1. `sh db/setup_test_database.sh`, which sets up `rideshare_test`\n1. `RAILS_ENV=test bin/rails db:migrate`\n1. `bin/rails test`\n\nRefer to `.circleci/config.yml` for the CircleCI config.\n\nYou should now have a test database, and tests should have passed.\n"
  },
  {
    "path": "app/assets/config/manifest.js",
    "content": "// app/assets/config/manifest.js\n\n//= link_tree ../images\n//= link_directory ../stylesheets .css\n//= link_tree ../../javascript .js\n"
  },
  {
    "path": "app/assets/images/.keep",
    "content": ""
  },
  {
    "path": "app/assets/stylesheets/application.css",
    "content": "/*\n * This is a manifest file that'll be compiled into application.css, which will include all the files\n * listed below.\n *\n * Any CSS and SCSS file within this directory, lib/assets/stylesheets, or any plugin's\n * vendor/assets/stylesheets directory can be referenced here using a relative path.\n *\n * You're free to add application-wide styles to this file and they'll appear at the bottom of the\n * compiled file so the styles you add here take precedence over styles defined in any other CSS/SCSS\n * files in this directory. Styles in this file should be added after the last require_* statement.\n * It is generally better to create a new file per style scope.\n *\n *= require_tree .\n *= require_self\n */\n"
  },
  {
    "path": "app/channels/application_cable/channel.rb",
    "content": "module ApplicationCable\n  class Channel < ActionCable::Channel::Base\n  end\nend\n"
  },
  {
    "path": "app/channels/application_cable/connection.rb",
    "content": "module ApplicationCable\n  class Connection < ActionCable::Connection::Base\n  end\nend\n"
  },
  {
    "path": "app/controllers/api/trip_requests_controller.rb",
    "content": "class Api::TripRequestsController < ApiController\n  def create\n    if start_location && end_location && current_rider\n      trip_request = current_rider.trip_requests.create!(\n        start_location: start_location,\n        end_location: end_location\n      )\n      TripCreator.new(\n        trip_request_id: trip_request.id\n      ).create_trip!\n      render json: { trip_request_id: trip_request.id },\n             status: :created\n    else\n      head :unprocessable_entity\n    end\n  end\n\n  def show\n    if current_trip_request\n      render json: {\n        trip_request_id: current_trip_request.id,\n        trip_id: created_trip&.id\n      }\n    else\n      head :unprocessable_entity\n    end\n  end\n\n  private\n\n  def trip_request_params\n    params\n      .require(:trip_request)\n      .permit(:rider_id, :start_address, :end_address)\n  end\n\n  def current_trip_request\n    # find_by returns nil (instead of raising) so the\n    # unprocessable_entity branch in #show is reachable\n    @trip_request ||= TripRequest.find_by(id: params[:id])\n  end\n\n  def created_trip\n    # returns nil when no trip exists yet\n    Trip.find_by(trip_request_id: params[:id])\n  end\n\n  def current_rider\n    @rider ||= Rider.find(trip_request_params[:rider_id])\n  end\n\n  def start_location\n    @start_location ||= Location.find_or_create_by(\n      address: trip_request_params[:start_address]\n    )\n  end\n\n  def end_location\n    @end_location ||= Location.find_or_create_by(\n      address: trip_request_params[:end_address]\n    )\n  end\nend\n"
  },
  {
    "path": "app/controllers/api/trips_controller.rb",
    "content": "class Api::TripsController < ApiController\n  before_action :authorize_request, only: :my\n\n  # Search params: `start_location`\n  #   => `New%20York%2C%20NY`\n  def index\n    search = TripSearch.new(search_params)\n    trips = Trip.apply_scopes(\n      search.start_location,\n      search.driver_name,\n      search.rider_name\n    )\n\n    render json: trips\n  end\n\n  def show\n    expires_in 1.minute, public: true\n    @trip = Trip.find(params[:id])\n\n    return unless stale?(@trip)\n\n    render json: @trip\n  end\n\n  # Get more details about a single trip\n  # TODO add JSON API mime type\n  def details\n    options = {}\n    # include=driver\n    # fields[driver]=average_rating\n    if params[:fields]\n      driver_fields = params[:fields].permit(:driver).to_h\n                                     .each_with_object({}) do |(k, v), h|\n        h[k.to_sym] = v.split(',').map(&:to_sym)\n      end\n      options.merge!(fields: driver_fields)\n    end\n\n    # multiple associated resources are comma-separated\n    options[:include] = params[:include].split(',').map(&:to_sym) if params[:include]\n\n    @trip = Trip.includes(:driver).find_by(id: params[:id])\n\n    render json: TripSerializer.new(@trip, options).serializable_hash\n  end\n\n  # TODO: add JSON API mime type\n  def my\n    @trips = Trip.completed\n                 .includes(:driver, { trip_request: :rider })\n                 .joins(trip_request: :rider)\n                 .where(users: { id: params[:rider_id] })\n\n    options = {}\n    # JSON API: https://jsonapi.org/format/#fetching-sparse-fieldsets\n    # fast_jsonapi: https://github.com/Netflix/fast_jsonapi#sparse-fieldsets\n    #\n    # convert input params to options arguments\n    if params[:fields]\n      trip_params = params[:fields].permit(:trips).to_h\n                                   .each_with_object({}) do |(k, v), h|\n        h[k.singularize.to_sym] = v.split(',').map(&:to_sym)\n      end\n      
options.merge!(fields: trip_params)\n    end\n\n    render json: TripSerializer.new(@trips, options).serializable_hash\n  end\n\n  private\n\n  def search_params\n    params.permit(\n      :start_location,\n      :driver_name,\n      :rider_name\n    )\n  end\nend\n"
  },
  {
    "path": "app/controllers/api_controller.rb",
    "content": "class ApiController < ActionController::API\n  def authorize_request\n    header = request.headers['Authorization']\n    header = header.split(' ').last if header\n    begin\n      @decoded = JsonWebToken.decode(header)\n      @current_user = User.find(@decoded[:user_id])\n    rescue ActiveRecord::RecordNotFound => e\n      render json: { errors: e.message }, status: :unauthorized\n    rescue JWT::DecodeError => e\n      render json: { errors: e.message }, status: :unauthorized\n    end\n  end\nend\n"
  },
  {
    "path": "app/controllers/application_controller.rb",
    "content": "class ApplicationController < ActionController::Base\nend\n"
  },
  {
    "path": "app/controllers/authentication_controller.rb",
    "content": "class AuthenticationController < ApiController\n  before_action :authorize_request, except: :login\n\n  # POST /auth/login\n  def login\n    @user = User.find_by(email: login_params[:email])\n    if @user&.authenticate(login_params[:password])\n      token = JsonWebToken.encode(user_id: @user.id)\n      time = Time.now + 24.hours.to_i\n\n      render json: {\n        token: token,\n        exp: time.strftime('%m-%d-%Y %H:%M'),\n        username: @user.display_name\n      }, status: :ok\n    else\n\n      render json: { error: 'unauthorized' }, status: :unauthorized\n    end\n  end\n\n  private\n\n  def login_params\n    params.permit(:email, :password)\n  end\nend\n"
  },
  {
    "path": "app/controllers/concerns/.keep",
    "content": ""
  },
  {
    "path": "app/helpers/application_helper.rb",
    "content": "module ApplicationHelper\nend\n"
  },
  {
    "path": "app/javascript/application.js",
    "content": "// Configure your import map in config/importmap.rb. Read more: https://github.com/rails/importmap-rails\n"
  },
  {
    "path": "app/jobs/application_job.rb",
    "content": "class ApplicationJob < ActiveJob::Base\n  # Automatically retry jobs that encountered a deadlock\n  # retry_on ActiveRecord::Deadlocked\n\n  # Most jobs are safe to ignore if the underlying records are no longer available\n  # discard_on ActiveJob::DeserializationError\nend\n"
  },
  {
    "path": "app/lib/pgslice_helper.rb",
    "content": "# Safe by default, add dry_run=false when ready\n# Prep:\n# export PGSLICE_URL\n# - Retire default\n# - bin/rails runner \"PgsliceHelper.new.retire_default_partition(table_name: 'trip_positions')\"\n# - bin/rails runner \"PgsliceHelper.new.add_partitions(table_name: 'trip_positions', past: 0, future: 3, dry_run: false)\"\n# - bin/rails runner \"PgsliceHelper.new.fill(table_name: 'trip_positions', from_date: '2021-01-01')\"\n# - bin/rails runner \"PgsliceHelper.new.analyze(table_name: 'trip_positions')\"\n#\n# Data export (Safe by default, add dry_run=false when ready)\n# - bin/rails runner \"PgsliceHelper.new.dump_retired_table(table_name: 'trip_positions')\"\n# - bin/rails runner \"PgsliceHelper.new.drop_retired_table(table_name: 'trip_positions')\"\n#\n# To test app compatibility:\n# - Make sure latest changes from dev DB are applied: `bin/rails db:test:prepare`\n# - change PGSLICE_URL in .env, specify test DB\n# - run `bin/rails test`\nclass PgsliceHelper\n  DEFAULT_COLUMN = 'created_at'\n\n  def add_partitions(table_name:, past:, future:, intermediate: true, dry_run: true)\n    cmd = %(./bin/pgslice add_partitions #{table_name} \\\n    #{'--intermediate ' if intermediate} \\\n    #{\"--past #{past}\" if past} \\\n    #{\"--future #{future}\" if future} \\\n    #{'--dry-run' if dry_run} \\\n    ).squish\n    log(\"dry_run=#{dry_run} invoking: #{cmd}\")\n    system(cmd)\n  end\n\n  def fill(table_name:, from_date:, partition_column: DEFAULT_COLUMN, swapped: false)\n    cmd = %(./bin/pgslice fill #{table_name}\n    #{\"--where \\\"date(#{partition_column}) >= date('#{from_date}')\\\"\" if from_date}\n    #{'--swapped' if swapped}\n    ).squish\n    log(\"fill cmd: #{cmd}\")\n    system(cmd)\n  end\n\n  def analyze(table_name:)\n    cmd = %(./bin/pgslice analyze #{table_name}).squish\n    log(\"cmd: #{cmd}\")\n    system(cmd)\n  end\n\n  def swap(table_name:)\n    cmd = %(./bin/pgslice swap #{table_name}).squish\n    log(\"cmd: #{cmd}\")\n    
system(cmd)\n  end\n\n  def unswap(table_name:)\n    cmd = %(./bin/pgslice unswap #{table_name}).squish\n    log(\"cmd: #{cmd}\")\n    system(cmd)\n  end\n\n  # default partitions cannot be detached concurrently\n  # \"ERROR:  cannot detach partitions concurrently when a default partition exists\"\n  def retire_default_partition(table_name:, dry_run: true)\n    tbl_name = \"#{table_name}_intermediate\" # assumes intermediate table\n    partition_name = \"#{tbl_name}_default\"\n    retired_name = \"#{partition_name}_retired\"\n\n    sql = %(\n      BEGIN;\n\n      ALTER TABLE #{tbl_name} \\\n      DETACH PARTITION #{partition_name};\n\n      ALTER TABLE #{partition_name}\n      RENAME TO #{retired_name};\n\n      COMMIT;\n    ).squish\n\n    cmd = %(psql $PGSLICE_URL -c '#{sql}')\n    log(\"detaching and retiring. dry_run=#{dry_run}\")\n    log(\"cmd=#{cmd}\")\n    system(cmd) unless dry_run\n  end\n\n  def unretire_default_partition(table_name:, dry_run: true)\n    table_name = \"#{table_name}_intermediate\" # assumes intermediate table\n    partition_name = \"#{table_name}_default\"\n    retired_name = \"#{partition_name}_retired\"\n\n    sql = %(\n      BEGIN;\n\n      ALTER TABLE #{retired_name}\n      RENAME TO #{partition_name};\n\n      ALTER TABLE #{table_name}\n      ATTACH PARTITION #{partition_name}\n      DEFAULT;\n\n      COMMIT;\n    ).squish\n    cmd = %(psql $PGSLICE_URL -c '#{sql}')\n    log(\"unretiring and attaching. dry_run=#{dry_run}\")\n    log(\"cmd=#{cmd}\")\n    system(cmd) unless dry_run\n  end\n\n  def dump_retired_table(table_name:, dry_run: true)\n    retired_name = \"#{table_name}_retired\"\n    dump_name = \"#{retired_name}.dump\"\n    cmd = %(pg_dump -c -Fc -t #{retired_name} $PGSLICE_URL > #{dump_name})\n    log(\"cmd=#{cmd}\")\n    system(cmd) unless dry_run\n  end\n\n  def drop_retired_table(table_name:, dry_run: true)\n    retired_name = \"#{table_name}_retired\"\n    cmd = %(psql -c 'DROP TABLE #{retired_name}' $PGSLICE_URL)\n    log(\"cmd: #{cmd}\")\n    system(cmd) unless dry_run\n  end\n\n  private\n\n  def log(line)\n    Rails.logger.info \"[pgslice] #{line}\"\n  end\nend\n"
  },
  {
    "path": "app/mailers/application_mailer.rb",
    "content": "class ApplicationMailer < ActionMailer::Base\n  default from: 'from@example.com'\n  layout 'mailer'\nend\n"
  },
  {
    "path": "app/models/application_record.rb",
    "content": "class ApplicationRecord < ActiveRecord::Base\n  self.abstract_class = true\n\n  # connects_to database: {\n  #   writing: :rideshare,\n  #   reading: :rideshare_replica\n  # }\nend\n"
  },
  {
    "path": "app/models/concerns/.keep",
    "content": ""
  },
  {
    "path": "app/models/driver.rb",
    "content": "class Driver < User\n  has_many :trips\n\n  validates :drivers_license_number,\n            presence: true,\n            uniqueness: true,\n            drivers_license: true\n\n  # X out of 5, with 1 to 5 options selected by Riders\n  def average_rating\n    trips.average(:rating)\n  end\nend\n"
  },
  {
    "path": "app/models/fast_search_result.rb",
    "content": "class FastSearchResult < ApplicationRecord\n  # this isn't strictly necessary, but it will prevent\n  # rails from calling save, which would fail anyway.\n  def readonly?\n    true\n  end\n\n  def self.refresh(concurrently: false)\n    Scenic.database.refresh_materialized_view(\n      table_name,\n      concurrently: concurrently,\n      cascade: false\n    )\n  end\nend\n"
  },
  {
    "path": "app/models/location.rb",
    "content": "class Location < ApplicationRecord\n  validates :address,\n            presence: true,\n            uniqueness: true # simple approach, assumes a full address with all parts\n\n  validates :position, presence: true\n  validates :state,\n            presence: true,\n            length: { is: 2 }\n\n  geocoded_by :address\n\n  after_validation :geocode, if: ->(obj) { obj.address_changed? && obj.position.nil? }\nend\n"
  },
  {
    "path": "app/models/rider.rb",
    "content": "class Rider < User\n  has_many :trip_requests\n  has_many :trips, through: :trip_requests\nend\n"
  },
  {
    "path": "app/models/search_result.rb",
    "content": "class SearchResult < ApplicationRecord\n  # this isn't strictly necessary, but it will prevent\n  # rails from calling save, which would fail anyway.\n  def readonly?\n    true\n  end\nend\n"
  },
  {
    "path": "app/models/trip.rb",
    "content": "class Trip < ApplicationRecord\n  belongs_to :trip_request\n  belongs_to :driver, class_name: 'User', counter_cache: true\n  has_many :trip_positions\n\n  delegate :rider, to: :trip_request, allow_nil: false\n\n  validates :trip_request, :rider, :driver, presence: true\n\n  validates :rating, numericality: {\n    only_integer: true,\n    greater_than_or_equal_to: 1,\n    less_than_or_equal_to: 5\n  }, allow_nil: true\n\n  validate :rating_requires_completed_trip\n\n  scope :with_start_location, lambda { |text|\n    joins(trip_request: :start_location)\n      .where('locations.address ILIKE ?', \"%#{text}%\")\n  }\n\n  scope :with_driver_name, lambda { |text|\n    joins(:driver)\n      .where('users.first_name ILIKE ?', \"%#{text}%\")\n  }\n\n  scope :with_rider_name, lambda { |text|\n    joins(trip_request: :rider)\n      .where('users.first_name ILIKE ?', \"%#{text}%\")\n  }\n\n  scope :completed, -> { where.not(completed_at: nil) }\n\n  def rating_requires_completed_trip\n    return unless rating_changed? && completed_at.nil?\n\n    errors.add(:rating, 'must be completed before a rating can be added')\n  end\n\n  def self.apply_scopes(*filters)\n    filters.inject(all) do |scope_chain, filter|\n      scope_chain.merge(filter)\n    end\n  end\nend\n"
  },
  {
    "path": "app/models/trip_position.rb",
    "content": "class TripPosition < ApplicationRecord\n  belongs_to :trip\n\n  validates :trip_id, presence: true\n  validates :position, presence: true\nend\n"
  },
  {
    "path": "app/models/trip_request.rb",
    "content": "class TripRequest < ApplicationRecord\n  belongs_to :rider, class_name: 'User'\n  belongs_to :start_location, class_name: 'Location'\n  belongs_to :end_location, class_name: 'Location'\n  has_one :trip\n\n  has_many :vehicle_reservations\n\n  # A trip request could be unique per rider, start location, and end location\n  # while in progress. In other words, to avoid duplicate data, require that\n  # only one trip can be in progress for a rider between the same locations.\n\n  validates :rider, :start_location, :end_location, presence: true\nend\n"
  },
  {
    "path": "app/models/user.rb",
    "content": "class User < ApplicationRecord\n  has_secure_password\n  validates :first_name, :last_name, presence: true\n  validates :drivers_license_number,\n            length: { maximum: 100 }\n\n  include PgSearch::Model\n\n  # searchable_full_name column combines\n  # first_name and last_name\n  # Each receives a weight, in the stored generated column\n  # definition\n  pg_search_scope :search_by_full_name,\n                  against: {\n                    first_name: 'A', # highest weight\n                    last_name: 'B'\n                  }\n  # Swap the config above for the one on the next line,\n  # after adding the column `searchable_full_name`\n  # against: :searchable_full_name, # stored generated column tsvector\n  # using: {\n  #   tsearch: {\n  #     dictionary: 'english',\n  #     tsvector_column: 'searchable_full_name'\n  #   }\n  # }\n\n  pg_search_scope :unaccent_search,\n                  against: %i[first_name last_name],\n                  ignoring: :accents\n\n  validates :email,\n            presence: true,\n            uniqueness: true,\n            email: true # custom validator\n\n  validates :password,\n            length: { minimum: 6 },\n            confirmation: true, # automatically added by has_secure_password, prob. redundant\n            if: -> { new_record? || !password.nil? }\n\n  validates :type,\n            presence: true\n\n  # NOTE: on password confirmation:\n  # Validation only called when password_confirmation attribute is present\n\n  def display_name\n    \"#{first_name.capitalize} #{last_name[0].capitalize}.\"\n  end\nend\n"
  },
  {
    "path": "app/models/vehicle.rb",
    "content": "class Vehicle < ApplicationRecord\n  validates :name,\n            presence: true,\n            uniqueness: true\n\n  has_many :vehicle_reservations, dependent: :destroy\n\n  enum :status, {\n    draft: VehicleStatus::DRAFT,\n    published: VehicleStatus::PUBLISHED\n  }, prefix: true\n\n  validates :status,\n            inclusion: { in: VehicleStatus::VALID_STATUSES },\n            presence: true\nend\n"
  },
  {
    "path": "app/models/vehicle_reservation.rb",
    "content": "class VehicleReservation < ApplicationRecord\n  belongs_to :vehicle\n  belongs_to :trip_request\n\n  validates :vehicle_id, :starts_at, :ends_at,\n            presence: true\nend\n"
  },
  {
    "path": "app/models/vehicle_status.rb",
    "content": "class VehicleStatus\n  DRAFT = 'draft'.freeze\n  PUBLISHED = 'published'.freeze\n  VALID_STATUSES = [\n    DRAFT,\n    PUBLISHED\n  ].freeze\nend\n"
  },
  {
    "path": "app/queries/top_drivers.sql",
    "content": "-- With new drivers\nWITH new_drivers AS (\n    SELECT *\n    FROM users\n    WHERE created_at >= (NOW() - INTERVAL '30 days')\n-- And trips that have been rated\n), top_rated_trips AS (\n    SELECT\n        id,\n        driver_id\n    FROM trips\n    WHERE rating IS NOT NULL\n)\n-- display their name and average rating\nSELECT\n    trips.driver_id,\n    CONCAT(users.first_name, ' ', users.last_name) AS driver_name,\n    ROUND(AVG(trips.rating), 2) AS avg_rating\nFROM trips\nJOIN users ON trips.driver_id = users.id\nWHERE users.type = 'Driver'\nAND users.id IN (SELECT id FROM new_drivers)\nAND trips.id IN (SELECT id FROM top_rated_trips)\nGROUP BY 1, 2\nORDER BY 3 DESC\nLIMIT 10;\n"
  },
  {
    "path": "app/serializers/driver_serializer.rb",
    "content": "class DriverSerializer\n  include FastJsonapi::ObjectSerializer\n\n  attribute :display_name\n\n  attribute :average_rating do |driver|\n    driver.average_rating.round(2)\n  end\nend\n"
  },
  {
    "path": "app/serializers/trip_serializer.rb",
    "content": "class TripSerializer\n  include FastJsonapi::ObjectSerializer\n\n  attribute :rider_name do |trip|\n    trip.rider.display_name\n  end\n\n  attribute :driver_name do |trip|\n    trip.driver.display_name\n  end\n\n  belongs_to :driver\nend\n"
  },
  {
    "path": "app/services/book_reservation.rb",
    "content": "class BookReservation\n  def initialize(vehicle_id:, rider_id:,\n                 start_location_id:, end_location_id:,\n                 starts_at:, ends_at:)\n    @vehicle = Vehicle.find(vehicle_id)\n    @rider = Rider.find(rider_id)\n    @start_location = Location.find(start_location_id)\n    @end_location = Location.find(end_location_id)\n    @starts_at = starts_at\n    @ends_at = ends_at\n  end\n\n  def reserve!\n    ActiveRecord::Base.transaction do\n      trip_request = TripRequest.create!(\n        rider: @rider,\n        start_location: @start_location,\n        end_location: @end_location\n      )\n\n      trip_request.vehicle_reservations.create!(\n        vehicle: @vehicle,\n        starts_at: @starts_at,\n        ends_at: @ends_at\n      )\n    end\n  end\nend\n"
  },
  {
    "path": "app/services/trip_creator.rb",
    "content": "class TripCreator\n  class TripCreationFailure < StandardError; end\n\n  attr_reader :trip_request_id\n\n  def initialize(trip_request_id:)\n    @trip_request_id = trip_request_id\n  end\n\n  def create_trip!\n    trip = Trip.new(\n      trip_request_id: trip_request.id,\n      driver: best_available_driver\n    )\n    raise TripCreationFailure unless trip.valid?\n\n    trip.save!\n  end\n\n  private\n\n  # NOTE: this would be a place to add intelligence\n  # to the selection process:\n  # available? completing a trip nearby? other business\n  # criteria like tenure, driver score etc.\n  def best_available_driver\n    Driver.all.sample\n  end\n\n  def trip_request\n    @trip_request ||= TripRequest.find(trip_request_id)\n  end\nend\n"
  },
  {
    "path": "app/services/trip_search.rb",
    "content": "class TripSearch\n  attr_reader :params\n\n  def initialize(params)\n    @params = params\n  end\n\n  def start_location\n    if text = params[:start_location]\n      Trip.with_start_location(sanitize(text))\n    else\n      Trip.all\n    end\n  end\n\n  def driver_name\n    if text = params[:driver_name]\n      Trip.with_driver_name(sanitize(text))\n    else\n      Trip.all\n    end\n  end\n\n  def rider_name\n    if text = params[:rider_name]\n      Trip.with_rider_name(sanitize(text))\n    else\n      Trip.all\n    end\n  end\n\n  private\n\n  def sanitize(text)\n    CGI.unescape(text.to_s)\n  end\nend\n"
  },
  {
    "path": "app/validators/drivers_license_validator.rb",
    "content": "class DriversLicenseValidator < ActiveModel::EachValidator\n  # https://success.myshn.net/Data_Protection/Data_Identifiers/U.S._Driver%27s_License_Numbers\n  # valid example: P800000224322\n  # Anchored so partial matches within a longer string are rejected\n  DL_MN_REGEXP_FORMAT = /\\A[a-zA-Z]\\d{12}\\z/\n  DEFAULT_MESSAGE = \"is not a valid driver's license number\"\n\n  def validate_each(record, attribute, value)\n    return if value =~ DL_MN_REGEXP_FORMAT\n\n    record.errors.add(\n      attribute,\n      options[:message] || DEFAULT_MESSAGE\n    )\n  end\nend\n"
  },
  {
    "path": "app/validators/email_validator.rb",
    "content": "# https://guides.rubyonrails.org/active_record_validations.html#custom-validators\nclass EmailValidator < ActiveModel::EachValidator\n  EMAIL_REGEXP_FORMAT = /\\A([^@\\s]+)@((?:[-a-z0-9]+\\.)+[a-z]{2,})\\z/i\n\n  def validate_each(record, attribute, value)\n    return if value =~ EMAIL_REGEXP_FORMAT\n\n    record.errors.add(attribute, options[:message] || 'is not an email')\n  end\nend\n"
  },
  {
    "path": "bin/bundle",
    "content": "#!/usr/bin/env ruby\n# frozen_string_literal: true\n\n#\n# This file was generated by Bundler.\n#\n# The application 'bundle' is installed as part of a gem, and\n# this file is here to facilitate running it.\n#\n\nrequire 'rubygems'\n\nm = Module.new do\n  module_function\n\n  def invoked_as_script?\n    File.expand_path($0) == File.expand_path(__FILE__)\n  end\n\n  def env_var_version\n    ENV['BUNDLER_VERSION']\n  end\n\n  def cli_arg_version\n    return unless invoked_as_script? # don't want to hijack other binstubs\n    return unless 'update'.start_with?(ARGV.first || ' ') # must be running `bundle update`\n\n    bundler_version = nil\n    update_index = nil\n    ARGV.each_with_index do |a, i|\n      bundler_version = a if update_index && update_index.succ == i && a =~ Gem::Version::ANCHORED_VERSION_PATTERN\n      next unless a =~ /\\A--bundler(?:[= ](#{Gem::Version::VERSION_PATTERN}))?\\z/\n\n      bundler_version = Regexp.last_match(1) || '>= 0.a'\n      update_index = i\n    end\n    bundler_version\n  end\n\n  def gemfile\n    gemfile = ENV['BUNDLE_GEMFILE']\n    return gemfile if gemfile && !gemfile.empty?\n\n    File.expand_path('../Gemfile', __dir__)\n  end\n\n  def lockfile\n    lockfile =\n      case File.basename(gemfile)\n      when 'gems.rb' then gemfile.sub(/\\.rb$/, '.locked')\n      else \"#{gemfile}.lock\"\n      end\n    File.expand_path(lockfile)\n  end\n\n  def lockfile_version\n    return unless File.file?(lockfile)\n\n    lockfile_contents = File.read(lockfile)\n    return unless lockfile_contents =~ /\\n\\nBUNDLED WITH\\n\\s{2,}(#{Gem::Version::VERSION_PATTERN})\\n/\n\n    Regexp.last_match(1)\n  end\n\n  def bundler_version\n    @bundler_version ||= env_var_version || cli_arg_version ||\n                         lockfile_version || \"#{Gem::Requirement.default}.a\"\n  end\n\n  def load_bundler!\n    ENV['BUNDLE_GEMFILE'] ||= gemfile\n\n    # must dup string for RG < 1.8 compatibility\n    activate_bundler(bundler_version.dup)\n  end\n\n  def activate_bundler(bundler_version)\n    if Gem::Version.correct?(bundler_version) && Gem::Version.new(bundler_version).release < Gem::Version.new('2.0')\n      bundler_version = '< 2'\n    end\n    gem_error = activation_error_handling do\n      gem 'bundler', bundler_version\n    end\n    return if gem_error.nil?\n\n    require_error = activation_error_handling do\n      require 'bundler/version'\n    end\n    if require_error.nil? && Gem::Requirement.new(bundler_version).satisfied_by?(Gem::Version.new(Bundler::VERSION))\n      return\n    end\n\n    warn \"Activating bundler (#{bundler_version}) failed:\\n#{gem_error.message}\\n\\nTo install the version of bundler this project requires, run `gem install bundler -v '#{bundler_version}'`\"\n    exit 42\n  end\n\n  def activation_error_handling\n    yield\n    nil\n  rescue StandardError, LoadError => e\n    e\n  end\nend\n\nm.load_bundler!\n\nload Gem.bin_path('bundler', 'bundle') if m.invoked_as_script?\n"
  },
  {
    "path": "bin/importmap",
    "content": "#!/usr/bin/env ruby\n\nrequire_relative '../config/application'\nrequire 'importmap/commands'\n"
  },
  {
    "path": "bin/partition_conversion.sh",
    "content": "#!/bin/bash\n\necho \"A script for the test DB\"\nbin/rails db:test:prepare\necho \"Reminder: Set PGSLICE_URL to test DB in .env\"\necho \"Value is:\"\necho \"$PGSLICE_URL\"\n\nbin/rails runner \"PgsliceHelper.new.retire_default_partition(table_name: 'trip_positions', dry_run: false)\"\nbin/rails runner \"PgsliceHelper.new.add_partitions(table_name: 'trip_positions', past: 0, future: 3, dry_run: false)\"\nbin/rails runner \"PgsliceHelper.new.fill(table_name: 'trip_positions', from_date: '2023-03-01')\"\nbin/rails runner \"PgsliceHelper.new.analyze(table_name: 'trip_positions')\"\nbin/rails runner \"PgsliceHelper.new.swap(table_name: 'trip_positions')\"\n"
  },
  {
    "path": "bin/pgslice",
    "content": "#!/usr/bin/env ruby\n# frozen_string_literal: true\n\n#\n# This file was generated by Bundler.\n#\n# The application 'pgslice' is installed as part of a gem, and\n# this file is here to facilitate running it.\n#\n\nrequire 'pathname'\nENV['BUNDLE_GEMFILE'] ||= File.expand_path('../../Gemfile',\n                                           Pathname.new(__FILE__).realpath)\n\nbundle_binstub = File.expand_path('bundle', __dir__)\n\nif File.file?(bundle_binstub)\n  if File.read(bundle_binstub, 300) =~ /This file was generated by Bundler/\n    load(bundle_binstub)\n  else\n    abort(\"Your `bin/bundle` was not generated by Bundler, so this binstub cannot run.\nReplace `bin/bundle` by running `bundle binstubs bundler --force`, then run this command again.\")\n  end\nend\n\nrequire 'rubygems'\nrequire 'bundler/setup'\n\nload Gem.bin_path('pgslice', 'pgslice')\n"
  },
  {
    "path": "bin/rails",
    "content": "#!/usr/bin/env ruby\nAPP_PATH = File.expand_path('../config/application', __dir__)\nrequire_relative '../config/boot'\nrequire 'rails/commands'\n"
  },
  {
    "path": "bin/rails_best_practices",
    "content": "#!/usr/bin/env ruby\n# frozen_string_literal: true\n\n#\n# This file was generated by Bundler.\n#\n# The application 'rails_best_practices' is installed as part of a gem, and\n# this file is here to facilitate running it.\n#\n\nrequire 'pathname'\nENV['BUNDLE_GEMFILE'] ||= File.expand_path('../../Gemfile',\n                                           Pathname.new(__FILE__).realpath)\n\nbundle_binstub = File.expand_path('bundle', __dir__)\n\nif File.file?(bundle_binstub)\n  if File.read(bundle_binstub, 300) =~ /This file was generated by Bundler/\n    load(bundle_binstub)\n  else\n    abort(\"Your `bin/bundle` was not generated by Bundler, so this binstub cannot run.\nReplace `bin/bundle` by running `bundle binstubs bundler --force`, then run this command again.\")\n  end\nend\n\nrequire 'rubygems'\nrequire 'bundler/setup'\n\nload Gem.bin_path('rails_best_practices', 'rails_best_practices')\n"
  },
  {
    "path": "bin/rake",
    "content": "#!/usr/bin/env ruby\nrequire_relative '../config/boot'\nrequire 'rake'\nRake.application.run\n"
  },
  {
    "path": "bin/setup",
    "content": "#!/usr/bin/env ruby\nrequire 'fileutils'\n\n# path to your application root.\nAPP_ROOT = File.expand_path('..', __dir__)\n\ndef system!(*args)\n  system(*args) || abort(\"\\n== Command #{args} failed ==\")\nend\n\nFileUtils.chdir APP_ROOT do\n  # This script is a way to set up or update your development environment automatically.\n  # It is idempotent, so you can run it at any time and get a predictable outcome.\n  # Add necessary setup steps to this file.\n\n  puts '== Installing dependencies =='\n  system! 'gem install bundler --conservative'\n  system('bundle check') || system!('bundle install')\n\n  # Install JavaScript dependencies\n  # system('bin/yarn')\n\n  # puts \"\\n== Copying sample files ==\"\n  # unless File.exist?('config/database.yml')\n  #   FileUtils.cp 'config/database.yml.sample', 'config/database.yml'\n  # end\n\n  puts \"\\n== Preparing database ==\"\n  system! 'bin/rails db:prepare'\n\n  puts \"\\n== Removing old logs and tempfiles ==\"\n  system! 'bin/rails log:clear tmp:clear'\n\n  puts \"\\n== Restarting application server ==\"\n  system! 'bin/rails restart'\nend\n"
  },
  {
    "path": "config/application.rb",
    "content": "require_relative 'boot'\n\n# https://andycroll.com/ruby/turn-off-the-bits-of-rails-you-dont-use/\n# require 'rails/all'\n\nrequire 'rails'\n# Pick the frameworks you want:\nrequire 'active_model/railtie'\nrequire 'active_job/railtie'\nrequire 'active_record/railtie'\n# # require \"active_storage/engine\"\nrequire 'action_controller/railtie'\nrequire 'action_mailer/railtie'\n# # require \"action_mailbox/engine\"\n# # require \"action_text/engine\"\nrequire 'action_view/railtie'\nrequire 'action_cable/engine'\nrequire 'sprockets/railtie'\nrequire 'rails/test_unit/railtie'\n\n# Require the gems listed in Gemfile, including any gems\n# you've limited to :test, :development, or :production.\nBundler.require(*Rails.groups)\n\nmodule Rideshare\n  class Application < Rails::Application\n    # Initialize configuration defaults for originally generated Rails version.\n    config.load_defaults 7.1\n\n    # Settings in config/environments/* take precedence over those specified here.\n    # Application configuration can go into files in config/initializers\n    # -- all .rb files in that directory are automatically loaded after loading\n    # the framework and any gems in your application.\n\n    # https://blog.bigbinary.com/2016/08/29/rails-5-disables-autoloading-after-booting-the-app-in-production.html\n    config.eager_load_paths << Rails.root.join('app/services')\n    config.eager_load_paths << Rails.root.join('lib')\n\n    # Use structure.sql\n    # https://edgeguides.rubyonrails.org/configuring.html#config-active-record-schema-format\n    config.active_record.schema_format = :sql\n\n    # set a timezone. Times are generally stored as\n    # timestamps without a time zone. This application\n    # would need to treat times based on the user's timezone.\n    config.time_zone = 'Central Time (US & Canada)'\n\n    # Enable Query Logging\n    # NOTE: Disable in order to use Prepared Statements\n    # config.active_record.query_log_tags_enabled = true\n\n    # https://www.bigbinary.com/blog/rails-7-adds-setting-for-enumerating-columns-in-select-statements#\n    # config.active_record.enumerate_columns_in_select_statements = true\n\n    # Add '--if-exists' flag to pg_dump\n    # https://github.com/rails/rails/issues/38695#issuecomment-763588402\n    ActiveRecord::Tasks::DatabaseTasks.structure_dump_flags = ['--clean', '--if-exists']\n\n    # Consider limiting the conversion of timestamp without time zone columns to UTC\n    # https://engineering.ezcater.com/youre-not-in-the-zone\n    # ActiveRecord::Base.time_zone_aware_types = [:datetime]\n\n    # Consider timestamps in the local time zone\n    # This is because the app used \"timestamp without time zone\" columns and times are\n    # stored in the local timezone (CST).\n    config.active_record.default_timezone = :local\n  end\nend\n"
  },
  {
    "path": "config/boot.rb",
    "content": "ENV['BUNDLE_GEMFILE'] ||= File.expand_path('../Gemfile', __dir__)\n\nrequire 'bundler/setup' # Set up gems listed in the Gemfile.\n"
  },
  {
    "path": "config/cable.yml",
    "content": "development:\n  adapter: async\n\ntest:\n  adapter: test\n\nproduction:\n  adapter: redis\n  url: <%= ENV.fetch(\"REDIS_URL\") { \"redis://localhost:6379/1\" } %>\n  channel_prefix: rideshare_production\n"
  },
  {
    "path": "config/credentials.yml.enc",
    "content": "itjGwmz6U75xCi1uE8pPIYsLQH/TmelhEw1qDOxAfjyT+F7kKtHQ9kFBFmfmqVu9kcVPLg1ajw7ejk79XodWo+193YdLwpRvj4On5KgPCOfXrGxJleasmqP2lU+Hfwv93CSSGipFFWlwB6kjtvCsqYycnxERqh1PKyoXQUcb9Niely5We+et32LQo7Rb6rkEYKPWcTZp9YNIMLKtNMSObXsJoUVmpafIhtqJ2UC6zpz6RW+7VVpTGoQz++Dc+itByNX0KRSi1/BwKKPfwSqlc2uYtWUQ6VDZWS1lS1lSeSip0YyZKmYgXlUDmZoYBOWaM/PRor7gN9oMKr/J2C+YeNM93kAwUh21RtSJ36q0V7qtjaFsMN7GqQfcf9pwzq2VreM3gSrHVYwhsbwcT/O6vhzhex/KUalNyzWG--eImVIKRRjaAVTrI/--MwJavKULW+yt/C67osImsg=="
  },
  {
    "path": "config/database-multiple.sample.yml",
    "content": "default: &default\n  adapter: postgresql\n  pool: <%= ENV.fetch(\"RAILS_MAX_THREADS\") { 5 } %>\n  variables:\n    statement_timeout: 5000\n\ndevelopment:\n  rideshare:\n    <<: *default\n    database: rideshare_development\n    url: <%= ENV['DATABASE_URL_PRIMARY'] %>\n    schema_search_path: rideshare\n  rideshare_replica:\n    <<: *default\n    database: rideshare_development\n    url: <%= ENV['DATABASE_URL_REPLICA'] %>\n    schema_search_path: rideshare\n    replica: true\n    # default: true; set to false when the replica's schema is managed externally (e.g. physical replication)\n    # https://guides.rubyonrails.org/active_record_multiple_databases.html#connecting-to-databases-without-managing-schema-and-migrations\n    database_tasks: true\n\ntest:\n  <<: *default\n  url: postgresql://postgres:@localhost/rideshare_test\n"
  },
  {
    "path": "config/database-slow-clients.sample.yml",
    "content": "#\n# Configuring Active Record:\n# <https://guides.rubyonrails.org/configuring.html#configuring-active-record>\n#\n# Database Connection Control Functions\n# <https://www.postgresql.org/docs/current/libpq-connect.html>\n#\ndefault: &default\n  adapter: postgresql\n  schema_search_path: rideshare\n  prepared_statements: true # enabled by default\n  advisory_locks: true # enabled by default\n  # Optional (PostgreSQL):\n  # checkout_timeout, read_timeout\n\ntest:\n  <<: *default\n  pool: <%= ENV.fetch(\"RAILS_MAX_THREADS\") { 5 } %>\n  url: postgresql://postgres:@localhost/rideshare_test\n\ndevelopment:\n  <<: *default\n  pool: <%= ENV.fetch(\"RAILS_MAX_THREADS\") { 5 } %>\n  url: <%= ENV['DATABASE_URL'] %>\n  database: rideshare_development\n  variables:\n    # https://www.postgresql.org/docs/current/runtime-config-client.html\n    statement_timeout: 5000 # milliseconds (5s), set at client level\n    idle_in_transaction_session_timeout: 300000 # milliseconds\n    # PostgreSQL params:\n    # idle_timeout\n    # lock_timeout\n    # idle_session_timeout\n\n# class SlowClientModel < ApplicationRecord\n#   self.establish_connection :slow_clients\n# end\n#\n# Put \"allowed\" slow code in SlowClientModel\n# or a class that inherits from it. Slow clients:\n#\n# - Use a smaller, capped pool of database connections\n# - Queries are permitted a higher statement_timeout\n#\nslow_clients:\n  <<: *default\n  pool: 2\n  url: <%= ENV['DATABASE_URL'] %>\n  database: rideshare_development\n  variables:\n    # https://www.postgresql.org/docs/current/runtime-config-client.html\n    statement_timeout: 60000 # milliseconds (60s), set at client level\n    idle_in_transaction_session_timeout: 300000 # milliseconds\n"
  },
  {
    "path": "config/database.yml",
    "content": "#\n# Configuring Active Record:\n# <https://guides.rubyonrails.org/configuring.html#configuring-active-record>\n#\n# Database Connection Control Functions\n# <https://www.postgresql.org/docs/current/libpq-connect.html>\n#\ndefault: &default\n  adapter: postgresql\n  pool: <%= ENV.fetch(\"RAILS_MAX_THREADS\") { 5 } %>\n  schema_search_path: rideshare\n  prepared_statements: true # enabled by default\n  advisory_locks: true # enabled by default\n  # Optional (PostgreSQL):\n  # checkout_timeout, read_timeout\n\ntest:\n  <<: *default\n  url: postgresql://postgres:@localhost/rideshare_test\n\ndevelopment:\n  <<: *default\n  url: <%= ENV['DATABASE_URL'] %>\n  database: rideshare_development\n  variables:\n    # https://www.postgresql.org/docs/current/runtime-config-client.html\n    statement_timeout: 20000 # milliseconds (20s; consider lowering to 5s for OLTP)\n    idle_in_transaction_session_timeout: 300000 # milliseconds\n    # Consider setting all these params:\n    # idle_timeout\n    # lock_timeout\n    # idle_session_timeout\n"
  },
  {
    "path": "config/environment.rb",
    "content": "# Load the Rails application.\nrequire_relative 'application'\n\n# Initialize the Rails application.\nRails.application.initialize!\n"
  },
  {
    "path": "config/environments/development.rb",
    "content": "Rails.application.configure do\n  # Settings specified here will take precedence over those in config/application.rb.\n\n  # In the development environment your application's code is reloaded on\n  # every request. This slows down response time but is perfect for development\n  # since you don't have to restart the web server when you make code changes.\n  config.cache_classes = false\n\n  # Do not eager load code on boot.\n  config.eager_load = false\n\n  # Show full error reports.\n  config.consider_all_requests_local = true\n\n  # Enable/disable caching. By default caching is disabled.\n  # Run rails dev:cache to toggle caching.\n  if Rails.root.join('tmp', 'caching-dev.txt').exist?\n    config.action_controller.perform_caching = true\n    config.action_controller.enable_fragment_cache_logging = true\n\n    config.cache_store = :memory_store\n    config.public_file_server.headers = {\n      'Cache-Control' => \"public, max-age=#{2.days.to_i}\"\n    }\n  else\n    config.action_controller.perform_caching = false\n\n    config.cache_store = :null_store\n  end\n\n  # Store uploaded files on the local file system (see config/storage.yml for options).\n  # config.active_storage.service = :local\n\n  # Don't care if the mailer can't send.\n  config.action_mailer.raise_delivery_errors = false\n\n  config.action_mailer.perform_caching = false\n\n  # Print deprecation notices to the Rails logger.\n  config.active_support.deprecation = :log\n\n  # Raise an error on page load if there are pending migrations.\n  config.active_record.migration_error = :page_load\n\n  # Highlight code that triggered database queries in logs.\n  config.active_record.verbose_query_logs = true\n\n  # Debug mode disables concatenation and preprocessing of assets.\n  # This option may cause significant delays in view rendering with a large\n  # number of complex assets.\n  # config.assets.debug = true\n\n  # Suppress logger output for asset requests.\n  # config.assets.quiet = true\n\n  # Raises error for missing translations.\n  # config.action_view.raise_on_missing_translations = true\n\n  # https://github.com/rails/sprockets-rails/issues/376#issuecomment-287560399\n  logger = ActiveSupport::Logger.new(STDOUT)\n  logger.formatter = config.log_formatter\n  config.logger = ActiveSupport::TaggedLogging.new(logger)\n\n  # config.active_record.database_selector = { delay: 2.seconds }\n  # config.active_record.database_resolver = ActiveRecord::Middleware::DatabaseSelector::Resolver\n  # config.active_record.database_resolver_context = ActiveRecord::Middleware::DatabaseSelector::Resolver::Session\nend\n"
  },
  {
    "path": "config/environments/production.rb",
    "content": "Rails.application.configure do\n  # Settings specified here will take precedence over those in config/application.rb.\n\n  # Code is not reloaded between requests.\n  config.cache_classes = true\n\n  # Eager load code on boot. This eager loads most of Rails and\n  # your application in memory, allowing both threaded web servers\n  # and those relying on copy on write to perform better.\n  # Rake tasks automatically ignore this option for performance.\n  config.eager_load = true\n\n  # Full error reports are disabled and caching is turned on.\n  config.consider_all_requests_local       = false\n  config.action_controller.perform_caching = true\n\n  # Ensures that a master key has been made available in either ENV[\"RAILS_MASTER_KEY\"]\n  # or in config/master.key. This key is used to decrypt credentials (and other encrypted files).\n  # config.require_master_key = true\n\n  # Disable serving static files from the `/public` folder by default since\n  # Apache or NGINX already handles this.\n  config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?\n\n  # Compress CSS using a preprocessor.\n  # config.assets.css_compressor = :sass\n\n  # Do not fallback to assets pipeline if a precompiled asset is missed.\n  config.assets.compile = false\n\n  # Enable serving of images, stylesheets, and JavaScripts from an asset server.\n  # config.action_controller.asset_host = 'http://assets.example.com'\n\n  # Specifies the header that your server uses for sending files.\n  # config.action_dispatch.x_sendfile_header = 'X-Sendfile' # for Apache\n  # config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # for NGINX\n\n  # Store uploaded files on the local file system (see config/storage.yml for options).\n  # config.active_storage.service = :local\n\n  # Mount Action Cable outside main process or domain.\n  # config.action_cable.mount_path = nil\n  # config.action_cable.url = 'wss://example.com/cable'\n  # config.action_cable.allowed_request_origins = [ 'http://example.com', /http:\\/\\/example.*/ ]\n\n  # Force all access to the app over SSL, use Strict-Transport-Security, and use secure cookies.\n  # config.force_ssl = true\n\n  # Use the lowest log level to ensure availability of diagnostic information\n  # when problems arise.\n  config.log_level = :debug\n\n  # Prepend all log lines with the following tags.\n  config.log_tags = [:request_id]\n\n  # Use a different cache store in production.\n  # config.cache_store = :mem_cache_store\n\n  # Use a real queuing backend for Active Job (and separate queues per environment).\n  # config.active_job.queue_adapter     = :resque\n  # config.active_job.queue_name_prefix = \"rideshare_production\"\n\n  config.action_mailer.perform_caching = false\n\n  # Ignore bad email addresses and do not raise email delivery errors.\n  # Set this to true and configure the email server for immediate delivery to raise delivery errors.\n  # config.action_mailer.raise_delivery_errors = false\n\n  # Enable locale fallbacks for I18n (makes lookups for any locale fall back to\n  # the I18n.default_locale when a translation cannot be found).\n  config.i18n.fallbacks = true\n\n  # Send deprecation notices to registered listeners.\n  config.active_support.deprecation = :notify\n\n  # Use default logging formatter so that PID and timestamp are not suppressed.\n  config.log_formatter = ::Logger::Formatter.new\n\n  # Use a different logger for distributed setups.\n  # require 'syslog/logger'\n  # config.logger = ActiveSupport::TaggedLogging.new(Syslog::Logger.new 'app-name')\n\n  if ENV['RAILS_LOG_TO_STDOUT'].present?\n    logger           = ActiveSupport::Logger.new(STDOUT)\n    logger.formatter = config.log_formatter\n    config.logger    = ActiveSupport::TaggedLogging.new(logger)\n  end\n\n  # Do not dump schema after migrations.\n  config.active_record.dump_schema_after_migration = false\n\n  # Inserts middleware to perform automatic connection switching.\n  # The `database_selector` hash is used to pass options to the DatabaseSelector\n  # middleware. The `delay` is used to determine how long to wait after a write\n  # to send a subsequent read to the primary.\n  #\n  # The `database_resolver` class is used by the middleware to determine which\n  # database is appropriate to use based on the time delay.\n  #\n  # The `database_resolver_context` class is used by the middleware to set\n  # timestamps for the last write to the primary. The resolver uses the context\n  # class timestamps to determine how long to wait before reading from the\n  # replica.\n  #\n  # By default Rails will store a last write timestamp in the session. The\n  # DatabaseSelector middleware is designed as such you can define your own\n  # strategy for connection switching and pass that into the middleware through\n  # these configuration options.\n  # config.active_record.database_selector = { delay: 2.seconds }\n  # config.active_record.database_resolver = ActiveRecord::Middleware::DatabaseSelector::Resolver\n  # config.active_record.database_resolver_context = ActiveRecord::Middleware::DatabaseSelector::Resolver::Session\nend\n"
  },
  {
    "path": "config/environments/test.rb",
    "content": "# The test environment is used exclusively to run your application's\n# test suite. You never need to work with it otherwise. Remember that\n# your test database is \"scratch space\" for the test suite and is wiped\n# and recreated between test runs. Don't rely on the data there!\n\nRails.application.configure do\n  # Settings specified here will take precedence over those in config/application.rb.\n  config.cache_classes = false\n\n  # Do not eager load code on boot. This avoids loading your whole application\n  # just for the purpose of running a single test. If you are using a tool that\n  # preloads Rails for running tests, you may have to set it to true.\n  config.eager_load = false\n\n  # Configure public file server for tests with Cache-Control for performance.\n  config.public_file_server.enabled = true\n  config.public_file_server.headers = {\n    'Cache-Control' => \"public, max-age=#{1.hour.to_i}\"\n  }\n\n  # Show full error reports and disable caching.\n  config.consider_all_requests_local       = true\n  config.action_controller.perform_caching = false\n  config.cache_store = :null_store\n\n  # Raise exceptions instead of rendering exception templates.\n  config.action_dispatch.show_exceptions = false\n\n  # Disable request forgery protection in test environment.\n  config.action_controller.allow_forgery_protection = false\n\n  # Store uploaded files on the local file system in a temporary directory.\n  # config.active_storage.service = :test\n\n  config.action_mailer.perform_caching = false\n\n  # Tell Action Mailer not to deliver emails to the real world.\n  # The :test delivery method accumulates sent emails in the\n  # ActionMailer::Base.deliveries array.\n  config.action_mailer.delivery_method = :test\n\n  # Print deprecation notices to the stderr.\n  config.active_support.deprecation = :stderr\n\n  # Raises error for missing translations.\n  # config.action_view.raise_on_missing_translations = true\n\n  # NOTE: For the test database, we don't want to dump after migrating,\n  # especially since the test database is using UNLOGGED tables\n  # which will modify the content of db/structure.sql, adding that\n  # keyword to the dump output\n  config.active_record.dump_schema_after_migration = false\nend\n\n# Rails Guides:\n# https://guides.rubyonrails.org/configuring.html\\\n# activerecord-connectionadapters-postgresqladapter-create-unlogged-tables\nActiveSupport.on_load(:active_record_postgresqladapter) do\n  self.create_unlogged_tables = true\nend\n"
  },
  {
    "path": "config/importmap.rb",
    "content": "# Pin npm packages by running ./bin/importmap\n\npin 'application', preload: true\n"
  },
  {
    "path": "config/initializers/application_controller_renderer.rb",
    "content": "# Be sure to restart your server when you modify this file.\n\n# ActiveSupport::Reloader.to_prepare do\n#   ApplicationController.renderer.defaults.merge!(\n#     http_host: 'example.org',\n#     https: false\n#   )\n# end\n"
  },
  {
    "path": "config/initializers/assets.rb",
    "content": "# Be sure to restart your server when you modify this file.\n\n# Version of your assets, change this if you want to expire all your assets.\nRails.application.config.assets.version = '1.0'\n\n# Add additional assets to the asset load path.\n# Rails.application.config.assets.paths << Emoji.images_path\n# Add Yarn node_modules folder to the asset load path.\n# Rails.application.config.assets.paths << Rails.root.join('node_modules')\n\n# Precompile additional assets.\n# application.js, application.css, and all non-JS/CSS in the app/assets\n# folder are already added.\n# Rails.application.config.assets.precompile += %w( admin.js admin.css )\n"
  },
  {
    "path": "config/initializers/backtrace_silencers.rb",
    "content": "# Be sure to restart your server when you modify this file.\n\n# You can add backtrace silencers for libraries that you're using but don't wish to see in your backtraces.\n# Rails.backtrace_cleaner.add_silencer { |line| line =~ /my_noisy_library/ }\n\n# You can also remove all the silencers if you're trying to debug a problem that might stem from framework code.\n# Rails.backtrace_cleaner.remove_silencers!\n"
  },
  {
    "path": "config/initializers/cookies_serializer.rb",
    "content": "# Be sure to restart your server when you modify this file.\n\n# Specify a serializer for the signed and encrypted cookie jars.\n# Valid options are :json, :marshal, and :hybrid.\nRails.application.config.action_dispatch.cookies_serializer = :json\n"
  },
  {
    "path": "config/initializers/filter_parameter_logging.rb",
    "content": "# Be sure to restart your server when you modify this file.\n\n# Configure sensitive parameters which will be filtered from the log file.\nRails.application.config.filter_parameters += [:password]\n"
  },
  {
    "path": "config/initializers/geocoder.rb",
    "content": "Geocoder.configure(\n  # Geocoding options\n  # timeout: 3,                 # geocoding service timeout (secs)\n  # lookup: :nominatim,         # name of geocoding service (symbol)\n  # ip_lookup: :ipinfo_io,      # name of IP address geocoding service (symbol)\n  # language: :en,              # ISO-639 language code\n  # use_https: false,           # use HTTPS for lookup requests? (if supported)\n  # http_proxy: nil,            # HTTP proxy server (user:pass@host:port)\n  # https_proxy: nil,           # HTTPS proxy server (user:pass@host:port)\n  # api_key: nil,               # API key for geocoding service\n  # cache: nil,                 # cache object (must respond to #[], #[]=, and #del)\n  # cache_prefix: 'geocoder:',  # prefix (string) to use for all cache keys\n\n  # Exceptions that should not be rescued by default\n  # (if you want to implement custom error handling);\n  # supports SocketError and Timeout::Error\n  # always_raise: [],\n\n  # Calculation options\n  # units: :mi,                 # :km for kilometers or :mi for miles\n  # distances: :linear          # :spherical or :linear\n)\n"
  },
  {
    "path": "config/initializers/inflections.rb",
    "content": "# Be sure to restart your server when you modify this file.\n\n# Add new inflection rules using the following format. Inflections\n# are locale specific, and you may define rules for as many different\n# locales as you wish. All of these examples are active by default:\n# ActiveSupport::Inflector.inflections(:en) do |inflect|\n#   inflect.plural /^(ox)$/i, '\\1en'\n#   inflect.singular /^(ox)en/i, '\\1'\n#   inflect.irregular 'person', 'people'\n#   inflect.uncountable %w( fish sheep )\n# end\n\n# These inflection rules are supported but not enabled by default:\n# ActiveSupport::Inflector.inflections(:en) do |inflect|\n#   inflect.acronym 'RESTful'\n# end\n"
  },
  {
    "path": "config/initializers/mime_types.rb",
    "content": "# Be sure to restart your server when you modify this file.\n\n# Add new mime types for use in respond_to blocks:\n# Mime::Type.register \"text/richtext\", :rtf\n"
  },
  {
    "path": "config/initializers/slow_query_subscriber.rb",
    "content": "# Inspiration: https://twitter.com/kukicola/status/1578842934849724416\nclass SlowQuerySubscriber < ActiveSupport::Subscriber\n  SECONDS_THRESHOLD = 1.0\n\n  ActiveSupport::Notifications.subscribe('sql.active_record') do |name, start, finish, _unique_id, data|\n    duration = finish - start\n\n    if duration > SECONDS_THRESHOLD\n      sql = data[:sql]\n      Rails.logger.info \"[#{name}] #{duration} #{sql}\"\n    end\n  end\nend\n"
  },
  {
    "path": "config/initializers/strong_migrations.rb",
    "content": "# Strong Migrations initializer\nStrongMigrations.lock_timeout = 10.seconds\nStrongMigrations.statement_timeout = 1.hour\n"
  },
  {
    "path": "config/initializers/wrap_parameters.rb",
    "content": "# Be sure to restart your server when you modify this file.\n\n# This file contains settings for ActionController::ParamsWrapper which\n# is enabled by default.\n\n# Enable parameter wrapping for JSON. You can disable this by setting :format to an empty array.\nActiveSupport.on_load(:action_controller) do\n  wrap_parameters format: [:json]\nend\n\n# To enable root element in JSON for ActiveRecord objects.\n# ActiveSupport.on_load(:active_record) do\n#   self.include_root_in_json = true\n# end\n"
  },
  {
    "path": "config/locales/en.yml",
    "content": "# Files in the config/locales directory are used for internationalization\n# and are automatically loaded by Rails. If you want to use locales other\n# than English, add the necessary files in this directory.\n#\n# To use the locales, use `I18n.t`:\n#\n#     I18n.t 'hello'\n#\n# In views, this is aliased to just `t`:\n#\n#     <%= t('hello') %>\n#\n# To use a different locale, set it with `I18n.locale`:\n#\n#     I18n.locale = :es\n#\n# This would use the information in config/locales/es.yml.\n#\n# The following keys must be escaped otherwise they will not be retrieved by\n# the default I18n backend:\n#\n# true, false, on, off, yes, no\n#\n# Instead, surround them with single quotes.\n#\n# en:\n#   'true': 'foo'\n#\n# To learn more, please read the Rails Internationalization guide\n# available at https://guides.rubyonrails.org/i18n.html.\n\nen:\n  hello: \"Hello world\"\n"
  },
  {
    "path": "config/puma.rb",
    "content": "# Puma can serve each request in a thread from an internal thread pool.\n# The `threads` method setting takes two numbers: a minimum and maximum.\n# Any libraries that use thread pools should be configured to match\n# the maximum value specified for Puma. Default is set to 5 threads for minimum\n# and maximum; this matches the default thread size of Active Record.\n#\nmax_threads_count = ENV.fetch('RAILS_MAX_THREADS') { 5 }\nmin_threads_count = ENV.fetch('RAILS_MIN_THREADS') { max_threads_count }\nthreads min_threads_count, max_threads_count\n\n# Specifies the `port` that Puma will listen on to receive requests; default is 3000.\n#\nport        ENV.fetch('PORT') { 3000 }\n\n# Specifies the `environment` that Puma will run in.\n#\nenvironment ENV.fetch('RAILS_ENV') { 'development' }\n\n# Specifies the `pidfile` that Puma will use.\npidfile ENV.fetch('PIDFILE') { 'tmp/pids/server.pid' }\n\n# Specifies the number of `workers` to boot in clustered mode.\n# Workers are forked web server processes. If using threads and workers together\n# the concurrency of the application would be max `threads` * `workers`.\n# Workers do not work on JRuby or Windows (both of which do not support\n# processes).\n#\n# workers ENV.fetch(\"WEB_CONCURRENCY\") { 2 }\n\n# Use the `preload_app!` method when specifying a `workers` number.\n# This directive tells Puma to first boot the application and load code\n# before forking the application. This takes advantage of Copy On Write\n# process behavior so workers use less memory.\n#\n# preload_app!\n\n# Allow puma to be restarted by `rails restart` command.\nplugin :tmp_restart\n"
  },
  {
    "path": "config/routes.rb",
    "content": "Rails.application.routes.draw do\n  mount PgHero::Engine, at: 'pghero'\n\n  namespace :api do\n    resources :trips, only: %i[index show] do\n      collection do\n        get :my\n      end\n      member do\n        get :details\n      end\n    end\n    resources :trip_requests, only: %i[create show]\n  end\n\n  post '/auth/login', to: 'authentication#login'\nend\n"
  },
  {
    "path": "config/schedule.rb",
    "content": "# Use this file to easily define all of your cron jobs.\n#\n# It's helpful, but not entirely necessary to understand cron before proceeding.\n# http://en.wikipedia.org/wiki/Cron\n\n# Example:\n#\n# set :output, \"/path/to/my/cron_log.log\"\n#\n# every 2.hours do\n#   command \"/usr/bin/some_great_command\"\n#   runner \"MyModel.some_method\"\n#   rake \"some:great:rake:task\"\n# end\n#\n# every 4.days do\n#   runner \"AnotherModel.prune_old_records\"\n# end\n\n# Learn more: http://github.com/javan/whenever\n#\nevery 15.minutes do\n  runner 'FastSearchResult.refresh'\nend\n\n# NOTE: The command must be a single line, since whenever\n# writes it into a crontab entry.\nevery 1.month do\n  command 'pgslice add_partitions trip_positions --future 6 --url postgres://owner:@localhost/rideshare_development'\nend\n"
  },
  {
    "path": "config.ru",
    "content": "# This file is used by Rack-based servers to start the application.\n\nrequire_relative 'config/environment'\n\nrun Rails.application\n"
  },
  {
    "path": "db/README.md",
    "content": "# Database Setup\n\n## PostgreSQL Version\nMake sure you're running PostgreSQL 16 or newer.\n\nWe recommend Postgres.app, but Homebrew is also popular. If you use Homebrew, make sure you've installed this formula:\n\n<https://formulae.brew.sh/formula/postgresql@16>\n\n## Fake data\nFake data can be generated from Ruby, using the Faker gem, with the following commands.\n\nThese generate around 20K user records, which is enough for most tests. More data will be needed for performance testing.\n```sh\nbin/rails data_generators:generate_all\n\nbin/rails data_generators:drivers\n\nbin/rails data_generators:trips_and_requests\n```\n\nFor more data, see the SQL scripts in: [db/scripts/README.md](db/scripts/README.md)\n\n```sh\nsh db/scripts/bulk_load.sh\nsh db/scripts/bulk_load_extended.sh\n```\n\n## Data Loads Video Demo\nTo see a demonstration of both methods:\n\n<details>\n<summary>🎥 Rideshare - Loading data using a Rake task and Shell Script</summary>\n<div>\n  <a href=\"https://www.loom.com/share/6a1419efae7b4c3aac51e7d95726baf0\">\n    <img style=\"max-width:300px;\" src=\"https://cdn.loom.com/sessions/thumbnails/6a1419efae7b4c3aac51e7d95726baf0-1714505177620-with-play.gif\">\n  </a>\n</div>\n</details>\n\n## Security Goals\nThe *Principle of least privilege*[^prin] is followed by creating explicit `GRANT` commands for the `owner`, `app`, and `app_readonly` users.\n\nThe configuration is based on *My GOTO Postgres Configuration for Web Services*.[^gotocon] Besides minimizing access, another goal is to prevent accidental table drops.\n\nSince the schema `rideshare` is created, the `public` schema is not needed and is removed.\n\nFor `psql` commands, use a `DATABASE_URL` environment variable that's set in your terminal.\n\nThe connection string connects to the Rideshare database, using the `owner` user.\n\nThe value of `DATABASE_URL` is a connection string, with the format `protocol://role:password@host:port/databasename`. An example is checked in to `.env`.\n\n[^prin]: <https://en.wikipedia.org/wiki/Principle_of_least_privilege>\n[^gotocon]: <https://tightlycoupled.io/my-goto-postgres-configuration-for-web-services/>\n\n## Configuring Host Based Authentication (HBA)\nYou may want to configure *Host Based Authentication* (`HBA`)[^pghba].\n\nDo that by editing your `pg_hba.conf` file. Changes in `pg_hba.conf` can be applied by *reloading* PostgreSQL.\n\n## Reloading your PostgreSQL configuration\nFind your config file: `psql -U postgres -c 'SHOW config_file'`\n\nTo reload your configuration, run `pg_ctl reload` in your terminal. If you run into the following message, read on for more information.\n\n```sh\npg_ctl: no database directory specified and environment variable PGDATA unset\nTry \"pg_ctl --help\" for more information.\n```\n\nThis command assumes the `PGDATA` environment variable is set, and points to the data directory for your PostgreSQL installation.\n\nRun `echo $PGDATA` to confirm it's set and see the value. If it's empty, set it by running the following commands in your terminal:\n\n```sh\n# Look up the value\npsql -U postgres -c 'SHOW data_directory'\n\n# Assign the value to PGDATA\nexport PGDATA=\"$(psql -U postgres \\\n  -c 'SHOW data_directory' \\\n  --tuples-only | sed 's/^[ \\t]*//')\"\necho \"Set PGDATA: $PGDATA\"\n```\n\nWhen you've confirmed `PGDATA` is set, run `pg_ctl reload` again. The command should reload the PostgreSQL config, referencing your data directory via `PGDATA`.\n\n[^pghba]: <https://www.postgresql.org/docs/current/auth-pg-hba-conf.html>\n\n## Docker\nReset everything:\n\n```sh\nsh reset_docker_instances.sh\n```\n\nTear down Docker:\n\n```sh\nsh teardown_docker.sh\n```\n\n## Slow Clients\nReplace `config/database.yml` (or just the \"slow clients\" section):\n\n```sh\ncp config/database-slow-clients.sample.yml config/database.yml\n```\n\nWith that in place, create a model:\n\n```ruby\nclass SlowClientModel < ApplicationRecord\n  self.establish_connection :slow_clients\nend\n```\n\nRun query code that takes 5 seconds, and verify that it's canceled in the normal configuration.\n\nThe \"slow client\" configuration allows it, since it has a higher statement timeout configured.\n\n```ruby\nTrip.connection.execute(\"SELECT PG_SLEEP(5)\")\nSlowClientModel.connection.execute(\"SELECT PG_SLEEP(5)\").first\n```\n\n## pg_cron\n[Scheduling maintenance with pg_cron](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL_pg_cron.html)\n\n- The extension is created using the postgres superuser\n- The superuser grants usage privileges to the owner role, for the cron schema\n- Now the owner user can schedule their own jobs, for objects they own\n\n```sh\npsql -U postgres -d rideshare_development\n```\n\n```sql\nCREATE EXTENSION pg_cron;\n\nGRANT USAGE ON SCHEMA cron TO owner;\n```\n\nSchedule a job:\n```sql\nSELECT cron.schedule(\n  'rideshare trips manual vacuum',\n  '10 * * * *',\n  'VACUUM (ANALYZE) rideshare.trips'\n);\n```\n\nView the jobs:\n```sql\nSELECT * FROM cron.job;\n```\n\nView job runs:\n```sql\nSELECT * FROM cron.job_run_details;\n```\n\n![Screenshot of PgHero Scheduled Jobs](https://i.imgur.com/rxRf7Qn.png)\n\n## active-record-doctor\nRun the tool from your terminal:\n\n```sh\nbundle exec rake active_record_doctor\n```\n\n## database_consistency\nRun the tool from your terminal:\n\n```sh\ndatabase_consistency\n```\n\n## rails-pg-extras\nSpecify a custom schema for table_cache_hit:\n\n```sh\nbin/rails runner \\\n  'RailsPgExtras.table_cache_hit(args: { schema: \"rideshare\" })'\n```\n\nOr for version >= 5.3.1, set a schema using an environment variable:\n\n```sh\nexport PG_EXTRAS_SCHEMA=rideshare\n```\n\nFor example, we can search for unused indexes; indexes within the expected schema (`rideshare`) are examined:\n\n```sh\nbin/rails pg_extras:unused_indexes\n```\n```sh\nbin/rails pg_extras:diagnose\n```\n\n## rails_best_practices\n```sh\nbin/rails_best_practices .\n```\n\n## PgBouncer Prepared Statements\n- Run `brew services` and confirm PgBouncer is running on port 6432\n- Set `DATABASE_URL` to be port 6432\n- Disable Query Logs in `config/application.rb` (currently incompatible)\n- Restart PgBouncer to clear out the prepared statements\n\nRun the following script to observe how prepared statements are populated:\n\n```sh\nsh pgbouncer_prepared_statements_check.sh\n```\n\n## pgbench\nWe can use pgbench, with some pre-made SQL queries forming a transaction, to measure the transactions per second (TPS) that the server is capable of.\n\n```sh\nsh db/scripts/benchmark.sh\n```\n"
  },
  {
    "path": "db/alter_default_privileges_public.sql",
    "content": "--\n-- tables, sequences, functions, types, schemas\n--\n\\c rideshare_development\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  REVOKE ALL PRIVILEGES\n  ON TABLES\n  FROM PUBLIC;\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  REVOKE ALL PRIVILEGES\n  ON SEQUENCES\n  FROM PUBLIC;\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  REVOKE ALL PRIVILEGES\n  ON FUNCTIONS\n  FROM PUBLIC;\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  REVOKE ALL PRIVILEGES\n  ON TYPES\n  FROM PUBLIC;\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  REVOKE ALL PRIVILEGES\n  ON SCHEMAS\n  FROM PUBLIC;\n"
  },
  {
    "path": "db/alter_default_privileges_readonly.sql",
    "content": "-- Schema\n-- readonly role\n--\n\\c rideshare_development\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  IN SCHEMA rideshare\n  GRANT SELECT\n  ON TABLES\n  TO readonly_users;\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  IN SCHEMA rideshare\n  GRANT USAGE, SELECT\n  ON SEQUENCES\n  TO readonly_users;\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  IN SCHEMA rideshare\n  GRANT EXECUTE\n  ON FUNCTIONS\n  TO readonly_users;\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  IN SCHEMA rideshare\n  GRANT USAGE\n  ON TYPES\n  TO readonly_users;\n"
  },
  {
    "path": "db/alter_default_privileges_readwrite.sql",
    "content": "-- https://tightlycoupled.io/my-goto-postgres-configuration-for-web-services/\n-- Schema default privileges\n-- readwrite role\n--\n\n\\c rideshare_development\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  IN SCHEMA rideshare\n  GRANT SELECT, INSERT, UPDATE, DELETE\n  ON TABLES\n  TO readwrite_users;\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  IN SCHEMA rideshare\n  GRANT USAGE, SELECT, UPDATE\n  ON SEQUENCES\n  TO readwrite_users;\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  IN SCHEMA rideshare\n  GRANT EXECUTE\n  ON FUNCTIONS\n  TO readwrite_users;\n\nALTER DEFAULT PRIVILEGES\n  FOR ROLE owner\n  IN SCHEMA rideshare\n  GRANT USAGE\n  ON TYPES\n  TO readwrite_users;\n"
  },
  {
    "path": "db/create_database.sql",
    "content": "CREATE DATABASE rideshare_development\nWITH OWNER owner\nENCODING UTF8;\n-- LC_COLLATE 'en_US.UTF-8'\n-- LC_CTYPE 'en_US.UTF-8';\n"
  },
  {
    "path": "db/create_grants_database.sql",
    "content": "\\c rideshare_development\n\nGRANT CONNECT ON DATABASE rideshare_development TO readwrite_users;\nGRANT TEMPORARY ON DATABASE rideshare_development TO readwrite_users;\n\nGRANT CONNECT ON DATABASE rideshare_development TO readonly_users;\n\nGRANT CONNECT ON DATABASE rideshare_development TO app_readonly;\n"
  },
  {
    "path": "db/create_grants_schema.sql",
    "content": "\\c rideshare_development\n\nGRANT USAGE ON SCHEMA rideshare TO readwrite_users;\nGRANT USAGE ON SCHEMA rideshare TO readonly_users;\n\n-- Not needed, but being explicit helps with \\dn+\nGRANT CREATE, USAGE ON SCHEMA rideshare TO owner;\n\n\n-- Grants for app_readonly\nGRANT USAGE ON SCHEMA rideshare TO app_readonly;\nGRANT SELECT ON ALL TABLES IN SCHEMA rideshare TO app_readonly;\nGRANT USAGE ON ALL SEQUENCES IN SCHEMA rideshare TO app_readonly;\nGRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA rideshare TO app_readonly;\n\n-- Use pg_read_all_data instead of using Default Privileges\nGRANT pg_read_all_data TO app_readonly;\n\n-- Needed for: SHOW data_directory;\n-- export PGDATA=\"$(psql $DATABASE_URL -c 'SHOW data_directory' --tuples-only)\"\nGRANT pg_read_all_settings TO owner;\nGRANT pg_read_all_data TO owner;\nGRANT pg_read_all_stats TO owner;\n"
  },
  {
    "path": "db/create_role_app_readonly.sql",
    "content": "-- A login role\n-- https://www.crunchydata.com/blog/creating-a-read-only-postgres-user\nCREATE ROLE app_readonly\n  LOGIN\n  ENCRYPTED PASSWORD :'password_to_save'\n  CONNECTION LIMIT 3;\n"
  },
  {
    "path": "db/create_role_app_user.sql",
    "content": "-- https://tightlycoupled.io/my-goto-postgres-configuration-for-web-services/\n--\nCREATE ROLE app WITH\n  LOGIN\n  ENCRYPTED PASSWORD :'password_to_save' -- https://stackoverflow.com/a/72985243/126688\n  CONNECTION LIMIT 90 -- because of postgres default of 100\n  IN ROLE readwrite_users;\n\nALTER ROLE app SET statement_timeout = 1000;\nALTER ROLE app SET lock_timeout = 750;\n\n-- v9.6+\nALTER ROLE app SET idle_in_transaction_session_timeout = 1000;\n\nALTER ROLE app SET search_path = rideshare;\n"
  },
  {
    "path": "db/create_role_owner.sql",
    "content": "-- https://tightlycoupled.io/my-goto-postgres-configuration-for-web-services/\nCREATE ROLE owner\n  LOGIN\n  ENCRYPTED PASSWORD :'password_to_save' -- https://stackoverflow.com/a/72985243/126688\n  CONNECTION LIMIT 10;\n\nALTER ROLE owner SET statement_timeout = 20000;\nALTER ROLE owner SET lock_timeout = 3000;\n"
  },
  {
    "path": "db/create_role_readonly_users.sql",
    "content": "CREATE ROLE readonly_users NOLOGIN;\n"
  },
  {
    "path": "db/create_role_readwrite_users.sql",
    "content": "CREATE ROLE readwrite_users NOLOGIN;\n"
  },
  {
    "path": "db/create_schema.sql",
    "content": "\\c rideshare_development\n\nSET ROLE owner;\nCREATE SCHEMA rideshare;\nRESET ROLE;\n\n-- set up owner earlier:\n-- https://tightlycoupled.io/my-goto-postgres-configuration-for-web-services/\nALTER ROLE owner SET search_path TO rideshare;\n\nSET search_path TO rideshare;\n"
  },
  {
    "path": "db/env_vars_sample.sh",
    "content": "# Replace postgres/postgres with \"owner\" or \"app\" credentials\n# Use the password created at provision time\nexport DATABASE_URL_PRIMARY=\"postgres://postgres:postgres@localhost:54321/rideshare_development\"\n\nexport DATABASE_URL_REPLICA=\"postgres://postgres:postgres@localhost:54322/rideshare_development\"\n"
  },
  {
    "path": "db/functions/scrub_email_v01.sql",
    "content": "CREATE OR REPLACE FUNCTION scrub_email(email_address varchar(255)) RETURNS varchar(255) AS $$\nBEGIN\nRETURN\n  -- Replace the part of the email address before the '@'\n  -- with random MD5 text of the same length, with a minimum\n  -- of 5 characters to avoid collisions on very short addresses:\n  -- greatest(length + 1, 6), since substr starts at position 0.\n  CONCAT(\n    substr(\n      md5(random()::text),\n      0,\n      greatest(length(split_part(email_address, '@', 1)) + 1, 6)\n    ),\n    '@',\n    split_part(email_address, '@', 2)\n  );\nEND;\n$$ LANGUAGE plpgsql;\n"
  },
  {
    "path": "db/functions/scrub_email_v02.sql",
    "content": "-- replace email_address with random text that is the same\n-- length as the unique portion of an email address\n-- before the \"@\" symbol.\n-- Make the minimum length 5 characters to avoid\n-- MD5 text generation collisions\nCREATE OR REPLACE FUNCTION scrub_email(email_address varchar(255)) RETURNS varchar(255) AS $$\nSELECT\nCONCAT(\n  SUBSTR(\n    MD5(RANDOM()::text),\n    0,\n    GREATEST(LENGTH(SPLIT_PART(email_address, '@', 1)) + 1, 6)\n  ),\n  '@',\n  SPLIT_PART(email_address, '@', 2)\n);\n$$ LANGUAGE SQL;\n"
  },
  {
    "path": "db/functions/scrub_text_v01.sql",
    "content": "CREATE OR REPLACE FUNCTION scrub_text(text varchar(255)) RETURNS varchar(255) AS $$\nBEGIN\nRETURN\n  -- replace from position 0, to max(length or 6)\n  substr(\n    md5(random()::text),\n    0,\n    greatest(length(text) + 1, 6)\n  );\nEND;\n$$ LANGUAGE plpgsql;\n"
  },
  {
    "path": "db/functions/scrub_text_v02.sql",
    "content": "CREATE OR REPLACE FUNCTION scrub_text(input varchar(255)) RETURNS varchar(255) AS $$\nSELECT\n-- replace from position 0, to max(length or 6)\nSUBSTR(\n  MD5(RANDOM()::text),\n  0,\n  GREATEST(LENGTH(input) + 1, 6)\n);\n$$ LANGUAGE SQL;\n"
  },
  {
    "path": "db/migrate/20191107212726_create_users.rb",
    "content": "class CreateUsers < ActiveRecord::Migration[6.0]\n  def change\n    # index: Rails adds PK index\n    create_table :users do |t|\n      t.string :first_name, null: false # index: no\n      t.string :last_name, null: false # index: no\n      t.string :email, null: false, index: { unique: true } # index: unique\n      t.string :type, null: false # index: maybe in future, partitioning\n\n      t.timestamps # nullability: Rails adds null: false\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20191108221519_create_locations.rb",
    "content": "class CreateLocations < ActiveRecord::Migration[6.0]\n  def change\n    # index: Rails adds PK index\n    create_table :locations do |t|\n      t.string :address, null: false, index: true # store the string form of the address [See below], index: yes, search\n      t.decimal :latitude, precision: 15, scale: 10, null: false, index: true # index: yes, search\n      t.decimal :longitude, precision: 15, scale: 10, null: false, index: true # index: yes, search\n\n      t.timestamps # Nullability: Rails adds null: false\n    end\n  end\nend\n\n# NOTE: We could also make separate fields for house number, street address, city, state etc.\n# This is a simplified version\n#\n# NOTE: We expect to search on address text, or on latitude and longitude\n"
  },
  {
    "path": "db/migrate/20191111151637_create_trip_requests.rb",
    "content": "class CreateTripRequests < ActiveRecord::Migration[6.0]\n  def change\n    # Indexes: Rails adds PK index\n    # Nullability: no nulls\n    create_table :trip_requests do |t|\n      t.integer :rider_id, index: true, null: false # index: FK\n      t.integer :start_location_id, index: true, null: false # index: FK\n      t.integer :end_location_id, index: true, null: false # index: FK\n\n      t.timestamps # Rails adds null: false\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20191112165848_create_trips.rb",
    "content": "class CreateTrips < ActiveRecord::Migration[6.0]\n  def change\n    # Indexes: Rails adds PK index\n    create_table :trips do |t|\n      t.integer :trip_request_id, index: true, null: false # index: FK\n      t.integer :driver_id, index: true, null: false # index: FK\n      t.timestamp :completed_at # nullable\n      t.integer :rating, index: true # index: aggregate queries\n\n      t.timestamps # Rails adds null: false\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20191121175429_install_blazer.rb",
    "content": "class InstallBlazer < ActiveRecord::Migration[6.0]\n  def change\n    create_table :blazer_queries do |t|\n      t.references :creator\n      t.string :name\n      t.text :description\n      t.text :statement\n      t.string :data_source\n      t.timestamps null: false\n    end\n\n    create_table :blazer_audits do |t|\n      t.references :user\n      t.references :query\n      t.text :statement\n      t.string :data_source\n      t.timestamp :created_at\n    end\n\n    create_table :blazer_dashboards do |t|\n      t.references :creator\n      t.text :name\n      t.timestamps null: false\n    end\n\n    create_table :blazer_dashboard_queries do |t|\n      t.references :dashboard\n      t.references :query\n      t.integer :position\n      t.timestamps null: false\n    end\n\n    create_table :blazer_checks do |t|\n      t.references :creator\n      t.references :query\n      t.string :state\n      t.string :schedule\n      t.text :emails\n      t.text :slack_channels\n      t.string :check_type\n      t.text :message\n      t.timestamp :last_run_at\n      t.timestamps null: false\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20191203212055_add_foreign_key_constraints.rb",
    "content": "class AddForeignKeyConstraints < ActiveRecord::Migration[6.0]\n  def change\n    # https://guides.rubyonrails.org/active_record_migrations.html#foreign-keys\n    #\n    # Strong migrations provides a warning:\n    #\n    # === Dangerous operation detected #strong_migrations ===\n    # New foreign keys are validated by default. This acquires an AccessExclusiveLock,\n    # which is expensive on large tables. Instead, validate it in a separate migration\n    # with a more agreeable RowShareLock.\n    #\n    # We de-couple the introduction of the FKs from their validation.\n\n    add_foreign_key :trip_requests, :locations, column: :start_location_id, validate: false\n    add_foreign_key :trip_requests, :locations, column: :end_location_id, validate: false\n\n    # Because of STI, we want rider_id to be a FK to users.id\n    add_foreign_key :trip_requests, :users, column: :rider_id, primary_key: :id, validate: false\n\n    add_foreign_key :trips, :trip_requests, validate: false\n    add_foreign_key :trips, :users, column: :driver_id, primary_key: :id, validate: false\n  end\nend\n"
  },
  {
    "path": "db/migrate/20191203213103_validate_foreign_key_constraints.rb",
    "content": "class ValidateForeignKeyConstraints < ActiveRecord::Migration[6.0]\n  def change\n    # https://github.com/ankane/strong_migrations#good-5\n    validate_foreign_key :trip_requests, :locations, column: :start_location_id\n    validate_foreign_key :trip_requests, :locations, column: :end_location_id\n\n    validate_foreign_key :trip_requests, :users, column: :rider_id, primary_key: :id\n\n    validate_foreign_key :trips, :trip_requests\n    validate_foreign_key :trips, :users, column: :driver_id, primary_key: :id\n  end\nend\n"
  },
  {
    "path": "db/migrate/20200603150442_add_column_users_password_digest.rb",
    "content": "class AddColumnUsersPasswordDigest < ActiveRecord::Migration[6.0]\n  def change\n    add_column :users, :password_digest, :string\n  end\nend\n"
  },
  {
    "path": "db/migrate/20220711010541_add_db_comments_to_users.rb",
    "content": "class AddDbCommentsToUsers < ActiveRecord::Migration[7.0]\n  def change\n    comment = 'sensitive_fields|first_name:scrub_text,last_name:scrub_text,email:scrub_email'\n    change_table_comment :users, from: nil, to: comment\n  end\nend\n"
  },
  {
    "path": "db/migrate/20220711015454_create_function_scrub_email.rb",
    "content": "class CreateFunctionScrubEmail < ActiveRecord::Migration[7.0]\n  def change\n    create_function :scrub_email\n  end\nend\n"
  },
  {
    "path": "db/migrate/20220711015524_create_function_scrub_text.rb",
    "content": "class CreateFunctionScrubText < ActiveRecord::Migration[7.0]\n  def change\n    create_function :scrub_text\n  end\nend\n"
  },
  {
    "path": "db/migrate/20220716020213_add_index_users_last_name.rb",
    "content": "class AddIndexUsersLastName < ActiveRecord::Migration[7.0]\n  disable_ddl_transaction!\n\n  def change\n    add_index :users, :last_name, algorithm: :concurrently\n  end\nend\n"
  },
  {
    "path": "db/migrate/20220729014635_create_vehicle_reservations.rb",
    "content": "class CreateVehicleReservations < ActiveRecord::Migration[7.0]\n  # https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_timestamp_.28without_time_zone.29\n  # https://discuss.rubyonrails.org/t/postgres-timestampz-by-default-in-rails-6-2/76537\n  #\n  # db/schema.rb does not seem to capture the timestamptz column type\n  # https://blog.appsignal.com/2020/01/15/the-pros-and-cons-of-using-structure-sql-in-your-ruby-on-rails-application.html\n  #\n  def change\n    create_table :vehicle_reservations do |t|\n      t.integer :vehicle_id, null: false, index: true\n      t.integer :trip_request_id, null: false\n      t.boolean :canceled, null: false, default: false\n      t.timestamptz :starts_at, null: false\n      t.timestamptz :ends_at, null: false\n\n      t.timestamps\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20220729020430_create_vehicles.rb",
    "content": "class CreateVehicles < ActiveRecord::Migration[7.0]\n  def change\n    create_table :vehicles do |t|\n      t.string :name, null: false, index: { unique: true }\n\n      t.timestamps\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20220801140121_add_exclusion_constraint_vehicle_registrations.rb",
    "content": "class AddExclusionConstraintVehicleRegistrations < ActiveRecord::Migration[7.0]\n  def change\n    # NOTE: Depends on btree_gist extension being created in scripts/db_setup.sh by superuser\n    #\n    # Prevent overlapping reservations for\n    # the same vehicle\n    #\n    # - vehicle_id is the vehicle being reserved\n    # - starts_at is the start time of the reservation\n    # - ends_at is the end time of the reservation\n    # - a reservation is associated with a trip_request_id\n    # - a reservation may be canceled\n    safety_assured do\n      execute <<-SQL\n      ALTER TABLE vehicle_reservations ADD CONSTRAINT non_overlapping_vehicle_registration\n      EXCLUDE USING gist (\n        int4range(vehicle_id, vehicle_id, '[]') WITH =,\n        tstzrange(starts_at, ends_at) WITH &&\n      )\n      WHERE (not canceled)\n\n      SQL\n    end\n\n    # Error: data type integer has no default operator class for access method \"gist\"\n    # #=> Needed to enable the extension\n    #\n    # Error: PG::InvalidObjectDefinition: ERROR:  functions in index expression must be marked IMMUTABLE\n    # #=> Changed from tstzrange operator to tsrange operator, starts_at, ends_at are timestamp columns\n    #\n    # https://www.cybertec-postgresql.com/en/postgresql-exclude-beyond-unique/\n  end\nend\n"
  },
  {
    "path": "db/migrate/20220814175213_add_trips_count_to_users.rb",
    "content": "class AddTripsCountToUsers < ActiveRecord::Migration[7.0]\n  def change\n    add_column :users, :trips_count, :integer\n  end\nend\n"
  },
  {
    "path": "db/migrate/20220916171314_create_search_results.rb",
    "content": "class CreateSearchResults < ActiveRecord::Migration[7.0]\n  def change\n    create_view :search_results\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221007184855_create_fast_search_results.rb",
    "content": "class CreateFastSearchResults < ActiveRecord::Migration[7.0]\n  def change\n    create_view :fast_search_results, materialized: true\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221108172933_add_status_column_to_vehicles.rb",
    "content": "class AddStatusColumnToVehicles < ActiveRecord::Migration[7.0]\n  def change\n    add_column :vehicles, :status, :string,\n               null: false,\n               default: VehicleStatus::DRAFT\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221108175321_remove_status_column_from_vehicles.rb",
    "content": "class RemoveStatusColumnFromVehicles < ActiveRecord::Migration[7.0]\n  def change\n    # removing this to replace it with a DB enum\n    # NOTE: if this was in production, do not immediately\n    # drop this column, but create a new one to begin using\n    # migrate to, and then retire the old column\n    safety_assured do\n      remove_column :vehicles, :status\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221108175619_add_status_column_db_enum_type_to_vehicles.rb",
    "content": "class AddStatusColumnDbEnumTypeToVehicles < ActiveRecord::Migration[7.0]\n  def change\n    create_enum :vehicle_status, [\n      VehicleStatus::DRAFT,\n      VehicleStatus::PUBLISHED\n    ]\n\n    add_column :vehicles, :status, :enum,\n               enum_type: :vehicle_status,\n               default: VehicleStatus::DRAFT,\n               null: false\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221110020532_add_drivers_license_number_to_users.rb",
    "content": "class AddDriversLicenseNumberToUsers < ActiveRecord::Migration[7.0]\n  def change\n    add_column :users, :drivers_license_number, :string, limit: 100\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221111212740_add_trip_rating_check_constraint.rb",
    "content": "class AddTripRatingCheckConstraint < ActiveRecord::Migration[7.0]\n  def change\n    add_check_constraint :trips,\n                         'rating IS NULL OR (rating >= 1 AND rating <= 5)',\n                         name: 'rating_check',\n                         validate: false\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221111213918_validate_add_trip_rating_check_constraint.rb",
    "content": "class ValidateAddTripRatingCheckConstraint < ActiveRecord::Migration[7.0]\n  def change\n    validate_check_constraint :trips, name: 'rating_check'\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221219164626_add_unique_address_to_locations.rb",
    "content": "class AddUniqueAddressToLocations < ActiveRecord::Migration[7.1]\n  disable_ddl_transaction!\n\n  def change\n    remove_index :locations, :address\n    add_index :locations, :address, unique: true, algorithm: :concurrently\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221220201836_enable_extension_pg_stat_statements.rb",
    "content": "class EnableExtensionPgStatStatements < ActiveRecord::Migration[7.1]\n  # PGSS = 'pg_stat_statements'\n  #\n  # def change\n  #   # prereq: added to shared_preload_libraries='pg_stat_statements'\n  #   enable_extension(PGSS) unless extension_enabled?(PGSS)\n  # end\n\n  # Replaced by:\n  # sh scripts/setup_db.sh\n  #\n  # Extension should be enabled by superuser\nend\n"
  },
  {
    "path": "db/migrate/20221221052616_change_column_trips_trip_request_id.rb",
    "content": "class ChangeColumnTripsTripRequestId < ActiveRecord::Migration[7.1]\n  # Purpose: changing int->bigint\n  # for FK column trip_requests.trip_id\n  # bundle exec rake active_record_doctor\n  #\n  def change\n    # don't do this in prod\n    # https://github.com/ankane/strong_migrations#changing-the-type-of-a-column\n    safety_assured do\n      # not in prod, so just performing it\n      change_column :trips, :trip_request_id, :bigint\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221223161403_create_trip_positions.rb",
    "content": "class CreateTripPositions < ActiveRecord::Migration[7.1]\n  def change\n    create_table :trip_positions do |t|\n      t.point :position\n      t.bigint :trip_id, null: false\n\n      t.timestamps\n    end\n\n    # Skipping FK for now since a lot of data will be inserted,\n    # preferring faster inserts. `trip_id` would also likely\n    # be indexed.\n    #\n    # new table, so skipping safety checks\n    # safety_assured do\n    #   add_foreign_key :trip_positions, :trips\n    # end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221230200725_add_unique_constraint_users_email.rb",
    "content": "class AddUniqueConstraintUsersEmail < ActiveRecord::Migration[7.1]\n  def change\n    # Potentially unsafe in production, but ok\n    # to add here (only used locally)\n\n    # remove former index that does not support\n    # unique constraint\n    remove_index(:users, :email) if index_exists?(:users, :email)\n\n    safety_assured do\n      add_index :users, [:email], unique: true\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20221230203627_fix_canceled_column_default.rb",
    "content": "class FixCanceledColumnDefault < ActiveRecord::Migration[7.1]\n  def change\n    # by default, reservations should be canceled=false\n    change_column_default :vehicle_reservations, :canceled, false\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230125003531_add_searchable_full_name_to_users.rb",
    "content": "class AddSearchableFullNameToUsers < ActiveRecord::Migration[7.1]\n  def change\n    safety_assured do # executing in non-prod\n      execute <<-SQL\n        ALTER TABLE users\n        ADD COLUMN searchable_full_name TSVECTOR GENERATED ALWAYS AS (\n          SETWEIGHT(TO_TSVECTOR('english', COALESCE(first_name, '')), 'A') ||\n          SETWEIGHT(TO_TSVECTOR('english', COALESCE(last_name,'')), 'B')\n        ) STORED;\n      SQL\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230125003946_add_index_searchable_full_name_to_users.rb",
    "content": "class AddIndexSearchableFullNameToUsers < ActiveRecord::Migration[7.1]\n  disable_ddl_transaction!\n\n  def change\n    add_index :users, :searchable_full_name,\n              using: :gin, # GIN index\n              algorithm: :concurrently\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230126025656_remove_blazer_from_rideshare.rb",
    "content": "class RemoveBlazerFromRideshare < ActiveRecord::Migration[7.1]\n  def change\n    # No longer using Blazer\n    drop_table(:blazer_queries)\n    drop_table(:blazer_audits)\n    drop_table(:blazer_dashboards)\n    drop_table(:blazer_dashboard_queries)\n    drop_table(:blazer_checks)\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230314204931_create_trip_positions_partitioned_intermediate_table.rb",
    "content": "class CreateTripPositionsPartitionedIntermediateTable < ActiveRecord::Migration[7.1]\n  def change\n    safety_assured do # skipping Strong Migrations safeguard\n      execute <<-SQL.squish\n      BEGIN;\n\n      CREATE TABLE trip_positions_intermediate (\n        LIKE trip_positions\n        INCLUDING DEFAULTS\n        INCLUDING CONSTRAINTS\n        INCLUDING STORAGE\n        INCLUDING COMMENTS\n      ) PARTITION BY RANGE (\"created_at\");\n\n      COMMENT ON TABLE trip_positions_intermediate\n      IS 'column:created_at,period:month,cast:date,version:3';\n\n      COMMIT;\n      SQL\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230314210022_add_trip_positions_intermediate_default_partition.rb",
    "content": "class AddTripPositionsIntermediateDefaultPartition < ActiveRecord::Migration[7.1]\n  def change\n    safety_assured do\n      execute <<-SQL.squish\n      CREATE TABLE \"trip_positions_intermediate_default\"\n      PARTITION OF \"trip_positions_intermediate\"\n      DEFAULT;\n\n      ALTER TABLE \"trip_positions_intermediate_default\" ADD PRIMARY KEY (\"id\");\n      SQL\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230619213546_add_locations_city_state.rb",
    "content": "class AddLocationsCityState < ActiveRecord::Migration[7.1]\n  def change\n    add_column :locations, :city, :string\n    add_column :locations, :state, 'character(2)'\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230620030038_remove_unused_indexes.rb",
    "content": "class RemoveUnusedIndexes < ActiveRecord::Migration[7.1]\n  disable_ddl_transaction!\n\n  def change\n    remove_index :locations, :latitude, name: 'index_locations_on_latitude',\n                                        algorithm: :concurrently\n\n    remove_index :locations, :longitude, name: 'index_locations_on_longitude',\n                                         algorithm: :concurrently\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230625151410_add_foreign_keys.rb",
    "content": "class AddForeignKeys < ActiveRecord::Migration[7.1]\n  def change\n    safety_assured do\n      add_foreign_key :trip_positions, :trips\n\n      add_foreign_key :vehicle_reservations, :vehicles\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230711015123_add_fast_count_gem.rb",
    "content": "class AddFastCountGem < ActiveRecord::Migration[7.1]\n  def change\n    FastCount.install\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230713150550_update_function_scrub_email_to_version_2.rb",
    "content": "class UpdateFunctionScrubEmailToVersion2 < ActiveRecord::Migration[7.1]\n  def change\n    update_function :scrub_email, version: 2, revert_to_version: 1\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230713150710_update_function_scrub_text_to_version_2.rb",
    "content": "class UpdateFunctionScrubTextToVersion2 < ActiveRecord::Migration[7.1]\n  def change\n    update_function :scrub_text, version: 2, revert_to_version: 1\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230714013609_trips_check_constraints.rb",
    "content": "class TripsCheckConstraints < ActiveRecord::Migration[7.1]\n  def change\n    safety_assured do\n      # Add it back with the NULL check, which is unnecessary\n      remove_check_constraint :trips, name: 'rating_check'\n\n      add_check_constraint :trips,\n                           'rating >= 1 AND rating <= 5',\n                           name: 'rating_check'\n\n      add_check_constraint :trips,\n                           'completed_at > created_at',\n                           validate: false # Some existing data in pre-made dump violates this\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230716174139_add_foreign_key_column_vehicle_reservations.rb",
    "content": "class AddForeignKeyColumnVehicleReservations < ActiveRecord::Migration[7.1]\n  def change\n    safety_assured do\n      add_foreign_key :vehicle_reservations, :trip_requests\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230726020548_add_not_null_trip_positions_position.rb",
    "content": "class AddNotNullTripPositionsPosition < ActiveRecord::Migration[7.1]\n  def change\n    # Not on a live system\n    safety_assured do\n      change_column_null :trip_positions, :position, false\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230925150207_add_position_to_locations.rb",
    "content": "class AddPositionToLocations < ActiveRecord::Migration[7.1]\n  def change\n    add_column :locations, :position, :point, null: false\n  end\nend\n"
  },
  {
    "path": "db/migrate/20230925150831_drop_locations_latitude_longitude.rb",
    "content": "class DropLocationsLatitudeLongitude < ActiveRecord::Migration[7.1]\n  def change\n    # migrated these to a single point type column=>\"position\"\n    safety_assured do\n      remove_column :locations, :latitude\n      remove_column :locations, :longitude\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20231018153441_update_fast_search_results_to_version_2.rb",
    "content": "class UpdateFastSearchResultsToVersion2 < ActiveRecord::Migration[7.1]\n  def change\n    update_view :fast_search_results,\n                version: 2,\n                revert_to_version: 1,\n                materialized: true\n  end\nend\n"
  },
  {
    "path": "db/migrate/20231018153712_add_unique_index_fast_search_results.rb",
    "content": "class AddUniqueIndexFastSearchResults < ActiveRecord::Migration[7.1]\n  disable_ddl_transaction!\n\n  def change\n    add_index :fast_search_results, :driver_id,\n              unique: true,\n              algorithm: :concurrently\n  end\nend\n"
  },
  {
    "path": "db/migrate/20231208050516_drop_column_searchable_full_name.rb",
    "content": "class DropColumnSearchableFullName < ActiveRecord::Migration[7.1]\n  def change\n    # Add this migration back in order to use:\n    # `searchable_full_name` in the User model:\n    # - concatenates first_name and last_name\n    # - Configures it with pg_search\n    # - Index added for this column\n    # db/migrate/20230125003531_add_searchable_full_name_to_users.rb\n\n    safety_assured do\n      remove_column :users, :searchable_full_name\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20231213045957_add_constraints_locations_state.rb",
    "content": "class AddConstraintsLocationsState < ActiveRecord::Migration[7.1]\n  def change\n    # I've verified all the locations have a 2-char state\n    # This opts out of Strong Migrations checks\n    safety_assured do\n      change_column_null(:locations, :state, false)\n    end\n\n    # Opt-out of Strong Migrations checks\n    safety_assured do\n      add_check_constraint :locations,\n                           'LENGTH(state) = 2',\n                           name: 'state_length_check',\n                           validate: true\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20231218215836_remove_trip_positions_intermediate.rb",
    "content": "class RemoveTripPositionsIntermediate < ActiveRecord::Migration[7.1]\n  def change\n    safety_assured do\n      drop_table :trip_positions_intermediate\n    end\n  end\nend\n"
  },
  {
    "path": "db/migrate/20231220043547_install_fast_count.rb",
    "content": "class InstallFastCount < ActiveRecord::Migration[7.1]\n  def change\n    # We are upgrading the gem, so we want to replace the current fast_count function\n    safety_assured do\n      execute('DROP FUNCTION IF EXISTS fast_count')\n    end\n\n    FastCount.install\n  end\nend\n"
  },
  {
    "path": "db/pgbouncer_prepared_statements_check.sh",
    "content": "#!/bin/bash\n#\n# Disable Query Logs if they're enabled\n#\n# Configure DATABASE_URL with password\n# (can't read from ~/.pgpass), set port 6432\n#\n# Overwrite DATABASE_URL to use PgBouncer port\nconn=\"postgres://owner:\"\nconn+=\"@localhost:6432/rideshare_development\"\nexport DATABASE_URL=\"${conn}\"\n\n# Confirm prepared statements are initially empty\necho \"List Prepared Statements results (empty to start):\"\nbin/rails runner \"puts ActiveRecord::Base.connection.\n  execute('SELECT * FROM pg_prepared_statements').values\"\n\necho \"Run a query to populate prepared statements:\"\nbin/rails runner \"Trip.first\"\n\n# Check again\necho \"List Prepared Statements results again:\"\nbin/rails runner \"puts ActiveRecord::Base.connection.\n  execute('SELECT * FROM pg_prepared_statements').values\"\n"
  },
  {
    "path": "db/reset.sh",
    "content": "sh db/teardown.sh\n\nsh db/setup.sh\n"
  },
  {
    "path": "db/revoke_drop_public_schema.sql",
    "content": "\\c rideshare_development\nREVOKE ALL ON DATABASE rideshare_development FROM PUBLIC;\nDROP SCHEMA public;\n"
  },
  {
    "path": "db/scripts/README.md",
    "content": "# DB Scripts\n\nRun all scripts from the `db` directory.\n\nFrom the Rideshare root, `cd` into `db`.\n\n## Bulk Load\nCreate `10_000_000` records, mix of Drivers and Riders, in `rideshare.users` using SQL\n\nInspiration: <https://vnegrisolo.github.io/postgresql/generate-fake-data-using-sql>\n\n```sh\nsh scripts/bulk_load.sh\n```\n\n## pgbench\n```sh\nsh scripts/benchmark.sh\n```\n\n## List table comments\n```sh\nsh scripts/list_table_comments.sh\n```\n\n## Simulate bloat\n```sh\nsh scripts/simulate_bloat.sh\n```\n"
  },
  {
    "path": "db/scripts/benchmark.sh",
    "content": "#!/bin/bash\n#\n# https://access.crunchydata.com/documentation/postgresql11/11.5/pgbench.html\n#\n# Tested on 16.0\n\necho \"Running pgbench\"\npgbench \\\n  --username owner \\\n  --protocol prepared \\\n  --time 10 \\\n  --jobs 2 \\\n  --client 2 \\\n  --no-vacuum \\\n  --file scripts/queries.sql \\\n  --report-per-command \\\n  rideshare_development\n"
  },
  {
    "path": "db/scripts/bulk_load.sh",
    "content": "#!/bin/bash\n\n# USAGE:\n# sh bulk_load.sh\n#\n# PURPOSE: Create 10_000_000 users table records for performance testing\n# - Mix of Drivers and Riders\n# Technique credit: <https://vnegrisolo.github.io/postgresql/generate-fake-data-using-sql>\n#\nquery=\"\nINSERT INTO rideshare.users(\n  first_name,\n  last_name,\n  email,\n  type,\n  created_at,\n  updated_at\n)\nSELECT\n  'fname' || seq,\n  'lname' || seq,\n  'user_' || seq || '@' || (\n    CASE (RANDOM() * 2)::INT\n      WHEN 0 THEN 'gmail'\n      WHEN 1 THEN 'hotmail'\n      WHEN 2 THEN 'yahoo'\n    END\n  ) || '.com' AS email,\n  CASE (seq % 2)\n    WHEN 0 THEN 'Driver'\n    ELSE 'Rider'\n  END,\n  NOW(),\n  NOW()\nFROM GENERATE_SERIES(1, 10_000_000) seq;\n\n-- To add additional batches of 10 million rows that\n-- with unique values, uncomment the following lines\n--FROM GENERATE_SERIES(10_000_001, 20_000_000) seq;\n--FROM GENERATE_SERIES(20_000_001, 30_000_000) seq;\n--FROM GENERATE_SERIES(30_000_001, 40_000_000) seq;\n--FROM GENERATE_SERIES(40_000_001, 50_000_000) seq;\n\"\n\nif [ -z \"$DATABASE_URL\" ]; then\n    echo \"Error: DATABASE_URL is not set, which provides connection information for this script.\"\n    echo \"To set it, run the following in your terminal:\"\n    echo\n    echo \"export DATABASE_URL='postgres://owner:@localhost:5432/rideshare_development'\"\n    exit 1\nfi\n\necho \"Creating batch of rideshare.users rows, raising statement_timeout to 30min\"\npsql $DATABASE_URL -c \"SET statement_timeout = '30min'; $query\";\n\necho \"ANALYZE rideshare.users\"\npsql $DATABASE_URL -c \"ANALYZE rideshare.users\";\n\necho \"Estimated count:\"\npsql $DATABASE_URL -c \"SELECT reltuples::numeric FROM pg_class WHERE relname IN ('users');\"\n"
  },
  {
    "path": "db/scripts/bulk_load_extended.sh",
    "content": "#!/bin/bash\n\n# PURPOSE:\n# - Adds millions of trips and trip_requests records\n# for performance testing\n#\n# USAGE:\n# sh bulk_load_extended.sh\n#\necho \"Loading millions of records for trip_requests, trips...\"\n\n########################\n#\n# TRIP REQUESTS\n# - Fake data, optimizing more for load speed vs. realistic data\n#\n########################\nquery=\"\nINSERT INTO rideshare.trip_requests(\n  rider_id,\n  start_location_id,\n  end_location_id,\n  created_at,\n  updated_at\n)\nSELECT\n  (SELECT id FROM users WHERE type = 'Rider' ORDER BY RANDOM() LIMIT 1),\n  (SELECT id FROM locations WHERE address = 'New York, NY'),\n  (SELECT id FROM locations WHERE address = 'Boston, MA'),\n  NOW(),\n  NOW()\nFROM GENERATE_SERIES(1, 1_000_000) seq;\n\"\n\nif [ -z \"$DATABASE_URL\" ]; then\n    echo \"Error: DATABASE_URL is not set.\"\n    echo \"Run: export DATABASE_URL='postgres://owner:@localhost:5432/rideshare_development'\"\n    exit 1\nfi\n\necho \"Raising statement_timeout to 30 minutes, running $query...\"\npsql $DATABASE_URL -c \"SET statement_timeout = '30min'; $query\";\npsql $DATABASE_URL -c \"ANALYZE (VERBOSE) rideshare.trip_requests\";\n\n\n########################\n#\n# TRIPS\n# - Fake data, optimizing more for load speed vs. 
realistic data\n# - Trip records are created before they're completed, CHECK constraint enforces that\n#\n########################\n\nquery=\"\nWITH last_90_days AS (\n  SELECT NOW() - ((RANDOM()*90)::INTEGER || 'day')::INTERVAL AS timestamp\n)\nINSERT INTO rideshare.trips(\n  trip_request_id,\n  driver_id,\n  completed_at,\n  rating,\n  created_at,\n  updated_at\n)\nSELECT\n  (SELECT id FROM trip_requests ORDER BY RANDOM() LIMIT 1),\n  (SELECT id FROM users WHERE type = 'Driver' ORDER BY RANDOM() LIMIT 1),\n  (SELECT timestamp FROM last_90_days),\n  (SELECT (RANDOM()*5)::INTEGER),\n  (SELECT (timestamp - INTERVAL '1 day') from last_90_days),\n  NOW()\nFROM GENERATE_SERIES(1, 10_000_000) seq;\n\"\n\nif [ -z \"$DATABASE_URL\" ]; then\n    echo \"Error: DATABASE_URL is not set.\"\n    echo \"Run: export DATABASE_URL='postgres://owner:@localhost:5432/rideshare_development'\"\n    exit 1\nfi\n\necho \"Raising statement_timeout to 30 minutes, running $query...\"\npsql $DATABASE_URL -c \"SET statement_timeout = '30min'; $query\";\npsql $DATABASE_URL -c \"ANALYZE (VERBOSE) rideshare.trips\";\n\necho \"Estimated counts:\"\nquery=\"SELECT\nrelname AS tablename,\nreltuples::numeric AS estimated_count\nFROM pg_class WHERE relname IN ('trips', 'trip_requests');\n\"\npsql $DATABASE_URL -c \"$query\"\n"
  },
  {
    "path": "db/scripts/list_table_comments.sh",
    "content": "#!/bin/bash\n#\n# Or run: \\dt+\n#\n# choose tables with a table level comment\nquery=\"SELECT relname, obj_description(oid)\nFROM pg_class\nWHERE relkind = 'r'\nAND obj_description(oid) is not null\"\n\n# this should find the \"users\" table which has table comments\n# the value for the comment can be inspected and parsed\necho \"Listing comments from: $DATABASE_URL\"\necho\npsql $DATABASE_URL -c \"$query\" --csv | head -3 | tail -1\n"
  },
  {
    "path": "db/scripts/queries.sql",
    "content": "-- Don't remove\n-- Used by ./benchmark.sh\n\n-- one \"Transaction\" counts as one run of this file\n-- but file can contain multiple SQL statements, terminated\n-- by a semicolon\n\n\n-- Drivers with average rating, trip count, presented as\n-- First name and Last name\n-- Consider adding: expression index\nSELECT\nCONCAT(d.first_name, ' ', d.last_name) AS driver_name,\nAVG(t.rating) AS avg_rating,\nCOUNT(t.rating) AS trip_count\nFROM trips t\nJOIN users d ON t.driver_id = d.id\nGROUP BY t.driver_id, d.first_name, d.last_name\nORDER BY COUNT(t.rating) DESC;\n\n\n-- Groups the users, consider adding an index on 'type'\nSELECT\n  COUNT(*),\n  type\nFROM users\nGROUP BY type;\n\n\n-- Adds average trip length to earlier query\nSELECT\nCONCAT(d.first_name, ' ', d.last_name) AS driver_name,\nCOUNT(t.id) AS trip_count,\nAVG(t.rating) AS avg_rating,\nAVG(t.completed_at - t.created_at) AS avg_trip_length\nFROM trips t\nJOIN users d ON t.driver_id = d.id AND d.type = 'Driver'\nGROUP BY t.driver_id, d.first_name, d.last_name\nORDER BY COUNT(t.rating) DESC;\n"
  },
  {
    "path": "db/scripts/simulate_bloat.sh",
    "content": "#!/bin/bash\n#\n# first run scripts/bulk_load.sh\n# which will load at least 100,000 user records\n# consider working with 1 million or 10 million records\n\n# measure the estimated bloat percentage\n# for the indexes on the users table\n\n# update a portion of the rows\n# for all the \"even\" primary key id numbers\n# update their first name to Bill\n#\nquery=\"\nUPDATE users\nSET first_name = \n  CASE (seq % 2)\n    WHEN 0 THEN 'Bill' || FLOOR(RANDOM() * 10) || FLOOR(RANDOM() * 10)\n    ELSE 'Jane'\n  END\nFROM GENERATE_SERIES(1,100_000) seq\nWHERE id = seq;\n\"\n\npsql $DATABASE_URL -c \"$query\";\n"
  },
  {
    "path": "db/scrubbing/.gitignore",
    "content": "temp_*.sql\n"
  },
  {
    "path": "db/scrubbing/README.md",
    "content": "# Scrubbing\n\nIn this section, we're looking at how to scrub sensitive columns within table rows.\n\nThe example assumes you've started from a physical or logical copy of rows, for all tables. You'll apply scrubbing only to columns that contain sensitive data, tracking which ones they are using a simple and maintainable system.\n\nFor an example to work with, you'll use the `rideshare.users` table. You'll consider a couple of the fields within `rideshare.users` to be sensitive. Since the scrubbing is all done with standard PostgreSQL procedural language, shell scripts, and without extensions or Ruby gems, this solution is portable to anywhere PostgreSQL is running.\n\nThe following scripts clone the table structure, without row data. The scripts fill in rows from\nthe original table and perform scrubbing on the fly. You'll also learn a basic mechanism to track which columns are sensitive, allowing you to maintain that information over time using your normal Rails Migrations process.\n\nCompare rows before and after running the script.\n\n## Run Scrubbing\n```sh\ncd db\n\nsh scrubbing/scrubber.sh\n```\n\n## View Comments\nDatabase comments are used to record which fields are sensitive.\n\n```sh\nsh db/list_table_comments.sh\n```\n\n## Batching\nReview the batched `UPDATE` example:\n\n[scrub_batched_direct_updates.sql](scrub_batched_direct_updates.sql)\n\nFor more information, please check out [High Performance PostgreSQL for Rails](https://pragprog.com/titles/aapsql/high-performance-postgresql-for-rails/), where this section is covered extensively in a full \"Performance Database\" chapter.\n"
  },
  {
    "path": "db/scrubbing/assign_sequence.sql",
    "content": "-- assumes the sequence was already created\nALTER SEQUENCE rideshare.users_id_seq\nOWNED BY rideshare.users.id;\n\nALTER TABLE rideshare.users\nALTER COLUMN id\nSET DEFAULT nextval('users_id_seq'::regclass);\n"
  },
  {
    "path": "db/scrubbing/create_tables.sql",
    "content": "-- Among tables:\n-- users, locations, trip_requests, trips, vehicles, vehicle_reservations\n-- Only sensitive fields in tables: users\nDROP TABLE IF EXISTS users_copy CASCADE;\n\nCREATE TABLE users_copy (LIKE users INCLUDING ALL);\n"
  },
  {
    "path": "db/scrubbing/drop_and_swap_users.sql",
    "content": "BEGIN;\n  DROP TABLE IF EXISTS users CASCADE;\n  ALTER TABLE users_copy RENAME TO users;\nCOMMIT;\n"
  },
  {
    "path": "db/scrubbing/dump_foreign_keys_ddl_target_table.sql",
    "content": "SELECT\n    'ALTER TABLE ' || nsp.nspname || '.' || cls.relname ||\n    ' ADD CONSTRAINT ' || conname ||\n    ' FOREIGN KEY (' || STRING_AGG(att.attname, ', ') OVER(PARTITION BY conname) || ')' ||\n    ' REFERENCES ' || refnsp.nspname || '.' || refcls.relname ||\n    ' (' || STRING_AGG(refatt.attname, ', ') OVER(PARTITION BY conname) || ')'\n    || CASE\n        WHEN confupdtype = 'c' THEN ' ON UPDATE CASCADE'\n        WHEN confupdtype = 'n' THEN ' ON UPDATE SET NULL'\n        WHEN confupdtype = 'd' THEN ' ON UPDATE SET DEFAULT'\n        ELSE ''\n    END ||\n    CASE\n        WHEN confdeltype = 'c' THEN ' ON DELETE CASCADE'\n        WHEN confdeltype = 'n' THEN ' ON DELETE SET NULL'\n        WHEN confdeltype = 'd' THEN ' ON DELETE SET DEFAULT'\n        ELSE ''\n    END || ';'\nFROM pg_constraint con\nJOIN pg_class cls ON con.conrelid = cls.oid\nJOIN pg_namespace nsp ON cls.relnamespace = nsp.oid\nJOIN pg_class refcls ON con.confrelid = refcls.oid\nJOIN pg_namespace refnsp ON refcls.relnamespace = refnsp.oid\nJOIN pg_attribute att ON att.attnum = ANY(con.conkey) AND att.attrelid = con.conrelid\nJOIN pg_attribute refatt ON refatt.attnum = ANY(con.confkey) AND refatt.attrelid = con.confrelid\nWHERE refcls.relname = 'users'  -- replace with your table name\nAND refnsp.nspname = 'rideshare'  -- replace with your schema if different\nGROUP BY\n  conname, nsp.nspname, cls.relname, refnsp.nspname,\n  refcls.relname, confupdtype,\n  confdeltype, att.attname, refatt.attname;\n"
  },
  {
    "path": "db/scrubbing/dump_sequence_creation_ddl.sql",
    "content": "SELECT\n  'CREATE SEQUENCE ' || schemaname || '.' || sequencename ||\n  ' INCREMENT ' || increment_by ||\n  ' MINVALUE ' || min_value ||\n  ' MAXVALUE ' || max_value ||\n  ' START ' || start_value || \n  ';'\nFROM pg_sequences\nWHERE\n  schemaname = 'rideshare'  -- adjust this for your schema if necessary\n  AND sequencename = 'users_id_seq';  -- replace with your sequence name\n"
  },
  {
    "path": "db/scrubbing/dump_views_ddl.sql",
    "content": "SELECT 'CREATE VIEW ' || viewname || ' AS ' || definition\nFROM pg_views\nWHERE schemaname = 'rideshare'  -- adjust the schema if your view is in another schema\nAND viewname = 'search_results';-- replace with your view name\n\nSELECT\n  'CREATE MATERIALIZED VIEW ' || matviewname || ' AS ' || definition || ';' ||\n  COALESCE(E'\\n\\nREFRESH MATERIALIZED VIEW ' || matviewname || ' WITH ' || \n  CASE\n    WHEN matviewname IN (SELECT conname FROM pg_constraint WHERE contype = 'p') THEN 'NO DATA;' \n    ELSE 'DATA;' \n  END, '')\nFROM pg_matviews\nWHERE schemaname = 'rideshare'  -- adjust the schema if your view is in another schema\nAND matviewname = 'fast_search_results';  -- replace with your materialized view name\n"
  },
  {
    "path": "db/scrubbing/generate_add_constraint_statements.sql",
    "content": "CREATE OR REPLACE FUNCTION generate_add_constraint_statements()\nRETURNS TABLE(stmt text) AS $$\n\nDECLARE\n  v_table_name text;\n  v_statement text;\nBEGIN\n\n  FOR v_table_name IN (SELECT tablename FROM pg_tables WHERE schemaname = 'rideshare' AND tablename IN ('users')) -- could add more tables in future\n  LOOP\n    SELECT string_agg('ALTER TABLE '||nspname||'.'||relname||' ADD CONSTRAINT '||conname||' '|| pg_get_constraintdef(pg_constraint.oid)||';', '')\n    INTO v_statement\n    FROM pg_constraint\n    INNER JOIN pg_class ON conrelid=pg_class.oid\n    INNER JOIN pg_namespace ON pg_namespace.oid=pg_class.relnamespace\n    WHERE nspname = 'rideshare'\n    AND relname = v_table_name;\n\n    stmt := v_statement;\n\n    RETURN NEXT;\n  END LOOP; -- end loop\n\nEND; -- end BEGIN\n$$ LANGUAGE plpgsql;\n"
  },
  {
    "path": "db/scrubbing/scrub_batched_direct_updates.sql",
    "content": "CREATE OR REPLACE PROCEDURE SCRUB_BATCHES()\nLANGUAGE PLPGSQL\nAS $$\nDECLARE\n  current_id INT := (SELECT MIN(id) FROM users);\n  max_id INT := (SELECT MAX(id) FROM users);\n  batch_size INT := 1000;\n  rows_updated INT;\nBEGIN\n  WHILE current_id <= max_id LOOP\n    -- the UPDATE by `id` range\n    UPDATE users\n    SET email = SCRUB_EMAIL(email)\n    WHERE id >= current_id\n    AND id < current_id + batch_size;\n\n    GET DIAGNOSTICS rows_updated = ROW_COUNT;\n\n    COMMIT;\n    RAISE NOTICE 'current_id: % - Number of rows updated: %',\n    current_id, rows_updated;\n\n    current_id := current_id + batch_size + 1;\n  END LOOP;\nEND;\n$$;\n\n-- Call the Procedure\nCALL SCRUB_BATCHES();\n"
  },
  {
    "path": "db/scrubbing/scrub_users.sql",
    "content": "INSERT INTO users_copy(id, first_name, last_name, email, type, created_at, updated_at, password_digest, trips_count, drivers_license_number)\n(\n  SELECT\n    id,\n    scrub_text(first_name),\n    scrub_text(last_name),\n    scrub_email(email),\n    type,\n    created_at,\n    updated_at,\n    password_digest,\n    trips_count,\n    scrub_text(drivers_license_number)\n  FROM users\n) ON CONFLICT DO NOTHING;\n"
  },
  {
    "path": "db/scrubbing/scrubber.sh",
    "content": "#!/bin/bash\n\nexport SOURCE_DB=\"postgres://owner:@localhost:5432/rideshare_development\"\necho \"STARTING scrub process...\"\necho \"5 rows BEFORE scrubbing:\"\npsql $SOURCE_DB -c \"SELECT * FROM users ORDER BY id ASC LIMIT 5\"\n\n# Set a seed value\npsql $SOURCE_DB -c \"SELECT SETSEED(0.5);\"\n\necho \"Dump views DDL\"\npsql $SOURCE_DB -f scrubbing/dump_views_ddl.sql \\\n  --tuples-only \\\n  --no-align \\\n  -o scrubbing/temp_views_ddl.sql\necho \"------------------\"\n\necho \"Dump target table foreign keys creation DDL\"\npsql $SOURCE_DB -f scrubbing/dump_foreign_keys_ddl_target_table.sql \\\n  --tuples-only \\\n  --no-align \\\n  -o scrubbing/temp_foreign_keys_ddl.sql\necho \"------------------\"\n\necho \"Dump primary key sequence creation DDL\"\npsql $SOURCE_DB -f scrubbing/dump_sequence_creation_ddl.sql \\\n  --tuples-only \\\n  --no-align \\\n  -o scrubbing/temp_sequences.sql\necho \"------------------\"\n\necho \"Create the users_copy table\"\nsleep 1\npsql $SOURCE_DB -f scrubbing/create_tables.sql\necho \"------------------\"\n\necho \"Fill users_copy with scrubbed values\"\nsleep 1\npsql $SOURCE_DB -f scrubbing/scrub_users.sql\necho \"------------------\"\n\n# There are no constraints besides the PK constraint which was already copied\n# echo \"Add the generate add constraint statements function\"\n# psql $SOURCE_DB -c \"\\i ./generate_add_constraint_statements.sql\"\n\n# echo \"Add function to generate constraints\"\n# psql $SOURCE_DB -c \"\\i scrubbing/generate_add_constraint_statements.sql\"\n\n# echo \"Remove existing temp_constraints.sql\"\n# rm scrubbing/temp_constraints.sql\n\n# echo \"Dump table constraints for tables to file\"\n# psql $SOURCE_DB -c \"SELECT generate_add_constraint_statements()\" \\\n#   --tuples-only \\\n#   -o scrubbing/temp_constraints.sql\n\necho \"Drop and rename users table\"\npsql $SOURCE_DB -f scrubbing/drop_and_swap_users.sql\necho \"------------------\"\n\n# echo \"Add constraints\"\n# psql $SOURCE_DB 
-f scrubbing/temp_constraints.sql\n\necho \"Add views and materialized views for target table\"\npsql $SOURCE_DB -f scrubbing/temp_views_ddl.sql\necho \"------------------\"\n\necho \"Add constraints that refer to target table, dropped from CASCADE\"\npsql $SOURCE_DB -f scrubbing/temp_foreign_keys_ddl.sql\necho \"------------------\"\n\necho \"Add sequence for target table, dropped from CASCADE\"\npsql $SOURCE_DB -f scrubbing/temp_sequences.sql\necho \"------------------\"\n\necho \"Assign sequence for target table\"\npsql $SOURCE_DB -f scrubbing/assign_sequence.sql\necho \"------------------\"\n\necho \"Success!\"\necho \"5 rows AFTER scrubbing:\"\npsql $SOURCE_DB -c \"SELECT * FROM users ORDER BY id ASC LIMIT 5\"\n"
  },
  {
    "path": "db/setup.sh",
    "content": "#!/bin/bash\n\n# NOTE: This script expects you've generated a password.\n# You can do that using \"openssl\" as follows, or you could use any password\n# generation mechanism you like.\n#\n# Generate a password value using \"openssl\":\n# openssl rand -hex 12\n#\n# Generate and assign the value to RIDESHARE_DB_PASSWORD:\n# export RIDESHARE_DB_PASSWORD=$(openssl rand -hex 12)\n#\n# Later, you'll create the special password file ~/.pgpass, and\n# place your generated password in it.\n#\n# COMPATIBILITY: Requires PostgreSQL 16\n# ENV VARS: [DB_URL, RIDESHARE_DB_PASSWORD]\n\n# Make sure password is set\nif [ -z \"$RIDESHARE_DB_PASSWORD\" ]; then\n    echo \"Error: 'RIDESHARE_DB_PASSWORD' not set, can't continue.\"\n    echo\n    echo \"Check for an existing value in file: ~/.pgpass\"\n    echo \"If there's a value, set it like this:\"\n    echo 'export RIDESHARE_DB_PASSWORD=\"HSnDDgFtyW9fyFI\"'\n    echo \"OR generate a new value (See comments in: db/setup.sh)\"\n    exit 1\nfi\n# Check if the environment variable DB_URL is set\nif [ -z \"$DB_URL\" ]; then\n    echo \"Error: 'DB_URL' not set, can't continue.\"\n    echo \"This is the connection to your instance, using a superuser like 'postgres'.\"\n    echo \"The password for 'postgres' is also 'postgres'\"\n    echo \"Connect to the 'postgres' database to issue these commands\"\n    echo\n    echo \"See: db/setup.sh\"\n    echo \"Run: export DB_URL='postgres://postgres:@localhost:5432/postgres'\"\n    exit 1\nfi\n\n# Set up Roles and Users on your PostgreSQL instance\npsql $DB_URL -v password_to_save=$RIDESHARE_DB_PASSWORD -a -f db/create_role_owner.sql\npsql $DB_URL -a -f db/create_role_readwrite_users.sql\npsql $DB_URL -a -f db/create_role_readonly_users.sql\npsql $DB_URL -v password_to_save=$RIDESHARE_DB_PASSWORD -a -f db/create_role_app_user.sql\npsql $DB_URL -v password_to_save=$RIDESHARE_DB_PASSWORD -a -f db/create_role_app_readonly.sql\n\n# Set up Rideshare development database\npsql $DB_URL -a 
-f db/create_database.sql\n\n# Revoke database privileges on public, drop public schema\npsql $DB_URL -a -f db/revoke_drop_public_schema.sql\n\n# Create rideshare schema\npsql $DB_URL -a -f db/create_schema.sql\n\n# Perform GRANT operations\npsql $DB_URL -a -f db/create_grants_database.sql\npsql $DB_URL -a -f db/create_grants_schema.sql\n\n# Alter the default privileges\npsql $DB_URL -a -f db/alter_default_privileges_readwrite.sql\npsql $DB_URL -a -f db/alter_default_privileges_readonly.sql\npsql $DB_URL -a -f db/alter_default_privileges_public.sql\n\n# Add generated password to ~/.pgpass file\necho \"Add to ~/.pgpass\"\necho \"localhost:5432:rideshare_development:owner:$RIDESHARE_DB_PASSWORD\nlocalhost:6432:rideshare_development:owner:$RIDESHARE_DB_PASSWORD\nlocalhost:5432:rideshare_development:app:$RIDESHARE_DB_PASSWORD\nlocalhost:54321:rideshare_development:owner:$RIDESHARE_DB_PASSWORD\nlocalhost:54322:rideshare_development:owner:$RIDESHARE_DB_PASSWORD\n*:*:*:replication_user:$RIDESHARE_DB_PASSWORD\n*:*:*:app_readonly:$RIDESHARE_DB_PASSWORD\" >> ~/.pgpass\n\n# Set file ownership and permissions\necho \"chmod ~/.pgpass\"\nchmod 0600 ~/.pgpass\n\necho\necho \"DONE! 🎉\"\necho \"Notes:\"\necho \"Make sure 'graphviz' is installed: 'brew install graphviz'\"\necho\necho \"Next: run 'bin/rails db:migrate' to apply pending migrations\"\necho\necho \"If you ran as: 'sh db/setup.sh 2>&1 | tee -a output.log'\"\necho \"Open the 'output.log' file and check for errors\"\necho\necho \"The ~/.pgpass file was generated or new values were added to it.\"\necho\n\necho \"Set the 'DATABASE_URL' env var, which you can find in the .env file:\"\necho \"To set it in your terminal, run:\"\necho\necho \"export $(cat .env|grep DATABASE_URL|head -n1)\"\n"
  },
  {
    "path": "db/setup_test_database.sh",
    "content": "#!/bin/bash\n\nexport DB_URL=postgres://postgres:@localhost:5432/postgres # run as OS user/superuser/admin\nexport APP_TEST_DB_NAME=rideshare_test\nexport APP_TEST_USER=rideshare_test\nexport TEST_DB_URL=postgres://postgres:@localhost:5432/rideshare_test # run as OS user/superuser/admin\n\necho \"%%%%%%%%%%%\"\necho \"Test DB\"\necho \"%%%%%%%%%%%\"\n\n# ROLES\necho \"SELECT 'CREATE USER $APP_TEST_USER WITH LOGIN' WHERE NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '$APP_TEST_USER')\\gexec\" | psql $DB_URL\n\n# DATABASE\necho \"Creating database $APP_TEST_DB_NAME\"\necho \"SELECT 'CREATE DATABASE $APP_TEST_DB_NAME' WHERE NOT EXISTS (SELECT datname FROM pg_database WHERE datname = '$APP_TEST_DB_NAME')\\gexec\" | psql $DB_URL;\npsql $DB_URL -c \"ALTER DATABASE $APP_TEST_DB_NAME OWNER TO $APP_TEST_USER\"\n\n# SUPERUSER ONLY(!) for rideshare_test database test user\n# SUPERUSER required to drop all Foreign Key Constraints, which is done when truncating tables\n# https://stackoverflow.com/a/32213455/126688\npsql $DB_URL -c \"ALTER USER $APP_TEST_USER WITH SUPERUSER\"\n\n# CONNECT\npsql $DB_URL -c \"GRANT CONNECT ON DATABASE $APP_TEST_DB_NAME TO $APP_TEST_USER;\"\n\npsql -U $APP_TEST_USER -d $APP_TEST_DB_NAME -c \"CREATE SCHEMA rideshare;\"\npsql -U $APP_TEST_USER -d $APP_TEST_DB_NAME -c \"ALTER ROLE $APP_TEST_USER SET search_path TO rideshare;\"\npsql -U $APP_TEST_USER -d $APP_TEST_DB_NAME -c \"SET search_path TO rideshare;\"\n"
  },
  {
    "path": "db/structure.sql",
    "content": "SET statement_timeout = 0;\nSET lock_timeout = 0;\nSET idle_in_transaction_session_timeout = 0;\nSET client_encoding = 'UTF8';\nSET standard_conforming_strings = on;\nSELECT pg_catalog.set_config('search_path', '', false);\nSET check_function_bodies = false;\nSET xmloption = content;\nSET client_min_messages = warning;\nSET row_security = off;\n\nALTER TABLE IF EXISTS ONLY rideshare.trip_requests DROP CONSTRAINT IF EXISTS fk_rails_fa2679b626;\nALTER TABLE IF EXISTS ONLY rideshare.trips DROP CONSTRAINT IF EXISTS fk_rails_e7560abc33;\nALTER TABLE IF EXISTS ONLY rideshare.trip_requests DROP CONSTRAINT IF EXISTS fk_rails_c17a139554;\nALTER TABLE IF EXISTS ONLY rideshare.trip_positions DROP CONSTRAINT IF EXISTS fk_rails_9688ac8706;\nALTER TABLE IF EXISTS ONLY rideshare.vehicle_reservations DROP CONSTRAINT IF EXISTS fk_rails_7edc8e666a;\nALTER TABLE IF EXISTS ONLY rideshare.trips DROP CONSTRAINT IF EXISTS fk_rails_6d92acb430;\nALTER TABLE IF EXISTS ONLY rideshare.vehicle_reservations DROP CONSTRAINT IF EXISTS fk_rails_59996232fc;\nALTER TABLE IF EXISTS ONLY rideshare.trip_requests DROP CONSTRAINT IF EXISTS fk_rails_3fdebbfaca;\nDROP INDEX IF EXISTS rideshare.index_vehicles_on_name;\nDROP INDEX IF EXISTS rideshare.index_vehicle_reservations_on_vehicle_id;\nDROP INDEX IF EXISTS rideshare.index_users_on_last_name;\nDROP INDEX IF EXISTS rideshare.index_users_on_email;\nDROP INDEX IF EXISTS rideshare.index_trips_on_trip_request_id;\nDROP INDEX IF EXISTS rideshare.index_trips_on_rating;\nDROP INDEX IF EXISTS rideshare.index_trips_on_driver_id;\nDROP INDEX IF EXISTS rideshare.index_trip_requests_on_start_location_id;\nDROP INDEX IF EXISTS rideshare.index_trip_requests_on_rider_id;\nDROP INDEX IF EXISTS rideshare.index_trip_requests_on_end_location_id;\nDROP INDEX IF EXISTS rideshare.index_locations_on_address;\nDROP INDEX IF EXISTS rideshare.index_fast_search_results_on_driver_id;\nALTER TABLE IF EXISTS ONLY rideshare.vehicles DROP CONSTRAINT IF EXISTS 
vehicles_pkey;\nALTER TABLE IF EXISTS ONLY rideshare.vehicle_reservations DROP CONSTRAINT IF EXISTS vehicle_reservations_pkey;\nALTER TABLE IF EXISTS ONLY rideshare.users DROP CONSTRAINT IF EXISTS users_pkey;\nALTER TABLE IF EXISTS ONLY rideshare.trips DROP CONSTRAINT IF EXISTS trips_pkey;\nALTER TABLE IF EXISTS ONLY rideshare.trip_requests DROP CONSTRAINT IF EXISTS trip_requests_pkey;\nALTER TABLE IF EXISTS ONLY rideshare.trip_positions DROP CONSTRAINT IF EXISTS trip_positions_pkey;\nALTER TABLE IF EXISTS ONLY rideshare.schema_migrations DROP CONSTRAINT IF EXISTS schema_migrations_pkey;\nALTER TABLE IF EXISTS ONLY rideshare.vehicle_reservations DROP CONSTRAINT IF EXISTS non_overlapping_vehicle_registration;\nALTER TABLE IF EXISTS ONLY rideshare.locations DROP CONSTRAINT IF EXISTS locations_pkey;\nALTER TABLE IF EXISTS rideshare.trips DROP CONSTRAINT IF EXISTS chk_rails_4743ddc2d2;\nALTER TABLE IF EXISTS ONLY rideshare.ar_internal_metadata DROP CONSTRAINT IF EXISTS ar_internal_metadata_pkey;\nALTER TABLE IF EXISTS rideshare.vehicles ALTER COLUMN id DROP DEFAULT;\nALTER TABLE IF EXISTS rideshare.vehicle_reservations ALTER COLUMN id DROP DEFAULT;\nALTER TABLE IF EXISTS rideshare.users ALTER COLUMN id DROP DEFAULT;\nALTER TABLE IF EXISTS rideshare.trips ALTER COLUMN id DROP DEFAULT;\nALTER TABLE IF EXISTS rideshare.trip_requests ALTER COLUMN id DROP DEFAULT;\nALTER TABLE IF EXISTS rideshare.trip_positions ALTER COLUMN id DROP DEFAULT;\nALTER TABLE IF EXISTS rideshare.locations ALTER COLUMN id DROP DEFAULT;\nDROP SEQUENCE IF EXISTS rideshare.vehicles_id_seq;\nDROP TABLE IF EXISTS rideshare.vehicles;\nDROP SEQUENCE IF EXISTS rideshare.vehicle_reservations_id_seq;\nDROP TABLE IF EXISTS rideshare.vehicle_reservations;\nDROP SEQUENCE IF EXISTS rideshare.users_id_seq;\nDROP SEQUENCE IF EXISTS rideshare.trips_id_seq;\nDROP SEQUENCE IF EXISTS rideshare.trip_requests_id_seq;\nDROP TABLE IF EXISTS rideshare.trip_requests;\nDROP SEQUENCE IF EXISTS 
rideshare.trip_positions_id_seq;\nDROP TABLE IF EXISTS rideshare.trip_positions;\nDROP VIEW IF EXISTS rideshare.search_results;\nDROP TABLE IF EXISTS rideshare.schema_migrations;\nDROP SEQUENCE IF EXISTS rideshare.locations_id_seq;\nDROP TABLE IF EXISTS rideshare.locations;\nDROP MATERIALIZED VIEW IF EXISTS rideshare.fast_search_results;\nDROP TABLE IF EXISTS rideshare.users;\nDROP TABLE IF EXISTS rideshare.trips;\nDROP TABLE IF EXISTS rideshare.ar_internal_metadata;\nDROP FUNCTION IF EXISTS rideshare.scrub_text(input character varying);\nDROP FUNCTION IF EXISTS rideshare.scrub_email(email_address character varying);\nDROP FUNCTION IF EXISTS rideshare.fast_count(identifier text, threshold bigint);\nDROP TYPE IF EXISTS rideshare.vehicle_status;\nDROP SCHEMA IF EXISTS rideshare;\n--\n-- Name: rideshare; Type: SCHEMA; Schema: -; Owner: -\n--\n\nCREATE SCHEMA rideshare;\n\n\n--\n-- Name: vehicle_status; Type: TYPE; Schema: rideshare; Owner: -\n--\n\nCREATE TYPE rideshare.vehicle_status AS ENUM (\n    'draft',\n    'published'\n);\n\n\n--\n-- Name: fast_count(text, bigint); Type: FUNCTION; Schema: rideshare; Owner: -\n--\n\nCREATE FUNCTION rideshare.fast_count(identifier text, threshold bigint) RETURNS bigint\n    LANGUAGE plpgsql\n    AS $$\nDECLARE\n  count bigint;\n  table_parts text[];\n  schema_name text;\n  table_name text;\n  BEGIN\n    SELECT PARSE_IDENT(identifier) INTO table_parts;\n\n    IF ARRAY_LENGTH(table_parts, 1) = 2 THEN\n      schema_name := ''''|| table_parts[1] ||'''';\n      table_name := ''''|| table_parts[2] ||'''';\n    ELSE\n      schema_name := 'ANY (current_schemas(false))';\n      table_name := ''''|| table_parts[1] ||'''';\n    END IF;\n\n    EXECUTE '\n      WITH tables_counts AS (\n        -- inherited and partitioned tables counts\n        SELECT\n          ((SUM(child.reltuples::float) / greatest(SUM(child.relpages), 1))) *\n            (SUM(pg_relation_size(child.oid))::float / (current_setting(''block_size'')::float))::integer AS 
estimate\n        FROM pg_inherits\n          INNER JOIN pg_class parent ON pg_inherits.inhparent = parent.oid\n          LEFT JOIN pg_namespace n ON n.oid = parent.relnamespace\n          INNER JOIN pg_class child ON pg_inherits.inhrelid = child.oid\n        WHERE n.nspname = '|| schema_name ||' AND\n          parent.relname = '|| table_name ||'\n\n        UNION ALL\n\n        -- table count\n        SELECT\n          (reltuples::float / greatest(relpages, 1)) *\n            (pg_relation_size(c.oid)::float / (current_setting(''block_size'')::float))::integer AS estimate\n        FROM pg_class c\n          LEFT JOIN pg_namespace n ON n.oid = c.relnamespace\n        WHERE n.nspname = '|| schema_name ||' AND\n          c.relname = '|| table_name ||'\n      )\n\n      SELECT\n        CASE\n        WHEN SUM(estimate) < '|| threshold ||' THEN (SELECT COUNT(*) FROM '|| identifier ||')\n        ELSE SUM(estimate)\n        END AS count\n      FROM tables_counts' INTO count;\n    RETURN count;\n  END\n$$;\n\n\n--\n-- Name: scrub_email(character varying); Type: FUNCTION; Schema: rideshare; Owner: -\n--\n\nCREATE FUNCTION rideshare.scrub_email(email_address character varying) RETURNS character varying\n    LANGUAGE sql\n    AS $$\nSELECT\nCONCAT(\n  SUBSTR(\n    MD5(RANDOM()::text),\n    0,\n    GREATEST(LENGTH(SPLIT_PART(email_address, '@', 1)) + 1, 6)\n  ),\n  '@',\n  SPLIT_PART(email_address, '@', 2)\n);\n$$;\n\n\n--\n-- Name: scrub_text(character varying); Type: FUNCTION; Schema: rideshare; Owner: -\n--\n\nCREATE FUNCTION rideshare.scrub_text(input character varying) RETURNS character varying\n    LANGUAGE sql\n    AS $$\nSELECT\n-- replace from position 0, to max(length or 6)\nSUBSTR(\n  MD5(RANDOM()::text),\n  0,\n  GREATEST(LENGTH(input) + 1, 6)\n);\n$$;\n\n\nSET default_tablespace = '';\n\nSET default_table_access_method = heap;\n\n--\n-- Name: ar_internal_metadata; Type: TABLE; Schema: rideshare; Owner: -\n--\n\nCREATE TABLE rideshare.ar_internal_metadata (\n    key 
character varying NOT NULL,\n    value character varying,\n    created_at timestamp(6) without time zone NOT NULL,\n    updated_at timestamp(6) without time zone NOT NULL\n);\n\n\n--\n-- Name: trips; Type: TABLE; Schema: rideshare; Owner: -\n--\n\nCREATE TABLE rideshare.trips (\n    id bigint NOT NULL,\n    trip_request_id bigint NOT NULL,\n    driver_id integer NOT NULL,\n    completed_at timestamp without time zone,\n    rating integer,\n    created_at timestamp(6) without time zone NOT NULL,\n    updated_at timestamp(6) without time zone NOT NULL,\n    CONSTRAINT rating_check CHECK (((rating >= 1) AND (rating <= 5)))\n);\n\n\n--\n-- Name: users; Type: TABLE; Schema: rideshare; Owner: -\n--\n\nCREATE TABLE rideshare.users (\n    id bigint NOT NULL,\n    first_name character varying NOT NULL,\n    last_name character varying NOT NULL,\n    email character varying NOT NULL,\n    type character varying NOT NULL,\n    created_at timestamp(6) without time zone NOT NULL,\n    updated_at timestamp(6) without time zone NOT NULL,\n    password_digest character varying,\n    trips_count integer,\n    drivers_license_number character varying(100)\n);\n\n\n--\n-- Name: TABLE users; Type: COMMENT; Schema: rideshare; Owner: -\n--\n\nCOMMENT ON TABLE rideshare.users IS 'sensitive_fields|first_name:scrub_text,last_name:scrub_text,email:scrub_email';\n\n\n--\n-- Name: fast_search_results; Type: MATERIALIZED VIEW; Schema: rideshare; Owner: -\n--\n\nCREATE MATERIALIZED VIEW rideshare.fast_search_results AS\n SELECT t.driver_id,\n    concat(d.first_name, ' ', d.last_name) AS driver_name,\n    avg(t.rating) AS avg_rating,\n    count(t.rating) AS trip_count\n   FROM (rideshare.trips t\n     JOIN rideshare.users d ON ((t.driver_id = d.id)))\n  GROUP BY t.driver_id, d.first_name, d.last_name\n  ORDER BY (count(t.rating)) DESC\n  WITH NO DATA;\n\n\n--\n-- Name: locations; Type: TABLE; Schema: rideshare; Owner: -\n--\n\nCREATE TABLE rideshare.locations (\n    id bigint NOT NULL,\n    
address character varying NOT NULL,\n    created_at timestamp(6) without time zone NOT NULL,\n    updated_at timestamp(6) without time zone NOT NULL,\n    city character varying,\n    state character(2) NOT NULL,\n    \"position\" point NOT NULL,\n    CONSTRAINT state_length_check CHECK ((length(state) = 2))\n);\n\n\n--\n-- Name: locations_id_seq; Type: SEQUENCE; Schema: rideshare; Owner: -\n--\n\nCREATE SEQUENCE rideshare.locations_id_seq\n    START WITH 1\n    INCREMENT BY 1\n    NO MINVALUE\n    NO MAXVALUE\n    CACHE 1;\n\n\n--\n-- Name: locations_id_seq; Type: SEQUENCE OWNED BY; Schema: rideshare; Owner: -\n--\n\nALTER SEQUENCE rideshare.locations_id_seq OWNED BY rideshare.locations.id;\n\n\n--\n-- Name: schema_migrations; Type: TABLE; Schema: rideshare; Owner: -\n--\n\nCREATE TABLE rideshare.schema_migrations (\n    version character varying NOT NULL\n);\n\n\n--\n-- Name: search_results; Type: VIEW; Schema: rideshare; Owner: -\n--\n\nCREATE VIEW rideshare.search_results AS\n SELECT concat(d.first_name, ' ', d.last_name) AS driver_name,\n    avg(t.rating) AS avg_rating,\n    count(t.rating) AS trip_count\n   FROM (rideshare.trips t\n     JOIN rideshare.users d ON ((t.driver_id = d.id)))\n  GROUP BY t.driver_id, d.first_name, d.last_name\n  ORDER BY (count(t.rating)) DESC;\n\n\n--\n-- Name: trip_positions; Type: TABLE; Schema: rideshare; Owner: -\n--\n\nCREATE TABLE rideshare.trip_positions (\n    id bigint NOT NULL,\n    \"position\" point NOT NULL,\n    trip_id bigint NOT NULL,\n    created_at timestamp(6) without time zone NOT NULL,\n    updated_at timestamp(6) without time zone NOT NULL\n);\n\n\n--\n-- Name: trip_positions_id_seq; Type: SEQUENCE; Schema: rideshare; Owner: -\n--\n\nCREATE SEQUENCE rideshare.trip_positions_id_seq\n    START WITH 1\n    INCREMENT BY 1\n    NO MINVALUE\n    NO MAXVALUE\n    CACHE 1;\n\n\n--\n-- Name: trip_positions_id_seq; Type: SEQUENCE OWNED BY; Schema: rideshare; Owner: -\n--\n\nALTER SEQUENCE rideshare.trip_positions_id_seq 
OWNED BY rideshare.trip_positions.id;\n\n\n--\n-- Name: trip_requests; Type: TABLE; Schema: rideshare; Owner: -\n--\n\nCREATE TABLE rideshare.trip_requests (\n    id bigint NOT NULL,\n    rider_id integer NOT NULL,\n    start_location_id integer NOT NULL,\n    end_location_id integer NOT NULL,\n    created_at timestamp(6) without time zone NOT NULL,\n    updated_at timestamp(6) without time zone NOT NULL\n);\n\n\n--\n-- Name: trip_requests_id_seq; Type: SEQUENCE; Schema: rideshare; Owner: -\n--\n\nCREATE SEQUENCE rideshare.trip_requests_id_seq\n    START WITH 1\n    INCREMENT BY 1\n    NO MINVALUE\n    NO MAXVALUE\n    CACHE 1;\n\n\n--\n-- Name: trip_requests_id_seq; Type: SEQUENCE OWNED BY; Schema: rideshare; Owner: -\n--\n\nALTER SEQUENCE rideshare.trip_requests_id_seq OWNED BY rideshare.trip_requests.id;\n\n\n--\n-- Name: trips_id_seq; Type: SEQUENCE; Schema: rideshare; Owner: -\n--\n\nCREATE SEQUENCE rideshare.trips_id_seq\n    START WITH 1\n    INCREMENT BY 1\n    NO MINVALUE\n    NO MAXVALUE\n    CACHE 1;\n\n\n--\n-- Name: trips_id_seq; Type: SEQUENCE OWNED BY; Schema: rideshare; Owner: -\n--\n\nALTER SEQUENCE rideshare.trips_id_seq OWNED BY rideshare.trips.id;\n\n\n--\n-- Name: users_id_seq; Type: SEQUENCE; Schema: rideshare; Owner: -\n--\n\nCREATE SEQUENCE rideshare.users_id_seq\n    START WITH 1\n    INCREMENT BY 1\n    NO MINVALUE\n    NO MAXVALUE\n    CACHE 1;\n\n\n--\n-- Name: users_id_seq; Type: SEQUENCE OWNED BY; Schema: rideshare; Owner: -\n--\n\nALTER SEQUENCE rideshare.users_id_seq OWNED BY rideshare.users.id;\n\n\n--\n-- Name: vehicle_reservations; Type: TABLE; Schema: rideshare; Owner: -\n--\n\nCREATE TABLE rideshare.vehicle_reservations (\n    id bigint NOT NULL,\n    vehicle_id integer NOT NULL,\n    trip_request_id integer NOT NULL,\n    canceled boolean DEFAULT false NOT NULL,\n    starts_at timestamp with time zone NOT NULL,\n    ends_at timestamp with time zone NOT NULL,\n    created_at timestamp(6) without time zone NOT NULL,\n    
updated_at timestamp(6) without time zone NOT NULL\n);\n\n\n--\n-- Name: vehicle_reservations_id_seq; Type: SEQUENCE; Schema: rideshare; Owner: -\n--\n\nCREATE SEQUENCE rideshare.vehicle_reservations_id_seq\n    START WITH 1\n    INCREMENT BY 1\n    NO MINVALUE\n    NO MAXVALUE\n    CACHE 1;\n\n\n--\n-- Name: vehicle_reservations_id_seq; Type: SEQUENCE OWNED BY; Schema: rideshare; Owner: -\n--\n\nALTER SEQUENCE rideshare.vehicle_reservations_id_seq OWNED BY rideshare.vehicle_reservations.id;\n\n\n--\n-- Name: vehicles; Type: TABLE; Schema: rideshare; Owner: -\n--\n\nCREATE TABLE rideshare.vehicles (\n    id bigint NOT NULL,\n    name character varying NOT NULL,\n    created_at timestamp(6) without time zone NOT NULL,\n    updated_at timestamp(6) without time zone NOT NULL,\n    status rideshare.vehicle_status DEFAULT 'draft'::rideshare.vehicle_status NOT NULL\n);\n\n\n--\n-- Name: vehicles_id_seq; Type: SEQUENCE; Schema: rideshare; Owner: -\n--\n\nCREATE SEQUENCE rideshare.vehicles_id_seq\n    START WITH 1\n    INCREMENT BY 1\n    NO MINVALUE\n    NO MAXVALUE\n    CACHE 1;\n\n\n--\n-- Name: vehicles_id_seq; Type: SEQUENCE OWNED BY; Schema: rideshare; Owner: -\n--\n\nALTER SEQUENCE rideshare.vehicles_id_seq OWNED BY rideshare.vehicles.id;\n\n\n--\n-- Name: locations id; Type: DEFAULT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.locations ALTER COLUMN id SET DEFAULT nextval('rideshare.locations_id_seq'::regclass);\n\n\n--\n-- Name: trip_positions id; Type: DEFAULT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trip_positions ALTER COLUMN id SET DEFAULT nextval('rideshare.trip_positions_id_seq'::regclass);\n\n\n--\n-- Name: trip_requests id; Type: DEFAULT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trip_requests ALTER COLUMN id SET DEFAULT nextval('rideshare.trip_requests_id_seq'::regclass);\n\n\n--\n-- Name: trips id; Type: DEFAULT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trips ALTER COLUMN id 
SET DEFAULT nextval('rideshare.trips_id_seq'::regclass);\n\n\n--\n-- Name: users id; Type: DEFAULT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.users ALTER COLUMN id SET DEFAULT nextval('rideshare.users_id_seq'::regclass);\n\n\n--\n-- Name: vehicle_reservations id; Type: DEFAULT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.vehicle_reservations ALTER COLUMN id SET DEFAULT nextval('rideshare.vehicle_reservations_id_seq'::regclass);\n\n\n--\n-- Name: vehicles id; Type: DEFAULT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.vehicles ALTER COLUMN id SET DEFAULT nextval('rideshare.vehicles_id_seq'::regclass);\n\n\n--\n-- Name: ar_internal_metadata ar_internal_metadata_pkey; Type: CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.ar_internal_metadata\n    ADD CONSTRAINT ar_internal_metadata_pkey PRIMARY KEY (key);\n\n\n--\n-- Name: trips chk_rails_4743ddc2d2; Type: CHECK CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE rideshare.trips\n    ADD CONSTRAINT chk_rails_4743ddc2d2 CHECK ((completed_at > created_at)) NOT VALID;\n\n\n--\n-- Name: locations locations_pkey; Type: CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.locations\n    ADD CONSTRAINT locations_pkey PRIMARY KEY (id);\n\n\n--\n-- Name: vehicle_reservations non_overlapping_vehicle_registration; Type: CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.vehicle_reservations\n    ADD CONSTRAINT non_overlapping_vehicle_registration EXCLUDE USING gist (int4range(vehicle_id, vehicle_id, '[]'::text) WITH =, tstzrange(starts_at, ends_at) WITH &&) WHERE ((NOT canceled));\n\n\n--\n-- Name: schema_migrations schema_migrations_pkey; Type: CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.schema_migrations\n    ADD CONSTRAINT schema_migrations_pkey PRIMARY KEY (version);\n\n\n--\n-- Name: trip_positions trip_positions_pkey; Type: CONSTRAINT; Schema: rideshare; Owner: 
-\n--\n\nALTER TABLE ONLY rideshare.trip_positions\n    ADD CONSTRAINT trip_positions_pkey PRIMARY KEY (id);\n\n\n--\n-- Name: trip_requests trip_requests_pkey; Type: CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trip_requests\n    ADD CONSTRAINT trip_requests_pkey PRIMARY KEY (id);\n\n\n--\n-- Name: trips trips_pkey; Type: CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trips\n    ADD CONSTRAINT trips_pkey PRIMARY KEY (id);\n\n\n--\n-- Name: users users_pkey; Type: CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.users\n    ADD CONSTRAINT users_pkey PRIMARY KEY (id);\n\n\n--\n-- Name: vehicle_reservations vehicle_reservations_pkey; Type: CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.vehicle_reservations\n    ADD CONSTRAINT vehicle_reservations_pkey PRIMARY KEY (id);\n\n\n--\n-- Name: vehicles vehicles_pkey; Type: CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.vehicles\n    ADD CONSTRAINT vehicles_pkey PRIMARY KEY (id);\n\n\n--\n-- Name: index_fast_search_results_on_driver_id; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE UNIQUE INDEX index_fast_search_results_on_driver_id ON rideshare.fast_search_results USING btree (driver_id);\n\n\n--\n-- Name: index_locations_on_address; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE UNIQUE INDEX index_locations_on_address ON rideshare.locations USING btree (address);\n\n\n--\n-- Name: index_trip_requests_on_end_location_id; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE INDEX index_trip_requests_on_end_location_id ON rideshare.trip_requests USING btree (end_location_id);\n\n\n--\n-- Name: index_trip_requests_on_rider_id; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE INDEX index_trip_requests_on_rider_id ON rideshare.trip_requests USING btree (rider_id);\n\n\n--\n-- Name: index_trip_requests_on_start_location_id; Type: INDEX; Schema: rideshare; Owner: 
-\n--\n\nCREATE INDEX index_trip_requests_on_start_location_id ON rideshare.trip_requests USING btree (start_location_id);\n\n\n--\n-- Name: index_trips_on_driver_id; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE INDEX index_trips_on_driver_id ON rideshare.trips USING btree (driver_id);\n\n\n--\n-- Name: index_trips_on_rating; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE INDEX index_trips_on_rating ON rideshare.trips USING btree (rating);\n\n\n--\n-- Name: index_trips_on_trip_request_id; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE INDEX index_trips_on_trip_request_id ON rideshare.trips USING btree (trip_request_id);\n\n\n--\n-- Name: index_users_on_email; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE UNIQUE INDEX index_users_on_email ON rideshare.users USING btree (email);\n\n\n--\n-- Name: index_users_on_last_name; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE INDEX index_users_on_last_name ON rideshare.users USING btree (last_name);\n\n\n--\n-- Name: index_vehicle_reservations_on_vehicle_id; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE INDEX index_vehicle_reservations_on_vehicle_id ON rideshare.vehicle_reservations USING btree (vehicle_id);\n\n\n--\n-- Name: index_vehicles_on_name; Type: INDEX; Schema: rideshare; Owner: -\n--\n\nCREATE UNIQUE INDEX index_vehicles_on_name ON rideshare.vehicles USING btree (name);\n\n\n--\n-- Name: trip_requests fk_rails_3fdebbfaca; Type: FK CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trip_requests\n    ADD CONSTRAINT fk_rails_3fdebbfaca FOREIGN KEY (end_location_id) REFERENCES rideshare.locations(id);\n\n\n--\n-- Name: vehicle_reservations fk_rails_59996232fc; Type: FK CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.vehicle_reservations\n    ADD CONSTRAINT fk_rails_59996232fc FOREIGN KEY (trip_request_id) REFERENCES rideshare.trip_requests(id);\n\n\n--\n-- Name: trips fk_rails_6d92acb430; Type: FK CONSTRAINT; Schema: 
rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trips\n    ADD CONSTRAINT fk_rails_6d92acb430 FOREIGN KEY (trip_request_id) REFERENCES rideshare.trip_requests(id);\n\n\n--\n-- Name: vehicle_reservations fk_rails_7edc8e666a; Type: FK CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.vehicle_reservations\n    ADD CONSTRAINT fk_rails_7edc8e666a FOREIGN KEY (vehicle_id) REFERENCES rideshare.vehicles(id);\n\n\n--\n-- Name: trip_positions fk_rails_9688ac8706; Type: FK CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trip_positions\n    ADD CONSTRAINT fk_rails_9688ac8706 FOREIGN KEY (trip_id) REFERENCES rideshare.trips(id);\n\n\n--\n-- Name: trip_requests fk_rails_c17a139554; Type: FK CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trip_requests\n    ADD CONSTRAINT fk_rails_c17a139554 FOREIGN KEY (rider_id) REFERENCES rideshare.users(id);\n\n\n--\n-- Name: trips fk_rails_e7560abc33; Type: FK CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trips\n    ADD CONSTRAINT fk_rails_e7560abc33 FOREIGN KEY (driver_id) REFERENCES rideshare.users(id);\n\n\n--\n-- Name: trip_requests fk_rails_fa2679b626; Type: FK CONSTRAINT; Schema: rideshare; Owner: -\n--\n\nALTER TABLE ONLY rideshare.trip_requests\n    ADD CONSTRAINT fk_rails_fa2679b626 FOREIGN KEY (start_location_id) REFERENCES rideshare.locations(id);\n\n\n--\n-- PostgreSQL database dump complete\n--\n\nSET search_path TO rideshare;\n\nINSERT INTO \"schema_migrations\" (version) 
VALUES\n('20231220043547'),\n('20231218215836'),\n('20231213045957'),\n('20231208050516'),\n('20231018153712'),\n('20231018153441'),\n('20230925150831'),\n('20230925150207'),\n('20230726020548'),\n('20230716174139'),\n('20230714013609'),\n('20230713150710'),\n('20230713150550'),\n('20230711015123'),\n('20230625151410'),\n('20230620030038'),\n('20230619213546'),\n('20230314210022'),\n('20230314204931'),\n('20230126025656'),\n('20230125003946'),\n('20230125003531'),\n('20221230203627'),\n('20221230200725'),\n('20221223161403'),\n('20221221052616'),\n('20221220201836'),\n('20221219164626'),\n('20221111213918'),\n('20221111212740'),\n('20221110020532'),\n('20221108175619'),\n('20221108175321'),\n('20221108172933'),\n('20221007184855'),\n('20220916171314'),\n('20220814175213'),\n('20220801140121'),\n('20220729020430'),\n('20220729014635'),\n('20220716020213'),\n('20220711015524'),\n('20220711015454'),\n('20220711010541'),\n('20200603150442'),\n('20191203213103'),\n('20191203212055'),\n('20191121175429'),\n('20191112165848'),\n('20191111151637'),\n('20191108221519'),\n('20191107212726');\n\n"
  },
  {
    "path": "db/teardown.sh",
    "content": "export DB_URL=\"postgres://postgres:@localhost:5432/postgres\"\n\npsql $DB_URL -c \"DROP DATABASE IF EXISTS rideshare_development\"\npsql $DB_URL -c \"DROP DATABASE IF EXISTS rideshare_test\"\n\n# https://stackoverflow.com/a/54078230/126688\npsql $DB_URL -a -f db/teardown_remove_default_privileges.sql\n\npsql $DB_URL -c \"DROP ROLE IF EXISTS owner\"\npsql $DB_URL -c \"DROP ROLE IF EXISTS readwrite_users\"\npsql $DB_URL -c \"DROP ROLE IF EXISTS readonly_users\"\npsql $DB_URL -c \"DROP ROLE IF EXISTS app\"\npsql $DB_URL -c \"DROP ROLE IF EXISTS app_readonly\"\n"
  },
  {
    "path": "db/teardown_remove_default_privileges.sql",
    "content": "-- Reverse all the DEFAULT PRIVILEGES ....or\n-- https://stackoverflow.com/a/54078230/126688\n\n-- Simpler solution:\n-- https://dba.stackexchange.com/a/155356/272968\n\nREASSIGN OWNED BY owner TO postgres;\nDROP OWNED BY owner;\n"
  },
  {
    "path": "db/views/fast_search_results_v01.sql",
    "content": "-- list all drivers, to search within\nSELECT\nCONCAT(d.first_name, ' ', d.last_name) AS driver_name,\nAVG(t.rating) AS avg_rating,\nCOUNT(t.rating) AS trip_count\nFROM trips t\nJOIN users d ON t.driver_id = d.id\nGROUP BY t.driver_id, d.first_name, d.last_name\nORDER BY COUNT(t.rating) DESC;\n"
  },
  {
    "path": "db/views/fast_search_results_v02.sql",
    "content": "-- list all drivers, to search within\nSELECT\nt.driver_id,\nCONCAT(d.first_name, ' ', d.last_name) AS driver_name,\nAVG(t.rating) AS avg_rating,\nCOUNT(t.rating) AS trip_count\nFROM trips t\nJOIN users d ON t.driver_id = d.id\nGROUP BY t.driver_id, d.first_name, d.last_name\nORDER BY COUNT(t.rating) DESC;\n"
  },
  {
    "path": "db/views/search_results_v01.sql",
    "content": "-- list all drivers, to search within\nSELECT\nCONCAT(d.first_name, ' ', d.last_name) AS driver_name,\nAVG(t.rating) AS avg_rating,\nCOUNT(t.rating) AS trip_count\nFROM trips t\nJOIN users d ON t.driver_id = d.id\nGROUP BY t.driver_id, d.first_name, d.last_name\nORDER BY COUNT(t.rating) DESC;\n"
  },
  {
    "path": "docker/README.md",
    "content": "# Docker\n\nDocker is used to run PostgreSQL instances within a container, using a Docker network, and with different host names.\n\nFor example \"db01\" is the primary host, and \"db02\" is a secondary host. These commands are intended in general to run as shell scripts, from this directory.\n\n```sh\nsh docker/run_db_db01_primary.sh\n\nsh docker/run_db_db02_replica.sh\n\ndocker ps\n```\n\n## Disable Docker Messages\n```sh\nexport DOCKER_CLI_HINTS=false\n```\n\n## Restarting container\n```sh\npg_ctl: cannot be run as root\n```\ndocker restart <container>\n\n## Replacing `pg_hba.conf` content\n```sh\ndocker cp db01:/var/lib/postgresql/data/pg_hba.conf .\ncp pg_hba.conf pg_hba.backup.conf\n\nvim pg_hba.conf\nhost    replication     replication_user 172.19.0.3/32               md5\n\ndocker cp pg_hba.conf db01:/var/lib/postgresql/data/.\n\ndocker restart db01\n```\n\n## Standby process\n1. Create replication slot\n1. Create `pg_hba.conf` entries for replication_user. Use the IP address from db02 and db03 /32 version (IPv4)\n1. Make sure there is a `standby.signal` file\n1. Restart it (should restart in recovery mode)\n\n\n## Docker permissions\n- Run `chown` and `chmod` on the `.pgpass` file\n- Use the `postgres` user\n\n```sh\ndocker exec --user root -it db02 chown postgres:root /var/lib/postgresql/.pgpass\ndocker exec --user root -it db02 chmod 0600 /var/lib/postgresql/.pgpass\n```\n"
  },
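  {
    "path": "docker/check_replication_example.sh",
    "content": "#!/bin/bash\n#\n# Hypothetical example, not referenced by the other scripts: a sketch of\n# how to confirm the standby process described in the README worked,\n# using built-in PostgreSQL monitoring views and functions.\n#\n# On db01 (primary): expect one row per connected standby\ndocker exec --user postgres -it db01 \\\n  psql -c \"SELECT client_addr, state, sync_state FROM pg_stat_replication;\"\n\n# On db02 (replica): expect \"t\" (true) while it runs in recovery mode\ndocker exec --user postgres -it db02 \\\n  psql -c \"SELECT pg_is_in_recovery();\"\n"
  },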
  {
    "path": "docker/db01_create_publication.sh",
    "content": "#!/bin/bash\n#\n# Purpose: Create replication slot on primary db01\n\nPGPASSWORD=postgres docker exec -it db01 \\\n  psql -U postgres -c \\\n\"CREATE PUBLICATION my_pub_inserts_only FOR ALL TABLES\nWITH (PUBLISH = 'INSERT');\"\n"
  },
  {
    "path": "docker/db01_create_replication_slot.sh",
    "content": "#!/bin/bash\n#\n# Purpose: Create replication slot on primary db01\n#\nPGPASSWORD=postgres docker exec -it db01 \\\n  psql -U postgres -c \\\n  \"SELECT PG_CREATE_PHYSICAL_REPLICATION_SLOT('rideshare_slot');\"\n\n# To remove the slot:\n# PGPASSWORD=postgres docker exec -it db01 \\\n#   psql -U postgres -c \\\n#   \"SELECT PG_DROP_REPLICATION_SLOT('rideshare_slot');\"\n"
  },
  {
    "path": "docker/db01_create_replication_user.sh",
    "content": "#!/bin/bash\n#\n# Purpose:\n# - Generate password, and place in .pgpass\n# - Create replication_user using generated password, on db01\n# - Copy .pgpass to db02\n#\n# The .pgpass password is used to authenticate replication_user, \n# when they run pg_basebackup\n#\n# Precondition: Make sure db01 and db02 are running\n#\nrunning_containers=$(docker ps --format \"{{.Names}}\")\nif echo \"$running_containers\" | grep -q \"db01\"; then\n  echo \"db01 is running...continuing\"\nelse\n  echo \"db01 is not running\"\n  echo \"Exiting.\"\n  exit 1\nfi\n\nif echo \"$running_containers\" | grep -q \"db02\"; then\n  echo \"db02 is running...continuing\"\nelse\n  echo \"db02 is not running\"\n  echo \"Exiting.\"\n  exit 1\nfi\n\n# Password for replication_user\nexport REP_USER_PASSWORD=$(openssl rand -hex 12)\necho \"Create REP_USER_PASSWORD for replication_user\"\necho $REP_USER_PASSWORD\n\n# \"rm replication_user.sql\" for a clean starting point\n# CREATE USER statement as SQL file\n# Set password to DB_PASSWORD value\nrm -f replication_user.sql\necho \"CREATE USER replication_user WITH ENCRYPTED PASSWORD '$REP_USER_PASSWORD'\nREPLICATION LOGIN;\nGRANT SELECT ON ALL TABLES IN SCHEMA public\nTO replication_user;\nALTER DEFAULT PRIVILEGES IN SCHEMA public\nGRANT SELECT ON TABLES TO replication_user;\" >> replication_user.sql\n\nrm -f .pgpass\necho \"*:*:*:replication_user:$REP_USER_PASSWORD\" >> .pgpass\n\n# Copy replication_user.sql to db01\ndocker cp replication_user.sql db01:.\n\necho \"Copy .pgpass, chown, chmod it for db02\"\n# Copy .pgpass to db02 postgres home dir\ndocker cp .pgpass db02:/var/lib/postgresql/.\ndocker exec --user root -it db02 chown postgres:root /var/lib/postgresql/.pgpass\ndocker exec --user root -it db02 chmod 0600 /var/lib/postgresql/.pgpass\n\n# Create replication_user on db01\ndocker exec -it db01 \\\n  psql -U postgres \\\n  -f /replication_user.sql\n"
  },
  {
    "path": "docker/db03_create_subscription.sh",
    "content": "# Preconditions:\n# - db01: wal_level = logical\n#   - docker exec --user postgres -it db01 psql -c \"SHOW wal_level\"\n# - db03 is running\n# - db01 permits access from IP address of db03:\n#   - See: ./db03_create_subscription_prepare.sh\n# - db01 has publication \"my_pub_inserts_only\"\n\n# Connect to db03 as \"postgres\"\ndocker exec --user postgres -it db03 /bin/bash\n\n# To remove the subscription from /bin/bash db03 if needed:\n# This also removes \"my_sub\" replication slot on db01\n# psql -U postgres -c \"DROP SUBSCRIPTION my_sub\"\n\n# Generate snippet and send to psql\necho \"CREATE SUBSCRIPTION my_sub\nCONNECTION 'dbname=postgres host=db01 user=replication_user'\nPUBLICATION my_pub_inserts_only;\" | psql\n\n# View subscriptions\npsql -c \"SELECT * FROM pg_subscription;\"\n"
  },
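  {
    "path": "docker/db03_check_subscription_example.sh",
    "content": "#!/bin/bash\n#\n# Hypothetical example, not referenced by the other scripts: a sketch of\n# how to confirm the logical replication subscription on db03 is working.\n#\n# On db03: one row per running subscription worker\ndocker exec --user postgres -it db03 \\\n  psql -c \"SELECT * FROM pg_stat_subscription;\"\n\n# On db01: the publication, and the replication slot created for my_sub\ndocker exec --user postgres -it db01 \\\n  psql -c \"SELECT * FROM pg_publication;\"\ndocker exec --user postgres -it db01 \\\n  psql -c \"SELECT slot_name, slot_type, active FROM pg_replication_slots;\"\n"
  },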
  {
    "path": "docker/db03_create_subscription_prepare.sh",
    "content": "#!/bin/bash\n#\n# Purpose: start db03\n# Copy .pgpass to it\n#\n# Precondition: .pgpass file exists/made earlier\n#\nsh run_db_db03_replica.sh\n\necho \"Copy .pgpass, chown, chmod it for db03\"\n# Copy .pgpass to db03 postgres home dir\ndocker cp .pgpass db03:/var/lib/postgresql/.\ndocker exec --user root -it db03 chown postgres:root /var/lib/postgresql/.pgpass\ndocker exec --user root -it db03 chmod 0600 /var/lib/postgresql/.pgpass\n\necho \"Getting IP address for db03...\"\nip2=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db03)\necho \"$ip2\"\n\necho \"Add this entry to pg_hba.conf\"\necho \"host    replication     replication_user $ip2/32               md5\"\n\necho\necho \"When done, reload:\"\necho 'docker exec --user postgres -it db01 \\\n    psql -c \"SELECT pg_reload_conf();\"'\n"
  },
  {
    "path": "docker/dump_rideshare_local_to_db01.sh",
    "content": "# Copy Rideshare db/setup.sh to db01\n# including all the supporting SQL files\ndocker exec -it db01 mkdir db\ndocker cp db db01:.\n\n# Run \"db/setup.sh\" on db01, which should provision an empty\n# rideshare_development database on the db01 instance\n# On db01, the file is at \"/setup.sh\" in the root dir\n# Preconditions:\n# - env var DB_URL is set\n# - env var RIDESHARE_DB_PASSWORD is set\n#\n# These should be set locally *first*\n# so that they can be supplied to the container\n#\ndocker exec --env DB_URL=\"$DB_URL\" \\\n  --env RIDESHARE_DB_PASSWORD=\"$RIDESHARE_DB_PASSWORD\" \\\n  db01 sh -c \"/setup.sh\"\n\n# Once created, we won't migrate there, since we'll be copying\n# tables using pg_dump\n\n# Connect to db01 and confirm:\n# - schema \"rideshare\" exists (\\dn)\n# - database \"rideshare_development\" exists\n# - database is empty (has no tables)\ndocker exec --user postgres -it db01 \\\n  psql -d rideshare_development\n\n# Dump the local rideshare_development database into a file\npg_dump -U postgres \\\n  -h localhost rideshare_development > rideshare_dump.sql\n\n# Check the size\ndu -h rideshare_dump.sql\n\n# Restore rideshare_development from the file\n# to db01\n# Warning: this might take a few moments!\nPGPASSWORD=postgres psql -U postgres \\\n  -h localhost \\\n  -p 54321 \\\n  -d rideshare_development < rideshare_dump.sql\n\n# Connect again and confirm the tables and row data\n# have been loaded\n# NOTE: connect as \"owner\"\n#\ndocker exec --user postgres -it db01 \\\n  psql -U owner -d rideshare_development\n\n# SELECT COUNT(*) FROM users; -- 20210\n"
  },
  {
    "path": "docker/pg_hba_reset.sh",
    "content": "# Run from the \"docker\" directory in Rideshare\n#\n# Remove any existing file if exists\nrm -f pg_hba.conf\n\necho \"Getting IP address for db02...\"\nip_address=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db02)\necho \"$ip_address\"\n\nentry=\"host    replication     replication_user $ip_address/32               md5\"\n\necho \"Generating pg_hba.conf file\"\ncat <<EOF >> pg_hba.conf\n# TYPE  DATABASE        USER            ADDRESS                 METHOD\n# Replication\n$(echo \"$entry\")\nlocal   all             all                                     trust\n# IPv4 local connections:\nhost    all             all             127.0.0.1/32            trust\n# IPv6 local connections:\nhost    all             all             ::1/128                 trust\nhost all all all scram-sha-256\nEOF\ncat pg_hba.conf\necho\n\necho \"Copy pg_hba.conf to db01\"\ndocker cp pg_hba.conf db01:/var/lib/postgresql/data/.\n"
  },
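  {
    "path": "docker/pg_hba_verify_example.sh",
    "content": "#!/bin/bash\n#\n# Hypothetical example, not referenced by the other scripts: after copying\n# a new pg_hba.conf to db01 with pg_hba_reset.sh, a sketch of how to\n# reload and verify it without a full container restart.\n#\n# Reload configuration files on db01\ndocker exec --user postgres -it db01 \\\n  psql -c \"SELECT pg_reload_conf();\"\n\n# pg_hba_file_rules parses the file on disk; the \"error\" column is\n# non-null for any line PostgreSQL could not parse\ndocker exec --user postgres -it db01 \\\n  psql -c \"SELECT line_number, type, database, user_name, address, error FROM pg_hba_file_rules;\"\n"
  },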
  {
    "path": "docker/reset_docker_instances.sh",
    "content": "#!/bin/bash\n\n# We've assumed this was copied locally from the db01 container\nconf_file=\"postgresql.conf\"\nif [ -e \"$conf_file\" ]; then\n    echo \"File '$conf_file' exists...continuing\"\nelse\n    echo \"File '$conf_file' does not exist. Run:\"\n    echo\n    echo \"docker cp db01:/var/lib/postgresql/data/$conf_file .\"\n    echo\n    echo \"Then try again.\"\n    exit 1\nfi\n\ntrap 'echo \"An error occurred with command: $BASH_COMMAND\";' ERR\n\ndocker stop db01 && docker rm db01\ndocker stop db02 && docker rm db02\ndocker stop db03 && docker rm db03\necho \"Stopped containers, waiting a moment\"\nsleep 1\nsh run_db_db01_primary.sh\nsh run_db_db02_replica.sh\necho \"Started containers\"\ndocker ps\nsleep 1\nsh pg_hba_reset.sh\necho \"Restart db01 received new file\"\ndocker restart db01\nsleep 2\n\necho \"Create replication slot on db01\"\nsh db01_create_replication_slot.sh\n\necho \"Configure replication_user\"\nsh db01_create_replication_user.sh\n\necho \"Copy existing postgresql.conf to db01\"\ndocker cp postgresql.conf db01:/var/lib/postgresql/data/.\n\necho \"restart db01\"\ndocker restart db01\n"
  },
  {
    "path": "docker/run_db_db01_primary.sh",
    "content": "#!/bin/bash\n#\n# Run from Rideshare dir\n# Use bind dir: ./postgres-docker/db01\n# network: \"rideshare-net\"\ndocker run \\\n  --name db01 \\\n  --volume ${PWD}/postgres-docker/db01:/var/lib/postgresql \\\n  --publish 54321:5432 \\\n  --env POSTGRES_USER=postgres \\\n  --env POSTGRES_PASSWORD=postgres \\\n  --net=rideshare-net \\\n  --detach postgres:16.1\n"
  },
  {
    "path": "docker/run_db_db02_replica.sh",
    "content": "#!/bin/bash\n#\n# Run from Rideshare dir\n# Use bind dir: ./postgres-docker/db02\n# network: \"rideshare-net\"\ndocker run \\\n  --name db02 \\\n  --volume ${PWD}/postgres-docker/db02:/var/lib/postgresql/data \\\n  --publish 54322:5432 \\\n  --env POSTGRES_USER=postgres \\\n  --env POSTGRES_PASSWORD=postgres \\\n  --net=rideshare-net \\\n  --detach postgres:16.1\n"
  },
  {
    "path": "docker/run_db_db03_replica.sh",
    "content": "#!/bin/bash\n#\n# db03 uses Logical Replication\n#\ndocker run \\\n  --name db03 \\\n  --volume ${PWD}/postgres-docker/db03:/var/lib/postgresql/data \\\n  --publish 54323:5432 \\\n  --env POSTGRES_USER=postgres \\\n  --env POSTGRES_PASSWORD=postgres \\\n  --net=rideshare-net \\\n  --detach postgres:16.1\n"
  },
  {
    "path": "docker/run_pg_basebackup.sh",
    "content": "# Connect to db02 as \"postgres\"\n# replication_user - authenticates from db02 host\ndocker exec --user postgres -it db02 /bin/bash\n\n# ############# WARNING ############\n#\n# Copy the \"rm\" and \"pg_basebackup\" commands\n# to clipboard at once, so they can be pasted together\n#\n# Dependencies:\n# - \"rideshare_slot\" exists\n# - replication_user exists, with password supplied from ~/.pgpass\n# - db01 and db02 are running\n#\n# ##################################\nrm -rf /var/lib/postgresql/data/* && \\\n\npg_basebackup --host db01 \\\n  --username replication_user \\\n  --pgdata /var/lib/postgresql/data \\\n  --verbose \\\n  --progress \\\n  --wal-method stream \\\n  --write-recovery-conf \\\n  --slot=rideshare_slot\n\n# Container \"stops\" from removing the data directory\n# NOTE: Start it again, and it should use the same\n# replaced data directory\ndocker start db02\n\n# Review live logs\ndocker logs -f db02\n"
  },
  {
    "path": "docker/teardown_docker.sh",
    "content": "#!/bin/bash\n#\n# Drop slots\n# - my_subscription\n# - rideshare_slot\nPGPASSWORD=postgres docker exec -it db01 \\\n  psql -U postgres -c \\\n  \"SELECT pg_drop_replication_slot('my_sub');\"\nPGPASSWORD=postgres docker exec -it db01 \\\n  psql -U postgres -c \\\n  \"SELECT pg_drop_replication_slot('rideshare_slot');\"\n\nPGPASSWORD=postgres docker exec -it db01 \\\n  psql -U postgres -c \\\n  \"REASSIGN OWNED BY replication_user TO postgres;\"\nPGPASSWORD=postgres docker exec -it db01 \\\n  psql -U postgres -c \\\n  \"DROP OWNED BY replication_user;\"\n\ndocker exec -it db01 \\\n  psql -U postgres \\\n  -c \"DROP USER IF EXISTS replication_user\"\n\necho \"Stop everything if needed\"\ndocker stop db01 && docker rm db01\ndocker stop db02 && docker rm db02\ndocker stop db03 && docker rm db03\n\necho \"Removing local postgres-docker directory\"\nrm -rf postgres-docker\n"
  },
  {
    "path": "docs/design_document.md",
    "content": "## Welcome\n\nThis document has entries that were chunks of work on this app. The entries are ordered reverse chronologically, so the first entry is on the bottom.\n\nStart from the bottom and work up to navigate the design decisions made along the way.\n\n\n## 2019-11-21\n\nAdd `strong_migrations` gem, which helps prevent migrations that introduce downtime. I think this is a great project and wanted to rep it here.\nAdd `blazer` gem for a demonstration of data reporting.\n\n\n## 2019-11-15\n\nAdding Trip Search on at least 2 dimensions\n\nUniqueness on Trip Requests, they should not have the same start and end location.\n\n\n## 2019-11-12\n\nTrip model. Add rating. Ensure Trip is complete before it can be rated. Use a `completed_at` timestamp as the initial way to record the trip status, either complete or not. We may wish to add a state machine later, and states like `pending`->`in_progress`->`completed` etc.\n\nIdea: User communication (also becomes a uniqueness dimension): add email address?\n\nIndexing Dos and Dont's <https://www.itprotoday.com/sql-server/indexing-dos-and-don-ts>\n\nUse `db/schema.rb` and not `db/structure.sql`\n\nTrip has a `trip_request_id` FK, we could ensure it exists before create\n\n:bulb: Best practice: using `delegate`. Since `TripRequest` already has a Rider, and Trip references TripRequest, we can use `delegate` to access the Rider for a Trip <https://stackoverflow.com/a/11457714/126688>\n\nIdea: DB check constraint on rating, `completed_at` IS NOT NULL, pros and cons <https://naildrivin5.com/blog/2015/11/15/rails-validations-vs-postgres-check-constraints.html>\n\nIdea: Consider using an [Architectural Design Record (ADR)](https://adr.github.io/) style for the Iterations log?\n\n## 2019-11-11\n\n:bulb: Patterns: Introduce Geocoder gem. In order to automatically `geocode` on create, we can use the `after_validation` hook.\n\n:bulb: Patterns: has_one/belongs_to it's about where the FK is. 
For trip_requests, the foreign key to a rider is on the table, so a TripRequest `belongs_to` a Rider.\n\n:bulb: Trade-off: with this Location data model, we currently couldn't take advantage of common locations being shared among trip requests.\n\nPatterns: ActionController::API, lightweight version of `ActionController::Base` <https://api.rubyonrails.org/classes/ActionController/API.html>. We're creating an `ApiController` that extends `ActionController::API` as we're intending this to be an API app.\n\n:bulb: Patterns: Strong params for Trip Request creation: model attribute params are forbidden to be used for mass assignment until they have been permitted\nPatterns: Fixtures. Use fixtures for test objects that will be re-used, and not change often (like riders)\nPatterns: use namespace for `/api` routes\n\n:bulb: Trade-off: move current_rider to API controller, requires unnesting the rider_id inside the trip request\n\n:bulb: Best practice: render 201 when trip request was created, or unprocessable entity (422) when it failed\n\nBest practice: use the geocoder initializer `rails generate geocoder:config`, and customize the testing behavior so lookups are not happening in test mode.\n\nNOTE: uniqueness among trip request records. The same rider may travel the same trip, so we might want another dimension for uniqueness.\n\nNOTE: We could nest requests and ratings under trips, e.g. /api/trips, /api/trips/requests, /api/trips/ratings\n\n## 2019-11-08\n\nKeeping the `Location` simple for now: a trip would have a start and end location,\na location is a lat/lng pair. A `TripRequest` would be a geocoded rider position based\non their current location, and a geocoded address of their destination (will need a geocoder)\n\nSkip TripRating for now and put a `rating:integer` on the Trip. :bulb: Trade-off: this is simpler than a dedicated model. 
We can still do these aggregate calculations with a simple integer field:\n\n* Average trip rating rider has provided\n* Average trip rating for a driver\n* Avoiding the mutual rating feature (`rider->driver, driver->rider`) for now\n\n\n:bulb: Pattern: Rails STI: use a `type` column; by creating the object using the subclass type, the type information will be stored as a string in the table.\nThe same works when querying: by asking for a particular record, the type information is surfaced as the type of the class.\n\n\n## 2019-11-07 Initial thoughts\n\nThe purpose of this app is to model car-based ride sharing, like Uber or Lyft. This is my take on some objects and their interactions that model this domain. The main model is a Trip and then there are Drivers that provide the trip, and Riders that take the trip.\n\n\nSome Active Record model ideas and notes below. Single-table inheritance can be used for both the Driver and Rider in a `users` table. :bulb: Trade-off: this saves a bit of initial work having separate models and, potentially, duplication between two similar models.\n\n```\nDriver(name:string) (use STI?)\nRider(name:string) (use STI?)\nLocation(driver_id:integer,rider_id:integer,latitude:decimal,longitude:decimal)\nTripRequest(rider_id:integer,start:location_id,end:location_id)\nTrip(trip_request_id:integer,driver_id:integer,rider_id:integer,rating:integer)\n~~TripRating(trip_id:integer,rating:integer)~~\n```\n\nInteger IDs (PK and FKs). :bulb: Trade-off: integer primary keys can be exhausted at large scale, and auto increment IDs can be guessable, which has security concerns. 
UUIDs or GUIDs are an alternative, but reduce usability.\n\n\nUse cases:\n\n* A rider makes a trip request, including a start location (geolocate current) and end location (enter destination)\n* A driver accepts the trip request\n* A trip involving a driver and rider begins, the location is tracked (includes driver and rider)\n  * A trip involving a driver and rider completes (driver and rider)\n* A rider can rate a trip\n\n"
  },
  {
    "path": "docs/dev_tips.md",
    "content": "## Postgres versions\n\nI had Homebrew Postgres set up in my path for `pg_dump` but wanted to use the version with Postgres.app.\n\nUndesired version:\n```\n/opt/homebrew/opt/libpq/bin/pg_dump\n```\n\nDesired version:\n```\n/Applications/Postgres.app/Contents/Versions/18/bin/pg_dump\n```\n\nSince I use fish shell I fixed it by running:\n```sh\nset -U fish_user_paths /Applications/Postgres.app/Contents/Versions/18/bin $fish_user_paths\n```\n\nVerify:\n```sh\npg_dump --version\n```\n"
  },
  {
    "path": "docs/development.md",
    "content": "## Ctags\n\nI use it, and running:\n\n`ctags -R --exclude=.git --exclude=test`\n"
  },
  {
    "path": "docs/development_iterations.md",
    "content": "# Development Iterations\n\nDevelopment was done in small iterations over time, and the work was tracked here.\n\nConsider this a development journal that's a \"build in public\" that may be interesting to others, although it was mostly written for my own needs as journal.\n\n\n## Iteration 27\n\nPartition the `trip_positions` table using `pgslice`.\n\n* Add `pgslice` to Gemfile and install the binstub\n\n```sh\n# add 'pgslice` to Gemfile\nbundle install\nbundle binstubs pgslice\n```\nInvoke it with `rails runner`, e.g. `bin/rails runner \"PgsliceHelper.new.add_partitions\"`\n\nTODO, but deferred\n\n* `insert_all` compatibility\n\n## Iteration 26 (2023)\n\n- Remove webpacker, and most front-end JS (this is an API app)\n- Retire Blazer. It's a great tool, but no longer part of the goal of this app.\n\n```sh\ngem update --system\nbrew upgrade ruby-build\nrbenv install 3.2.0\ngem install bundler\nbundle install\nbundle update\nbin/rails test\n```\n\n## Iteration 25\n\nAdd Full Text Search (FTS). Add `pg_search` to evaluate the features.\n\n- tsearch - Full text search, which is built-in to PostgreSQL\n- trigram - Trigram search, which requires the trigram extension\n- dmetaphone - Double Metaphone search, which requires the fuzzystrmatch extension\n\n## Iteration 24\n\nAdd slow query logging using Active Support Instrumentation without 3rd party gems or PostgreSQL extensions\n\nWhen configured to log at >= 1 second duration, test it with:\n\n```rb\nActiveRecord::Base.connection.execute(\"select pg_sleep(1)\")\n```\n\n## Iteration 23\n\n- Add Trip Position model, and populate it with sample rows\n- Remove some experimental PG extensions from the application DB\n- Perform a conversion from unpartitioned to partitioned trip_positions table using pgslice\n\n`drop extension sslinfo`, `drop extension pg_buffercache` for now, these\nmay return later. 
This cleans up the `db/structure.sql` so that it reflects\nthe extensions in use by the application.\n\n## Iteration 22\n\n- Maintain the data generators\n- Disable Prepared Statements for now\n- Start using Active Record Doctor gem: `bundle exec rake active_record_doctor` for more insights\n\n## Iteration 21\n\nTrip rating database CHECK constraint.\n\n## Iteration 20\n\nCounter cache example for trips that belong to a driver.\n\n## Iteration 19\n\nVehicle Reservation concept (e.g. special car, limo). Has a reservation duration.\n\nWhen vehicle is reserved, cannot be overlapping reservation.\n\nCreate an exclusion constraint. Run a specific test like this:\n\n`rails test test/services/book_reservation_test.rb -n BookReservationTest#test_can_NOT_book_overlapping_reservation`\n\n## Iteration 18\n\nRails Entity Relationship Diagram (ERD)\n\n[Customization](https://voormedia.github.io/rails-erd/customise.html)\n\n```\nbundle exec rake erd \\\n  inheritance=true \\\n  only=\"Driver,Rider,User,Location,TripRequest,Trip,Vehicle,VehicleReservation\" \\\n  attributes=foreign_keys,primary_keys\n```\n\n## Iteration 17\n\nStart a pgbench benchmark basics. Add fx gem to manage DB functions (pl/pgsql). Add data scrub functions.\n\nAdd paranoia gem and create some deleted users for the purposes of different query types.\n\n\n## Iteration 16 (2022)\n\n* Upgrade to Rails 7. Remove some gems.\n\n## Iteration 15\n\n* Add [PgHero](https://github.com/ankane/pghero)\n* Use new CircleCI docker configuration\n\n## Iteration 14\n\nUpgrade Rails 6.0->6.1\n\n## Iteration 13\n\nAdd JSON Web Token support for authenticated API actions. More details TBD.\n\n## Iteration 12\n\nPlan out a public API. A rider's \"my trips\" API. Includes driver details, maybe additional information like my rating, average rating. 
Includes start and end location.\nUse fast_jsonapi and some of the JSON API features, like sparse fieldsets and compound documents.\n\n## Iteration 11\n\nIntroduce ETag HTTP caching to the trips API. `ETag` is content-based HTTP caching built in to Rails. ETags can be strong or weak; weak ETags are used by default in Rails, and are identified with a `W/` on the front, e.g. `W/\"02d4d6729566d6bb56f0aa9e644c8c93\"`.\n\nCollections (an `ActiveRecord::Relation`) are supported, although they will be covered here in the future; for now this uses a `trips#show` API as a demonstration.\n\nSending a curl request and asking for headers only, we can see an ETag as a response header, and a 200 status code.\n\nUsing that ETag value as a request header, for example below, if the content for this trip has not changed, we'll see a `304 Not Modified` response.\n\n```\ncurl -I --header 'If-None-Match: W/\"02d4d6729566d6bb56f0aa9e644c8c93\"' localhost:3000/api/trips/1\n```\n\nWe can open a console and update this trip, e.g. `Trip.find(1).touch`, and then sending the same ETag, we'll see the trip is rendered again, and we get a 200 response as expected, since the content of the trip has changed (the `updated_at` timestamp was updated).\n\nAnother response header that `stale?` introduces (this header doesn't seem to appear with a regular `render`) is `Last-Modified`, e.g. as a header and value an example is `Last-Modified: Thu, 14 May 2020 01:42:08 GMT`.\n\nNow we can create a curl request with the request header `If-Modified-Since` and this timestamp, e.g.\n\n```\ncurl -i --header 'If-Modified-Since: Thu, 14 May 2020 01:42:08 GMT' localhost:3000/api/trips/1\n```\n\nAnd confirm that we receive a `304 Not Modified`. Updating the trip and sending an equivalent request responds with a `200`, which makes sense since the trip has been updated. 
And similarly, if we replace the timestamp value with the new value from the `Last-Modified`, we are back to getting a `304 Not Modified` response.\n\n\n## Iteration 10\n\nUse Circle CI as a CI system. Set it up so that pushes on master kick off a test run. The repo has a status badge indicating whether the tests are passing or not.\n\n## Iteration 9\n\nAdd two great tools, [Strong Migrations](https://github.com/ankane/strong_migrations) and [Blazer](https://github.com/ankane/blazer). Strong Migrations ensures that migrations will be safe to run in production, avoiding known risky operations.\n\nBlazer is a simple platform for doing data analysis and data pulls. We used this extensively at a previous job; it allowed any team member with SQL experience to learn about the data and satisfy their own reporting needs, and it served as a repository of knowledge about common operations-related data and queries.\n\nI created a Driver and Rider dashboard here with some queries to look at Top Rated Drivers, and the most Active Riders.\n\n<img src=\"https://i.imgur.com/JdEGWPr.png\" alt=\"Driver and Rider Blazer dashboard\" />\n\n## Iteration 8\n\nImprove test code coverage and maintain a `1:0.6` code to test ratio.\n\n`rake stats`\n\n```\n Code LOC: 198     Test LOC: 115     Code to Test Ratio: 1:0.6\n```\n\nPut together a [Trip Search Sequence Diagram](https://www.planttext.com/).\n\n```\n@startuml\n\ntitle \"Trip Search Sequence Diagram\"\n\nactor User\nboundary \"TripSearch\"\n\nUser -> TripSearch : Search by start location, driver name, rider name\nTripSearch -> User : Respond with matching Trips\n\n@enduml\n```\n\n<img src=\"https://www.plantuml.com/plantuml/img/JOyz3iCm24PtJe4ofnV8K6Ne2Vfp06AZ12csKqnQvVPrdLRj10BU-qIVZTJMC0EOsCpON5KMl32fcqgvhnmTuqbeL0eD03bBYhVC2aDQeoVTTcP7oiLxXuSZ_eROVON3XZKGv-J89CKMlSgZ0942jwZYFptyuKLMfHsUEIyfUdoAJHZ8t2Hnh4aPeEVeooSl\" alt=\"Trip search\" />\n\n\n## Iteration 7\n\nDockerize the application. 
<https://docs.docker.com/compose/rails/>\n\n* Change the `database.yml` and set the `host: db`\n* Install `yarn`\n\n### Docker Commands\n\n* `docker-compose build`\n* `docker-compose up`\n* `docker-compose run web bundle exec rake db:create`\n* `docker-compose run web bundle exec rake db:migrate`\n* `docker-compose run web bundle exec rake data_generators:trips`\n\nNow query for some data:\n\n`curl http://localhost:3000/api/trips?start_location=New%20York&driver_name=Kasie`\n\n\n## Iteration 6\n\nAdd integration and model tests for trip search. Add trip search by multiple dimensions (Driver name, Rider name, Location).\n\n## Iteration 5\n\nGenerate sample Driver, Trip, Rider, and Rating data (`rake data_generators:trips`). Add basic Driver dashboard. Show driver stats.\n\n<img src=\"https://i.ibb.co/KcgZTBM/driver-dashboard.png\" alt=\"Driver dashboard\"/>\n\n## Iteration 4\n\nUML Sequence Diagram of Rider, Driver, Trip Request, Trip, and Rating messages\n\n```\n@startuml\n\ntitle \"Rider, Driver, Trip Sequence Diagram\"\n\nactor Rider\nboundary \"TripRequest\"\nactor Driver\nentity Trip\n\nRider -> Rider : Enters Start and End Location\nRider -> TripRequest : Requests Trip\nDriver -> TripRequest : Accepts Trip Request\nTripRequest -> Trip : Trip Starts\nTrip -> Trip : Trip Ends\nRider -> Trip : Rider Rates Trip\n\n@enduml\n```\n\n<img src=\"https://www.plantuml.com/plantuml/img/PP0v3i8m44NxESKeDLo00WKfT5G95nZi4RAKsC6U8ENsU4C4gBoz__tiDWXvMQOHG8oCZ4rlDFiTTjuyqtZrPiQ17mjRnTWPkdkQ6W1IuZnc66vkiPhyYasY-mG7QIfIYe1jx5zp7K2EuVvOydZ0inNs0OSaWsHrtD1uSOh4EFl1D_KnL6UXb9Px_gcJKZnNw1s1BL8J4IrlJGuX4xz7KIfyooIBlEv9k8f0orR77tq1\" alt=\"Trips Sequence diagram\">\n\n## Iteration 3\n\n* Trip model, created when a Driver accepts a Trip Request\n* Ratings: Completed Trips can be rated\n\n\n## Iteration 2\n\n* Location (Geo coordinates) and Trip Request models\n* API base controller\n* Trip Requests `index` and `create` API endpoints\n\n\n## Iteration 1\n\n* Started with a [Design 
Document](/docs/design_document.md)\n  * Wrote out use cases of Riders, Drivers etc.\n  * Planning models, database tables, constraints, validations\n    * Using single-table inheritance for Driver and Rider instances in a Users table\n"
  },
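  {
    "path": "docs/examples/exclusion_constraint_example.sh",
    "content": "#!/bin/bash\n#\n# Hypothetical sketch, not the app's actual migration: the kind of\n# exclusion constraint Iteration 19 describes, preventing overlapping\n# reservations for the same vehicle. The column names starts_at and\n# ends_at are assumptions for illustration.\npsql $DATABASE_URL <<'SQL'\n-- btree_gist lets the plain equality comparison on vehicle_id\n-- participate in a GiST exclusion constraint\nCREATE EXTENSION IF NOT EXISTS btree_gist;\n\nALTER TABLE vehicle_reservations\n  ADD CONSTRAINT non_overlapping_vehicle_reservations\n  EXCLUDE USING GIST (\n    vehicle_id WITH =,\n    tstzrange(starts_at, ends_at) WITH &&\n  );\nSQL\n"
  },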
  {
    "path": "docs/project_documentation.md",
    "content": "## My Rails Best Practices and Patterns\n\nDemonstrations of each of these items can be found in the app\n\n* Data Integrity (in the DB and application)\n  * Enforce Null Constraints\n    * Foreign key constraints for referential integrity\n    * Unique constraints\n    * Exclusion\n* Code Quality\n  * Rails best practices gem (`bin/rails_best_practices .`)\n  * Strong Migrations\n  * Use `delegate` in models\n  * Strong Params\n* Performance\n  * DB indexes\n    * Primary, uniqueness, indexed foreign key columns\n* Named Scopes\n* Search functionality\n* Automatic Geocoding\n  * Use callbacks\n  * Disable geocoding in the test environment\n* Testing\n  * Fixtures and factories\n  * Minimum Code to Test Ratio: 1:0.6 (use `bin/rails stats`)\n  * Fake data generators for local development (`faker` gem, rake task), SQL data loads\n* API Application\n  * We only need an API, use `ActionController::API` for lighter weight API code\n  * Use `/api` namespace\n  * JSON:API for API standardization\n    * Sparse Fieldsets\n    * Compound Documents\n  * Status codes\n  * `201` on created\n  * `422` on error\n  * HTTP Caching (ETag, Last Modified, static content)\n* Use [Single table inheritance](https://api.rubyonrails.org/v7.1.4/classes/ActiveRecord/Base.html#class-ActiveRecord::Base-label-Single+table+inheritance) when appropriate\n  * Link: [DB migration commit](https://github.com/andyatkinson/rideshare/commit/39232da339c2c04966e49e3e4ff03d88c2e66842#diff-7d736cc988a61ff29b4b9b2466b7a6ab)\n"
  },
  {
    "path": "docs/search.md",
    "content": "## Search\n\n- Using `pg_search` gem, adds AR scopes\n- `tsearch` is built in, PostgreSQL Full Text Search\n- Creates a `tsvector` from document text\n- Search it using a `tsquery`\n- Rank fields using `ts_rank()`\n- Store tsvectors using a Generated column\n- Index tsvectors using a GIN index\n- Add `users.searchable_full_name` `GENERATED ALWAYS`\n- For unaccent, consider an expression based index\n"
  },
  {
    "path": "docs/workshop/0_introduction.md",
    "content": "# Introduction\n\n## Prerequisites Checklist\nYou have Rideshare running:\n- `rideshare_development` database is reachable\n- `bin/rails console` works\n- DB creation scripts ran\n- Migrations ran (`bin/rails db:migrate`)\n\nIf any of these aren't completed, go back to the main [Workshop README](/docs/workshop/README.md)\n\n## Setup\n- Run shell scripts from Rideshare root directory\n- Learn to add psql to your `bin/rails console` command-line tools\n- Create indexes without Active Record\n\n## Performance\n- Individual query optimization (micro)\n- Macro query optimization, reduce system load\n\n# Micro Optimization\n- Benefit: Lessen load on server\n- Query planning basics\n- Index design basics\n- Index design more advanced\n\n# Macro Optimization\n- Benefit: Lessen load, distribute load\n- Find worst performing queries\n- Move read only queries to a read replica\n\n## What's Next?\nVisit [1 - Psql Basics](/docs/workshop/1_psql_basics.md) to continue.\n"
  },
  {
    "path": "docs/workshop/1_psql_basics.md",
    "content": "# psql basics\n\npsql is the command-line client that comes with PostgreSQL.\n\nWe will use it. Running `bin/rails dbconsole` (or `db` for short), it launches psql.\n\nThe connection string is supplied from the .env file \n\nWe want the one called `DATABASE_URL`.\n\n```sql\ncd rideshare\n\ncat .env | grep DATABASE_URL\n\nbin/rails db\n```\n\nWe can also connect without `bin/rails dbconsole` and use psql directly.\n\n```sh\nexport DATABASE_URL=postgres://owner@localhost:5432/rideshare_development\n\npsql $DATABASE_URL\n```\n\n## What's Next?\nVisit [2 - Shell Scripts](/docs/workshop/2_shell_scripts.md) to continue.\n"
  },
  {
    "path": "docs/workshop/2_shell_scripts.md",
    "content": "# Shell Script Basics\n\nLet's load more data. You may remove all data if needed.\n\n⚠️ (Optional) WARNING: Run this to remove all data and start over.\n```sh\nbin/rails db:reset\n```\n\nIf you've migrated the database and it's empty, let's first\nload some sample data from Rake scripts you're familiar with.\n\n```sh\ncd rideshare\nbin/rails data_generators:generate_all\n```\n\nBulk load via SQL commands running in psql\n\n```sh\nsh db/scripts/bulk_load.sh\nsh db/scripts/bulk_load_extended.sh\n```\n\n## What's Next?\nVisit [3 - Query Planning](/docs/workshop/3_query_planning.md) to continue.\n"
  },
  {
    "path": "docs/workshop/3_query_planning.md",
    "content": "# Query Planning\n\nWe'll use psql (or run `bin/rails db`).\n\n```sql\npsql $DATABASE_URL\n```\n\nTip to clear: `\\! clear`.\n\n## Section 1: We need a query\nWe need a query. Let's get all users that have a certain **first** name. Let's find one from the existing rows.\n\n```sql\nSELECT first_name FROM users ORDER BY id ASC LIMIT 1;\n first_name\n------------\n Alphonso\n```\n\n## Section 2: Enabling timing\nToggle timing to `on`.\n\n```sql\n\\timing\nTiming is on.\n```\n\n```sql\nSELECT * FROM users WHERE first_name = 'Alphonso';\n-- Type \"q\" to exit results\n-- Time: 2012.136 ms (00:02.012)\n```\n\nOn my machine, this returns 8 rows, taking around 2 seconds. Two seconds is quite slow!\n\nLet's look at the query plan. To do that we'll use the `EXPLAIN` keyword.\n\n```sql\nEXPLAIN SELECT * FROM users WHERE first_name = 'Alphonso';\n```\n\n## Section 3: Intro to [`EXPLAIN`](https://www.postgresql.org/docs/current/using-explain.html)\nLet's understand the parts of what we're seeing.\n\n- Plan step is contained within the one above it\n- Filter operation, condition to match, rows removed by filter (when using `ANALYZE`)\n- Sequential scan on `users` table\n- Parallel sequential scan using 2 workers\n- Estimated to match one row (`rows=1`) but we know there are more\n- Width is \"estimated average width of rows\" <https://www.postgresql.org/docs/current/using-explain.html>\n- The cost is based on how many disk pages are accessed\n\nLet's get into the cost details more.\n\n## Section 4: Pages Intro and Cost calculation\nData in PostgreSQL is stored in \"pages\" which are fixed size 8kb (by default) chunks. Row data and index data are stored in the pages.\n\nFor this workshop, we won't go into greater depth. Just know that more pages = slower query. 
Less pages = faster query.\n\nLet's look at a simplified version of the query:\n\n- Let's disable parallel sequential scans (max worker of 1)\n\n```sql\nSET max_parallel_workers_per_gather = 1;\n```\n\n- Let's scan the whole table with no `WHERE` clause\n- Let's get the number of pages for the table\n- Let's manually reproduce the cost formula calculation\n\nLet's run the simplified query from psql:\n```sql\nEXPLAIN SELECT * FROM users;\n                            QUERY PLAN\n-------------------------------------------------------------------\n Seq Scan on users  (cost=0.00..247873.94 rows=10020294 width=129)\n```\n\nThe rounded estimated cost is 247874.\n\nLet's recalculate 247874. To start, get the number of pages from psql used to store all the rows:\n```sql\nSELECT relpages AS pages, reltuples::numeric AS estimated_rows\nFROM pg_class WHERE relname = 'users';\n\n pages  | estimated_rows\n--------+----------------\n 147671 |       10020300\n```\n\nCost formula from docs:\n`(pages * seq_page_cost) + (estimated_rows * cpu_tuple_cost)`\n\nCost calculation components:\n\n- Pages: `147671`\n- Estimated rows: `10020300`\n- `SHOW seq_page_cost;` (`1`)\n- `SHOW cpu_tuple_cost;` (`0.1`)\n\n\n```sql\nSELECT FLOOR((147671 * 1) + (10020300 * 0.01)) AS estimated_cost;\n estimated_cost\n----------------\n         247874\n```\n\nNow we understand some planner information, let's continue on with query optimization.\n\n## What's Next?\nVisit [4 - Query Optimization](/docs/workshop/4_query_optimization.md) to continue.\n"
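The cost arithmetic above is easy to double-check in plain Ruby. The page and row counts are the ones read from `pg_class`, and the cost constants are the PostgreSQL defaults:

```ruby
# Planner cost constants (PostgreSQL defaults).
SEQ_PAGE_COST  = 1.0   # cost to read one page sequentially
CPU_TUPLE_COST = 0.01  # cost to process one row

pages          = 147_671     # relpages for users
estimated_rows = 10_020_300  # reltuples for users

# Same formula the sequential scan estimate uses:
# (pages * seq_page_cost) + (estimated_rows * cpu_tuple_cost)
cost = (pages * SEQ_PAGE_COST) + (estimated_rows * CPU_TUPLE_COST)

puts cost.floor # => 247874
```

This matches the rounded `cost=0.00..247873.94` upper bound shown in the `EXPLAIN` output.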
  },
  {
    "path": "docs/workshop/4_query_optimization.md",
    "content": "# Query Optimization Part 1\n\nWe're still in psql. We've enabled timing.\n\nWe have a slow query of user rows filtered by first name:\n\n```sql\nSELECT * FROM users WHERE first_name = 'Alphonso';\n```\n\nWe know the plan type is a sequential scan.\n\n## Section 1: Index Design Basics\nThe most significant way we can improve performance for this query is to add an index that supports the query.\n\nWhy is that? The index *duplicates* the `first_name` column value from every row, into an ordered data structure.\n\nBenefits:\n- The index is faster to scan and filter on, being ordered (in ascending or descending order)\n- The index entries are maintained for us as new writes happen\n\nDownsides:\n- Indexed fields add latency to write operations, since the fields are maintained as index entries\n\nOptimization Game Plan:\n- Identify the column we are filtering on.\n- We are filtering on `users.first_name`\n- Create a B-Tree index that includes the first name column\n\nDo this in psql. We can replay it in Active Record later.\n\n```sql\n-- Enable timing to see build time\n\\timing\n\nCREATE INDEX idx_first_name ON users (first_name);\n```\n\nThis took around 10 seconds to build. Before analyzing the improvement, let's discuss the details.\n\n## Section 2: Index Definition Analysis and Query Results\n- This is a \"single column\" index\n- This is using the default index type which is B-Tree, since it's unspecified\n- We are picking all rows from the table\n- We are using the default sort order\n- We are using the default `NULL` handling (although `first_name` doesn't allow nulls)\n\nLet's view our index in psql:\n```sql\n\\d users\n```\n\nWith the index in place, let's re-run the query. 
Make sure `\\timing` is enabled.\n\nRemember the query time before was around 0.5-1.5 seconds.\n\n```sql\nSELECT * FROM users WHERE first_name = 'Alphonso';\n```\n\nThe query now takes milliseconds or less to run, which is tremendously faster.\n\nWhy is that?\n\n## Section 3: Index Design Concepts\nLet's look at the query plan. Let's introduce `ANALYZE` now to run the query.\n\n```sql\nEXPLAIN (ANALYZE)\nSELECT * FROM users WHERE first_name = 'Alphonso';\n```\n\n```sql\n\\dt+ users            -- 1154MB size\n\\di+ idx_first_name   -- 301MB\n```\n\nTable size vs. index size:\n- Now we're scanning the index which is smaller, it contains one column, and it's in order\n- This is an Index Scan using the index we created `idx_first_name`\n- We still \"filter\" on the index, but with much less data access\n- Startup and actual costs are much lower compared with before\n- Actual rows shows 8 rows, 1 loop\n\nPostgreSQL still needs to access more fields (`SELECT *`) from the heap/table storage, but for a small filtered set of rows.\n\nCan we do better?\n\n## What's Next?\nVisit [5 - Query Optimization Part 2](/docs/workshop/5_query_optimization_part_2.md) to continue.\n"
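The core reason an ordered structure helps can be demonstrated in plain Ruby: sorted data supports binary search (`bsearch`), which examines O(log n) entries instead of scanning everything, roughly how a B-Tree narrows its search. The names below are illustrative, not app data.

```ruby
# A stand-in for the index: sorted first names.
sorted_names = %w[Alphonso Beatrice Carla Dmitri Elena].sort

# Sequential scan: check every entry until a match (O(n)).
seq_hit = sorted_names.find { |name| name == "Dmitri" }

# Index-style lookup: binary search over the ordered data (O(log n)).
idx_hit = sorted_names.bsearch { |name| "Dmitri" <=> name }

puts seq_hit # => Dmitri
puts idx_hit # => Dmitri
```

Both find the same row; the difference is how much data had to be touched along the way, which is exactly the gap between a sequential scan and an index scan.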
  },
  {
    "path": "docs/workshop/5_query_optimization_part_2.md",
    "content": "# Query Optimization: Part 2\n\n## Section 1: Efficiency Design Concepts\n- Add more restrictions to the query\n- Add indexes to support more restricted query\n\n```sql\nEXPLAIN (ANALYZE)\nSELECT * FROM users WHERE first_name = 'Alphonso';\n```\n\n## Section 2: Filtering On Rows\nWe can reduce the rows in our index. When we do that, we're making a [Partial Index](https://www.postgresql.org/docs/current/indexes-partial.html).\n\nLet's explore our data and loop for opportunities.\n\nWe store different `type` values in this table, so let's `COUNT()` by type for first name \"Alphonso\".\n\n```sql\nSELECT type, COUNT(*) FROM users\nWHERE first_name = 'Alphonso'\nGROUP BY type;\n\n  type  | count\n--------+-------\n Driver |     4\n Rider  |     4\n```\n\nImagine we only wanted to index the Driver type, since this query finds drivers.\n\nWe can limit our index to just the Drivers. Let's drop our current index, and add it back with the same name.\n\n```sql\n-- Drop existing index\nDROP INDEX IF EXISTS idx_first_name;\n\nCREATE INDEX idx_first_name ON users (first_name)\nWHERE (type = 'Driver');\n```\n\nLet's run: `\\di+ idx_first_name;` again and this time we see the index is half the size at 151MB vs. 301MB.\n\nLet's run our query again:\n\n```sql\nEXPLAIN (ANALYZE)\nSELECT * FROM users\nWHERE first_name = 'Alphonso';\n```\n\n😲 It's slow! We need to add this same condition we added to the index, to the query. Let's try that:\n\n\n```sql\nEXPLAIN (ANALYZE)\nSELECT * FROM users\nWHERE first_name = 'Alphonso'\nAND type = 'Driver'; -- This is the new condition\n```\n\nNow it's super fast again. It's using our index. 
There are only 4 result rows which makes sense.\n\nCan we do better?\n\n## Section 2: Filtering On Columns\nBesides filtering rows in our index, we can filter columns picked in both our query and index definition.\n\nBy including the exact set of columns our query needs instead of `SELECT *`, PostgreSQL can get all needed data from the index alone, which is very fast.\n\nLet's imagine we needed the `id` of the `Driver` types of `users` named \"Alphonso\".\n\nLet's change our query first and see if it's better:\n\n```sql\nEXPLAIN (ANALYZE)\nSELECT id, first_name\nFROM users WHERE first_name = 'Alphonso'\nAND type = 'Driver'; -- This is the new condition\n```\n\nIt's not really any better despite reducing our columns.\n\nThis is because our current index does not include the `id` column. PostgreSQL can't get all field data from the index alone.\n\nLet's replace it with a new definition that includes those two columns.\n\n\n```sql\n-- Drop existing index\nDROP INDEX IF EXISTS idx_first_name;\n\nCREATE INDEX idx_first_name ON users (first_name, id)\nWHERE (type = 'Driver');\n```\n\nWe now have:\n- A partial index\n- A multicolumn index, where the leading column is our filtered column\n\nLet's check the query plan.\n\n```sql\nEXPLAIN (ANALYZE)\nSELECT id, first_name\nFROM users WHERE first_name = 'Alphonso'\nAND type = 'Driver';\n```\n\nWe're now getting the most efficient plan type possible, which is the \"Index Only Scan.\"\n\nThis is because our index contains the full set of needed columns for the query, meaning PostgreSQL only needs to access the index.\n\n```sql\n                                                         QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using idx_first_name on users  (cost=0.43..8.45 rows=1 width=20) (actual time=0.032..0.034 rows=4 loops=1)\n   Index Cond: (first_name = 'Alphonso'::text)\n   Heap Fetches: 0\n Planning Time: 
0.126 ms\n Execution Time: 0.055 ms\n```\n\n## What's Next?\nVisit [6 - Macro Query Optimization Part 1](/docs/workshop/6_macro_overview_part_1.md) to continue.\n"
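The partial, multicolumn index can be modeled in plain Ruby to show why that final query never touches full rows: only `Driver` rows are indexed, and each index entry already carries both `first_name` and `id`. All data below is made up for illustration.

```ruby
# Fake heap: full user rows.
users = [
  { id: 1, first_name: "Alphonso", type: "Driver", trips: 10 },
  { id: 2, first_name: "Alphonso", type: "Rider",  trips: 3  },
  { id: 3, first_name: "Alphonso", type: "Driver", trips: 7  },
]

# "Partial index": only Driver rows, keyed by first_name,
# storing ids — analogous to idx_first_name (first_name, id).
partial_index = users
  .select { |u| u[:type] == "Driver" }
  .group_by { |u| u[:first_name] }
  .transform_values { |rows| rows.map { |u| u[:id] } }

# "Index Only Scan": the query needs id + first_name, and both
# are present in the index — no heap (full row) access required.
ids = partial_index.fetch("Alphonso", [])
puts ids.inspect # => [1, 3]
```

The `Heap Fetches: 0` line in the plan above is the real-world version of this: every requested column was answered from the index itself.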
  },
  {
    "path": "docs/workshop/6_macro_overview_part_1.md",
    "content": "# Macro Query Optimization Part 1\n\nIn the last few sections, we learned about \"micro\" or individual query optimization.\n\nTo make broad improvements, we can apply the same concepts across all our queries.\n- Tactic #1: Find all the slow queries, and focus on high impact ones\n- Tactic #2: For read-only queries, i.e. the `SELECT` queries but not `INSERT`, `UPDATE`, and `DELETE`, distribute them to a second read-only PostgreSQL instance (a.k.a. replica, follower, secondary)\n\nTo do that, we will explore:\n- The `pg_stat_statements` extension\n- Read and Write Splitting with Active Record\n\nLet's improve our DBA skills!\n\n<details>\n<summary>🎥 Configuring and using pg_stat_statements data, creating generic query exec plans</summary>\n<div>\n  <a href=\"https://www.loom.com/share/25a2903db92c48c5ad42bc1c49d4a8ee\">\n    <img style=\"max-width:300px;\" src=\"https://cdn.loom.com/sessions/thumbnails/25a2903db92c48c5ad42bc1c49d4a8ee-1715978361702-with-play.gif\">\n  </a>\n</div>\n</details>\n\n## Section 1: Configure `pg_stat_statements`\nWhile being an extension, it's officially supported by PostgreSQL and distributed with it, but is not enabled by default.\n\nWe need to enable it using a superuser, for the `rideshare_development` database, in the `rideshare` schema.\n\n⚠️ This part won't be included in the workshop due to time, or can be a self-study opportunity. Presenter will demo.\n\n```sh\nvim \"/Users/andy/Library/Application Support/Postgres/var-16/postgresql.conf\"\n\n# edit shared_preload_libraries\nshared_preload_libraries = 'pg_stat_statements'\n\n# Restart PostgreSQL\npg_ctl restart --pgdata \"/Users/andy/Library/Application Support/Postgres/var-16/\"\n\n# Connect as superuser, e.g. 
\"postgres\"\npsql -U postgres -d rideshare_development\n\n# Enable the extension (run `CREATE EXTENSION`)\npostgres@[local]:5432 rideshare_development# \\dx\n                 List of installed extensions\n  Name   | Version |   Schema   |         Description\n---------+---------+------------+------------------------------\n plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language\n\n# Loads into current database\nCREATE EXTENSION IF NOT EXISTS pg_stat_statements\nSCHEMA rideshare;\n\n# Reset (Requires superuser) WARNING: Removes stats data\nSET search_path = 'rideshare';\nSELECT pg_stat_statements_reset();\n\n\\q -- quit psql\n```\n\nWe can go back to our less-privileged app user `owner`.\n\nNow we're ready to view the PGSS data.\n\nLet's connect in psql and then look for the `rideshare_development` DB:\n\n```sql\nSELECT pg_database.oid\nFROM pg_database\nWHERE pg_database.datname = 'rideshare_development';\n   oid\n---------\n 1462704\n```\n\nFilter in `pg_stat_statements` on `dbid` and the `owner` `userid`:\n\n```sql\n\\x -- vertical presentation\n\nWITH mydb AS (\n    SELECT pg_database.oid AS mydbid\n    FROM pg_database\n    WHERE pg_database.datname = 'rideshare_development'\n),\nme AS (\n    SELECT oid AS myuserid\n    FROM pg_roles\n    WHERE rolname = 'owner'\n)\nSELECT * FROM pg_stat_statements\nJOIN mydb ON dbid = mydb.mydbid\nJOIN me ON userid = me.myuserid;\n```\n\nLet's populate some query statistics rows. 
Run our earlier slow query, to act as slow query data:\n\n```sql\nSELECT * FROM users WHERE first_name = 'Alphonso';\n```\n\nWe can get a query from [`andyatkinson/pg_scripts`](https://github.com/andyatkinson/pg_scripts) for PGSS,\nadapting the 10 worst performers, to get the single worst one.\n\nRun this:\n```sql\nSELECT\n    queryid,\n    query as normalized_query,\n    mean_exec_time AS avg_ms,\n    calls,\n    (rows / calls) AS avg_rows\nFROM\n    pg_stat_statements\nORDER BY\n    3 DESC\nLIMIT 1;\n```\n\nNotes:\n- Get a generic plan on 16+ with `EXPLAIN (GENERIC_PLAN) SELECT * FROM users WHERE first_name = $1;`\n- Re-run the query a few times and observe the growth of \"calls\"\n\nWe can now identify our slowest queries and apply our micro optimization tactics to them.\n\n## What's Next?\nVisit [7 - Macro Query Optimization Part 2](/docs/workshop/7_macro_overview_part_2.md) to continue.\n"
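The "worst performer" ordering in that SQL can be sketched in Ruby over made-up statistics rows. The field names mirror `pg_stat_statements` columns, but the numbers are fabricated for illustration:

```ruby
# Fabricated pg_stat_statements-like rows.
stats = [
  { query: "SELECT * FROM users WHERE first_name = $1", mean_exec_time: 2012.1, calls: 4,  rows: 32 },
  { query: "SELECT * FROM trips WHERE id = $1",         mean_exec_time: 0.4,    calls: 90, rows: 90 },
]

# ORDER BY mean_exec_time DESC LIMIT 1, plus average rows per call.
worst    = stats.max_by { |s| s[:mean_exec_time] }
avg_rows = worst[:rows] / worst[:calls]

puts worst[:query] # => SELECT * FROM users WHERE first_name = $1
puts avg_rows      # => 8
```

The 8 rows per call here deliberately echoes the 8-row result we measured for the "Alphonso" query earlier.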
  },
  {
    "path": "docs/workshop/7_macro_overview_part_2.md",
    "content": "# Macro Query Optimization Part 2\n\nIn this section, we'll begin to work with multiple PostgreSQL instances.\n\nRemember this hierarchy:\n```\n__________________________________________________________________\n|\n|--Instance (the server) (localhost, db01, db02, etc.)\n|\n|----Cluster (*all databases*, e.g. postgres, rideshare_development)\n|\n|------Database (postgres, rideshare_development)\n|\n|--------Schema (public, rideshare)\n|\n--------------------------------------------------------\n```\n\nWe'll run these using Docker. Start up Docker.\n\n## Part 1: Docker PostgreSQL Containers\n- Boot up Docker. There may be zero containers running. (`docker ps`)\n- Docker containers are in `docker` directory. Read README: <https://github.com/andyatkinson/rideshare/blob/main/docker/README.md>\n- Create a docker network (`rideshare-net`) the containers can use\n- Run the script to start the `db01` container\n- Run the script to start the `db02` container\n- Verify they're running with `docker ps`\n\n```sh\n# Clean-up from past runs:\ncd docker\nrm postgresql.conf\nrm -rf postgres-docker/\n\n# Starting point:\ndocker network create rideshare-net\nsh run_db_db01_primary.sh\nsh run_db_db02_replica.sh\n```\n\nLet's configure them.\n\n## Part 2: Enabling Physical Replication\nPrep: Remove annoying Docker messages:\n```sh\nexport DOCKER_CLI_HINTS=false\n```\n\n- `db01` and `db02` are now running. Review the network, host names, basics of connection to each instance.\n\n```sh\ndocker ps\n```\n\n- We're running two instances of Postgres in containers, simulating two different hosts\n\nGo to the `docker` directory and run `sh reset_docker_instances.sh`.\n\nIf you're missing `postgresql.conf`, you'll be prompted to create it.\n\n\n```sh\ncd docker\n\nsh reset_docker_instances.sh\n```\n\nFollow the commands to copy down `postgresql.conf`.\n\nEdit the settings `wal_level = logical` and save the changes. 
The script copies postgresql.conf to db01.\n\nRun the command again to do that:\n\n```sh\nsh reset_docker_instances.sh\n```\n\nLet's walk through the highlights:\n- Replaced postgresql.conf config file on db01\n- Created replication slot on primary db01\n- Created `replication_user` user on db01 with a unique password and permissions\n- Created `pg_hba.conf` on db01 to allow access\n- Placed password in `.pgpass` and copied to db01 and db02 for `replication_user`\n- Restarted db01\n\nCheck logs on db02:\n```sh\ndocker logs -f db02\n```\n\nInitially system identifier won't be the same:\n```sh\ndocker exec --user postgres -it db01 \\\n    psql -c \"SELECT system_identifier FROM pg_control_system();\"\n\ndocker exec --user postgres -it db02 \\\n    psql -c \"SELECT system_identifier FROM pg_control_system();\"\n```\n\nWe'll need to turn the db02 instance into a physical copy of db01.\n\nTo do that, we'll replace the data directory on db02 with a copy of db01,\nwhere it will then be kept in sync.\n\n## Part 3: Run `pg_basebackup`\nNow we have the two instances configured, and db02 can reach db01.\n\nWe need to turn db02 into a read replica, by running `pg_basebackup` on it.\n\nTo do that, open the file `run_pg_basebackup.sh` in the docker directory, but don't run it as a script.\n\nInstead, this file is a reference of individual commands. Copy and paste each one into db02.\n\n💻 Do that now!\n\nAfter running the main `pg_basebackup` command as demonstrated, a success message looks like this:\n\n```sh\npg_basebackup: base backup completed\n```\n\nThe container will exit. 
You'll want to start it again using `docker start db02`.\n\n<details>\n<summary>🎥 Rideshare - PostgreSQL physical replication with Docker containers</summary>\n<div>\n<a href=\"https://www.loom.com/share/6fb372b9f09d41b59692cf4de44441d8\">\n  <img style=\"max-width:300px;\" src=\"https://cdn.loom.com/sessions/thumbnails/6fb372b9f09d41b59692cf4de44441d8-with-play.gif\">\n</a>\n</div>\n</details>\n\n\nIf everything works, you'll have replication enabled between both instances, with a `replication_user` user\nand a replication slot.\n\nThe full `pg_basebackup` output looks like this:\n\n```sh\npg_basebackup: initiating base backup, waiting for checkpoint to complete\npg_basebackup: checkpoint completed\npg_basebackup: write-ahead log start point: 0/5000028 on timeline 1\npg_basebackup: starting background WAL receiver\n31481/31481 kB (100%), 1/1 tablespace\npg_basebackup: write-ahead log end point: 0/5000100\npg_basebackup: waiting for background process to finish streaming ...\npg_basebackup: syncing data to disk ...\npg_basebackup: renaming backup_manifest.tmp to backup_manifest\npg_basebackup: base backup completed\n```\n\nAnd on db02, something like this:\n```\n2024-05-01 02:07:18.636 UTC [30] LOG:  entering standby mode\n2024-05-01 02:07:18.641 UTC [30] LOG:  redo starts at 0/6000028\n2024-05-01 02:07:18.641 UTC [30] LOG:  consistent recovery state reached at 0/6000138\n2024-05-01 02:07:18.641 UTC [1] LOG:  database system is ready to accept read-only connections\n2024-05-01 02:07:18.648 UTC [31] LOG:  started streaming WAL from primary at 0/7000000 on timeline 1\n```\n\n## Conclusion\nThat concludes the basics of setting up a replica instance.\n\nIn the next section we'll continue by adding content, then layer on application-level configuration.\n\n\n## Appendix: Debugging and Troubleshooting\nCheck for connectivity from db02 to db01:\n```sh\ndocker exec --user postgres -it db02 bash\n\npsql -U replication_user -h db01 -d postgres\n```\n\nCheck for the replication slot:\n```sh\ndocker exec -it db01 psql -U postgres\n\\x\nSELECT * FROM pg_replication_slots;\n```\n\nIf needed, remove the slot:\n```sql\nSELECT pg_drop_replication_slot('rideshare_slot');\n```\n\nTo start over fully, completely remove the locally mapped data directory:\n```sh\n# Local volume for container data, remove this directory if starting over\nrm -rf docker-postgres\n```\n\nStart over from the beginning.\n\n## What's Next?\nVisit [8 - Active Record Multi-DB Part 1](/docs/workshop/8_active_record_multi-db_prep_part_1.md) to continue.\n"
  },
  {
    "path": "docs/workshop/8_active_record_multi-db_prep_part_1.md",
    "content": "# Active Record Multiple Databases Part 1\n\nNow we have `db01` and `db02` running. Let's create the Rideshare DB, and configure it.\n\nWe'll work with `db01`, which is mapped to local port 54321.\n\nWe'll set `DB_URL` and `RIDESHARE_DB_PASSWORD`.\n\n## Section 1: Primary and Secondary DB config\nWe use `postgres/postgres`, and connect to `postgres` on port 54321 (db01).\n\n```sh\nexport DB_URL=\"postgres://postgres:postgres@localhost:54321/postgres\"\n\ncd rideshare\n\n# Run setup, will complain if RIDESHARE_DB_PASSWORD is not set\nsh db/setup.sh 2>&1 | tee -a db01_output.log\n\n# Check for any errors:\nvim db01_output.log\n```\n\nNow let's connect as the owner role using a single-DB config:\n```sh\nexport DATABASE_URL=\"postgres://owner:@localhost:54321/rideshare_development\"\n```\n\nVerify port 54321 is listed. There should be no tables here: `\\dt`. We should see the `rideshare` schema: `\\dn`.\n\nNow we're ready to run migrations on db01:\n```sh\nbin/rails db:migrate\n```\n\nLet's see if the tables were replicated!\n\nWe're gradually moving to a multi-DB setup, but still using a single-DB setup.\n\nTo prepare, let's configure new env vars. These are in the `.env` for Rideshare.\n\n```sh\nexport DATABASE_URL_PRIMARY=\"postgres://owner:@localhost:54321/rideshare_development\"\n\nexport DATABASE_URL_REPLICA=\"postgres://owner:@localhost:54322/rideshare_development\"\n```\n\nLet's connect to the replica and check for tables.\n\nIf you see tables on the replica, it's because they were created via replication not from running migrations there. 
Migrations only run on the primary instance.\n\n```sh\npsql $DATABASE_URL_REPLICA\n```\n\nNote:\n- We automatically got the `owner` role\n- We're connected to port 54322 (one greater), which is the locally mapped port to db02\n- We see the tables in the `rideshare` schema\n\nCool!\n\nIf we check row counts on both, all the tables are empty.\n\nLet's populate data so that we work on queries.\n\nNote that this still uses `DATABASE_URL`, but that now points at db01.\n\n```sh\nbin/rails data_generators:generate_all\n```\n\nConnect again to db02 and verify the row counts are the same. There should be data on db02! This data came from db01 via replication.\n\nWith data on both instances, we're ready to move to Active Record configuration.\n\n## Section 2: Database config multiple databases\nIn this section, we're going to move to a multi-DB configuration.\n\nCopy and paste the contents from the file below, replacing the current contents of `db/config.yml`:\n\n```sh\nconfig/database-multiple.sample.yml\n```\n\nReplace the contents of `config/database.yml` with the file contents above.\n\nTake note of:\n- These reference the env vars you set earlier: `DATABASE_URL_PRIMARY` and `DATABASE_URL_REPLICA`\n- \"Named\" configurations for both: `rideshare` (db01) and `rideshare_replica` (db02)\n- Database names are `rideshare_development` for both instances\n- db02 has `replica: true` config\n- `schema_search_path` is set to `rideshare` for both\n- `database_tasks: false` for db02, we don't want to run migrations there\n\nNow we can try these out!\n\nNow when running migrations, they should only run on db01 primary instance:\n\n```sh\nbin/rails db:migrate\n```\n\nTest the new configurations, first using `db`:\n\n```sh\nbin/rails db --database rideshare\nbin/rails db --database rideshare_replica\n```\n\nThis concludes the configuration portion of Active Record Multiple Databases.\n\nIn the last section, we'll wrap things up with application level configuration and usage:\n\nSee you 
there!\n\n## Appendix: Troubleshooting\nTip: Log all statements if desired. Run this on the db01 or db02 instance.\n\n```sql\nALTER DATABASE rideshare_development SET log_statement = 'all';\n```\n\n## What's Next?\nVisit [9 - Active Record Multi-DB Part 2](/docs/workshop/9_active_record_multi-db_roles.md) to continue.\n"
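The multi-DB configuration bullets above can be pictured as a plain Ruby structure. This is an illustrative mirror of the YAML's shape, not the actual file contents; the fallback URLs are the ones used earlier in this doc.

```ruby
# Illustrative mirror of the two named database configurations.
db_config = {
  "rideshare" => {
    url: ENV.fetch("DATABASE_URL_PRIMARY", "postgres://owner:@localhost:54321/rideshare_development"),
    schema_search_path: "rideshare",
  },
  "rideshare_replica" => {
    url: ENV.fetch("DATABASE_URL_REPLICA", "postgres://owner:@localhost:54322/rideshare_development"),
    schema_search_path: "rideshare",
    replica: true,          # read-only role
    database_tasks: false,  # migrations never run here
  },
}

puts db_config["rideshare_replica"][:replica] # => true
```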
  },
  {
    "path": "docs/workshop/9_active_record_multi-db_roles.md",
    "content": "# Active Record Multiple Databases - Part 2\n\nWith multiple databases configured, we're ready to leverage Active Record Multiple Databases.\n\nWe can move things up to a higher layer of abstraction, by configuring model code, then making different calls to the primary or replica instance by \"role\".\n\nWhat are roles?\n\n## Section 1 - Roles\nLet's change the main application model that classes inherit from.\n\nWe'll specify \"writing\" and \"reading\" roles we can connect to.\n\n- Writing role: db01\n- Reading role: db02\n\n## Section 2 - Configuration\nEdit `app/models/application_record.rb` to uncomment the `connects_to` code.\n\n```rb\nconnects_to database: {\n  writing: :rideshare,\n  reading: :rideshare_replica\n}\n```\n\nLet's try out that new configuration.\n\nUse the rails console now instead of db:\n```sh\nbin/rails console\n```\n\nFrom there, we can establish connections to one role or the other.\n\nTry out queries to each:\n```rb\nActiveRecord::Base.connected_to(role: :writing) { Driver.first }\nActiveRecord::Base.connected_to(role: :reading) { Driver.first }\n```\n\n⚠️ Let's try an update to the reader. This won't work because it's running in read-only mode.\n```rb\nActiveRecord::Base.connected_to(role: :reading) do\n  Driver.first.update_attribute(:first_name, \"Andrew\")\nend\n```\n\nWe get an error like:\n```\nWrite query attempted while in readonly mode\n```\n\nLet's send that to the writer:\n```rb\nActiveRecord::Base.connected_to(role: :writing) do\n  Driver.first.update_attribute(:first_name, \"Andrew\")\nend\n```\n\nGreat! 
If that committed, in a few moments it will be replicated.\n\nLet's make sure it's replicated:\n```rb\nActiveRecord::Base.connected_to(role: :reading) { Driver.first.first_name }\n```\n\nThat should have returned \"Andrew\".\n\n## Section 3 - Role Switching\n\nWhat you saw earlier was \"manual\" role switching.\n\nActive Record also supports [Automatic Role Switching](https://guides.rubyonrails.org/active_record_multiple_databases.html#activating-automatic-role-switching) based on the HTTP request and other factors.\n\nLet's try that out.\n\n```sh\nbin/rails g active_record:multi_db\n```\n\nAdd to `config/application.rb`:\n```rb\nconfig.active_record.database_selector = { delay: 2.seconds }\nconfig.active_record.database_resolver = ActiveRecord::Middleware::DatabaseSelector::Resolver\nconfig.active_record.database_resolver_context = ActiveRecord::Middleware::DatabaseSelector::Resolver::Session\n```\n\nLet's log all queries. We'd like to verify that sending a GET request runs on db02, although we make this change on db01:\n```sh\ndocker exec --user postgres -it db01 psql\n\nALTER DATABASE rideshare_development SET log_statement = 'all';\n```\n\nLet's tail db01 and db02 logs in different terminals:\n```sh\ndocker logs -f db01\ndocker logs -f db02\n```\n\nStart up the rails server:\n```sh\nbin/rails server\n```\n\nSend a GET request:\n```sh\ncurl localhost:3000/api/trips\n```\n\n💥 Boom. We don't see any queries logged on db01, and we see `SELECT * FROM trips;` logged on db02.\n\nThe query is automatically sent to the replica. 
It's working!\n\n## Wrap Up\nWe've now seen how to use multiple PostgreSQL databases to distribute the database work, splitting up writes and read queries.\n\nScaling read traffic separately is part of building High Performance Active Record apps.\n\nBeyond write/read role switching, for even more advanced scalability options, Active Record supports Horizontal Sharding, which has a similar pattern to what you've done here for \"shard switching.\"\n"
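The manual role-switching pattern can be sketched framework-free: a tiny router holds one connection per role, yields it for a block, and the reading connection rejects writes, similar to the error we saw. Everything here is a simplified stand-in for Active Record's `connected_to`, not its actual implementation.

```ruby
# Minimal stand-in for role-based connection switching.
class ConnectionRouter
  ReadOnlyError = Class.new(StandardError)

  Connection = Struct.new(:name, :readonly) do
    def write!(sql)
      raise ReadOnlyError, "Write query attempted while in readonly mode" if readonly
      "#{name}: #{sql}"
    end

    def read(sql)
      "#{name}: #{sql}"
    end
  end

  def initialize
    @roles = {
      writing: Connection.new("db01", false), # primary
      reading: Connection.new("db02", true),  # replica
    }
  end

  # Like ActiveRecord::Base.connected_to(role: ...) { ... }
  def connected_to(role:)
    yield @roles.fetch(role)
  end
end

router = ConnectionRouter.new
puts router.connected_to(role: :reading) { |c| c.read("SELECT * FROM trips") }
# => db02: SELECT * FROM trips

begin
  router.connected_to(role: :reading) { |c| c.write!("UPDATE users SET ...") }
rescue ConnectionRouter::ReadOnlyError => e
  puts e.message # => Write query attempted while in readonly mode
end
```

The key design point mirrors Active Record's: the caller chooses a role per block of work, and the read-only enforcement lives at the connection, not in every query site.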
  },
  {
    "path": "docs/workshop/README.md",
    "content": "# Workshop\n\nHello! This is meant to be a two hour long workshop, facilitated by Andrew Atkinson.\n\n## Prerequisites\nFor prerequisites, you'll use the Rideshare app. Follow the instructions in the main [Rideshare README.md](/README.md) to fully set it up.\n\nWhen you've installed the app, verify that:\n- Running `bundle install` in the Rideshare directory installs all Ruby gems\n- `sh db/setup.sh` ran and created the `rideshare_development` database, users, etc.\n- `bin/rails db:migrate` run and created empty tables, indexes, etc.\n- `bin/rails data_generators:generate_all` ran and created a base set of fake data\n\nThe workshop uses content from my book [\"High Performance PostgreSQL for Rails\"](https://andyatkinson.com/pgrailsbook).\n\nFor book references, check Chapters \"7 - Query Performance &  8 - Optimized Indexes for Fast Retrieval\" for the first half of the workshop.\n\nCheck Chapter \"13 - Scaling with Replication and Sharding\" for the second half of the workshop.\n\n## Workshop Structure\n- Two 1 hr. halves, with a short break\n- Numbered files from 0 through 9, with \"Sections\" in the files\n- Each section has runnable code in backticks blocks, that's expected to be run by participants, unless flagged as \"instructor only\"\n\n## Support\nAs an independent consultant, your support is very meaningful!\n\nIf you'd like to support me financially, please consider [buying my book](https://andyatkinson.com/pgrailsbook) and telling your colleagues about it!\n\nTo get a discount, ask me about codes. 
Usually there are active discounts during events like conferences.\n\nIf your team needs help, please visit my [Consulting page](http://andyatkinson.com/consulting), where you can find information about what I offer and how to hire me.\n\n## Rideshare and Workshop Loom Videos\n<details>\n<summary>🎥 Installation - Rideshare on a Mac, Ruby, PostgreSQL, Gems</summary>\n<div>\n<a href=\"https://www.loom.com/share/8bfc4e79758a42d39cead8f6637aa314\">\n<img style=\"max-width:300px;\" src=\"https://cdn.loom.com/sessions/thumbnails/8bfc4e79758a42d39cead8f6637aa314-1714771702452-with-play.gif\">\n</a>\n</div>\n</details>\n\n<details>\n<summary>🎥 Rideshare DB setup. Common issues running db/setup.sh</summary>\n<div>\n<a href=\"https://www.loom.com/share/fc919520089c4e0abb2c0a02b68bbd91\">\n<img style=\"max-width:300px;\" src=\"https://cdn.loom.com/sessions/thumbnails/fc919520089c4e0abb2c0a02b68bbd91-with-play.gif\">\n</a>\n</div>\n</details>\n\n<details>\n<summary>🎥 Rideshare - Loading data using a Rake task and Shell Script</summary>\n<div>\n<a href=\"https://www.loom.com/share/6a1419efae7b4c3aac51e7d95726baf0\">\n<img style=\"max-width:300px;\" src=\"https://cdn.loom.com/sessions/thumbnails/6a1419efae7b4c3aac51e7d95726baf0-1714505177620-with-play.gif\">\n</a>\n</div>\n</details>\n\n<details>\n<summary>🎥 Configuring and using pg_stat_statements data, creating generic query exec plans</summary>\n<div>\n<a href=\"https://www.loom.com/share/25a2903db92c48c5ad42bc1c49d4a8ee\">\n<img style=\"max-width:300px;\" src=\"https://cdn.loom.com/sessions/thumbnails/25a2903db92c48c5ad42bc1c49d4a8ee-1715978361702-with-play.gif\">\n</a>\n</div>\n</details>\n\n<details>\n<summary>🎥 Rideshare - PostgreSQL physical replication with Docker containers</summary>\n<div>\n<a href=\"https://www.loom.com/share/6fb372b9f09d41b59692cf4de44441d8\">\n<img style=\"max-width:300px;\" 
src=\"https://cdn.loom.com/sessions/thumbnails/6fb372b9f09d41b59692cf4de44441d8-with-play.gif\">\n</a>\n</div>\n</details>\n\n\n## Let's Get Started\nIn each section, you'll find links at the bottom to the next topic.\n\nClick the [0 - Introduction](/docs/workshop/0_introduction.md) to get started.\n"
  },
  {
    "path": "lib/assets/.keep",
    "content": ""
  },
  {
    "path": "lib/json_web_token.rb",
    "content": "class JsonWebToken\n  SECRET_KEY = Rails.application.credentials.secret_key_base.to_s\n\n  def self.encode(payload, exp = 24.hours.from_now)\n    payload[:exp] = exp.to_i\n    JWT.encode(payload, SECRET_KEY)\n  end\n\n  def self.decode(token)\n    decoded = JWT.decode(token, SECRET_KEY)[0]\n    HashWithIndifferentAccess.new(decoded)\n  end\nend\n"
  },
  {
    "path": "lib/tasks/.keep",
    "content": ""
  },
  {
    "path": "lib/tasks/auto_generate_diagram.rake",
    "content": "# NOTE: only doing this in development as some production environments (Heroku)\n# NOTE: are sensitive to local FS writes, and besides -- it's just not proper\n# NOTE: to have a dev-mode tool do its thing in production.\nRailsERD.load_tasks if Rails.env.development?\n"
  },
  {
    "path": "lib/tasks/benchmarks.rake",
    "content": "namespace :benchmarks do\n  desc 'Code benchmarks'\n\n  task active_record: :environment do\n    Benchmark.memory do |x|\n      x.report('.select_all() single User') do\n        ActiveRecord::Base.connection.select_all('SELECT * FROM users ORDER BY id LIMIT 1')\n      end\n\n      x.report('User.first') { User.first }\n\n      x.compare!\n    end\n  end\nend\n"
  },
  {
    "path": "lib/tasks/custom.rake",
    "content": "#\n# bin/rails custom:db_reset\n#\n# A replacement for the built-in db:reset task: tears down and recreates\n# the development and test databases, then runs migrations.\n#\nnamespace :custom do\n  desc 'Reset the databases (teardown, setup, migrate)'\n\n  task :db_reset do\n    sh 'db/teardown.sh'\n    sh 'db/setup.sh'\n    sh 'db/setup_test_database.sh'\n\n    Rake::Task['db:migrate'].invoke\n  end\nend\n"
  },
  {
    "path": "lib/tasks/data_generators.rake",
    "content": "require 'faker'\n\nnamespace :data_generators do\n  desc 'Generate Drivers and Riders'\n  task drivers_and_riders: :environment do |_t, _args|\n    Benchmark.measure do\n      10_000.times.to_a.in_groups_of(10_000).each do |group|\n        [Driver, Rider].each do |klass|\n          batch = group.map do |i|\n            first_name = Faker::Name.first_name\n            last_name = Faker::Name.last_name\n            drivers_license_number = (random_mn_drivers_license_number(first_name, i) if klass.equal?(Driver))\n            klass.new(\n              first_name: first_name,\n              last_name: last_name,\n              email: \"#{first_name}-#{last_name}-#{klass.name.downcase}-#{i}@email.com\",\n              password_digest: SecureRandom.hex,\n              drivers_license_number: drivers_license_number\n            )\n          end.map do |d|\n            d.attributes.symbolize_keys.slice(\n              :first_name, :last_name,\n              :email, :password, :type,\n              :drivers_license_number\n            )\n          end\n\n          puts \"bulk insert batch size: #{batch.size}\"\n          klass.insert_all(batch)\n        end\n      end\n    end\n  end\n\n  desc 'Generate Trips and Trip Requests'\n  task trips_and_requests: :environment do |_t, _args|\n    drivers = []\n    100.times do |i|\n      fname = Faker::Name.first_name\n      lname = Faker::Name.last_name\n\n      drivers_license_number = random_mn_drivers_license_number(fname, i)\n      drivers << Driver.create!(\n        first_name: fname,\n        last_name: lname,\n        email: \"#{fname}-#{lname}-#{i}@email.com\",\n        password: SecureRandom.hex,\n        drivers_license_number: drivers_license_number\n      )\n    end\n\n    riders = []\n    100.times do |i|\n      fname = Faker::Name.first_name\n      lname = Faker::Name.last_name\n      riders << Rider.create!(\n        first_name: fname,\n        last_name: lname,\n        email: 
\"#{fname}-#{lname}-#{i}@email.com\",\n        password: SecureRandom.hex\n      )\n    end\n\n    nyc = Location.where(\n      address: 'New York, NY'\n    ).first_or_create do |loc|\n      loc.position = '(40.7143528,-74.0059731)'\n      loc.state = 'NY'\n    end\n\n    bos = Location.where(\n      address: 'Boston, MA'\n    ).first_or_create do |loc|\n      loc.position = '(42.361145,-71.057083)'\n      loc.state = 'MA'\n    end\n\n    puts 'creating Trip Requests and Trips'\n    1000.times do |i|\n      request = TripRequest.create!(\n        rider: riders.sample,\n        start_location: nyc,\n        end_location: bos\n      )\n\n      # give every 4th trip a random rating\n      rating = i % 4 == 0 ? rand(1..5) : nil\n\n      request.create_trip!(\n        driver: drivers.sample,\n        completed_at: 1.minute.from_now,\n        rating: rating\n      )\n    end\n  end\n\n  desc 'Generate Vehicles and Reservations'\n  task vehicles_and_reservations: :environment do |_t, _args|\n    riders = []\n    10.times do |i|\n      fname = Faker::Name.first_name\n      lname = Faker::Name.last_name\n      riders << Rider.create!(\n        first_name: fname,\n        last_name: lname,\n        email: \"#{fname}-#{lname}-#{i}@email.com\",\n        password: SecureRandom.hex\n      )\n    end\n\n    nyc = Location.where(\n      address: 'New York, NY'\n    ).first_or_create do |loc|\n      loc.position = '(40.7143528,-74.0059731)'\n    end\n\n    bos = Location.where(\n      address: 'Boston, MA'\n    ).first_or_create do |loc|\n      loc.position = '(42.361145,-71.057083)'\n    end\n\n    puts 'creating trip requests'\n    10.times do |_i|\n      TripRequest.create!(\n        rider: riders.sample,\n        start_location: nyc,\n        end_location: bos\n      )\n    end\n\n    Vehicle.destroy_all\n    ['Party Bus', 'Limo', 'Ice Cream Truck', 'Food Truck'].each do |name|\n      Vehicle.create!(\n        name: name,\n        status: 
VehicleStatus::PUBLISHED\n      )\n    end\n\n    # create reservation\n    vehicle = Vehicle.order(Arel.sql('RANDOM()')).first\n    trip_request = TripRequest.order(Arel.sql('RANDOM()')).first\n    starts_at = (rand * 1000).floor.hours.from_now\n    ends_at = starts_at + 1.hour\n    puts \"v=#{vehicle.id} tr=#{trip_request.id} from=#{starts_at} to=#{ends_at}\"\n\n    VehicleReservation.create!(\n      vehicle: vehicle,\n      trip_request: trip_request,\n      starts_at: starts_at,\n      ends_at: ends_at,\n      canceled: false\n    )\n    VehicleReservation.create!(\n      vehicle: vehicle,\n      trip_request: trip_request,\n      starts_at: starts_at + 1.hour,\n      ends_at: ends_at + 1.hour,\n      canceled: true\n    )\n    VehicleReservation.create!(\n      vehicle: vehicle,\n      trip_request: trip_request,\n      starts_at: starts_at + 1.hour,\n      ends_at: ends_at + 1.hour,\n      canceled: false\n    )\n  end\n\n  # bin/rails data_generators:generate_trip_positions\n  desc 'Generate simulated historical trip positions data'\n  task generate_trip_positions: :environment do |_t, _args|\n    # Generate data from 1 year ago, 3 months ago, 2 months ago, 1 month ago\n    # and current month\n    puts 'From 1 year ago'\n    5.times do |_i|\n      @trip = Trip.all.sample\n      TripPosition.create!(\n        position: '(651096.993815166,667028.1146045981)',\n        trip: @trip,\n        created_at: 1.year.ago\n      )\n    end\n    puts 'From 3 months ago'\n    5.times do |_i|\n      @trip = Trip.all.sample\n      TripPosition.create!(\n        position: '(651096.993815166,667028.1146045981)',\n        trip: @trip,\n        created_at: 3.months.ago\n      )\n    end\n    puts 'From 2 months ago'\n    5.times do |_i|\n      @trip = Trip.all.sample\n      TripPosition.create!(\n        position: '(651096.993815166,667028.1146045981)',\n        trip: @trip,\n        created_at: 2.months.ago\n      )\n    end\n    puts 'From 1 month ago'\n    5.times do |_i|\n 
     @trip = Trip.all.sample\n      TripPosition.create!(\n        position: '(651096.993815166,667028.1146045981)',\n        trip: @trip,\n        created_at: 1.month.ago\n      )\n    end\n    puts 'This month'\n    5.times do |_i|\n      @trip = Trip.all.sample\n      TripPosition.create!(\n        position: '(651096.993815166,667028.1146045981)',\n        trip: @trip\n      )\n    end\n    puts \"Created #{TripPosition.count} records.\"\n  end\n\n  desc 'Run ANALYZE on all involved tables'\n  task analyze_tables: :environment do |_t, _args|\n    %w[ users trips trip_requests trip_positions\n        locations vehicles vehicle_reservations ].each do |table_name|\n      ActiveRecord::Base.connection.execute(\"ANALYZE #{table_name}\")\n    end\n  end\n\n  desc 'Generate All Data'\n  task generate_all: :environment do |_t, _args|\n    Rake::Task['data_generators:drivers_and_riders'].invoke\n    Rake::Task['data_generators:trips_and_requests'].invoke\n    Rake::Task['data_generators:vehicles_and_reservations'].invoke\n    Rake::Task['data_generators:generate_trip_positions'].invoke\n    Rake::Task['data_generators:analyze_tables'].invoke\n  end\nend\n\ndef random_mn_drivers_license_number(fname, i)\n  [\n    fname.first.upcase,\n    '800000',\n    rand(10).to_s,\n    rand(10).to_s,\n    rand(10).to_s,\n    rand(10).to_s,\n    rand(10).to_s,\n    i.to_s\n  ].join\nend\n"
  },
  {
    "path": "lib/tasks/fake_data_generator.rake",
    "content": "require 'faker'\n\nnamespace :data_generators do\n  desc 'Generate Drivers'\n  task drivers: :environment do |_t, _args|\n    TOTAL = 20_000\n    BATCH_SIZE = 10_000\n    results = Benchmark.measure do\n      TOTAL.times.to_a.in_groups_of(BATCH_SIZE).each do |group|\n        batch = group.map do |i|\n          first_name = Faker::Name.first_name\n          last_name = Faker::Name.last_name\n          Driver.new(\n            first_name: first_name,\n            last_name: last_name,\n            email: \"#{first_name}-#{last_name}-#{i}@email.com\",\n            password_digest: SecureRandom.hex\n          )\n        end.map do |d|\n          d.attributes.symbolize_keys.slice(\n            :first_name, :last_name,\n            :email, :password, :type\n          )\n        end\n\n        Driver.insert_all(batch)\n        puts \"Created #{batch.size} drivers.\"\n      end\n    end\n    puts 'VACUUM (ANALYZE) users'\n    Driver.connection.execute('VACUUM (ANALYZE) users')\n    puts results\n  end\nend\n"
  },
  {
    "path": "lib/tasks/migration_hooks.rake",
    "content": "# lib/tasks/migration_hooks.rake\n#\n# https://www.dan-manges.com/blog/modifying-rake-tasks\n\nnamespace :migration_hooks do\n  task set_role: :environment do\n    if Rails.env.development?\n      puts 'Setting role for development'\n      ActiveRecord::Base.connection.execute('SET ROLE owner')\n    end\n  end\nend\n\n# https://dev.to/molly/rake-task-enhance-method-explained-3bo0\nRake::Task['db:migrate'].enhance(['migration_hooks:set_role'])\n"
  },
  {
    "path": "lib/tasks/simulate_app_activity.rake",
    "content": "#\n# bin/rails simulate:app_activity\n#\n# Set an optional iteration count (default: 1)\n# bin/rails simulate:app_activity[10]\n#\nnamespace :simulate do\n  desc 'Simulate App Activity'\n  task :app_activity, [:iteration_count] => :environment do |_t, args|\n    args.with_defaults(iteration_count: 1)\n    # Steps in end-to-end cycle\n    # 1. (API) Rider creates trip_request\n    # 2. (API) Rider polls for trip_request status\n    # 3. Best available driver picks up trip request, updates status, trip created\n    # 4. (API) Rider polls for trip status\n    # 5. Driver completes trip\n\n    iterations = args[:iteration_count].to_i\n    puts \"Running script #{iterations} times...\"\n    iterations.times do\n      # 1. create trip request\n      url = 'http://localhost:3000/api/trip_requests'\n      request_body = {\n        trip_request: {\n          rider_id: Rider.first.id,\n          start_address: 'Boston, MA',\n          end_address: 'New York, NY'\n        }\n      }\n      request_headers = {\n        'Accept' => 'application/json'\n      }\n      puts '[trip_request] creating trip request...'\n      response = Faraday.post(url, request_body, request_headers)\n      resp = JSON.parse(response.body, symbolize_names: true)\n      trip_request_id = resp[:trip_request_id]\n      puts \"[trip_request] got trip_request_id: #{trip_request_id}\"\n\n      next unless trip_request_id\n\n      # 2. poll for trip request status, until trip exists\n      # Polling is not implemented; it would need some async processing\n      # in the app, like Sidekiq or another background processor\n      begin\n        puts '[trip_request] checking for trip_id...'\n        attempts ||= 1\n        url = \"http://localhost:3000/api/trip_requests/#{trip_request_id}\"\n        show_response = Faraday.get(url)\n        show_resp = JSON.parse(show_response.body, symbolize_names: true)\n        puts \"[trip_request] show_resp: #{show_resp.inspect}\"\n        trip_id = show_resp[:trip_id]\n        if trip_id\n          puts \"[trip] Got a trip_id: #{trip_id}\"\n        else\n          puts '[trip] no trip_id...'\n          raise\n        end\n      rescue StandardError\n        if (attempts += 1) < 5 # retry the begin block, up to 5 attempts\n          puts 'retrying...'\n          retry\n        end\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "log/.keep",
    "content": ""
  },
  {
    "path": "postgresql/.pg_service.sample.conf",
    "content": "[rideshare_dev]\nhost=localhost\nuser=owner\ndbname=rideshare_development\nport=5432\n"
  },
  {
    "path": "postgresql/.pgpass.sample",
    "content": "localhost:5432:rideshare_development:owner:HSnDDgFtyW9fyFI\nlocalhost:54321:rideshare_development:owner:HSnDDgFtyW9fyFI\nlocalhost:54322:rideshare_development:owner:HSnDDgFtyW9fyFI\nlocalhost:5432:rideshare_development:app:HSTnDDgFtyW9fyFI\nlocalhost:6432:rideshare_development:owner:HSnDDgFtyW9fyFI\n*:*:*:replication_user:cd58b7e22c0af34a34c1572a\n*:*:*:app_readonly:ee0e8cc80c5c244e6582b0de\n"
  },
  {
    "path": "postgresql/.psqlrc.sample",
    "content": "\\encoding unicode\n\\set PROMPT1 '%n@%M:%>%x %/# '\n\\set PROMPT2 ''\n\n\n\\setenv PAGER 'less -S'\n"
  },
  {
    "path": "postgresql/README.md",
    "content": "# PostgreSQL\n\n## `postgresql.conf`\nReview the sample file in this directory. Remove `sample` from the file name.\n\n## `pg_hba.conf`\n* Remove `sample` from the filename\n\n## `.pg_service.conf`\n* Remove `sample`\n* Copy to `~/.pg_service.conf`\n* Edit the service info with your config\n\n## `~/.pgpass`\n* Remove `sample` from the sample file\n* Copy the file content to `~/.pgpass` (scripts will populate it as well)\n* Edit the file with your specific credentials\n* Perform the following changes:\n\n```sh\nchown <user>:<group> /home/dir/.pgpass\nchmod 0600 /home/dir/.pgpass\n```\n\nReplace `/home/dir` with the path to the home directory of the user.\n\nOn PostgreSQL Docker containers, that's `/var/lib/postgresql/`.\n\nFor user and group on PostgreSQL Docker containers, the user is `postgres` and the group is `root`.\n\n## `.psqlrc`\n* Remove `sample`\n* Copy to `~/.psqlrc`\n\n## PgBouncer\n> The mode that results in a more sane balance of improved concurrency and retained critical database features is transaction mode.\n\nFrom: [PgBouncer is useful, important, and fraught with peril](https://jpcamara.com/2023/04/12/pgbouncer-is-useful.html)\n\n* For 1.21.0, the recommended pool mode is `transaction` (compatible with multi-statement transactions)\n* For macOS, install with Homebrew\n* Copy changes from the `pgbouncer.sample.ini` file\n* Restart with `brew services restart pgbouncer`\n"
  },
  {
    "path": "postgresql/pg_hba.sample.conf",
    "content": "# PostgreSQL Client Authentication Configuration File\n# ===================================================\n#\n# Refer to the \"Client Authentication\" section in the PostgreSQL\n# documentation for a complete description of this file.  A short\n# synopsis follows.\n#\n# This file controls: which hosts are allowed to connect, how clients\n# are authenticated, which PostgreSQL user names they can use, which\n# databases they can access.  Records take one of these forms:\n#\n# local         DATABASE  USER  METHOD  [OPTIONS]\n# host          DATABASE  USER  ADDRESS  METHOD  [OPTIONS]\n# hostssl       DATABASE  USER  ADDRESS  METHOD  [OPTIONS]\n# hostnossl     DATABASE  USER  ADDRESS  METHOD  [OPTIONS]\n# hostgssenc    DATABASE  USER  ADDRESS  METHOD  [OPTIONS]\n# hostnogssenc  DATABASE  USER  ADDRESS  METHOD  [OPTIONS]\n#\n# (The uppercase items must be replaced by actual values.)\n#\n# The first field is the connection type:\n# - \"local\" is a Unix-domain socket\n# - \"host\" is a TCP/IP socket (encrypted or not)\n# - \"hostssl\" is a TCP/IP socket that is SSL-encrypted\n# - \"hostnossl\" is a TCP/IP socket that is not SSL-encrypted\n# - \"hostgssenc\" is a TCP/IP socket that is GSSAPI-encrypted\n# - \"hostnogssenc\" is a TCP/IP socket that is not GSSAPI-encrypted\n#\n# DATABASE can be \"all\", \"sameuser\", \"samerole\", \"replication\", a\n# database name, or a comma-separated list thereof. The \"all\"\n# keyword does not match \"replication\". Access to replication\n# must be enabled in a separate record (see example below).\n#\n# USER can be \"all\", a user name, a group name prefixed with \"+\", or a\n# comma-separated list thereof.  In both the DATABASE and USER fields\n# you can also write a file name prefixed with \"@\" to include names\n# from a separate file.\n#\n# ADDRESS specifies the set of hosts the record matches.  
It can be a\n# host name, or it is made up of an IP address and a CIDR mask that is\n# an integer (between 0 and 32 (IPv4) or 128 (IPv6) inclusive) that\n# specifies the number of significant bits in the mask.  A host name\n# that starts with a dot (.) matches a suffix of the actual host name.\n# Alternatively, you can write an IP address and netmask in separate\n# columns to specify the set of hosts.  Instead of a CIDR-address, you\n# can write \"samehost\" to match any of the server's own IP addresses,\n# or \"samenet\" to match any address in any subnet that the server is\n# directly connected to.\n#\n# METHOD can be \"trust\", \"reject\", \"md5\", \"password\", \"scram-sha-256\",\n# \"gss\", \"sspi\", \"ident\", \"peer\", \"pam\", \"ldap\", \"radius\" or \"cert\".\n# Note that \"password\" sends passwords in clear text; \"md5\" or\n# \"scram-sha-256\" are preferred since they send encrypted passwords.\n#\n# OPTIONS are a set of options for the authentication in the format\n# NAME=VALUE.  The available options depend on the different\n# authentication methods -- refer to the \"Client Authentication\"\n# section in the documentation for a list of which options are\n# available for which authentication methods.\n#\n# Database and user names containing spaces, commas, quotes and other\n# special characters must be quoted.  Quoting one of the keywords\n# \"all\", \"sameuser\", \"samerole\" or \"replication\" makes the name lose\n# its special character, and just match a database or username with\n# that name.\n#\n# This file is read on server startup and when the server receives a\n# SIGHUP signal.  If you edit the file on a running system, you have to\n# SIGHUP the server for the changes to take effect, run \"pg_ctl reload\",\n# or execute \"SELECT pg_reload_conf()\".\n#\n# Put your actual configuration here\n# ----------------------------------\n#\n# If you want to allow non-local connections, you need to add more\n# \"host\" records.  
In that case you will also need to make PostgreSQL\n# listen on a non-local interface via the listen_addresses\n# configuration parameter, or via the -i or -h command line switches.\n\n# CAUTION: Configuring the system for local \"trust\" authentication\n# allows any local user to connect as any PostgreSQL user, including\n# the database superuser.  If you do not trust all your local users,\n# use another authentication method.\n\n\n# TYPE  DATABASE        USER            ADDRESS                 METHOD\n\n# \"local\" is for Unix domain socket connections only\n# local   all             all                                     trust\n# # IPv4 local connections:\n# host    all             all             127.0.0.1/32            trust\n# # IPv6 local connections:\n# host    all             all             ::1/128                 trust\n# # Allow replication connections from localhost, by a user with the\n# # replication privilege.\n# local   replication     all                                     trust\n# host    replication     all             127.0.0.1/32            trust\n# host    replication     all             ::1/128                 trust\n\nlocal   rideshare_development owner md5\nhost    rideshare_development owner 127.0.0.1/32            md5\nhost    rideshare_development owner ::1/128                 md5\n\nlocal   rideshare_development app md5\nhost    rideshare_development app 127.0.0.1/32            md5\nhost    rideshare_development app ::1/128                 md5\n\nlocal   all andy trust\nhost    all             postgres localhost               trust\n"
  },
  {
    "path": "postgresql/pgbouncer.sample.ini",
    "content": "[databases]\nrideshare_development = host=127.0.0.1 port=5432 dbname=rideshare_development\n#rideshare_development = host=127.0.0.1 port=5432 dbname=rideshare_development pool_mode=transaction\n\n# For userlist.txt, refer to ./userlist.sample.txt\n# Copy the file, remove the \"sample\" portion, and place the file where it's\n# reachable, e.g. /usr/local/etc/userlist.txt\n[pgbouncer]\nlisten_port = 6432\nlisten_addr = 127.0.0.1\nauth_type = md5\nauth_file = userlist.txt\nlogfile = pgbouncer.log\npidfile = pgbouncer.pid\nadmin_users = owner\n"
  },
  {
    "path": "postgresql/postgresql.sample.conf",
    "content": "# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n#   name = value\n#\n# (The \"=\" is optional.)  Whitespace may be used.  Comments are introduced with\n# \"#\" anywhere on a line.  The complete list of parameter names and allowed\n# values can be found in the PostgreSQL documentation.\n#\n# The commented-out settings shown in this file represent the default values.\n# Re-commenting a setting is NOT sufficient to revert it to the default value;\n# you need to reload the server.\n#\n# This file is read on server startup and when the server receives a SIGHUP\n# signal.  If you edit the file on a running system, you have to SIGHUP the\n# server for the changes to take effect, run \"pg_ctl reload\", or execute\n# \"SELECT pg_reload_conf()\".  Some parameters, which are marked below,\n# require a server shutdown and restart to take effect.\n#\n# Any parameter can also be given as a command-line option to the server, e.g.,\n# \"postgres -c log_connections=on\".  
Some parameters can be changed at run time\n# with the \"SET\" SQL command.\n#\n# Memory units:  B  = bytes            Time units:  us  = microseconds\n#                kB = kilobytes                     ms  = milliseconds\n#                MB = megabytes                     s   = seconds\n#                GB = gigabytes                     min = minutes\n#                TB = terabytes                     h   = hours\n#                                                   d   = days\n\n\n#------------------------------------------------------------------------------\n# FILE LOCATIONS\n#------------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command-line\n# option or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir'\t\t# use data in another directory\n\t\t\t\t\t# (change requires restart)\n#hba_file = 'ConfigDir/pg_hba.conf'\t# host-based authentication file\n\t\t\t\t\t# (change requires restart)\n#ident_file = 'ConfigDir/pg_ident.conf'\t# ident configuration file\n\t\t\t\t\t# (change requires restart)\n\n# If external_pid_file is not explicitly set, no extra PID file is written.\n#external_pid_file = ''\t\t\t# write an extra PID file\n\t\t\t\t\t# (change requires restart)\n\n\n#------------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#------------------------------------------------------------------------------\n\n# - Connection Settings -\n\n#listen_addresses = 'localhost'\t\t# what IP address(es) to listen on;\n\t\t\t\t\t# comma-separated list of addresses;\n\t\t\t\t\t# defaults to 'localhost'; use '*' for all\n\t\t\t\t\t# (change requires restart)\n#port = 5432\t\t\t\t# (change requires restart)\nmax_connections = 100\t\t\t# (change requires restart)\n#superuser_reserved_connections = 3\t# (change requires restart)\n#unix_socket_directories = '/tmp'\t# comma-separated list of 
directories\n\t\t\t\t\t# (change requires restart)\n#unix_socket_group = ''\t\t\t# (change requires restart)\n#unix_socket_permissions = 0777\t\t# begin with 0 to use octal notation\n\t\t\t\t\t# (change requires restart)\n#bonjour = off\t\t\t\t# advertise server via Bonjour\n\t\t\t\t\t# (change requires restart)\n#bonjour_name = ''\t\t\t# defaults to the computer name\n\t\t\t\t\t# (change requires restart)\n\n# - TCP settings -\n# see \"man tcp\" for details\n\n#tcp_keepalives_idle = 0\t\t# TCP_KEEPIDLE, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_interval = 0\t\t# TCP_KEEPINTVL, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_count = 0\t\t# TCP_KEEPCNT;\n\t\t\t\t\t# 0 selects the system default\n#tcp_user_timeout = 0\t\t\t# TCP_USER_TIMEOUT, in milliseconds;\n\t\t\t\t\t# 0 selects the system default\n\n#client_connection_check_interval = 0\t# time between checks for client\n\t\t\t\t\t# disconnection while running queries;\n\t\t\t\t\t# 0 for never\n\n# - Authentication -\n\n#authentication_timeout = 1min\t\t# 1s-600s\n#password_encryption = scram-sha-256\t# scram-sha-256 or md5\n#db_user_namespace = off\n\n# GSSAPI using Kerberos\n#krb_server_keyfile = 'FILE:${sysconfdir}/krb5.keytab'\n#krb_caseins_users = off\n\n# - SSL -\n\n#ssl = off\n#ssl_ca_file = ''\n#ssl_cert_file = 'server.crt'\n#ssl_crl_file = ''\n#ssl_crl_dir = ''\n#ssl_key_file = 'server.key'\n#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers\n#ssl_prefer_server_ciphers = on\n#ssl_ecdh_curve = 'prime256v1'\n#ssl_min_protocol_version = 'TLSv1.2'\n#ssl_max_protocol_version = ''\n#ssl_dh_params_file = ''\n#ssl_passphrase_command = ''\n#ssl_passphrase_command_supports_reload = off\n\n\n#------------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#------------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 128MB\t\t\t# min 128kB\n\t\t\t\t\t# 
(change requires restart)\n#huge_pages = try\t\t\t# on, off, or try\n\t\t\t\t\t# (change requires restart)\n#huge_page_size = 0\t\t\t# zero for system default\n\t\t\t\t\t# (change requires restart)\n#temp_buffers = 8MB\t\t\t# min 800kB\n#max_prepared_transactions = 0\t\t# zero disables the feature\n\t\t\t\t\t# (change requires restart)\n# Caution: it is not advisable to set max_prepared_transactions nonzero unless\n# you actively intend to use prepared transactions.\n#work_mem = 4MB\t\t\t\t# min 64kB\n#hash_mem_multiplier = 2.0\t\t# 1-1000.0 multiplier on hash table work_mem\n#maintenance_work_mem = 64MB\t\t# min 1MB\n#autovacuum_work_mem = -1\t\t# min 1MB, or -1 to use maintenance_work_mem\n#logical_decoding_work_mem = 64MB\t# min 64kB\n#max_stack_depth = 2MB\t\t\t# min 100kB\n#shared_memory_type = mmap\t\t# the default is the first option\n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t#   mmap\n\t\t\t\t\t#   sysv\n\t\t\t\t\t#   windows\n\t\t\t\t\t# (change requires restart)\ndynamic_shared_memory_type = posix\t# the default is usually the first option\n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t#   posix\n\t\t\t\t\t#   sysv\n\t\t\t\t\t#   windows\n\t\t\t\t\t#   mmap\n\t\t\t\t\t# (change requires restart)\n#min_dynamic_shared_memory = 0MB\t# (change requires restart)\n\n# - Disk -\n\n#temp_file_limit = -1\t\t\t# limits per-process temp file space\n\t\t\t\t\t# in kilobytes, or -1 for no limit\n\n# - Kernel Resources -\n\n#max_files_per_process = 1000\t\t# min 64\n\t\t\t\t\t# (change requires restart)\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0\t\t\t# 0-100 milliseconds (0 disables)\n#vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n#vacuum_cost_page_miss = 2\t\t# 0-10000 credits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n#vacuum_cost_limit = 200\t\t# 1-10000 credits\n\n# - Background Writer -\n\n#bgwriter_delay = 200ms\t\t\t# 10-10000ms between rounds\n#bgwriter_lru_maxpages = 100\t\t# max buffers written/round, 0 
disables\n#bgwriter_lru_multiplier = 2.0\t\t# 0-10.0 multiplier on buffers scanned/round\n#bgwriter_flush_after = 0\t\t# measured in pages, 0 disables\n\n# - Asynchronous Behavior -\n\n#backend_flush_after = 0\t\t# measured in pages, 0 disables\n#effective_io_concurrency = 0\t\t# 1-1000; 0 disables prefetching\n#maintenance_io_concurrency = 10\t# 1-1000; 0 disables prefetching\n#max_worker_processes = 8\t\t# (change requires restart)\n#max_parallel_workers_per_gather = 2\t# taken from max_parallel_workers\n#max_parallel_maintenance_workers = 2\t# taken from max_parallel_workers\n#max_parallel_workers = 8\t\t# maximum number of max_worker_processes that\n\t\t\t\t\t# can be used in parallel operations\n#parallel_leader_participation = on\n#old_snapshot_threshold = -1\t\t# 1min-60d; -1 disables; 0 is immediate\n\t\t\t\t\t# (change requires restart)\n\n\n#------------------------------------------------------------------------------\n# WRITE-AHEAD LOG\n#------------------------------------------------------------------------------\n\n# - Settings -\n\n#wal_level = replica\t\t\t# minimal, replica, or logical\n\t\t\t\t\t# (change requires restart)\n#fsync = on\t\t\t\t# flush data to disk for crash safety\n\t\t\t\t\t# (turning this off can cause\n\t\t\t\t\t# unrecoverable data corruption)\n#synchronous_commit = on\t\t# synchronization level;\n\t\t\t\t\t# off, local, remote_write, remote_apply, or on\n#wal_sync_method = fsync\t\t# the default is the first option\n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t#   open_datasync\n\t\t\t\t\t#   fdatasync (default on Linux and FreeBSD)\n\t\t\t\t\t#   fsync\n\t\t\t\t\t#   fsync_writethrough\n\t\t\t\t\t#   open_sync\n#full_page_writes = on\t\t\t# recover from partial page writes\n#wal_log_hints = off\t\t\t# also do full page writes of non-critical updates\n\t\t\t\t\t# (change requires restart)\n#wal_compression = off\t\t\t# enables compression of full-page writes;\n\t\t\t\t\t# off, pglz, lz4, zstd, or 
on\n#wal_init_zero = on\t\t\t# zero-fill new WAL files\n#wal_recycle = on\t\t\t# recycle WAL files\n#wal_buffers = -1\t\t\t# min 32kB, -1 sets based on shared_buffers\n\t\t\t\t\t# (change requires restart)\n#wal_writer_delay = 200ms\t\t# 1-10000 milliseconds\n#wal_writer_flush_after = 1MB\t\t# measured in pages, 0 disables\n#wal_skip_threshold = 2MB\n\n#commit_delay = 0\t\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n# - Checkpoints -\n\n#checkpoint_timeout = 5min\t\t# range 30s-1d\n#checkpoint_completion_target = 0.9\t# checkpoint target duration, 0.0 - 1.0\n#checkpoint_flush_after = 0\t\t# measured in pages, 0 disables\n#checkpoint_warning = 30s\t\t# 0 disables\nmax_wal_size = 1GB\nmin_wal_size = 80MB\n\n# - Prefetching during recovery -\n\n#recovery_prefetch = try\t\t# prefetch pages referenced in the WAL?\n#wal_decode_buffer_size = 512kB\t\t# lookahead window used for prefetching\n\t\t\t\t\t# (change requires restart)\n\n# - Archiving -\n\n#archive_mode = off\t\t# enables archiving; off, on, or always\n\t\t\t\t# (change requires restart)\n#archive_library = ''\t\t# library to use to archive a logfile segment\n\t\t\t\t# (empty string indicates archive_command should\n\t\t\t\t# be used)\n#archive_command = ''\t\t# command to use to archive a logfile segment\n\t\t\t\t# placeholders: %p = path of file to archive\n\t\t\t\t#               %f = file name only\n\t\t\t\t# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'\n#archive_timeout = 0\t\t# force a logfile segment switch after this\n\t\t\t\t# number of seconds; 0 disables\n\n# - Archive Recovery -\n\n# These are only used in recovery mode.\n\n#restore_command = ''\t\t# command to use to restore an archived logfile segment\n\t\t\t\t# placeholders: %p = path of file to restore\n\t\t\t\t#               %f = file name only\n\t\t\t\t# e.g. 
'cp /mnt/server/archivedir/%f %p'\n#archive_cleanup_command = ''\t# command to execute at every restartpoint\n#recovery_end_command = ''\t# command to execute at completion of recovery\n\n# - Recovery Target -\n\n# Set these only when performing a targeted recovery.\n\n#recovery_target = ''\t\t# 'immediate' to end recovery as soon as a\n                                # consistent state is reached\n\t\t\t\t# (change requires restart)\n#recovery_target_name = ''\t# the named restore point to which recovery will proceed\n\t\t\t\t# (change requires restart)\n#recovery_target_time = ''\t# the time stamp up to which recovery will proceed\n\t\t\t\t# (change requires restart)\n#recovery_target_xid = ''\t# the transaction ID up to which recovery will proceed\n\t\t\t\t# (change requires restart)\n#recovery_target_lsn = ''\t# the WAL LSN up to which recovery will proceed\n\t\t\t\t# (change requires restart)\n#recovery_target_inclusive = on # Specifies whether to stop:\n\t\t\t\t# just after the specified recovery target (on)\n\t\t\t\t# just before the recovery target (off)\n\t\t\t\t# (change requires restart)\n#recovery_target_timeline = 'latest'\t# 'current', 'latest', or timeline ID\n\t\t\t\t# (change requires restart)\n#recovery_target_action = 'pause'\t# 'pause', 'promote', 'shutdown'\n\t\t\t\t# (change requires restart)\n\n\n#------------------------------------------------------------------------------\n# REPLICATION\n#------------------------------------------------------------------------------\n\n# - Sending Servers -\n\n# Set these on the primary and on any standby that will send replication data.\n\n#max_wal_senders = 10\t\t# max number of walsender processes\n\t\t\t\t# (change requires restart)\n#max_replication_slots = 10\t# max number of replication slots\n\t\t\t\t# (change requires restart)\n#wal_keep_size = 0\t\t# in megabytes; 0 disables\n#max_slot_wal_keep_size = -1\t# in megabytes; -1 disables\n#wal_sender_timeout = 60s\t# in milliseconds; 0 
disables\n#track_commit_timestamp = off\t# collect timestamp of transaction commit\n\t\t\t\t# (change requires restart)\n\n# - Primary Server -\n\n# These settings are ignored on a standby server.\n\n#synchronous_standby_names = ''\t# standby servers that provide sync rep\n\t\t\t\t# method to choose sync standbys, number of sync standbys,\n\t\t\t\t# and comma-separated list of application_name\n\t\t\t\t# from standby(s); '*' = all\n#vacuum_defer_cleanup_age = 0\t# number of xacts by which cleanup is delayed\n\n# - Standby Servers -\n\n# These settings are ignored on a primary server.\n\n#primary_conninfo = ''\t\t\t# connection string to sending server\n#primary_slot_name = ''\t\t\t# replication slot on sending server\n#promote_trigger_file = ''\t\t# file name whose presence ends recovery\n#hot_standby = on\t\t\t# \"off\" disallows queries during recovery\n\t\t\t\t\t# (change requires restart)\n#max_standby_archive_delay = 30s\t# max delay before canceling queries\n\t\t\t\t\t# when reading WAL from archive;\n\t\t\t\t\t# -1 allows indefinite delay\n#max_standby_streaming_delay = 30s\t# max delay before canceling queries\n\t\t\t\t\t# when reading streaming WAL;\n\t\t\t\t\t# -1 allows indefinite delay\n#wal_receiver_create_temp_slot = off\t# create temp slot if primary_slot_name\n\t\t\t\t\t# is not set\n#wal_receiver_status_interval = 10s\t# send replies at least this often\n\t\t\t\t\t# 0 disables\n#hot_standby_feedback = off\t\t# send info from standby to prevent\n\t\t\t\t\t# query conflicts\n#wal_receiver_timeout = 60s\t\t# time that receiver waits for\n\t\t\t\t\t# communication from primary\n\t\t\t\t\t# in milliseconds; 0 disables\n#wal_retrieve_retry_interval = 5s\t# time to wait before retrying to\n\t\t\t\t\t# retrieve WAL after a failed attempt\n#recovery_min_apply_delay = 0\t\t# minimum delay for applying changes during recovery\n\n# - Subscribers -\n\n# These settings are ignored on a publisher.\n\n#max_logical_replication_workers = 4\t# taken from 
max_worker_processes\n\t\t\t\t\t# (change requires restart)\n#max_sync_workers_per_subscription = 2\t# taken from max_logical_replication_workers\n\n\n#------------------------------------------------------------------------------\n# QUERY TUNING\n#------------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_async_append = on\n#enable_bitmapscan = on\n#enable_gathermerge = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_incremental_sort = on\n#enable_indexscan = on\n#enable_indexonlyscan = on\n#enable_material = on\n#enable_memoize = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_parallel_append = on\n#enable_parallel_hash = on\n#enable_partition_pruning = on\n#enable_partitionwise_join = off\n#enable_partitionwise_aggregate = off\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0\t\t\t# measured on an arbitrary scale\n#random_page_cost = 4.0\t\t\t# same scale as above\n#cpu_tuple_cost = 0.01\t\t\t# same scale as above\n#cpu_index_tuple_cost = 0.005\t\t# same scale as above\n#cpu_operator_cost = 0.0025\t\t# same scale as above\n#parallel_setup_cost = 1000.0\t# same scale as above\n#parallel_tuple_cost = 0.1\t\t# same scale as above\n#min_parallel_table_scan_size = 8MB\n#min_parallel_index_scan_size = 512kB\n#effective_cache_size = 4GB\n\n#jit_above_cost = 100000\t\t# perform JIT compilation if available\n\t\t\t\t\t# and query more expensive than this;\n\t\t\t\t\t# -1 disables\n#jit_inline_above_cost = 500000\t\t# inline small functions if query is\n\t\t\t\t\t# more expensive than this; -1 disables\n#jit_optimize_above_cost = 500000\t# use expensive JIT optimizations if\n\t\t\t\t\t# query is more expensive than this;\n\t\t\t\t\t# -1 disables\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default based on 
effort\n#geqo_generations = 0\t\t\t# selects default based on effort\n#geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n#geqo_seed = 0.0\t\t\t# range 0.0-1.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 100\t# range 1-10000\n#constraint_exclusion = partition\t# on, off, or partition\n#cursor_tuple_fraction = 0.1\t\t# range 0.0-1.0\n#from_collapse_limit = 8\n#jit = on\t\t\t\t# allow JIT compilation\n#join_collapse_limit = 8\t\t# 1 disables collapsing of explicit\n\t\t\t\t\t# JOIN clauses\n#plan_cache_mode = auto\t\t\t# auto, force_generic_plan or\n\t\t\t\t\t# force_custom_plan\n#recursive_worktable_factor = 10.0\t# range 0.001-1000000\n\n\n#------------------------------------------------------------------------------\n# REPORTING AND LOGGING\n#------------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr'\t\t# Valid values are combinations of\n# From the Bulk Operations chapter, and CSV using the File FDW\nlog_destination = 'stderr,csvlog'\t\t# Valid values are combinations of\n\t\t\t\t\t# stderr, csvlog, jsonlog, syslog, and\n\t\t\t\t\t# eventlog, depending on platform.\n\t\t\t\t\t# csvlog and jsonlog require\n\t\t\t\t\t# logging_collector to be on.\n\n# This is used when logging to stderr:\nlogging_collector = on # Enable capturing of stderr, jsonlog,\n\t\t\t\t\t# and csvlog into log files. Required\n\t\t\t\t\t# to be on for csvlogs and jsonlogs.\n\t\t\t\t\t# (change requires restart)\n\n# These are only used if logging_collector is on:\n#log_directory = 'log'\t\t\t# directory where log files are written,\n\t\t\t\t\t# can be absolute or relative to PGDATA\n#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\t# log file name pattern,\n\t\t\t\t\t# can include strftime() escapes\n#log_file_mode = 0600\t\t\t# creation mode for log files,\n\t\t\t\t\t# begin with 0 to use octal notation\n#log_rotation_age = 1d\t\t\t# Automatic rotation of logfiles will\n\t\t\t\t\t# happen after that time.  
0 disables.\n#log_rotation_size = 10MB\t\t# Automatic rotation of logfiles will\n\t\t\t\t\t# happen after that much log output.\n\t\t\t\t\t# 0 disables.\n#log_truncate_on_rotation = off\t\t# If on, an existing log file with the\n\t\t\t\t\t# same name as the new log file will be\n\t\t\t\t\t# truncated rather than appended to.\n\t\t\t\t\t# But such truncation only occurs on\n\t\t\t\t\t# time-driven rotation, not on restarts\n\t\t\t\t\t# or size-driven rotation.  Default is\n\t\t\t\t\t# off, meaning append to existing files\n\t\t\t\t\t# in all cases.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n#syslog_sequence_numbers = on\n#syslog_split_messages = on\n\n# This is only relevant when logging to eventlog (Windows):\n# (change requires restart)\n#event_source = 'PostgreSQL'\n\n# - When to Log -\n\n#log_min_messages = warning\t\t# values in order of decreasing detail:\n\t\t\t\t\t#   debug5\n\t\t\t\t\t#   debug4\n\t\t\t\t\t#   debug3\n\t\t\t\t\t#   debug2\n\t\t\t\t\t#   debug1\n\t\t\t\t\t#   info\n\t\t\t\t\t#   notice\n\t\t\t\t\t#   warning\n\t\t\t\t\t#   error\n\t\t\t\t\t#   log\n\t\t\t\t\t#   fatal\n\t\t\t\t\t#   panic\n\n#log_min_error_statement = error\t# values in order of decreasing detail:\n\t\t\t\t\t#   debug5\n\t\t\t\t\t#   debug4\n\t\t\t\t\t#   debug3\n\t\t\t\t\t#   debug2\n\t\t\t\t\t#   debug1\n\t\t\t\t\t#   info\n\t\t\t\t\t#   notice\n\t\t\t\t\t#   warning\n\t\t\t\t\t#   error\n\t\t\t\t\t#   log\n\t\t\t\t\t#   fatal\n\t\t\t\t\t#   panic (effectively off)\n\nlog_min_duration_statement = 1000\t# -1 is disabled, 0 logs all statements\n\t\t\t\t\t# and their durations, > 0 logs only\n\t\t\t\t\t# statements running at least this number\n\t\t\t\t\t# of milliseconds\n\n#log_min_duration_sample = -1\t\t# -1 is disabled, 0 logs a sample of statements\n\t\t\t\t\t# and their durations, > 0 logs only a sample of\n\t\t\t\t\t# statements running at least this number\n\t\t\t\t\t# of milliseconds;\n\t\t\t\t\t# sample 
fraction is determined by log_statement_sample_rate\n\n#log_statement_sample_rate = 1.0\t# fraction of logged statements exceeding\n\t\t\t\t\t# log_min_duration_sample to be logged;\n\t\t\t\t\t# 1.0 logs all such statements, 0.0 never logs\n\n\n#log_transaction_sample_rate = 0.0\t# fraction of transactions whose statements\n\t\t\t\t\t# are logged regardless of their duration; 1.0 logs all\n\t\t\t\t\t# statements from all transactions, 0.0 never logs\n\n#log_startup_progress_interval = 10s\t# Time between progress updates for\n\t\t\t\t\t# long-running startup operations.\n\t\t\t\t\t# 0 disables the feature, > 0 indicates\n\t\t\t\t\t# the interval in milliseconds.\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = on\n#log_autovacuum_min_duration = 10min\t# log autovacuum activity;\n\t\t\t\t\t# -1 disables, 0 logs all actions and\n\t\t\t\t\t# their durations, > 0 logs only\n\t\t\t\t\t# actions running at least this number\n\t\t\t\t\t# of milliseconds.\n#log_checkpoints = on\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\n#log_error_verbosity = default\t\t# terse, default, or verbose messages\n#log_hostname = off\n#log_line_prefix = '%m [%p] '\t\t# special values:\n\t\t\t\t\t#   %a = application name\n\t\t\t\t\t#   %u = user name\n\t\t\t\t\t#   %d = database name\n\t\t\t\t\t#   %r = remote host and port\n\t\t\t\t\t#   %h = remote host\n\t\t\t\t\t#   %b = backend type\n\t\t\t\t\t#   %p = process ID\n\t\t\t\t\t#   %P = process ID of parallel group leader\n\t\t\t\t\t#   %t = timestamp without milliseconds\n\t\t\t\t\t#   %m = timestamp with milliseconds\n\t\t\t\t\t#   %n = timestamp with milliseconds (as a Unix epoch)\n\t\t\t\t\t#   %Q = query ID (0 if none or not computed)\n\t\t\t\t\t#   %i = command tag\n\t\t\t\t\t#   %e = SQL state\n\t\t\t\t\t#   %c = session ID\n\t\t\t\t\t#   %l = session line number\n\t\t\t\t\t#   %s = session start timestamp\n\t\t\t\t\t#   %v = 
virtual transaction ID\n\t\t\t\t\t#   %x = transaction ID (0 if none)\n\t\t\t\t\t#   %q = stop here in non-session\n\t\t\t\t\t#        processes\n\t\t\t\t\t#   %% = '%'\n\t\t\t\t\t# e.g. '<%u%%%d> '\n#log_lock_waits = off\t\t\t# log lock waits >= deadlock_timeout\n#log_recovery_conflict_waits = off\t# log standby recovery conflict waits\n\t\t\t\t\t# >= deadlock_timeout\n#log_parameter_max_length = -1\t\t# when logging statements, limit logged\n\t\t\t\t\t# bind-parameter values to N bytes;\n\t\t\t\t\t# -1 means print in full, 0 disables\n#log_parameter_max_length_on_error = 0\t# when logging an error, limit logged\n\t\t\t\t\t# bind-parameter values to N bytes;\n\t\t\t\t\t# -1 means print in full, 0 disables\n#log_statement = 'none'\t\t\t# none, ddl, mod, all\n#log_replication_commands = off\n#log_temp_files = -1\t\t\t# log temporary files equal or larger\n\t\t\t\t\t# than the specified size in kilobytes;\n\t\t\t\t\t# -1 disables, 0 logs all temp files\nlog_timezone = 'America/Chicago'\n\n\n#------------------------------------------------------------------------------\n# PROCESS TITLE\n#------------------------------------------------------------------------------\n\n#cluster_name = ''\t\t\t# added to process titles if nonempty\n\t\t\t\t\t# (change requires restart)\n#update_process_title = on\n\n\n#------------------------------------------------------------------------------\n# STATISTICS\n#------------------------------------------------------------------------------\n\n# - Cumulative Query and Index Statistics -\n\n#track_activities = on\n#track_activity_query_size = 1024\t# (change requires restart)\n#track_counts = on\n#track_io_timing = off\n#track_wal_io_timing = off\n#track_functions = none\t\t\t# none, pl, all\n#stats_fetch_consistency = cache\n\n\n# - Monitoring -\n\n#compute_query_id = auto\n#log_statement_stats = off\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = 
off\n\n\n#------------------------------------------------------------------------------\n# AUTOVACUUM\n#------------------------------------------------------------------------------\n\n#autovacuum = on\t\t\t# Enable autovacuum subprocess?  'on'\n\t\t\t\t\t# requires track_counts to also be on.\n#autovacuum_max_workers = 3\t\t# max number of autovacuum subprocesses\n\t\t\t\t\t# (change requires restart)\n#autovacuum_naptime = 1min\t\t# time between autovacuum runs\n#autovacuum_vacuum_threshold = 50\t# min number of row updates before\n\t\t\t\t\t# vacuum\n#autovacuum_vacuum_insert_threshold = 1000\t# min number of row inserts\n\t\t\t\t\t# before vacuum; -1 disables insert\n\t\t\t\t\t# vacuums\n#autovacuum_analyze_threshold = 50\t# min number of row updates before\n\t\t\t\t\t# analyze\n#autovacuum_vacuum_scale_factor = 0.2\t# fraction of table size before vacuum\n#autovacuum_vacuum_insert_scale_factor = 0.2\t# fraction of inserts over table\n\t\t\t\t\t# size before insert vacuum\n#autovacuum_analyze_scale_factor = 0.1\t# fraction of table size before analyze\n#autovacuum_freeze_max_age = 200000000\t# maximum XID age before forced vacuum\n\t\t\t\t\t# (change requires restart)\n#autovacuum_multixact_freeze_max_age = 400000000\t# maximum multixact age\n\t\t\t\t\t# before forced vacuum\n\t\t\t\t\t# (change requires restart)\n#autovacuum_vacuum_cost_delay = 2ms\t# default vacuum cost delay for\n\t\t\t\t\t# autovacuum, in milliseconds;\n\t\t\t\t\t# -1 means use vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1\t# default vacuum cost limit for\n\t\t\t\t\t# autovacuum, -1 means use\n\t\t\t\t\t# vacuum_cost_limit\n\n\n#------------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#------------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#client_min_messages = notice\t\t# values in order of decreasing detail:\n\t\t\t\t\t#   debug5\n\t\t\t\t\t#   debug4\n\t\t\t\t\t#   
debug3\n\t\t\t\t\t#   debug2\n\t\t\t\t\t#   debug1\n\t\t\t\t\t#   log\n\t\t\t\t\t#   notice\n\t\t\t\t\t#   warning\n\t\t\t\t\t#   error\n#search_path = '\"$user\", public'\t# schema names\n#row_security = on\n#default_table_access_method = 'heap'\n#default_tablespace = ''\t\t# a tablespace name, '' uses the default\n#default_toast_compression = 'pglz'\t# 'pglz' or 'lz4'\n#temp_tablespaces = ''\t\t\t# a list of tablespace names, '' uses\n\t\t\t\t\t# only default tablespace\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#default_transaction_deferrable = off\n#session_replication_role = 'origin'\n#statement_timeout = 0\t\t\t# in milliseconds, 0 is disabled\n#lock_timeout = 0\t\t\t# in milliseconds, 0 is disabled\n#idle_in_transaction_session_timeout = 0\t# in milliseconds, 0 is disabled\n#idle_session_timeout = 0\t\t# in milliseconds, 0 is disabled\n#vacuum_freeze_table_age = 150000000\n#vacuum_freeze_min_age = 50000000\n#vacuum_failsafe_age = 1600000000\n#vacuum_multixact_freeze_table_age = 150000000\n#vacuum_multixact_freeze_min_age = 5000000\n#vacuum_multixact_failsafe_age = 1600000000\n#bytea_output = 'hex'\t\t\t# hex, escape\n#xmlbinary = 'base64'\n#xmloption = 'content'\n#gin_pending_list_limit = 4MB\n\n# - Locale and Formatting -\n\ndatestyle = 'iso, mdy'\n#intervalstyle = 'postgres'\ntimezone = 'America/Chicago'\n#timezone_abbreviations = 'Default'     # Select the set of available time zone\n\t\t\t\t\t# abbreviations.  
Currently, there are\n\t\t\t\t\t#   Default\n\t\t\t\t\t#   Australia (historical usage)\n\t\t\t\t\t#   India\n\t\t\t\t\t# You can create your own file in\n\t\t\t\t\t# share/timezonesets/.\n#extra_float_digits = 1\t\t\t# min -15, max 3; any value >0 actually\n\t\t\t\t\t# selects precise output mode\n#client_encoding = sql_ascii\t\t# actually, defaults to database\n\t\t\t\t\t# encoding\n\n# These settings are initialized by initdb, but they can be changed.\nlc_messages = 'C'\t\t\t# locale for system error message\n\t\t\t\t\t# strings\nlc_monetary = 'C'\t\t\t# locale for monetary formatting\nlc_numeric = 'C'\t\t\t# locale for number formatting\nlc_time = 'C'\t\t\t\t# locale for time formatting\n\n# default configuration for text search\ndefault_text_search_config = 'pg_catalog.english'\n\n# - Shared Library Preloading -\n\n#local_preload_libraries = ''\n#session_preload_libraries = ''\n#shared_preload_libraries = ''\t# (change requires restart)\n#\n# Customized shared_preload_libraries for Rideshare\n# These extensions needed to be compiled for macOS and available\n# export PGDATA=\"$(psql $DATABASE_URL -c 'SHOW data_directory' --tuples-only | sed 's/^[ \\t]*//')\"\n# echo $PGDATA\n# vim $PGDATA/postgresql.conf\nshared_preload_libraries = 'pg_stat_statements,auto_explain,pg_cron,pg_hint_plan'\t# (change requires restart)\n\n#jit_provider = 'llvmjit'\t\t# JIT library to use\n\n# - Other Defaults -\n\n#dynamic_library_path = '$libdir'\n#gin_fuzzy_search_limit = 0\n\n\n#------------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#------------------------------------------------------------------------------\n\n#deadlock_timeout = 1s\n#max_locks_per_transaction = 64\t\t# min 10\n\t\t\t\t\t# (change requires restart)\n#max_pred_locks_per_transaction = 64\t# min 10\n\t\t\t\t\t# (change requires restart)\n#max_pred_locks_per_relation = -2\t# negative values mean\n\t\t\t\t\t# (max_pred_locks_per_transaction\n\t\t\t\t\t#  / 
-max_pred_locks_per_relation) - 1\n#max_pred_locks_per_page = 2            # min 0\n\n\n#------------------------------------------------------------------------------\n# VERSION AND PLATFORM COMPATIBILITY\n#------------------------------------------------------------------------------\n\n# - Previous PostgreSQL Versions -\n\n#array_nulls = on\n#backslash_quote = safe_encoding\t# on, off, or safe_encoding\n#escape_string_warning = on\n#lo_compat_privileges = off\n#quote_all_identifiers = off\n#standard_conforming_strings = on\n#synchronize_seqscans = on\n\n# - Other Platforms and Clients -\n\n#transform_null_equals = off\n\n\n#------------------------------------------------------------------------------\n# ERROR HANDLING\n#------------------------------------------------------------------------------\n\n#exit_on_error = off\t\t\t# terminate session on any error?\n#restart_after_crash = on\t\t# reinitialize after backend crash?\n#data_sync_retry = off\t\t\t# retry or panic on failure to fsync\n\t\t\t\t\t# data?\n\t\t\t\t\t# (change requires restart)\n#recovery_init_sync_method = fsync\t# fsync, syncfs (Linux 5.8+)\n\n\n#------------------------------------------------------------------------------\n# CONFIG FILE INCLUDES\n#------------------------------------------------------------------------------\n\n# These options allow settings to be loaded from files other than the\n# default postgresql.conf.  
Note that these are directives, not variable\n# assignments, so they can usefully be given more than once.\n\n#include_dir = '...'\t\t\t# include files ending in '.conf' from\n\t\t\t\t\t# a directory, e.g., 'conf.d'\n#include_if_exists = '...'\t\t# include file only if it exists\n#include = '...'\t\t\t# include file\n\n\n#------------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#------------------------------------------------------------------------------\n\n# Add settings for extensions here\n#\n\n# For PostgreSQL 14+, compute the query ID (shown in logs via %Q and in pg_stat_activity)\ncompute_query_id = on\n\n# Log statement duration\nlog_duration = on\n\n# Log customization\n#log_statement = 'all'\n#log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h,query_id=%Q '\nlog_line_prefix = 'pid=%p query_id=%Q: '\n\n# Log slow queries\nlog_min_duration_statement = 1000\n\n# auto_explain\nauto_explain.log_min_duration = 1000\n\n# pg_cron:\n# - Use the \"postgres\" database when running pg_cron for multiple databases\n# - For local Rideshare development, use rideshare_development\n# cron.database_name = 'postgres'\ncron.database_name = 'rideshare_development'\n\n# For pg_cron\n# Make sure the settings above are applied and PostgreSQL has restarted\n# psql -U postgres -d rideshare_development\n# create extension pg_cron;\n# GRANT USAGE ON SCHEMA cron TO owner;\n"
  },
  {
    "path": "postgresql/userlist.sample.txt",
    "content": "\"owner\" \"HSnDDgFtyW9fyFI\"\n"
  },
  {
    "path": "public/404.html",
    "content": "<!DOCTYPE html>\n<html>\n<head>\n  <title>The page you were looking for doesn't exist (404)</title>\n  <meta name=\"viewport\" content=\"width=device-width,initial-scale=1\">\n  <style>\n  .rails-default-error-page {\n    background-color: #EFEFEF;\n    color: #2E2F30;\n    text-align: center;\n    font-family: arial, sans-serif;\n    margin: 0;\n  }\n\n  .rails-default-error-page div.dialog {\n    width: 95%;\n    max-width: 33em;\n    margin: 4em auto 0;\n  }\n\n  .rails-default-error-page div.dialog > div {\n    border: 1px solid #CCC;\n    border-right-color: #999;\n    border-left-color: #999;\n    border-bottom-color: #BBB;\n    border-top: #B00100 solid 4px;\n    border-top-left-radius: 9px;\n    border-top-right-radius: 9px;\n    background-color: white;\n    padding: 7px 12% 0;\n    box-shadow: 0 3px 8px rgba(50, 50, 50, 0.17);\n  }\n\n  .rails-default-error-page h1 {\n    font-size: 100%;\n    color: #730E15;\n    line-height: 1.5em;\n  }\n\n  .rails-default-error-page div.dialog > p {\n    margin: 0 0 1em;\n    padding: 1em;\n    background-color: #F7F7F7;\n    border: 1px solid #CCC;\n    border-right-color: #999;\n    border-left-color: #999;\n    border-bottom-color: #999;\n    border-bottom-left-radius: 4px;\n    border-bottom-right-radius: 4px;\n    border-top-color: #DADADA;\n    color: #666;\n    box-shadow: 0 3px 8px rgba(50, 50, 50, 0.17);\n  }\n  </style>\n</head>\n\n<body class=\"rails-default-error-page\">\n  <!-- This file lives in public/404.html -->\n  <div class=\"dialog\">\n    <div>\n      <h1>The page you were looking for doesn't exist.</h1>\n      <p>You may have mistyped the address or the page may have moved.</p>\n    </div>\n    <p>If you are the application owner check the logs for more information.</p>\n  </div>\n</body>\n</html>\n"
  },
  {
    "path": "public/422.html",
    "content": "<!DOCTYPE html>\n<html>\n<head>\n  <title>The change you wanted was rejected (422)</title>\n  <meta name=\"viewport\" content=\"width=device-width,initial-scale=1\">\n  <style>\n  .rails-default-error-page {\n    background-color: #EFEFEF;\n    color: #2E2F30;\n    text-align: center;\n    font-family: arial, sans-serif;\n    margin: 0;\n  }\n\n  .rails-default-error-page div.dialog {\n    width: 95%;\n    max-width: 33em;\n    margin: 4em auto 0;\n  }\n\n  .rails-default-error-page div.dialog > div {\n    border: 1px solid #CCC;\n    border-right-color: #999;\n    border-left-color: #999;\n    border-bottom-color: #BBB;\n    border-top: #B00100 solid 4px;\n    border-top-left-radius: 9px;\n    border-top-right-radius: 9px;\n    background-color: white;\n    padding: 7px 12% 0;\n    box-shadow: 0 3px 8px rgba(50, 50, 50, 0.17);\n  }\n\n  .rails-default-error-page h1 {\n    font-size: 100%;\n    color: #730E15;\n    line-height: 1.5em;\n  }\n\n  .rails-default-error-page div.dialog > p {\n    margin: 0 0 1em;\n    padding: 1em;\n    background-color: #F7F7F7;\n    border: 1px solid #CCC;\n    border-right-color: #999;\n    border-left-color: #999;\n    border-bottom-color: #999;\n    border-bottom-left-radius: 4px;\n    border-bottom-right-radius: 4px;\n    border-top-color: #DADADA;\n    color: #666;\n    box-shadow: 0 3px 8px rgba(50, 50, 50, 0.17);\n  }\n  </style>\n</head>\n\n<body class=\"rails-default-error-page\">\n  <!-- This file lives in public/422.html -->\n  <div class=\"dialog\">\n    <div>\n      <h1>The change you wanted was rejected.</h1>\n      <p>Maybe you tried to change something you didn't have access to.</p>\n    </div>\n    <p>If you are the application owner check the logs for more information.</p>\n  </div>\n</body>\n</html>\n"
  },
  {
    "path": "public/500.html",
    "content": "<!DOCTYPE html>\n<html>\n<head>\n  <title>We're sorry, but something went wrong (500)</title>\n  <meta name=\"viewport\" content=\"width=device-width,initial-scale=1\">\n  <style>\n  .rails-default-error-page {\n    background-color: #EFEFEF;\n    color: #2E2F30;\n    text-align: center;\n    font-family: arial, sans-serif;\n    margin: 0;\n  }\n\n  .rails-default-error-page div.dialog {\n    width: 95%;\n    max-width: 33em;\n    margin: 4em auto 0;\n  }\n\n  .rails-default-error-page div.dialog > div {\n    border: 1px solid #CCC;\n    border-right-color: #999;\n    border-left-color: #999;\n    border-bottom-color: #BBB;\n    border-top: #B00100 solid 4px;\n    border-top-left-radius: 9px;\n    border-top-right-radius: 9px;\n    background-color: white;\n    padding: 7px 12% 0;\n    box-shadow: 0 3px 8px rgba(50, 50, 50, 0.17);\n  }\n\n  .rails-default-error-page h1 {\n    font-size: 100%;\n    color: #730E15;\n    line-height: 1.5em;\n  }\n\n  .rails-default-error-page div.dialog > p {\n    margin: 0 0 1em;\n    padding: 1em;\n    background-color: #F7F7F7;\n    border: 1px solid #CCC;\n    border-right-color: #999;\n    border-left-color: #999;\n    border-bottom-color: #999;\n    border-bottom-left-radius: 4px;\n    border-bottom-right-radius: 4px;\n    border-top-color: #DADADA;\n    color: #666;\n    box-shadow: 0 3px 8px rgba(50, 50, 50, 0.17);\n  }\n  </style>\n</head>\n\n<body class=\"rails-default-error-page\">\n  <!-- This file lives in public/500.html -->\n  <div class=\"dialog\">\n    <div>\n      <h1>We're sorry, but something went wrong.</h1>\n    </div>\n    <p>If you are the application owner check the logs for more information.</p>\n  </div>\n</body>\n</html>\n"
  },
  {
    "path": "public/robots.txt",
    "content": "# See https://www.robotstxt.org/robotstxt.html for documentation on how to use the robots.txt file\n"
  },
  {
    "path": "test/application_system_test_case.rb",
    "content": "require 'test_helper'\n\nclass ApplicationSystemTestCase < ActionDispatch::SystemTestCase\n  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]\nend\n"
  },
  {
    "path": "test/controllers/.keep",
    "content": ""
  },
  {
    "path": "test/controllers/api/trip_requests_controller_test.rb",
    "content": "require 'test_helper'\n\nclass Api::TripRequestsControllerTest < ActionDispatch::IntegrationTest\n  test 'CREATE a trip request works' do\n    rider = riders(:jane)\n    trip_request = {\n      rider_id: rider.id,\n      start_address: 'Boston, MA',\n      end_address: 'New York, NY'\n    }\n\n    post api_trip_requests_url, params: { trip_request: trip_request }\n    assert_response 201\n  end\n\n  test 'SHOW status for trip_request' do\n    trip_request = trip_requests(:big_trip)\n    get api_trip_request_url(trip_request)\n    assert_response 200\n    assert response.parsed_body['trip_request_id'].present?\n    assert response.parsed_body['trip_id'].present?\n  end\nend\n"
  },
  {
    "path": "test/controllers/api/trips_controller_test.rb",
    "content": "require 'test_helper'\n\nclass Api::TripsControllerTest < ActionDispatch::IntegrationTest\n  test 'GET to index works' do\n    get api_trips_url\n    assert_response 200\n    assert_equal 3, response.parsed_body.size\n  end\n\n  test 'searching by start location works' do\n    get api_trips_url, params: { start_location: 'New York' }\n    assert_response 200\n    assert_equal 2, response.parsed_body.size\n  end\n\n  test 'searching by driver name works' do\n    get api_trips_url, params: { driver_name: 'Jack' }\n    assert_response 200\n    assert_equal 1, response.parsed_body.size\n  end\n\n  test 'searching by start location and rider name works' do\n    get api_trips_url, params: { start_location: 'JFK', rider_name: 'Jessica' }\n    assert_response 200\n    assert_equal 1, response.parsed_body.size\n  end\n\n  test 'show a single trip' do\n    get api_trip_url(trip)\n    assert_response 200\n    assert_equal trip.id, response.parsed_body['id']\n  end\n\n  ### API: /my ###\n  test 'get my trips' do\n    get my_api_trips_url,\n        headers: { 'Authorization' => auth_token },\n        params: { rider_id: trip.rider.id }\n    assert_response 200\n    assert json = JSON.parse(response.body)\n\n    assert first_trip = json['data'][0]\n    assert_equal 'Jane D.', first_trip['attributes']['rider_name']\n    assert_equal 'Meg W.', first_trip['attributes']['driver_name']\n  end\n\n  test 'get my trips sparse fieldset all fields' do\n    get my_api_trips_url,\n        headers: { 'Authorization' => auth_token },\n        params: { rider_id: trip.rider.id, 'fields[trips]' => 'rider_name,driver_name' }\n    assert_response 200\n    assert json = JSON.parse(response.body)\n\n    assert first_trip = json['data'][0]\n    assert_equal 'Jane D.', first_trip['attributes']['rider_name']\n    assert_equal 'Meg W.', first_trip['attributes']['driver_name']\n  end\n\n  test 'get my trips sparse fieldset subset of fields' do\n    get my_api_trips_url,\n        headers: 
{ 'Authorization' => auth_token },\n        params: { rider_id: trip.rider.id, 'fields[trips]' => 'rider_name' }\n    assert_response 200\n    assert json = JSON.parse(response.body)\n\n    assert first_trip = json['data'][0]\n    assert_equal 'Jane D.', first_trip['attributes']['rider_name']\n    assert_nil first_trip['attributes']['driver_name']\n  end\n\n  test 'get my trips no auth token' do\n    get my_api_trips_url,\n        params: { rider_id: trip.rider.id }\n    assert_response 401\n  end\n\n  test 'get trip details' do\n    get details_api_trip_url(id: trip.id)\n    assert_response 200\n    assert json = JSON.parse(response.body)\n    assert json.has_key?('data')\n    assert_equal 'trip', json['data']['type']\n  end\n\n  test 'get trip details with driver fields as compound document' do\n    get details_api_trip_url(id: trip.id), params: { include: 'driver' }\n    assert_response 200\n    assert json = JSON.parse(response.body)\n    assert json.has_key?('data')\n    assert_equal 'trip', json['data']['type']\n\n    assert json.has_key?('included')\n    assert driver_details = json['included'][0]['attributes']\n\n    assert driver_details.has_key?('display_name')\n    assert driver_details.has_key?('average_rating')\n  end\n\n  test 'get trip details with driver fields as compound document with sparse fieldset on driver' do\n    get details_api_trip_url(id: trip.id),\n        params: { include: 'driver', 'fields[driver]' => 'average_rating' }\n\n    assert_response 200\n    assert json = JSON.parse(response.body)\n    assert json.has_key?('data')\n    assert_equal 'trip', json['data']['type']\n\n    assert json.has_key?('included')\n    assert driver_details = json['included'][0]['attributes']\n\n    assert driver_details.has_key?('average_rating')\n\n    assert_not driver_details.has_key?('display_name'),\n               'did not expect display_name for driver to be included'\n  end\n\n  private\n\n  def trip\n    @trip ||= trips(:completed_trip)\n  
end\n\n  def auth_token\n    JsonWebToken.encode(user_id: trip.rider.id)\n  end\nend\n"
  },
  {
    "path": "test/controllers/authentication_controller_test.rb",
    "content": "require 'test_helper'\n\nclass AuthenticationControllerTest < ActionDispatch::IntegrationTest\n  test 'POST to login with correct user credentials' do\n    post auth_login_url, params: {\n      email: rider.email,\n      password: 'abcd1234'\n    }\n\n    assert_response :ok\n    assert jwt_payload = JSON.parse(response.body)\n    assert jwt_payload.has_key?('token')\n    assert jwt_payload.has_key?('exp')\n    assert jwt_payload.has_key?('username')\n\n    assert_equal rider.display_name, jwt_payload['username']\n  end\n\n  test 'POST to login with INVALID credentials' do\n    post auth_login_url, params: {\n      email: rider.email,\n      password: 'abcd123'\n    }\n\n    assert_response :unauthorized\n  end\n\n  private\n\n  def rider\n    # Rider has the hashed password for \"abcd1234\"\n    # Stored in the field `password_digest`\n    @rider ||= riders(:jane)\n  end\nend\n"
  },
  {
    "path": "test/fixtures/.keep",
    "content": ""
  },
  {
    "path": "test/fixtures/drivers.yml",
    "content": "jack:\n  first_name: Jack\n  last_name: White\n  email: jack@email.com\n  type: Driver\n  password_digest: $3a$12$D0/yLVQ67zujhbLZUBkd3eD2w4oNaTg1MK2o2w3f4IOXiZ6az/X0O\n  drivers_license_number: P800000224322\n\nmeg:\n  first_name: Meg\n  last_name: White\n  email: meg@email.com\n  type: Driver\n  password_digest: $4a$12$D0/yLVQ67zujhbLZUBkd3eD2w4oNaTg1MK2o2w3f4IOXiZ6az/X0O\n  drivers_license_number: P800000224323\n"
  },
  {
    "path": "test/fixtures/files/.keep",
    "content": ""
  },
  {
    "path": "test/fixtures/locations.yml",
    "content": "nyc:\n  address: New York, NY\n  position: \"(40.7143528,-74.0059731)\"\n  state: NY\n\ncomedy_cellar:\n  address: \"117 MacDougal St, New York, NY 10012\"\n  position: \"(40.7303492,-74.0003215)\"\n  state: NY\n\nbos:\n  address: Boston, MA\n  position: \"(42.361145,-71.057083)\"\n  state: MA\n\njfk:\n  address: JFK Airport\n  position: \"(40.6413111,-73.7781391)\"\n  state: NY\n"
  },
  {
    "path": "test/fixtures/riders.yml",
    "content": "jane:\n  first_name: Jane\n  last_name: Doe\n  email: jane@email.com\n  type: Rider\n  password_digest: $2a$12$D0/yLVQ67zujhbLZUBkd3eD2w4oNaTg1MK2o2w3f4IOXiZ6az/X0O\n\njessica:\n  first_name: Jessica\n  last_name: Cruz\n  email: jessica@email.com\n  type: Rider\n\n"
  },
  {
    "path": "test/fixtures/trip_requests.yml",
    "content": "big_trip:\n  rider: jane\n  start_location: nyc\n  end_location: bos\n\nairport_trip:\n  rider: jessica\n  start_location: jfk\n  end_location: nyc\n\ngirls_night_out:\n  rider: jessica\n  start_location: nyc\n  end_location: comedy_cellar\n"
  },
  {
    "path": "test/fixtures/trips.yml",
    "content": "incomplete_trip:\n  trip_request: big_trip\n  driver: jack\n  completed_at:\n  rating:\n\ncompleted_trip:\n  trip_request: big_trip\n  driver: meg\n  completed_at: <%= 1.day.from_now.to_fs(:db) %>\n  rating:\n  created_at: <%= 1.day.ago.to_fs(:db) %>\n\nrated_trip:\n  trip_request: airport_trip\n  driver: meg\n  completed_at: <%= 1.day.ago.to_fs(:db) %>\n  rating: 5\n  created_at: <%= 1.week.ago.to_fs(:db) %>\n"
  },
  {
    "path": "test/fixtures/vehicle_reservations.yml",
    "content": "party_bus:\n  vehicle: party_bus\n  trip_request: girls_night_out\n  starts_at: <%= Time.zone.local(2022, 07, 28, 19, 00, 00) %>\n  ends_at: <%= Time.zone.local(2022, 07, 28, 23, 00, 00) %>\n  canceled: false # should be supplied by database as DEFAULT\n\n\n"
  },
  {
    "path": "test/fixtures/vehicles.yml",
    "content": "party_bus:\n  name: Party Bus\n"
  },
  {
    "path": "test/helpers/.keep",
    "content": ""
  },
  {
    "path": "test/mailers/.keep",
    "content": ""
  },
  {
    "path": "test/models/.keep",
    "content": ""
  },
  {
    "path": "test/models/driver_test.rb",
    "content": "require 'test_helper'\n\nclass DriverTest < ActiveSupport::TestCase\n  test 'valid driver' do\n    assert driver = Driver.new(email: 'email@email.com')\n    assert_not driver.valid?\n    assert !driver.errors[:first_name].include?('be blank')\n  end\n\n  test \"driver's license number format is validated\" do\n    assert driver = drivers(:meg)\n    driver.drivers_license_number = '123'\n    assert_not driver.valid?\n    assert_equal [\"is not a valid driver's license number\"], driver.errors[:drivers_license_number]\n  end\n\n  test \"driver's license number format must pass validation\" do\n    assert driver = drivers(:meg)\n    driver.drivers_license_number = 'P800000224325'\n    assert driver.valid?, driver.errors.full_messages\n  end\nend\n"
  },
  {
    "path": "test/models/location_test.rb",
    "content": "require 'test_helper'\n\nclass LocationTest < ActiveSupport::TestCase\n  test 'valid location' do\n    assert location = Location.new\n    assert_not location.valid?\n    assert !location.errors[:address].include?('be blank')\n  end\nend\n"
  },
  {
    "path": "test/models/rider_test.rb",
    "content": "require 'test_helper'\n\nclass RiderTest < ActiveSupport::TestCase\n  test 'valid rider' do\n    assert rider = Rider.new(email: 'email@email.com')\n    assert_not rider.valid?\n    assert !rider.errors[:first_name].include?('be blank')\n  end\nend\n"
  },
  {
    "path": "test/models/trip_request_test.rb",
    "content": "require 'test_helper'\n\nclass TripRequestTest < ActiveSupport::TestCase\n  test 'trip request works' do\n    trip_request = trip_requests(:airport_trip)\n    assert trip_request.trip.present?\n    assert trip_request.start_location.present?\n    assert trip_request.end_location.present?\n    assert trip_request.rider.present?\n  end\nend\n"
  },
  {
    "path": "test/models/trip_test.rb",
    "content": "require 'test_helper'\n\nclass TripTest < ActiveSupport::TestCase\n  setup do\n    @trip = trips(:completed_trip)\n  end\n\n  test 'rating values with valid values' do\n    @trip.rating = 1\n    assert @trip.valid?\n\n    @trip.rating = 5\n    assert @trip.valid?\n  end\n\n  test 'rating values with invalid values' do\n    @trip.rating = 0\n    assert_not @trip.valid?\n    assert_equal ['must be greater than or equal to 1'], @trip.errors[:rating]\n\n    @trip.rating = 6\n    assert_not @trip.valid?\n    assert_equal ['must be less than or equal to 5'], @trip.errors[:rating]\n\n    @trip.rating = 2.5\n    assert_not @trip.valid?\n    assert_equal ['must be an integer'], @trip.errors[:rating]\n  end\n\n  test 'rating requires a completed trip' do\n    @incomplete_trip = trips(:incomplete_trip)\n    @incomplete_trip.rating = 5\n\n    assert_not @incomplete_trip.valid?\n    assert_equal ['must be completed before a rating can be added'], @incomplete_trip.errors[:rating]\n  end\nend\n"
  },
  {
    "path": "test/models/user_test.rb",
    "content": "require 'test_helper'\n\nclass UserTest < ActiveSupport::TestCase\n  test 'user works' do\n    driver = drivers(:jack)\n    driver.email = '@email.com'\n    assert_not driver.valid?\n    assert_equal ['is not an email'], driver.errors[:email]\n  end\nend\n"
  },
  {
    "path": "test/models/vehicle_reservation_test.rb",
    "content": "require 'test_helper'\n\nclass VehicleReservationTest < ActiveSupport::TestCase\n  test 'validity' do\n    party_bus = VehicleReservation.new\n    assert_not party_bus.valid?\n    assert !party_bus.errors[:vehicle_id].include?('be blank')\n    assert !party_bus.errors[:starts_at].include?('be blank')\n    assert !party_bus.errors[:ends_at].include?('be blank')\n  end\nend\n"
  },
  {
    "path": "test/models/vehicle_test.rb",
    "content": "require 'test_helper'\n\nclass VehicleTest < ActiveSupport::TestCase\n  test 'validity' do\n    party_bus = Vehicle.new\n    assert_not party_bus.valid?\n    assert !party_bus.errors[:name].include?('be blank')\n  end\n\n  test 'a vehicle is in a draft state by default' do\n    vehicle = vehicles(:party_bus)\n    assert vehicle.status_draft?\n  end\nend\n"
  },
  {
    "path": "test/services/book_reservation_test.rb",
    "content": "require 'test_helper'\n\nclass BookReservationTest < ActiveSupport::TestCase\n  test 'can book reservation' do\n    jane = riders(:jane)\n    nyc = locations(:nyc)\n    comedy_cellar = locations(:comedy_cellar)\n    party_bus = vehicles(:party_bus)\n\n    reservation = BookReservation.new(\n      vehicle_id: party_bus.id,\n      rider_id: jane.id,\n      start_location_id: nyc.id,\n      end_location_id: comedy_cellar.id,\n      starts_at: Time.zone.local(2022, 0o7, 29, 20, 0o0, 0o0),\n      ends_at: Time.zone.local(2022, 0o7, 29, 23, 0o0, 0o0)\n    )\n\n    assert_difference -> { ::VehicleReservation.count }, +1 do\n      reservation.reserve!\n    end\n  end\n\n  test 'can NOT book overlapping reservation' do\n    existing_reservation = vehicle_reservations(:party_bus)\n\n    violation_msg = 'PG::ExclusionViolation: ERROR:  \" +\n    \"conflicting key value violates exclusion constraint \"non_overlapping_vehicle_registration\"'\n\n    assert_no_difference -> { ::VehicleReservation.count } do\n      assert_raises(ActiveRecord::StatementInvalid, violation_msg) do\n        new_reservation = BookReservation.new(\n          vehicle_id: existing_reservation.vehicle_id,\n          rider_id: existing_reservation.trip_request.rider.id,\n          start_location_id: existing_reservation.trip_request.start_location.id,\n          end_location_id: existing_reservation.trip_request.end_location.id,\n          starts_at: (existing_reservation.starts_at + 1.hour).to_s,\n          ends_at: (existing_reservation.starts_at + 2.hours).to_s\n        )\n\n        new_reservation.reserve!\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "test/services/trip_creator_test.rb",
    "content": "require 'test_helper'\n\nclass TripCreatorTest < ActiveSupport::TestCase\n  test 'can create trip' do\n    drivers(:jack) # at least one exists\n    trip_request = trip_requests(:big_trip)\n\n    trip_creator = TripCreator.new(\n      trip_request_id: trip_request.id\n    )\n\n    assert_difference -> { Trip.count }, +1 do\n      trip_creator.create_trip!\n    end\n  end\nend\n"
  },
  {
    "path": "test/services/trip_search_test.rb",
    "content": "require 'test_helper'\n\nclass TripSearchTest < ActiveSupport::TestCase\n  test 'trip search no params works' do\n    trip_search = TripSearch.new({})\n    assert trip_search.start_location\n    assert trip_search.driver_name\n    assert trip_search.rider_name\n  end\n\n  test 'trip search start location params' do\n    trip_search = TripSearch.new({ start_location: 'JFK' })\n    assert trip_search.start_location.count >= 1\n  end\n\n  test 'trip search driver name' do\n    trip_search = TripSearch.new({ driver_name: 'Meg' })\n    assert trip_search.driver_name.count >= 1\n  end\n\n  test 'trip search rider name' do\n    trip_search = TripSearch.new({ rider_name: 'Jane' })\n    assert trip_search.rider_name.count >= 1\n  end\nend\n"
  },
  {
    "path": "test/system/.keep",
    "content": ""
  },
  {
    "path": "test/test_helper.rb",
    "content": "ENV['RAILS_ENV'] ||= 'test'\nrequire_relative '../config/environment'\nrequire 'rails/test_help'\n\nclass ActiveSupport::TestCase\n  # Run tests in parallel with specified workers\n  parallelize(workers: :number_of_processors)\n\n  # Setup all fixtures in test/fixtures/*.yml for all tests in alphabetical order.\n  fixtures :all\n\n  # Add more helper methods to be used by all tests here...\n  #\n  Geocoder.configure(lookup: :test, ip_lookup: :test)\n\n  Geocoder::Lookup::Test.add_stub(\n    'New York, NY', [\n      {\n        'coordinates' => [40.7143528, -74.0059731],\n        'address' => 'New York, NY, USA',\n        'state' => 'New York',\n        'state_code' => 'NY',\n        'country' => 'United States',\n        'country_code' => 'US'\n      }\n    ]\n  )\n\n  Geocoder::Lookup::Test.add_stub(\n    'Boston, MA', [\n      {\n        'coordinates' => [42.361145, -71.057083],\n        'address' => 'Boston, MA, USA',\n        'state' => 'Boston',\n        'state_code' => 'MA',\n        'country' => 'United States',\n        'country_code' => 'US'\n      }\n    ]\n  )\nend\n"
  }
]