[
  {
    "path": ".gitignore",
    "content": ".DS_Store\r\n\r\n"
  },
  {
    "path": "README.md",
    "content": "> **Warning**\n> The guides are now part of Swift.org and will continue to be evolved there. This repo is now archived\n\n# Swift on Server Development Guide\n\n## Introduction\n\nThis guide is designed to help teams and individuals running Swift Server applications on Linux and to provide orientation for those who want to start with such development. \nIt focuses on how to build, test, deploy and debug such application and provides tips in those areas.\n\n## Contents\n\n- [Setup and code editing](docs/setup-and-ide-alternatives.md)\n- [Building](docs/building.md)\n- [Testing](docs/testing.md)\n- [Debugging Memory leaks](docs/memory-leaks-and-usage.md)\n- [Performance troubleshooting and analysis](docs/performance.md)\n- [Debugging multithreading issues and memory checks](docs/llvm-sanitizers.md)\n- [Deployment](docs/deployment.md)\n\n### Server-side library development\n\nServer-side libraries should, to the best of their ability, play well with established patterns in the SSWG library ecosystem.\nThey should also utilize the core observability libraries, such as: logging, metrics and distributed tracing, where applicable.\n\nThe below guidelines are aimed to help library developers with some of the typical questions and challenges they might face when designing server-side focused libraries with Swift:\n\n- [SwiftLog: Log level guidelines](docs/libs/log-levels.md)\n- [Swift Concurrency Adoption Guidelines](docs/concurrency-adoption-guidelines.md)\n\n_The guide is a community effort, and all are invited to share their tips and know-how. Please provide a PR if you have ideas for improving this guide!_\n"
  },
  {
    "path": "docs/allocations.md",
    "content": "# Allocations\n\nFor high-performance software in Swift, it's often important to understand where your heap allocations are coming from. The next step can then be to reduce the number of allocations your software makes.\n\nThis is very similar to other performance questions: Before you can optimise performance you need to understand where you spend your resources. And resources can be CPU time, as well as memory, or heap allocations.\nIn this document we will solely focus on the number of heap allocations, not their size.\n\nOn macOS, you can use Instruments's \"Allocations\" instrument. The Allocations instrument shows you two sets of values: The live allocations (i.e. allocated and not freed) as well as the transient allocations (all allocations made).\n\nYour production workloads however will likely run on Linux and depending on your setup the number of allocations can differ significantly between macOS and Linux.\n\n## Preparation\n\nTo not waste your time, be sure to do any profiling in _release mode_. Swift's optimiser will produce significantly faster code which will also allocate less in release mode. Usually this means you need to run\n\n    swift run -c release\n    \n#### Install `perf`\n\nFollow the [installation instructions](linux-perf.md) in the Linux `perf` utility guide.\n\n#### Clone the `FlameGraph` project\n\nTo see some pretty graphs, clone the [`FlameGraph`](https://github.com/brendangregg/FlameGraph) repository on the machine/container where you need it. 
The rest of this guide will assume that it's available at `/FlameGraph`:\n\n```\ngit clone https://github.com/brendangregg/FlameGraph\n```\n\nTip: With Docker, you may want to bind mount the `FlameGraph` repository into the container using\n\n```\ndocker run -it --rm \\\n           --privileged \\\n           -v \"/path/to/FlameGraphOnYourMachine:/FlameGraph:ro\" \\\n           -v \"$PWD:$PWD\" -w \"$PWD\" \\\n           swift:latest\n```\n\nor similar.\n\n\n## Tools\n\nIn this guide, we will be using the [Linux `perf`](https://perf.wiki.kernel.org/index.php/Main_Page) tool. If you're struggling to get `perf` to work, have a look at our [information regarding `perf`](linux-perf.md). If you're running in a Docker container, don't forget that you'll need a privileged container. And generally, you will need `root` access, so you may need to prefix the commands with `sudo`.\n\n## Getting a `perf` user probe\n\nIn this guide, we will be counting the number of allocations. Most allocations from a Swift program (on Linux) will be done through the `malloc` function.\n\nTo get information about when an allocation function is called, we will install `perf` \"user probes\" on the allocation functions. Because Swift also uses other allocation functions such as `calloc` and `posix_memalign`, we'll install a user probe for them all. 
From then on, there will be an event in `perf` that will fire whenever one of the allocation functions is called.\n\n```bash\n# figures out the path to libc\nlibc_path=$(readlink -e /lib64/libc.so.6 /lib/x86_64-linux-gnu/libc.so.6)\n\n# delete all existing user probes on libc (instead of * you can also list them individually)\nperf probe --del 'probe_libc:*'\n\n# installs a probe on `malloc`, `calloc`, and `posix_memalign`\nperf probe -x \"$libc_path\" --add malloc --add calloc --add posix_memalign\n```\n\nThe result (hopefully) looks somewhat like this:\n\n```\nAdded new events:\n  probe_libc:malloc    (on malloc in /usr/lib/x86_64-linux-gnu/libc-2.31.so)\n  probe_libc:calloc    (on calloc in /usr/lib/x86_64-linux-gnu/libc-2.31.so)\n  probe_libc:posix_memalign (on posix_memalign in /usr/lib/x86_64-linux-gnu/libc-2.31.so)\n\n[...]\n```\n\nWhat `perf` is telling you here is that it added new events called `probe_libc:malloc`, `probe_libc:calloc`, ... which will fire every time the respective function is called.\n\nLet's confirm that our `probe_libc:malloc` probe actually works by running:\n\n    perf stat -e probe_libc:malloc -- bash -c 'echo Hello World'\n    \nwhich should output something like\n\n```\nHello World\n\n Performance counter stats for 'bash -c echo Hello World':\n\n              1021      probe_libc:malloc                                           \n\n       0.003840500 seconds time elapsed\n\n       0.000000000 seconds user\n       0.003867000 seconds sys\n```\n\nSo the command allocated 1021 times, great. If that probe fired 0 times, something went wrong.\n\n## Running the allocation analysis\n\nAfter we have confirmed that our user probe on `malloc` works in general, let's dial it up a little. The first thing we'll need is a program that we'd like to analyse the allocations of.\n\nFor example, we could analyse a program which does 10 subsequent HTTP requests using [AsyncHTTPClient](https://github.com/swift-server/async-http-client). 
If you're interested in the full source code, please expand below.\n\n<details>\n<summary>Demo program source code</summary>\n\nWith the following dependencies\n\n```swift\n    dependencies: [\n        .package(url: \"https://github.com/swift-server/async-http-client.git\", from: \"1.3.0\"),\n        .package(url: \"https://github.com/apple/swift-nio.git\", from: \"2.29.0\"),\n        .package(url: \"https://github.com/apple/swift-log.git\", from: \"1.4.2\"),\n    ],\n```\n\nWe could write this program\n\n```swift\nimport AsyncHTTPClient\nimport NIO\nimport Logging\n\nlet urls = Array(repeating:\"http://httpbin.org/get\", count: 10)\nvar logger = Logger(label: \"ahc-alloc-demo\")\n\nlogger.info(\"running HTTP requests\", metadata: [\"count\": \"\\(urls.count)\"])\nMultiThreadedEventLoopGroup.withCurrentThreadAsEventLoop { eventLoop in\n    let httpClient = HTTPClient(eventLoopGroupProvider: .shared(eventLoop),\n                                backgroundActivityLogger: logger)\n\n    func doRemainingRequests(_ remaining: ArraySlice<String>,\n                             overallResult: EventLoopPromise<Void>,\n                             eventLoop: EventLoop) {\n        var remaining = remaining\n        if let first = remaining.popFirst() {\n            httpClient.get(url: first, logger: logger).map { [remaining] _ in\n                eventLoop.execute { // for shorter stacks\n                    doRemainingRequests(remaining, overallResult: overallResult, eventLoop: eventLoop)\n                }\n            }.whenFailure { error in\n                overallResult.fail(error)\n            }\n        } else {\n            return overallResult.succeed(())\n        }\n    }\n\n    let promise = eventLoop.makePromise(of: Void.self)\n    // Kick off the process\n    doRemainingRequests(urls[...],\n                        overallResult: promise,\n                        eventLoop: eventLoop)\n\n    promise.futureResult.whenComplete { result in\n        switch result {\n  
      case .success:\n            logger.info(\"all HTTP requests succeeded\")\n        case .failure(let error):\n            logger.error(\"HTTP request failure\", metadata: [\"error\": \"\\(error)\"])\n        }\n\n        httpClient.shutdown { maybeError in\n            if let error = maybeError {\n                logger.error(\"AHC shutdown failed\", metadata: [\"error\": \"\\(error)\"])\n            }\n            eventLoop.shutdownGracefully { maybeError in\n                if let error = maybeError {\n                    logger.error(\"EventLoop shutdown failed\", metadata: [\"error\": \"\\(error)\"])\n                }\n            }\n        }\n    }\n}\n\nlogger.info(\"exiting\")\n```\n</details>\n\nAssuming you have a program as a Swift package, you should first compile it in release mode using `swift build -c release`. Then you should find a binary called `.build/release/your-program-name`, which you can analyse.\n\n### Allocation counts\n\nBefore we go into visualising the allocations as a flame graph, let's start with the simplest analysis: Getting the total number of allocations.\n\n```\nperf stat -e 'probe_libc:*' -- .build/release/your-program-name\n```\n\nThe above command instructs `perf` to run your program and count the number of times the `probe_libc:*` probes were hit. This should be the number of allocations done by your program.\n\nThe output should look something like\n\n```\n Performance counter stats for '.build/release/your-program-name':\n\n                68      probe_libc:posix_memalign\n                35      probe_libc:calloc_1\n                 0      probe_libc:calloc\n              2977      probe_libc:malloc\n\n[...]\n```\n\nIn this case, my program allocated 2,977 times through `malloc` and a few more times through the other allocation functions. If you just want to compare the effects of a pull request, you may just want to run this `perf stat` command twice. 
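\n\nFor a quick before/after comparison, you can also sum the probe counts from `perf stat`'s machine-readable output: `perf stat -x,` emits comma-separated fields (the count first, the event name third) instead of the human-readable table. The exact column layout can vary between `perf` versions, so treat this `awk` one-liner as a sketch rather than a finished tool:\n\n```bash\n# sums the first field (the count) of every probe_libc:* event line\nperf stat -x, -e 'probe_libc:*' -- .build/release/your-program-name 2>&1 | \\\n    awk -F, '$3 ~ /^probe_libc:/ { total += $1 } END { print \"total allocations:\", total }'\n```\n\nRunning this once for each build gives you a single number per build that is easy to compare.\n\n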
If you would like to find out _where_ your allocations come from, read on.\n\nPlease note that in this guide we'll use `-e probe_libc:*` instead of individually listing every event like `-e probe_libc:malloc,probe_libc:calloc,probe_libc:calloc_1,probe_libc:posix_memalign`. This assumes that you have _no other_ `perf` user probes installed. If you do, please specify each event you would like to use individually.\n\n### Collecting the raw data\n\nWith `perf`, we can't really create live graphs whilst the program is running. For most analyses, we want to first record some raw data (usually with `perf record`) and later on transform the recorded data into a graph.\n\nTo get started, let's have `perf` run the program for us and collect the information using the `probe_libc` probes we set up before.\n\n```\nperf record --call-graph dwarf,16384 \\\n     -m 50000 \\\n     -e 'probe_libc:*' -- \\\n     .build/release/your-program-name\n```\n\nLet's break down this command a little:\n\n- `perf record` instructs `perf` to `record` data, makes sense.\n- `--call-graph dwarf,16384` instructs `perf` to use the [DWARF](http://www.dwarfstd.org) information to create the call graphs. It also sets the maximum stack dump size to 16 kB, which should be enough to get you full stack traces. Unfortunately, using DWARF is rather slow (see below) but it creates the best call graphs for you.\n- `-m 50000`: The size of the ring buffer that `perf` uses for buffering. This is given in multiples of `PAGE_SIZE` (usually 4kB) and especially with DWARF this needs to be pretty huge to prevent data loss.\n- `-e 'probe_libc:*'`: Record when the `malloc`/`calloc`/... 
probes fire.\n\nWhat you want to see is output like this:\n\n```\n<your program's output>\n[ perf record: Woken up 2 times to write data ]\n[ perf record: Captured and wrote 401.088 MB perf.data (49640 samples) ]\n```\n\nIf `perf` tells you about \"lost chunks\" and asks you to \"check the IO/CPU overhead\", you should jump to the 'Overcoming \"lost chunks\"' section at the end of this document.\n\n### Flame graphs\n\nAfter a successful `perf record`, you can invoke the following command line to produce an SVG file with the flame graph\n\n```bash\nperf script | \\\n    /FlameGraph/stackcollapse-perf.pl - | \\\n    swift demangle --simplified | \\\n    /FlameGraph/flamegraph.pl --countname allocations \\\n        --width 1600 > out.svg\n```\n\nLet's expand a little on what the above command does:\n\n- It runs `perf script` which dumps the binary information that `perf record` recorded into a textual form.\n- Next, we invoke `stackcollapse-perf` on it which transforms the stacks that `perf script` outputs into the right format for Flame Graphs,\n- then we invoke `swift demangle --simplified` which will give us nice symbol names,\n- and lastly we create the Flame Graph itself\n\nAfter this command has run (which may take a while), you should have an SVG file that you can open in your browser.\n\nFor the above example program, please see an example flame graph below. Note how you can hover over the stack frames and get more information. To focus on a subtree, you can click any stack frame too.\n\nGenerally, in flame graphs, the X axis just means \"count\"; it does **not** mean time. In other words, whether a stack appears on the left or the right is not determined by when that stack was live (this is different in flame _charts_).\n\nNote that this flame graph is _not_ a CPU flame graph: 1 sample means 1 allocation here, not time spent on the CPU. 
Also be aware that stack frames that appear wide don't necessarily allocate directly; it means that they, or something they call, allocated a lot. For example, `BaseSocketChannel.readable` is a very wide frame, and yet, it is not a function which allocates directly. However, it calls other functions (such as other parts of SwiftNIO and AsyncHTTPClient) that do allocate a lot. It may take a little while to get familiar with flame graphs, but there are great resources available online.\n\n![](../images/perf-malloc-full.svg)\n\n## Allocation flame graphs on macOS\n\nSo far, this tutorial focussed on Linux and the `perf` tool. You can however create the same graphs on macOS. The process is fairly similar.\n\nFirst, let's collect the raw data using [DTrace](https://en.wikipedia.org/wiki/DTrace).\n\n```\nsudo dtrace -n 'pid$target::malloc:entry,pid$target::posix_memalign:entry,pid$target::calloc:entry,pid$target::malloc_zone_malloc:entry,pid$target::malloc_zone_calloc:entry,pid$target::malloc_zone_memalign:entry { @s[ustack(100)] = count(); } ::END { printa(@s); }' -c .build/release/your-program > raw.stacks\n```\n\nSimilar to `perf`'s user probes, DTrace also has probes, and the above command instructs DTrace to aggregate the number of calls to the allocation functions `malloc`, `posix_memalign`, `calloc`, and the `malloc_zone_*` equivalents. On Apple platforms, Swift uses a slightly larger number of allocation functions than on Linux, therefore we need to specify a few more functions.\n\nOnce we've collected the data, we can also create an SVG file using\n\n```bash\ncat raw.stacks |\\\n    /FlameGraph/stackcollapse.pl - | \\\n    swift demangle --simplified | \\\n    /FlameGraph/flamegraph.pl --countname allocations \\\n        --width 1600 > out.svg\n```\n\nwhich you will notice is very similar to the `perf` invocation. 
The only differences are:\n\n- We use `cat raw.stacks` instead of `perf script` because we already have the textual data in a file with DTrace\n- Instead of `stackcollapse-perf.pl` (which parses `perf script` output) we use `stackcollapse.pl` (which parses DTrace aggregation output)\n\n## Other `perf` tricks\n\n### Prettifying Swift's allocation pattern\n\nAllocations in Swift usually have a very distinct shape:\n - Some code creates for example a class instance (which allocates).\n - This calls `swift_allocObject`,\n - which calls `swift_slowAlloc`,\n - which calls `malloc` (where we have our probe).\n\nTo make our flame graphs look nicer, we can apply a small transformation after we have demangled the collapsed stacks:\n\n```\nsed -e 's/specialized //g' \\\n    -e 's/;swift_allocObject;swift_slowAlloc;__libc_malloc/;A/g'\n```\n\nwhich will get rid of `\"specialized \"` and replaces `swift_allocObject` calling `swift_slowAlloc`, calling `malloc` with just an `A` (for allocation). The full command will then look like\n\n```\nperf script | \\\n    /FlameGraph/stackcollapse-perf.pl - | \\\n    swift demangle --simplified | \\\n    sed -e 's/specialized //g' \\\n        -e 's/;swift_allocObject;swift_slowAlloc;__libc_malloc/;A/g' | \\\n    /FlameGraph/flamegraph.pl --countname allocations --flamechart --hash \\\n    > out.svg\n```\n\n### Overcoming \"lost chunks\"\n\nWhen using `perf` with the DWARF call stack unwinding, it is unfortunately easy to run into the following issue\n\n```\n[ perf record: Woken up 189 times to write data ]\nWarning:\nProcessed 4346 events and lost 144 chunks!\n\nCheck IO/CPU overload!\n\n[ perf record: Captured and wrote 30.868 MB perf.data (3817 samples) ]\n```\n\nWhen `perf` tells you that it lost a number of chunks it means that it lost data. If `perf` lost data, you have a few options:\n\n- Reduce the amount of work your program is doing. 
For every allocation, `perf` will need to record a stack trace.\n- Reduce the maximum \"stack dump\" that `perf` records by changing the `--call-graph dwarf` parameter to, for example, `--call-graph dwarf,2048`. The default is to record a maximum of 4096 bytes, which gives you pretty deep stacks; if you don't need that, you can reduce the number. The tradeoff is that the flame graph may show you `[unknown]` stack frames, which means that there are missing stack frames there. The unit is bytes.\n- You can raise the value of the `-m` parameter, which is the size of the ring buffer that `perf` uses in memory (in multiples of `PAGE_SIZE`, usually 4kB).\n- You can give up on nice call graphs and replace `--call-graph dwarf` with `--call-graph fp` (`fp` stands for frame pointer).\n"
  },
  {
    "path": "docs/aws-copilot-fargate-vapor-mongo.md",
    "content": "# Server Side Swift on AWS with Fargate, Vapor, and MongoDB Atlas\n\nThis guide illustrates how to deploy a Server-Side Swift workload on AWS. The workload is a REST API for tracking a To Do List. It uses the [Vapor](https://vapor.codes/) framework to program the API methods. The methods store and retrieve data in a [MongoDB Atlas](https://www.mongodb.com/atlas/database) cloud database. The Vapor application is containerized and deployed to AWS on AWS Fargate using the [AWS Copilot](https://aws.github.io/copilot-cli/) toolkit.\n\n## Architecture\n![Architecture](../images/aws/aws-fargate-vapor-mongo.png)\n\n- Amazon API Gateway receives API requests\n- API Gateway locates your application containers in AWS Fargate through internal DNS managed by AWS Cloud Map\n- API Gateway forwards the requests to the containers\n- The containers run the Vapor framework and have methods to GET and POST items\n- Vapor stores and retrieves items in a MongoDB Atlas cloud database which runs in a MongoDB managed AWS account\n\n## Prerequisites\nTo build this sample application, you need:\n\n- [AWS Account](https://console.aws.amazon.com/)\n- [MongoDB Atlas Database](https://www.mongodb.com/atlas/database)\n- [AWS Copilot](https://aws.github.io/copilot-cli/) - a command-line tool used to create containerized workloads on AWS\n- [Docker Desktop](https://www.docker.com/products/docker-desktop/) - to compile your Swift code into a Docker image\n- [Vapor](https://vapor.codes/) - to code the REST service\n- [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) - install the CLI and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) it with credentials to your AWS account\n\n## Step 1: Create Your Database\nIf you are new to MongoDB Atlas, follow this [Getting Started Guide](https://www.mongodb.com/docs/atlas/getting-started/). 
You need to create the following items:\n- Atlas Account \n- Cluster\n- Database Username / Password\n- Database\n- Collection\n\nIn subsequent steps, you provide values to these items to configure the application.\n\n## Step 2: Initialize a New Vapor Project\n\nCreate a folder for your project.\n\n```\nmkdir todo-app && cd todo-app\n```\n\nInitialize a Vapor project named *api*.\n\n```\nvapor new api -n\n```\n\n## Step 3: Add Project Dependencies\nVapor initializes a *Package.swift* file for the project dependencies. Your project requires an additional library, [MongoDBVapor](https://github.com/mongodb/mongodb-vapor). Add the MongoDBVapor library to the project and target dependencies of your *Package.swift* file.\n\nYour updated file should look like this:\n\n**api/Package.swift**\n```swift\n// swift-tools-version:5.6\nimport PackageDescription\n\nlet package = Package(\n    name: \"api\",\n    platforms: [\n       .macOS(.v12)\n    ],\n    dependencies: [\n        .package(url: \"https://github.com/vapor/vapor\", .upToNextMajor(from: \"4.7.0\")),\n        .package(url: \"https://github.com/mongodb/mongodb-vapor\", .upToNextMajor(from: \"1.1.0\"))\n    ],\n    targets: [\n        .target(\n            name: \"App\",\n            dependencies: [\n                .product(name: \"Vapor\", package: \"vapor\"),\n                .product(name: \"MongoDBVapor\", package: \"mongodb-vapor\")\n            ],\n            swiftSettings: [\n                .unsafeFlags([\"-cross-module-optimization\"], .when(configuration: .release))\n            ]\n        ),\n        .executableTarget(name: \"Run\", dependencies: [.target(name: \"App\")]),\n        .testTarget(name: \"AppTests\", dependencies: [\n            .target(name: \"App\"),\n            .product(name: \"XCTVapor\", package: \"vapor\"),\n        ])\n    ]\n)\n```\n\n## Step 4: Update the Dockerfile\nYou deploy your Swift Server code to AWS Fargate as a Docker image. 
Vapor generates an initial Dockerfile for your application. Your application requires a few modifications to this Dockerfile:\n\n- pull the *build* and *run* images from the [Amazon ECR Public Gallery](https://gallery.ecr.aws)  container repository\n- install *libssl-dev* in the build image\n- install *libxml2* and *curl* in the run image\n\nReplace the contents of the Vapor generated Dockerfile with the following code:\n\n**api/Dockerfile**\n```Dockerfile\n# ================================\n# Build image\n# ================================\nFROM public.ecr.aws/docker/library/swift:5.6.2-focal as build\n\n# Install OS updates\nRUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \\\n    && apt-get -q update \\\n    && apt-get -q dist-upgrade -y \\\n    && apt-get -y install libssl-dev \\\n    && rm -rf /var/lib/apt/lists/*\n\n# Set up a build area\nWORKDIR /build\n\n# First just resolve dependencies.\n# This creates a cached layer that can be reused\n# as long as your Package.swift/Package.resolved\n# files do not change.\nCOPY ./Package.* ./\nRUN swift package resolve\n\n# Copy entire repo into container\nCOPY . 
.\n\n# Build everything, with optimizations\nRUN swift build -c release --static-swift-stdlib\n\n# Switch to the staging area\nWORKDIR /staging\n\n# Copy main executable to staging area\nRUN cp \"$(swift build --package-path /build -c release --show-bin-path)/Run\" ./\n\n# Copy resources bundled by SPM to staging area\nRUN find -L \"$(swift build --package-path /build -c release --show-bin-path)/\" -regex '.*\\.resources$' -exec cp -Ra {} ./ \\;\n\n# Copy any resources from the public directory and views directory if the directories exist\n# Ensure that by default, neither the directory nor any of its contents are writable.\nRUN [ -d /build/Public ] && { mv /build/Public ./Public && chmod -R a-w ./Public; } || true\nRUN [ -d /build/Resources ] && { mv /build/Resources ./Resources && chmod -R a-w ./Resources; } || true\n\n# ================================\n# Run image\n# ================================\nFROM public.ecr.aws/ubuntu/ubuntu:focal\n\n# Make sure all system packages are up to date, and install only essential packages.\nRUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \\\n    && apt-get -q update \\\n    && apt-get -q dist-upgrade -y \\\n    && apt-get -q install -y \\\n      ca-certificates \\\n      tzdata \\\n      curl \\\n      libxml2 \\\n    && rm -r /var/lib/apt/lists/*\n\n# Create a vapor user and group with /app as its home directory\nRUN useradd --user-group --create-home --system --skel /dev/null --home-dir /app vapor\n\n# Switch to the new home directory\nWORKDIR /app\n\n# Copy built executable and any staged resources from builder\nCOPY --from=build --chown=vapor:vapor /staging /app\n\n# Ensure all further commands run as the vapor user\nUSER vapor:vapor\n\n# Let Docker bind to port 8080\nEXPOSE 8080\n\n# Start the Vapor service when the image is run, default to listening on 8080 in production environment\nENTRYPOINT [\"./Run\"]\nCMD [\"serve\", \"--env\", \"production\", \"--hostname\", \"0.0.0.0\", \"--port\", 
\"8080\"]\n```\n## Step 5: Update the Vapor Source Code\nVapor also generates the sample files needed to code an API. You must customize these files with code that exposes your To Do List API methods and interacts with your MongoDB database.\n\nThe *configure.swift* file initializes an application-wide pool of connections to your MongoDB database. It retrieves the connection string to your MongoDB database from an environment variable at runtime.\n\nReplace the contents of the file with the following code:\n\n**api/Sources/App/configure.swift**\n```swift\nimport MongoDBVapor\nimport Vapor\n\npublic func configure(_ app: Application) throws {\n\n    let MONGODB_URI = Environment.get(\"MONGODB_URI\") ?? \"\"\n\n    try app.mongoDB.configure(MONGODB_URI)\n\n    ContentConfiguration.global.use(encoder: ExtendedJSONEncoder(), for: .json)\n    ContentConfiguration.global.use(decoder: ExtendedJSONDecoder(), for: .json)\n\n    try routes(app)\n}\n```\n\nThe *routes.swift* file defines the methods to your API. These include a *POST Item* method to insert a new item and a *GET Items* method to retrieve a list of all existing items. See comments in the code to understand what happens in each section.\n\nReplace the contents of the file with the following code:\n\n**api/Sources/App/routes.swift**\n```swift\nimport Vapor\nimport MongoDBVapor\n\n// define the structure of a ToDoItem\nstruct ToDoItem: Content {\n    var _id: BSONObjectID?\n    let name: String\n    var createdOn: Date?\n}\n\n// import the MongoDB database and collection names from environment variables\nlet MONGODB_DATABASE = Environment.get(\"MONGODB_DATABASE\") ?? \"\"\nlet MONGODB_COLLECTION = Environment.get(\"MONGODB_COLLECTION\") ?? 
\"\"\n\n// define an extenstion to the Vapor Request object to interact with the database and collection\nextension Request {\n\n    var todoCollection: MongoCollection<ToDoItem> {\n        self.application.mongoDB.client.db(MONGODB_DATABASE).collection(MONGODB_COLLECTION, withType: ToDoItem.self)\n    }\n}\n\n// define the api routes\nfunc routes(_ app: Application) throws {\n\n    // an base level route used for container healthchecks\n    app.get { req in\n        return \"OK\"\n    }\n\n    // GET items returns a JSON array of all items in the database\n    app.get(\"items\") { req async throws -> [ToDoItem] in\n        try await req.todoCollection.find().toArray()\n    }\n\n    // POST item inserts a new item into the database and returns the item as JSON\n    app.post(\"item\") { req async throws -> ToDoItem in\n        \n        var item = try req.content.decode(ToDoItem.self)\n        item.createdOn = Date()\n        \n        let response = try await req.todoCollection.insertOne(item)\n        item._id = response?.insertedID.objectIDValue\n\n        return item\n    }\n}\n```\n\nThe *main.swift* file defines the startup and shutdown code for the application. Change the code to include a *defer* statement to close the connection to your MongoDB database when the application ends.\n\nReplace the contents of the file with the following code:\n\n**api/Sources/Run/main.swift**\n```swift\nimport App\nimport Vapor\nimport MongoDBVapor\n\nvar env = try Environment.detect()\ntry LoggingSystem.bootstrap(from: &env)\nlet app = Application(env)\ntry configure(app)\n\n// shutdown and cleanup the MongoDB connection when the application terminates\ndefer {\n  app.mongoDB.cleanup()\n  cleanupMongoSwift()\n  app.shutdown()\n}\n\ntry app.run()\n```\n\n## Step 6: Initialize AWS Copilot\n[AWS Copilot](https://aws.github.io/copilot-cli/) is a command-line utility for generating a containerized application in AWS. 
You use Copilot to build and deploy your Vapor code as containers in Fargate. Copilot also creates and tracks an AWS Systems Manager secret parameter for the value of your MongoDB connection string. You store this value as a secret as it contains the username and password to your database. You never want to store this in your source code. Finally, Copilot creates an API Gateway to expose a public endpoint for your API.\n\nInitialize a new Copilot application.\n\n```bash\ncopilot app init todo\n```\n\nAdd a new Copilot *Backend Service*. The service refers to the Dockerfile of your Vapor project for instructions on how to build the container.\n\n```bash\ncopilot svc init --name api --svc-type \"Backend Service\" --dockerfile ./api/Dockerfile\n```\n\nCreate a Copilot environment for your application. An environment typically aligns to a phase, such as dev, test, or prod. When prompted, select the AWS credentials profile you configured with the AWS CLI.\n\n```bash\ncopilot env init --name dev --app todo --default-config\n```\n\nDeploy the *dev* environment:\n\n```bash\ncopilot env deploy --name dev\n```\n\n## Step 7: Create a Copilot Secret for Database Credentials\n\nYour application requires credentials to authenticate to your MongoDB Atlas database. You should never store this sensitive information in your source code. Create a Copilot *secret* to store the credentials. This stores the connection string to your MongoDB cluster in an AWS Systems Manager Secret Parameter.\n\nDetermine the connection string from the MongoDB Atlas website. Select the *Connect* button on your cluster page and then *Connect your application*.\n\n![Architecture](../images/aws/aws-fargate-vapor-mongo-atlas-connection.png)\n\nSelect *Swift version 1.2.0* as the Driver and copy the displayed connection string. 
It looks something like this:\n\n```bash\nmongodb+srv://username:<password>@mycluster.mongodb.net/?retryWrites=true&w=majority\n```\n\nThe connection string contains your database username and a placeholder for the password. Replace the **\\<password\\>** section with your database password. Then create a new Copilot secret named MONGODB_URI and save your connection string when prompted for the value.\n\n```bash\ncopilot secret init --app todo --name MONGODB_URI\n```\n\nFargate injects the secret value as an environment variable into your container at runtime. In Step 5 above, you extracted this value in your *api/Sources/App/configure.swift* file and used it to configure your MongoDB connection.\n\n## Step 8: Configure the Backend Service\n\nCopilot generates a *manifest.yml* file for your application that defines the attributes of your service, such as the Docker image, network, secrets, and environment variables. Change the manifest file generated by Copilot to add the following properties:\n\n- configure a health check for the container image\n- add a reference to the MONGODB_URI secret\n- configure the service network as *private*\n- add environment variables for the MONGODB_DATABASE and MONGODB_COLLECTION\n\nTo implement these changes, replace the contents of the *manifest.yml* file with the following code. Update the values of MONGODB_DATABASE and MONGODB_COLLECTION to reflect the names of the database and cluster you created in MongoDB Atlas for this application. \n\nIf you are building this solution on a **Mac M1/M2** machine, uncomment the **platform** property in the manifest.yml file to specify an ARM build. 
The default value is *linux/x86_64*.\n\n**copilot/api/manifest.yml**\n```yaml\n# The manifest for the \"api\" service.\n# Read the full specification for the \"Backend Service\" type at:\n#  https://aws.github.io/copilot-cli/docs/manifest/backend-service/\n\n# Your service name will be used in naming your resources like log groups, ECS services, etc.\nname: api\ntype: Backend Service\n\n# Your service is reachable at \"http://api.${COPILOT_SERVICE_DISCOVERY_ENDPOINT}:8080\" but is not public.\n\n# Configuration for your containers and service.\nimage:\n  # Docker build arguments. For additional overrides: https://aws.github.io/copilot-cli/docs/manifest/backend-service/#image-build\n  build: api/Dockerfile\n  # Port exposed through your container to route traffic to it.\n  port: 8080\n  healthcheck:\n    command: [\"CMD-SHELL\", \"curl -f http://localhost:8080 || exit 1\"]\n    interval: 10s\n    retries: 2\n    timeout: 5s\n    start_period: 0s\n\n# Mac M1/M2 users - uncomment the following platform line\n# the default platform is linux/x86_64\n\n# platform: linux/arm64\n\ncpu: 256       # Number of CPU units for the task.\nmemory: 512    # Amount of memory in MiB used by the task.\ncount: 2       # Number of tasks that should be running in your service.\nexec: true     # Enable running commands in your container.\n\n# define the network as private. 
this will place Fargate in private subnets\nnetwork:\n  vpc:\n    placement: private\n\n# Optional fields for more advanced use-cases.\n#\n# Pass environment variables as key value pairs.\nvariables:\n MONGODB_DATABASE: home\n MONGODB_COLLECTION: todolist\n\n# Pass secrets from AWS Systems Manager (SSM) Parameter Store.\nsecrets:\n MONGODB_URI: /copilot/${COPILOT_APPLICATION_NAME}/${COPILOT_ENVIRONMENT_NAME}/secrets/MONGODB_URI\n\n# You can override any of the values defined above by environment.\n#environments:\n#  test:\n#    count: 2               # Number of tasks to run for the \"test\" environment.\n#    deployment:            # The deployment strategy for the \"test\" environment.\n#       rolling: 'recreate' # Stops existing tasks before new ones are started for faster deployments.\n```\n\n## Step 9: Create a Copilot Addon Service for your API Gateway\n\nCopilot does not have the capability to add an API Gateway to your application. You can, however, add additional AWS resources to your application using [Copilot \"Addons\"](https://aws.github.io/copilot-cli/docs/developing/additional-aws-resources/#how-to-do-i-add-other-resources).\n\nDefine an addon by creating an *addons* folder under your Copilot service folder and adding a CloudFormation YAML template that defines the resources you wish to create.\n\nCreate a folder for the addon:\n\n```bash\nmkdir -p copilot/api/addons\n```\n\nCreate a file to define the API Gateway:\n\n```bash\ntouch copilot/api/addons/apigateway.yml\n```\n\nCreate a file to pass parameters from the main service into the addon service:\n\n```bash\ntouch copilot/api/addons/addons.parameters.yml\n```\n\nCopy the following code into the *addons.parameters.yml* file. It passes the ARN of the Cloud Map service into the addon stack.\n\n**copilot/api/addons/addons.parameters.yml**\n```yaml\nParameters:\n   DiscoveryServiceARN:  !GetAtt DiscoveryService.Arn\n```\n\nCopy the following code into the *addons/apigateway.yml* file. 
It creates an API Gateway using the DiscoveryServiceARN to integrate with the Cloud Map service Copilot created for your Fargate containers.\n\n**copilot/api/addons/apigateway.yml**\n```yaml\nParameters:\n  App:\n    Type: String\n    Description: Your application's name.\n  Env:\n    Type: String\n    Description: The environment name your service, job, or workflow is being deployed to.\n  Name:\n    Type: String\n    Description: The name of the service, job, or workflow being deployed.\n  DiscoveryServiceARN:\n    Type: String\n    Description: The ARN of the Cloud Map discovery service.\n\nResources:\n  ApiVpcLink:\n    Type: AWS::ApiGatewayV2::VpcLink\n    Properties:\n      Name: !Sub \"${App}-${Env}-${Name}\"\n      SubnetIds:\n        !Split [\",\", Fn::ImportValue: !Sub \"${App}-${Env}-PrivateSubnets\"]\n      SecurityGroupIds:\n        - Fn::ImportValue: !Sub \"${App}-${Env}-EnvironmentSecurityGroup\"\n\n  ApiGatewayV2Api:\n    Type: \"AWS::ApiGatewayV2::Api\"\n    Properties:\n      Name: !Sub \"${Name}.${Env}.${App}.api\"\n      ProtocolType: \"HTTP\"\n      CorsConfiguration:\n        AllowHeaders:\n          - \"*\"\n        AllowMethods:\n          - \"*\"\n        AllowOrigins:\n          - \"*\"\n\n  ApiGatewayV2Stage:\n    Type: \"AWS::ApiGatewayV2::Stage\"\n    Properties:\n      StageName: \"$default\"\n      ApiId: !Ref ApiGatewayV2Api\n      AutoDeploy: true\n\n  ApiGatewayV2Integration:\n    Type: \"AWS::ApiGatewayV2::Integration\"\n    Properties:\n      ApiId: !Ref ApiGatewayV2Api\n      ConnectionId: !Ref ApiVpcLink\n      ConnectionType: \"VPC_LINK\"\n      IntegrationMethod: \"ANY\"\n      IntegrationType: \"HTTP_PROXY\"\n      IntegrationUri: !Sub \"${DiscoveryServiceARN}\"\n      TimeoutInMillis: 30000\n      PayloadFormatVersion: \"1.0\"\n\n  ApiGatewayV2Route:\n    Type: \"AWS::ApiGatewayV2::Route\"\n    Properties:\n      ApiId: !Ref ApiGatewayV2Api\n      RouteKey: \"$default\"\n      Target: !Sub 
\"integrations/${ApiGatewayV2Integration}\"\n```\n\n## Step 10: Deploy the Copilot Service\nWhen deploying your service, Copilot executes the following actions:\n\n- builds your Vapor Docker image\n- pushes the image to Amazon Elastic Container Registry (ECR) in your AWS account\n- creates and deploys an AWS CloudFormation template into your AWS account. CloudFormation creates all the services defined in your application.\n\n```bash\ncopilot svc deploy --name api --app todo --env dev\n```\n\n## Step 11: Configure MongoDB Atlas Network Access\nMongoDB Atlas uses an IP Access List to restrict access to your database to a specific list of source IP addresses. In your application, traffic from your containers originates from the public IP addresses of the NAT Gateways in your application's network. You must configure MongoDB Atlas to allow traffic from these IP addresses.\n\nTo get the IP addresses of the NAT Gateways, run the following AWS CLI command:\n\n```bash\naws ec2 describe-nat-gateways --filter \"Name=tag-key,Values=copilot-application\" --query 'NatGateways[?State == `available`].NatGatewayAddresses[].PublicIp' --output table\n```\n\nOutput:\n\n```bash\n---------------------\n|DescribeNatGateways|\n+-------------------+\n|  1.1.1.1          |\n|  2.2.2.2          |\n+-------------------+\n```\n\nIn your MongoDB Atlas account, create a Network Access rule for each of these IP addresses.\n\n![Architecture](../images/aws/aws-fargate-vapor-mongo-atlas-network-address.png)\n\n## Step 12: Use your API\n\nTo get the endpoint for your API, use the following AWS CLI command:\n\n```bash\naws apigatewayv2 get-apis --query 'Items[?Name==`api.dev.todo.api`].ApiEndpoint' --output table\n```\n\nOutput:\n\n```bash\n------------------------------------------------------------\n|                          GetApis                         |\n+----------------------------------------------------------+\n|  https://[your-api-endpoint]                             
|\n+----------------------------------------------------------+\n```\n\nUse cURL or a tool such as [Postman](https://www.postman.com/) to interact with your API.\n\nAdd a To Do List item:\n\n```bash\ncurl --request POST 'https://[your-api-endpoint]/item' --header 'Content-Type: application/json' --data-raw '{\"name\": \"my todo item\"}'\n```\n\nRetrieve To Do List items:\n\n```bash\ncurl https://[your-api-endpoint]/items\n```\n\n## Cleanup\nWhen finished with your application, use Copilot to delete it. This deletes all the resources Copilot created in your AWS account.\n\n```bash\ncopilot app delete --name todo\n```\n"
  },
  {
    "path": "docs/aws.md",
"content": "# Deploying to AWS on Amazon Linux 2\n\nThis guide describes how to launch an AWS instance running Amazon Linux 2 and configure it to run Swift. The approach taken here is a step-by-step walkthrough using the console. This is a great way to learn, but for a more mature approach we recommend using Infrastructure as Code tools such as AWS CloudFormation, with instances created and managed through automated tools such as Auto Scaling groups. For one approach using those tools see this blog article: https://aws.amazon.com/blogs/opensource/continuous-delivery-with-server-side-swift-on-amazon-linux-2/\n\n## Launch AWS Instance\n\nUse the Service menu to select the EC2 service.\n\n![Select EC2 service](../images/aws/services.png)\n\nClick on \"Instances\" in the \"Instances\" menu.\n\n![Select Instances](../images/aws/ec2.png)\n\nClick on \"Launch Instance\", either on the top of the screen, or if this is the first instance you have created in the region, in the main section of the screen.\n\n![Launch instance](../images/aws/launch-0.png)\n\nChoose an Amazon Machine Image (AMI). In this case the guide is assuming that we will be using Amazon Linux 2, so select that AMI type.\n\n![Choose AMI](../images/aws/launch-1.png)\n\nChoose an instance type. Larger instance types have more memory and CPU, but are more expensive. A smaller instance type will be sufficient to experiment. In this case I have a `t2.micro` instance type selected.\n\n![Choose Instance type](../images/aws/launch-2.png)\n\nConfigure instance details. If you want this instance to be directly accessible from the internet, ensure that the subnet you select auto-assigns a public IP. It is assumed that the VPC has internet connectivity, which means that it needs to have an Internet Gateway (IGW) and the correct networking rules, but this is the case for the default VPC. 
If you wish to set this instance up in a private (non-internet accessible) VPC you will need to set up a bastion host, AWS Systems Manager Session Manager, or some other mechanism to connect to the instance.\n\n![Choose Instance details](../images/aws/launch-3.png)\n\nAdd storage. The AWS EC2 launch wizard will suggest some form of storage by default. For our testing purposes this should be fine, but if you know that you need more storage, or different storage performance requirements, then you can change the size and volume type here.\n\n![Choose Instance storage](../images/aws/launch-4.png)\n\nAdd tags. It is recommended you add as many tags as you need to correctly identify this server later. Especially if you have many servers, it can be difficult to remember which one was used for which purpose. At a very minimum, add a `Name` tag with something memorable.\n\n![Add tags](../images/aws/launch-5.png)\n\nConfigure security group. The security group is a stateful firewall that limits the traffic that is accepted by your instance. It is recommended to limit this as much as possible. In this case we are configuring it to only allow traffic on port 22 (ssh). It is recommended to restrict the source as well. To limit it to your workstation's current IP, click on the dropdown under \"Source\" and select \"My IP\".\n\n![Configure security group](../images/aws/launch-6.png)\n\nLaunch instance. Click on \"Launch\", and select a key pair that you will use to connect to the instance. If you already have a keypair that you have used previously, you can reuse it here by selecting \"Choose an existing key pair\". Otherwise you can create a keypair now by selecting \"Create a new key pair\".\n\n![Launch instance](../images/aws/launch-7.png)\n\nWait for the instance to launch. When it is ready it will show as \"running\" under \"Instance state\", and \"2/2 checks pass\" under \"Status Checks\". 
Click on the instance to view the details on the bottom pane of the window, and look for the \"IPv4 Public IP\".\n\n![Wait for instance launch and view details](../images/aws/ec2-list.png)\n\nConnect to the instance. Using the keypair that you used or created in the launch step and the IP from the previous step, run ssh. Be sure to use the `-A` option with ssh so that in a future step we will be able to use the same key to connect to a second instance.\n\n![Connect to instance](../images/aws/ssh-0.png)\n\nWe have two options to compile the binary: either directly on the instance or using Docker. We will go through both options here.\n\n## Compile on instance\nThere are two alternative ways to compile code on the instance, either by:\n\n- [downloading and using the toolchain directly on the instance](#compile-using-a-downloaded-toolchain),\n- or by [using Docker, and compiling inside a Docker container](#compile-with-docker)\n\n### Compile using a downloaded toolchain\nRun the following commands in the SSH terminal. Note that there may be a more up-to-date version of the Swift toolchain. Check https://swift.org/download/#releases for the latest available toolchain URL for Amazon Linux 2.\n\n```\nSwiftToolchainUrl=\"https://swift.org/builds/swift-5.4.1-release/amazonlinux2/swift-5.4.1-RELEASE/swift-5.4.1-RELEASE-amazonlinux2.tar.gz\"\nsudo yum install ruby binutils gcc git glibc-static gzip libbsd libcurl libedit libicu libsqlite libstdc++-static libuuid libxml2 tar tzdata -y\ncd $(mktemp -d)\nwget ${SwiftToolchainUrl} -O swift.tar.gz\ngunzip < swift.tar.gz | sudo tar -C / -xv --strip-components 1\n```\n\nFinally, check that Swift is correctly installed by running the Swift REPL: `swift`.\n\n![Invoke REPL](../images/aws/repl.png)\n\nLet's now download and build a test application. We will use the `--static-swift-stdlib` option so that it can be deployed to a different server without the Swift toolchain installed. 
These examples will deploy SwiftNIO's [example HTTP server](https://github.com/apple/swift-nio/tree/master/Sources/NIOHTTP1Server), but you can test with your own project.\n\n```\ngit clone https://github.com/apple/swift-nio.git\ncd swift-nio\nswift build -v --static-swift-stdlib -c release\n```\n\n### Compile with Docker\n\nEnsure that Docker and git are installed on the instance:\n\n```\nsudo yum install docker git -y\nsudo usermod -a -G docker ec2-user\nsudo systemctl start docker\n```\n\nYou may have to log out and log back in to be able to use Docker. Check by running `docker ps`, and ensure that it runs without errors.\n\nDownload and compile SwiftNIO's [example HTTP server](https://github.com/apple/swift-nio/tree/master/Sources/NIOHTTP1Server):\n\n```\ngit clone https://github.com/apple/swift-nio.git\ncd swift-nio\ndocker run --rm -v \"$PWD:/workspace\" -w /workspace swift:5.4-amazonlinux2 \\\n    /bin/bash -cl 'swift build -v --static-swift-stdlib -c release'\n```\n\n## Test binary\nUsing the same steps as above, launch a second instance (but don't run any of the bash commands above!). Be sure to use the same SSH keypair.\n\nFrom within the AWS management console, navigate to the EC2 service and find the instance that you just launched. Click on the instance to see the details, and find the internal IP. In my example, the internal IP is `172.31.3.29`.\n\nFrom the original build instance, copy the binary to the new server instance:\n```scp .build/release/NIOHTTP1Server ec2-user@172.31.3.29:~/```\n\nNow connect to the new instance:\n```ssh ec2-user@172.31.3.29```\n\nFrom within the new instance, test the Swift binary:\n```\n./NIOHTTP1Server localhost 8080 &\ncurl localhost:8080\n```\n\nFrom here, the options are endless and will depend on your application. If you wish to run a web service, be sure to open the Security Group to the correct port and from the correct source. When you are done testing Swift, shut down the instances to avoid paying for unneeded compute. 
From the EC2 dashboard, select both instances, select \"Actions\" from the menu, then select \"Instance state\" and then finally \"terminate\".\n\n![Terminate Instance](../images/aws/terminate.png)\n"
  },
  {
    "path": "docs/building.md",
"content": "# Build system\n\nThe recommended way to build server applications is with [Swift Package Manager](https://swift.org/package-manager/). SwiftPM provides a cross-platform foundation for building Swift code and works nicely for maintaining one code base that can be edited and run on many Swift platforms.\n\n## Building\nSwiftPM works from the command line and is also integrated within Xcode.\n\nYou can build your code either by running `swift build` from the terminal, or by triggering the build action in Xcode.\n\n### Docker Usage\nSwift binaries are platform-specific, so running the build command on macOS will create a macOS binary, and similarly running the command on Linux will create a Linux binary.\n\nMany Swift developers use macOS for development, which enables taking advantage of the great tooling that comes with Xcode. However, most server applications are designed to run on Linux.\n\nIf you are developing on macOS, Docker is a useful tool for building on Linux and creating Linux binaries. Apple publishes official Swift Docker images to [Docker Hub](https://hub.docker.com/_/swift).\n\nFor example, to build your application using the latest Swift Docker image:\n\n`$ docker run -v \"$PWD:/code\" -w /code swift:latest swift build`\n\nNote: if you want to run the Swift compiler for Intel CPUs on an Apple Silicon (M1) Mac, add `--platform linux/amd64 -e QEMU_CPU=max` to the command line. For example:\n\n`$ docker run -v \"$PWD:/code\" -w /code --platform linux/amd64 -e QEMU_CPU=max swift:latest swift build`\n\nThe above command will run the build using the latest Swift Docker image, utilizing bind mounts to the sources on your Mac. \n\n### Debug vs. Release Mode\nBy default, SwiftPM will build a debug version of the application. Note that debug versions are not suitable for running in production as they are significantly slower. 
To build a release version of your app, run `swift build -c release`.\n\n### Locating Binaries\nBinary artifacts that can be deployed are found under `.build/x86_64-unknown-linux` on Linux, and `.build/x86_64-apple-macosx` on macOS. \n\nSwiftPM can show you the full binary path using `swift build --show-bin-path -c release`.\n\n### Building for production\n\n- Build production code in release mode by compiling with `swift build -c release`. Running code compiled in debug mode will hurt performance significantly. \n\n- For best performance in Swift 5.2 or later, pass `-Xswiftc -cross-module-optimization` (this won't work in Swift versions before 5.2). As with any optimization change, enabling this should be verified with performance tests, as it may sometimes cause performance regressions.\n\n- Integrate [`swift-backtrace`](https://github.com/swift-server/swift-backtrace) into your application to make sure backtraces are printed on crash. Backtraces do not work out-of-the-box on Linux, and this library helps to fill the gap. Eventually this will become a language feature and not require a discrete library.\n"
  },
  {
    "path": "docs/concurrency-adoption-guidelines.md",
"content": "# Swift Concurrency adoption guidelines for Swift Server Libraries\n\nThis writeup attempts to provide a set of guidelines for authors of server-side Swift libraries to follow. Specifically, a lot of the discussion here revolves around what to do about existing APIs and libraries making extensive use of Swift NIO’s `EventLoopFuture` and related types.\n\nSwift Concurrency is a multi-year effort. It is very valuable for the server community to participate in this multi-year adoption of the concurrency features, one by one, and provide feedback while doing so. As such, we should not hold off adopting concurrency features until Swift 6, as we may miss a valuable opportunity to improve the concurrency model.\n\nIn 2021 we saw structured concurrency and actors arrive with Swift 5.5. Now is a great time to provide APIs using those primitives. In the future we will see fully checked Swift concurrency. This will come with breaking changes. For this reason adopting the new concurrency features can be split into two phases.\n\n\n## What you can do right now\n\n### API Design\n\nFirstly, existing libraries should strive to add `async` functions to their user-facing “surface” APIs in addition to existing `*Future`-based APIs wherever possible. These additive APIs can be gated on the Swift version and can be added without breaking existing users' code, for example like this:\n\n```swift\nextension Worker {\n  func work() -> EventLoopFuture<Value> { ... }\n\n  #if compiler(>=5.5) && canImport(_Concurrency)\n  @available(macOS 12.0, iOS 15.0, watchOS 8.0, tvOS 15.0, *)\n  func work() async throws -> Value { ... }\n  #endif\n}\n```\n\nIf a function cannot fail but was using futures before, it should not include the `throws` keyword in its new incarnation. \n\nSuch adoption can begin immediately, and should not cause any issues for existing users of these libraries. 
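For illustration, the non-failing case mentioned above could be sketched as follows (the `ConfigProvider` type and its method are hypothetical and not taken from any real library):

```swift
#if compiler(>=5.5) && canImport(_Concurrency)
// Hypothetical type: looking up a configuration value cannot fail,
// so the async counterpart of a future-based API is declared
// without `throws`, and callers `await` it without `try`.
struct ConfigProvider {
    let values: [String: String]

    // A future-based variant would have returned EventLoopFuture<String?>.
    @available(macOS 12.0, iOS 15.0, watchOS 8.0, tvOS 15.0, *)
    func value(for key: String) async -> String? {
        values[key]
    }
}
#endif
```

Callers then simply write `let v = await provider.value(for: "log-level")`, with no `try` involved.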
\n\n### SwiftNIO helper functions\n\nTo allow an easy transition to async code, SwiftNIO offers a number of helper methods on `EventLoopFuture` and `EventLoopPromise`.\n\nOn every `EventLoopFuture` you can call `.get()` to transition the future into an `await`-able invocation. If you want to translate async/await calls to an `EventLoopFuture` we recommend the following pattern:\n\n```swift \n#if compiler(>=5.5) && canImport(_Concurrency)\n\nfunc yourAsyncFunctionConvertedToAFuture(on eventLoop: EventLoop) \n    -> EventLoopFuture<Result> {\n    let promise = eventLoop.makePromise(of: Result.self)\n    promise.completeWithTask {\n        try await yourMethod(yourInputs)\n    }\n    return promise.futureResult\n}\n#endif\n```\n\nFurther helpers exist for `EventLoopGroup`, `Channel`, `ChannelOutboundInvoker` and `ChannelPipeline`.\n\n\n### `#if` guarding code using Concurrency\n\nIn order to have code using concurrency along with code not using concurrency, you may have to `#if` guard certain pieces of code. The correct way to do so is the following:\n\n```swift\n#if compiler(>=5.5) && canImport(_Concurrency)\n...\n#endif\n```\n\nPlease note that you do _not_ need to _import_ the `_Concurrency` module at all; if it is present, it is imported automatically.\n\n```swift\n#if compiler(>=5.5) && canImport(_Concurrency)\n// DO NOT DO THIS.\n// Instead don't do any import and it'll import automatically when possible.\nimport _Concurrency\n#endif\n```\n\n\n### Sendable Checking\n\n> [SE-0302][SE-0302] introduced the `Sendable` protocol, which is used to indicate which types have values that can safely be copied across actors or, more generally, into any context where a copy of the value might be used concurrently with the original. 
Applied uniformly to all Swift code, `Sendable` checking eliminates a large class of data races caused by shared mutable state.\n>\n> -- from [Staging in Sendable checking][sendable-staging], which outlines the `Sendable` adoption plan for Swift 6.\n\nIn the future we will see fully checked Swift concurrency. The language features to support this are the `Sendable` protocol and the `@Sendable` keyword for closures. Since sendable checking will break existing Swift code, a new major Swift version is required.\n\nTo ease the transition to fully checked Swift code, it is possible to annotate your APIs with the `Sendable` protocol today.\n\nYou can start adopting Sendable and getting appropriate warnings in Swift 5.5 already by passing the `-warn-concurrency` flag. In SwiftPM you can do so for the entire project like so:\n\n```\nswift build -Xswiftc -Xfrontend -Xswiftc -warn-concurrency\n```\n\n\n#### Sendable checking today\n\nSendable checking is currently disabled in Swift 5.5(.0) because it was causing a number of tricky situations that we lacked the tools to resolve.\n\nMost of these issues have been resolved on today’s `main` branch of the compiler, and are expected to land in the next Swift 5.5 releases. It may be worthwhile waiting for adoption until the next version(s) after 5.5.0.\n\nFor example, one such capability is the ability for tuples of `Sendable` types to conform to `Sendable` as well. We recommend holding off adoption of `Sendable` until this patch lands in Swift 5.5 (which should be relatively soon). 
With this change, the difference between Swift 5.5 with `-warn-concurrency` enabled and Swift 6 mode should be very small, and manageable on a case by case basis.\n\n#### Backwards compatibility of declarations and “checked” Swift Concurrency\n\nAdopting Swift Concurrency will progressively cause more warnings, and eventually compile time errors in Swift 6 when sendability checks are violated, marking potentially unsafe code.\n\nIt may be difficult for a library to maintain a version that is compatible with versions prior to Swift 6 while also fully embracing the new concurrency checks. For example, it may be necessary to mark generic types as `Sendable`, like so:\n\n```swift\nstruct Container<Value: Sendable>: Sendable { ... }\n```\n\nHere, the `Value` type must be marked `Sendable` for Swift 6’s concurrency checks to work properly with such a container. However, since the `Sendable` type does not exist in Swift prior to Swift 5.5, it would be difficult to maintain a library that supports both Swift 5.4+ and Swift 6.\n\nIn such situations, it may be helpful to use the following trick to share the same `Container` declaration between both Swift versions of the library:\n\n```swift\n#if swift(>=5.5) && canImport(_Concurrency)\npublic typealias MYPREFIX_Sendable = Swift.Sendable\n#else \npublic typealias MYPREFIX_Sendable = Any\n#endif\n```\n\n> **NOTE:** Yes, we're using `swift(>=5.5)` here, while we're using `compiler(>=5.5)` to guard specific APIs using concurrency features. \n\nThe `Any` alias is effectively a no-op when applied as a generic constraint, and thus this way it is possible to keep the same `Container<Value>` declaration working across Swift versions.\n\n### Task Local Values and Logging\n\nThe newly introduced Task Local Values API ([SE-0311][SE-0311]) allows for implicit carrying of metadata along with `Task` execution. It is a natural fit for tracing and carrying metadata around with task execution, and e.g. 
including it in log messages. \n\nWe are working on adjusting [SwiftLog](https://github.com/apple/swift-log) to become powerful enough to automatically pick up and log specific task local values. This change will be introduced in a source compatible way. \n\nFor now libraries should continue using logger metadata, but we expect that in the future a lot of the cases where metadata is manually passed to each log statement can be replaced with setting task local values. \n\n### Preparing for the concept of Deadlines\n\nDeadlines are another feature that closely relates to Swift Concurrency; they were originally pitched during the early versions of the Structured Concurrency proposal and later moved out of it. The Swift team remains interested in introducing deadline concepts to the language, and some preparation for it has already been performed inside the concurrency runtime. Right now, however, there is no support for deadlines in Swift Concurrency, and it is fine to continue using mechanisms like `NIODeadline` or similar to cancel tasks after some period of time has passed. \n\nOnce Swift Concurrency gains deadline support, it will manifest in the ability to cancel a task (and its child tasks) once such a deadline (a point in time) has been exceeded. For APIs to be “ready for deadlines” they don’t have to do anything special other than being prepared to deal with `Task`s and their cancellation.\n\n### Cooperatively handling Task cancellation\n\n`Task` cancellation exists today in Swift Concurrency and is something that libraries may already handle. 
In practice it means that any asynchronous function (or function which is expected to be called from within `Task`s), may use the [`Task.isCancelled`](https://developer.apple.com/documentation/swift/task/3814832-iscancelled) or [`try Task.checkCancellation()`](https://developer.apple.com/documentation/swift/task/3814826-checkcancellation) APIs to check if the task it is executing in was cancelled, and if so, it may cooperatively abort any operation it was currently performing.\n\nCancellation can be useful in long running operations, or before kicking off some expensive operation. For example, an HTTP client MAY check for cancellation before it sends a request — it perhaps does not make sense to send a request if it is known the task awaiting on it does not care for the result anymore after all!\n\nCancellation in general can be understood as “the one waiting for the result of this task is not interested in it anymore”, and it usually is best to throw a “cancelled” error when the cancellation is encountered. However, in some situations returning a “partial” result may also be appropriate (e.g. if a task is collecting many results, it may return those it managed to collect until now, rather than returning none or ignoring the cancellation and collecting all remaining results).\n\n## What to expect with Swift 6\n\n### Sendable: Global variables & imported code\n\nToday, Swift 5.5 does not yet handle global variables at all within its concurrency checking model. This will soon change but the exact semantics are not set in stone yet. In general, avoid using global properties and variables wherever possible to avoid running into issues in the future. Consider deprecating global variables if able to.\n\nSome global variables have special properties, such as `errno` which contains the error code of system calls. It is a thread local variable and therefore safe to read from any thread/`Task`. 
We expect to improve the importer to annotate such globals with some kind of “known to be safe” annotation, such that Swift code using them won’t be complained about, even in fully checked concurrency mode. Having said that, using `errno` and other “thread local” APIs is very error prone in Swift Concurrency because thread-hops may occur at any suspension point, so the following snippet is very likely incorrect:\n\n```swift\nsys_call(...)\nawait ...\nlet err = errno // BAD, we are most likely on a different thread here (!)\n```\n\nPlease take care when interacting with any thread-local API from Swift Concurrency. If your library used thread-local storage before, you will want to migrate it to [task-local values](https://github.com/apple/swift-evolution/blob/main/proposals/0311-task-locals.md) instead, as they work correctly with Swift’s structured concurrency tasks.\n\nAnother tricky situation is with imported C code. There may be no good way to annotate the imported types as Sendable (or it would be too troublesome to do so by hand). Swift is likely to gain improved support for imported code and potentially allow ignoring some of the concurrency safety checks on it. \n\nThese relaxed semantics for imported code are not implemented yet, but keep this in mind when working with C APIs from Swift and trying to adopt the `-warn-concurrency` mode today. Please file any issues you hit on [bugs.swift.org](https://bugs.swift.org/secure/Dashboard.jspa) so we can inform the development of these checking heuristics based on real issues you hit.\n\n### Custom Executors\n\nWe expect that Swift Concurrency will allow custom executors in the future. A custom executor would allow running actors / tasks “on” such an executor. 
It is possible that `EventLoop`s could become such executors; however, the custom executor proposal has not been pitched yet.\n\nWhile we expect potential performance gains from using custom executors “on the same event loop” by avoiding asynchronous hops between calls to different actors, their introduction will not fundamentally change how NIO libraries are structured.\n\nThe guidance here will evolve as Swift Evolution proposals for Custom Executors are proposed, but don’t hold off adopting Swift Concurrency until custom executors “land”; it is important to start adoption early. For most code we believe that the gains from adopting Swift Concurrency vastly outweigh the slight performance cost actor-hops might induce.\n\n\n### Reduce use of SwiftNIO Futures as “Concurrency Library”\n\nSwiftNIO currently provides a number of concurrency types for the Swift on Server ecosystem, most notably `EventLoopFuture`s and `EventLoopPromise`s, which are used widely for asynchronous results. While the SSWG recommended using those at the API level in the past for easier interplay of server libraries, we advise deprecating or removing such APIs once Swift 6 lands. The swift-server ecosystem should go all in on the structured concurrency features the language provides. For this reason, it is crucial to provide async/await APIs today, to give your library users time to adopt the new APIs.\n\nSome NIO types will remain, however, in the public interfaces of Swift on server libraries. We expect that networking clients and servers continue to be initialized with `EventLoopGroup`s. 
The underlying transport mechanisms (`NIOPosix` and `NIOTransportServices`), however, should become implementation details and should not be exposed to library adopters.\n\n### SwiftNIO 3\n\nWhile subject to change, it is likely that SwiftNIO will cut a 3.0 release in the months after Swift 6.0, at which point Swift will have enabled “full” `Sendable` checking.\n\nYou should not expect NIO to suddenly become “more async”; NIO’s inherent design principles are about performing small tasks on the event loop and using Futures for any async operations. The design of NIO is not expected to change. Channel pipelines are not expected to become \"async\" in the Swift Concurrency meaning of the word. This is because SwiftNIO is, at its heart, an I/O system, and that poses a challenge to the co-operative, shared thread pool used by Swift Concurrency. This thread pool must not be blocked by any operation, because doing so would starve the pool and prevent further progress of other async tasks.\n\nI/O systems, however, must at some point block a thread waiting for more I/O events, either in an I/O syscall or in something like `epoll_wait`. This is how NIO works: each of the event loop threads ultimately blocks on `epoll_wait`. We can’t do that inside the cooperative thread pool, as doing so would starve it for other async tasks, so we’d have to do it on a different thread. As such, SwiftNIO should not be used _on_ the cooperative thread pool, but should take ownership and full control of its threads, because it is an I/O system.\n\nIt would be possible to make all NIO work happen on the co-operative pool, thread-hopping between each I/O operation and dispatching it onto the async/await pool; however, this is not acceptable for high-performance I/O: the context switch for _each I/O operation_ is too expensive. 
As a result, SwiftNIO is not planning to simply adopt Swift Concurrency for the ease of use it brings, because in its specific context the context switches are not an acceptable tradeoff. SwiftNIO could, however, cooperate with Swift Concurrency once \"custom executors\" arrive in the language runtime; since these have not been fully proposed yet, we are not going to speculate about this too much.\n\nThe NIO team will, however, use the chance to remove deprecated APIs and improve some APIs. The scope of changes should be comparable to the NIO1 → NIO2 version bump. If your SwiftNIO code compiles today without warnings, chances are high that it will continue to work without modifications in NIO3.\n\nAfter the release of NIO3, NIO2 will see bug fixes only.\n\n### End-user code breakage\n\nIt is expected that Swift 6 will break some code. As mentioned, SwiftNIO 3 is also going to be released sometime around Swift 6 shipping. Keeping this in mind, it might be a good idea to align major version releases around the same time, along with updating version requirements to Swift 6 and NIO 3 in your libraries.\n\nNeither Swift nor SwiftNIO is planning “vast amounts of change”, so adoption should be possible without major pains.\n\n### Guidance for library users\n\nAs soon as Swift 6 comes out, we recommend using the latest Swift 6 toolchains, even if using the Swift 5.5.n language mode (which may yield only warnings rather than hard failures on failed Sendability checks). This will result in better warnings and compiler hints than just using a 5.5 toolchain.\n\n[sendable-staging]: https://github.com/DougGregor/swift-evolution/blob/sendable-staging/proposals/nnnn-sendable-staging.md\n[SE-0302]: https://github.com/apple/swift-evolution/blob/main/proposals/0302-concurrent-value-and-concurrent-closures.md\n[SE-0311]: https://github.com/apple/swift-evolution/blob/main/proposals/0311-task-locals.md\n"
  },
  {
    "path": "docs/deployment.md",
    "content": "\n## Deployment to Servers or Public Cloud\n\nThe following guides can help with the deployment to public cloud providers:\n* [AWS](aws.md)\n* [DigitalOcean](digital-ocean.md)\n* [Heroku](heroku.md)\n* [Kubernetes & Docker](packaging.md#docker)\n* [GCP](gcp.md)\n* _Have a guide for other popular public clouds like Azure? Add it here!_\n\nIf you are deploying to your own servers (e.g. bare metal, VMs or Docker) there are several strategies for packaging Swift applications for deployment; see the [Packaging Guide](packaging.md) for more information.\n\n### Deploying a Debuggable Configuration (Production on Linux)\n\n- If you have `--privileged`/`--security-opt seccomp=unconfined` containers or are running in VMs or even bare metal, you can run your binary with\n\n        lldb --batch -o \"break set -n main --auto-continue 1 -C \\\"process handle SIGPIPE -s 0\\\"\" -o run -k \"image list\" -k \"register read\" -k \"bt all\" -k \"exit 134\" ./my-program\n\n    instead of `./my-program` to get something akin to a 'crash report' on crash.\n\n- If you don't have `--privileged` (or `--security-opt seccomp=unconfined`) containers (meaning you won't be able to use `lldb`) or you don't want to use `lldb`, consider using a library like [`swift-backtrace`](https://github.com/swift-server/swift-backtrace) to get stack traces on crash.\n"
  },
  {
    "path": "docs/digital-ocean.md",
    "content": "# Deploying to DigitalOcean\n\nThis guide will walk you through setting up an Ubuntu virtual machine on a DigitalOcean [Droplet](https://www.digitalocean.com/products/droplets/). To follow this guide, you will need to have a [DigitalOcean](https://www.digitalocean.com) account with billing configured.\n\n## Create Server\n\nUse the create menu to create a new Droplet.\n\n![Create Droplet](../images/digital-ocean-create-droplet.png)\n\nUnder distributions, select Ubuntu 18.04 LTS.\n\n![Ubuntu Distro](../images/digital-ocean-distributions-ubuntu-18.png)\n\n> Note: You may select any version of Linux that Swift supports. You can check which operating systems are officially supported on the [Swift Releases](https://swift.org/download/#releases) page.\n\nAfter selecting the distribution, choose any plan and datacenter region you prefer. Then set up an SSH key to access the server after it is created. Finally, click Create Droplet and wait for the new server to spin up.\n\nOnce the new server is ready, hover over the Droplet's IP address and click copy.\n\n![Droplet List](../images/digital-ocean-droplet-list.png)\n\n## Initial Setup\n\nOpen your terminal and connect to the server as root using SSH.\n\n```sh\nssh root@<server_ip>\n```\n\nDigitalOcean has an in-depth guide for [initial server setup on Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-18-04). This guide will quickly cover the basics.\n\n### Configure Firewall\n\nAllow OpenSSH through the firewall and enable it.\n\n```sh\nufw allow OpenSSH\nufw enable\n```\n\nThen enable a non-root accessible HTTP port.\n\n```sh\nufw allow 8080\n```\n\n### Add User\n\nCreate a new user besides `root` that will be responsible for running your application. 
This guide uses a non-root user without access to `sudo` for added security.\n\nThe following guides assume the user is named `swift`.\n\n```sh\nadduser swift\n```\n\nCopy the root user's authorized SSH keys to the newly created user. This will allow you to use SSH (`scp`) as the new user.\n\n```sh\nrsync --archive --chown=swift:swift ~/.ssh /home/swift\n```\n\nYour DigitalOcean virtual machine is now ready. Continue using the [Ubuntu](ubuntu.md) guide. \n"
  },
  {
    "path": "docs/gcp.md",
    "content": "# Deploying to Google Cloud Platform (GCP)\n\nThis guide describes how to build and run your Swift Server on serverless\narchitecture with [Google Cloud Build](https://cloud.google.com/build) and\n[Google Cloud Run](https://cloud.google.com/run). We'll use\n[Artifact Registry](https://cloud.google.com/artifact-registry/docs/docker/quickstart)\nto store the Docker images.\n\n## Google Cloud Platform Setup\n\nYou can read about\n[Getting Started with GCP](https://cloud.google.com/gcp/getting-started/) in\nmore detail. In order to run Swift Server applications, we need to:\n\n- enable [Billing](https://console.cloud.google.com/billing) (requires a credit\n  card). Note that when creating a new account, GCP provides you with $300 of\n  free credit to use in the first 90 days. You can follow this guide for free\n  for a new account. Everything in this guide should fall into the \"Free Tier\"\n  category at GCP (120 build minutes per day, 2 million Cloud Run requests per\n  month; see the\n  [Free Tier Usage Limits](https://cloud.google.com/free/docs/gcp-free-tier#free-tier-usage-limits))\n- enable the\n  [Cloud Build API](https://console.cloud.google.com/apis/api/cloudbuild.googleapis.com/overview)\n- enable the\n  [Cloud Run Admin API](https://console.cloud.google.com/apis/api/run.googleapis.com/overview)\n- enable the\n  [Artifact Registry API](https://console.cloud.google.com/apis/api/artifactregistry.googleapis.com/overview)\n- [create a Repository in the Artifact Registry](https://console.cloud.google.com/artifacts/create-repo)\n  (Format: Docker, Region: your choice)\n\n## Project Requirements\n\nPlease verify that your server listens on `0.0.0.0`, not `127.0.0.1`, and it's\nrecommended to use the environment variable `$PORT` instead of a hard-coded\nvalue. For the workflow to pass, two files are essential; both need to be in the\nproject root:\n\n1. Dockerfile\n2. cloudbuild.yaml\n\n### `Dockerfile`\n\nYou should test your Dockerfile with `docker build . 
-t test` and\n`docker run -p 8080:8080 test` and make sure it builds and runs locally.\n\nThe _Dockerfile_ is the same as in the [packaging guide](./packaging.md#docker).\nReplace `<executable-name>` with your `executableTarget` (e.g. \"Server\"):\n\n```Dockerfile\n#------- build -------\nFROM swift:centos as builder\n\n# set up the workspace\nRUN mkdir /workspace\nWORKDIR /workspace\n\n# copy the source to the docker image\nCOPY . /workspace\n\nRUN swift build -c release --static-swift-stdlib\n\n#------- package -------\nFROM centos:8\n# copy executable\nCOPY --from=builder /workspace/.build/release/<executable-name> /\n\n# set the entry point (application name)\nCMD [\"<executable-name>\"]\n```\n\n### `cloudbuild.yaml`\n\nThe `cloudbuild.yaml` file contains a set of steps to build the server image\ndirectly in the cloud and deploy a new Cloud Run instance after a successful\nbuild. `${_VAR}` are\n[\"substitution variables\"](https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values)\nthat are available during build time and can be passed on into the runtime\nenvironment in the \"deploy\" phase. 
We will set the variables later when we\nconfigure the [Build Trigger](#deployment) (Step 5).\n\n```yaml\nsteps:\n  - name: 'gcr.io/cloud-builders/docker'\n    entrypoint: 'bash'\n    args:\n      - '-c'\n      - |\n        docker pull ${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:latest || exit 0\n  - name: 'gcr.io/cloud-builders/docker'\n    args:\n      - build\n      - -t\n      - ${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:$SHORT_SHA\n      - -t\n      - ${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:latest\n      - .\n      - --cache-from\n      - ${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:latest\n  - name: 'gcr.io/cloud-builders/docker'\n    args:\n      [\n        'push',\n        '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:$SHORT_SHA'\n      ]\n  - name: 'gcr.io/cloud-builders/gcloud'\n    args:\n      - run\n      - deploy\n      - swift-service\n      - --image=${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:$SHORT_SHA\n      - --port=8080\n      - --region=${_REGION}\n      - --memory=512Mi\n      - --platform=managed\n      - --allow-unauthenticated\n      - --min-instances=0\n      - --max-instances=5\nimages:\n  - '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:$SHORT_SHA'\n  - '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:latest'\ntimeout: 1800s\n```\n\n### The steps in detail\n\n1. Pull the latest image from the Artifact Registry to retrieve cached layers\n2. Build the image with `$SHORT_SHA` and `latest` tag\n3. Push the image to the Artifact Registry\n4. Deploy the image to Cloud Run\n\n`images` specifies the build images to store in the registry. The default\n`timeout` is 10 minutes, so we'll need to increase it for Swift builds. 
We use\n`8080` as the default port here, though it's recommended to remove this line and\nhave the server listen on `$PORT`.\n\n## Deployment\n\n![cloud build trigger settings and how to connect a code repository](../images/gcp-connect-repo.png)\n\nPush all files to a remote repository. Cloud Build currently supports GitHub,\nBitbucket and GitLab. Then head to\n[Cloud Build Triggers](https://console.cloud.google.com/cloud-build/triggers)\nand click \"Create Trigger\":\n\n1. Add a name and description\n2. Event: \"Push to a branch\" is active\n3. Source: \"Connect New Repository\" and authorize with your code provider, and\n   add the repository where your Swift server code is hosted. Note that you need\n   to configure\n   [GitHub](https://cloud.google.com/build/docs/automating-builds/build-repos-from-github),\n   [GitLab](https://cloud.google.com/build/docs/automating-builds/build-repos-from-gitlab)\n   or\n   [Bitbucket](https://cloud.google.com/build/docs/automating-builds/build-repos-from-bitbucket-cloud)\n   to allow GCP access first.\n4. Configuration: \"Cloud Build configuration file\" / Location: Repository\n5. Advanced:\n   [Substitution variables](https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values):\n   You need to set the variables for region, repository name and service name\n   here. You can pick a\n   [region of your choice](https://cloud.google.com/about/locations/) (e.g.\n   `us-central1`). All custom variables must start with an underscore\n   (`_REGION`). `_REPOSITORY_NAME` and `_SERVICE_NAME` are up to you. If you use\n   environment variables, for example to connect to a database or 3rd-party\n   services, you can set their values here too.\n6. \"Create\"\n\nAs a last step before deploying the new service, go to the\n[Cloud Build Settings](https://console.cloud.google.com/cloud-build/settings)\nand make sure \"Cloud Run\" is enabled. 
This gives Cloud Build the necessary IAM\npermissions to deploy Cloud Run services.\n\n![cloud build settings](../images/gcp-cloud-build-settings.png)\n\nIn the Trigger overview page, you should see your new \"swift-service\" trigger.\nClick on \"RUN\" on the right to start the trigger manually from the `main`\nbranch. With a simple Hummingbird project the build takes about 7-8 minutes.\nVapor takes about 25 minutes on the standard/small build machines, which are\nfairly slow. \"Jordane\" from the Vapor Discord community\n[recommends using `machineType: E2_HIGHCPU_8`](https://discord.com/channels/431917998102675485/447893851374616576/915819735738888222)\nin the `cloudbuild.yaml` to speed up deployments:\n\n```yaml\noptions:\n  machineType: 'E2_HIGHCPU_8'\n```\n\nAfter a successful build you should see the service URL in the build logs:\n\n![successful build and deployment to cloud run](../images/gcp-cloud-build.png)\n\nYou can head over to Cloud Run and see your service running there:\n\n![cloud run overview](../images/gcp-cloud-run.png)\n\nThe trigger will deploy every new commit on `main`. You can also enable Pull\nRequest triggers for feature-driven workflows. Cloud Build also allows\nblue/green builds, auto-scaling and much more.\n\nYou can now connect your custom domain to the new service and go live.\n\n## Cleanup\n\n- delete the Cloud Run service\n- delete the Cloud Build trigger\n- delete the Artifact Registry repository\n"
  },
  {
    "path": "docs/heroku.md",
    "content": "# What is Heroku\n\nHeroku is a popular all-in-one hosting solution; you can find out more at [heroku.com](https://heroku.com).\n\n## Signing Up\n\nYou'll need a Heroku account. If you don't have one, please sign up here: [https://signup.heroku.com/](https://signup.heroku.com/)\n\n## Installing CLI\n\nMake sure that you've installed the Heroku CLI tool.\n\n### Homebrew\n\n```bash\nbrew install heroku/brew/heroku\n```\n\n### Other Install Options\n\nSee alternative install options here: [https://devcenter.heroku.com/articles/heroku-cli#download-and-install](https://devcenter.heroku.com/articles/heroku-cli#download-and-install).\n\n### Logging in\n\nOnce you've installed the CLI, log in with the following:\n\n```bash\nheroku login\n```\n\n### Create an application\n\nVisit [dashboard.heroku.com](https://dashboard.heroku.com) to access your account, and create a new application from the drop-down in the upper right-hand corner. Heroku will ask a few questions such as region and application name; just follow their prompts.\n\n### Project\n\nToday we're going to be hosting SwiftNIO's example HTTP server; you can apply these concepts to your own project. Let's start by cloning NIO:\n\n```bash\ngit clone https://github.com/apple/swift-nio\n```\n\nMake the newly cloned directory your working directory:\n\n```bash\ncd swift-nio\n```\n\nBy default, Heroku deploys the **master** branch. Always make sure all changes are checked into this branch before pushing.\n\n#### Connect with Heroku\n\nConnect your app with Heroku (replace with your app's name).\n\n```bash\nheroku git:remote -a your-apps-name-here\n```\n\n### Set Stack\n\nAs of 13 September 2018, Heroku’s default stack is heroku-18; we need heroku-16 for Swift projects.\n\n```bash\nheroku stack:set heroku-16 -a your-apps-name-here\n```\n\n### Set Buildpack\n\nSet the buildpack to teach Heroku how to deal with Swift; the vapor-community buildpack is a good buildpack for *any Swift project*. 
It doesn't install Vapor, and it doesn't have any Vapor-specific setup.\n\n```bash\nheroku buildpacks:set vapor/vapor\n```\n\n### Swift version file\n\nThe buildpack we added looks for a **.swift-version** file to know which version of Swift to use.\n\n```bash\necho \"5.2\" > .swift-version\n```\n\nThis creates **.swift-version** with `5.2` as its contents.\n\n### Procfile\n\nHeroku uses the **Procfile** to know how to run your app. This includes the executable name and any arguments necessary. You'll see `$PORT` below; this allows Heroku to assign a specific port when it launches the app.\n\n```\nweb: NIOHTTP1Server 0.0.0.0 $PORT\n```\n\nYou can use this command in the terminal to create the file (the single quotes prevent your local shell from expanding `$PORT`):\n\n```bash\necho 'web: NIOHTTP1Server 0.0.0.0 $PORT' > Procfile\n```\n\n### Commit changes\n\nWe have now added the **.swift-version** file and the **Procfile**; make sure these are committed into **master** or Heroku will not find them.\n\n### Deploying to Heroku\n\nYou're ready to deploy. Run this from the terminal; it may take a while to build, which is normal.\n\n```none\ngit push heroku master\n```\n"
  },
  {
    "path": "docs/libs/log-levels.md",
    "content": "# Library guidelines: Log Levels\n\nThis guide serves as guidelines for library authors with regard to which [SwiftLog](https://github.com/apple/swift-log) log levels are appropriate for use in libraries, in which situations to use each level, and general logging style.\n\nLibraries need to be well-behaved across various use cases, and cannot assume a specific style of logging backend will be used with them. It is up to developers implementing specific applications and systems to configure logging for their application: some may choose to log to disk, some to memory, and some may employ sophisticated log aggregators. In all those cases a library should behave \"well\", meaning that it should not overwhelm typical (\"stdout\") log backends by logging too much, alert too much by over-using `error` level log statements, etc.\n\n## Guidelines for Libraries\n\nSwiftLog defines the following 7 log levels via the [`Logger.Level` enum](https://apple.github.io/swift-log/docs/current/Logging/Structs/Logger/Level.html), ordered from least to most severe:\n\n* `trace`\n* `debug`\n* `info`\n* `notice`\n* `warning`\n* `error`\n* `critical`\n\nOf those, only the levels _less severe than_ `info` are generally okay to be used by libraries.\n\nIn the following section we'll explore how to use them in practice.\n\n### Recommended log levels\n\nIt is always fine for a library to log at `trace` and `debug` levels, and these two should be the primary levels any library logs at.\n\n`trace` is the finest log level, and end-users of a library will not usually use it unless debugging very specific issues. 
You should consider it as a way for library developers to \"log everything we could possibly need to diagnose a hard-to-reproduce bug.\" Unrestricted logging at `trace` level may take a toll on the performance of a system, and developers can assume trace level logging will not be used in production deployments, unless enabled specifically to locate some specific issue.\n\nThis is in contrast with `debug`, which some users _may_ choose to run enabled on their production systems.\n\n> Debug level logging should not be \"too\" noisy. Developers should assume some production deployments may need to (or want to) run with debug level logging enabled. \n>\n> Debug level logging should not completely undermine the performance of a production system.\n\nAs such, `debug` logging should provide a high-value understanding of what is going on in the library for end users, using domain-relevant language. Logging at `debug` level should not be overly noisy or dive deep into internals; this is what `trace` is intended for.\n\nUse the `warning` level sparingly. Whenever possible, prefer returning or throwing `Error`s that are descriptive enough that end users can inspect them, log them, and figure out the issue. Potentially, they may then enable debug logging to find out more about the issue.\n\nIt is okay to log a `warning` \"once\", for example on system startup. This may include some one-off \"more secure configuration is available, try upgrading to it!\" log statement upon a server's startup. You may also log warnings from background processes, which otherwise have no other means of informing the end user about some issue.\n\nLogging on `error` level is similar to warnings: prefer to avoid doing so whenever possible. Instead, report errors via your library's API. For example, it is _not_ a good idea to log \"connection failed\" from an HTTP client. Perhaps the end-user intended to make this request to a known offline server to _confirm_ it is offline? 
From their perspective, this connection error is not a \"real\" error; it is just what they expected -- as such, the HTTP client should return or throw such an error, but _not_ log it.\n\nAlso note that when you do decide to log an error, be mindful of error rates. Will this error potentially be logged for every single operation while some network failure is happening? Some teams and companies have alerting systems set up based on the rate of errors logged in a system, and if it exceeds some threshold it may start calling and paging people in the middle of the night. When logging at error level, consider whether the issue indeed is something that should be waking up people at night. You may also want to consider offering configuration in your library: \"at what log level should this issue be reported?\" This can come in handy in clustered systems which may log network failures themselves, or depend on external systems detecting and reporting them.\n\nLogging at `critical` level is allowed for libraries, however, as the name implies, only in the most critical situations. Most often this implies that the library will *stop functioning* after such a log has been issued. End users expect a logged critical error to be _very_ important, and they may have set up their systems to page people in the middle of the night to investigate the production system _right now_ when such log statements are detected. So please be careful about logging these kinds of errors.\n\nSome libraries and situations may not be entirely clear with regard to what log level is \"best\" for them. In such situations, it is sometimes worth allowing the end-users of the library to configure the levels of specific groups of messages. 
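One way to offer such per-message-group configuration, sketched here with hypothetical `ClientOptions`/`Client` types on top of the real swift-log `Logger.log(level:_:metadata:)` API:

```swift
import Logging

// Hypothetical options type: end-users pick the level at which the
// library logs each request, instead of the library hard-coding one.
struct ClientOptions {
    var requestLogLevel: Logger.Level = .debug
}

struct Client {
    let logger: Logger
    var options = ClientOptions()

    func send(_ request: String) {
        // Logged at whatever level the user configured.
        logger.log(level: self.options.requestLogLevel,
                   "Sending request",
                   metadata: ["request": "\(request)"])
    }
}
```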
You can see this in action in the Soto library [here](https://github.com/soto-project/soto-core/pull/423/files#diff-4a8ca7e54da5b22287900dd8cf6b47ded38a94194c1f0b544119030c81a2f238R649) where an `Options` object allows end users to configure the level at which requests are logged (`options.requestLogLevel`) which is then used as `log.log(self.options.requestLogLevel)`.\n\n#### Examples\n\n`trace` level logging:\n\n- Could include various additional information about a request, such as various diagnostics about created data structures, the state of caches or similar, which are created in order to serve a request.\n- Could include \"begin operation\" and \"end operation\" logging statements.\n\n`debug` level logging:\n\n- May include a single log statement for opening a connection, accepting a request, and so on.\n- It can include a _high level_ overview of control flow in an operation. For example: \"started work, processing step X, made X decision, finished work X, result code 200\". This overview may consist of high cardinality structured data.\n\n> You may also want to consider using [swift-distributed-tracing](https://github.com/apple/swift-distributed-tracing) to instrument \"begin\" and \"end\" events, as tracing may give you additional insights into your system behavior you would have missed with just manually analysing log statements.\n\n### Log levels to avoid\n\nAll these rules are only _general_ guidelines, and as such may have exceptions. Consider the following examples and rationale for why logging at high log levels by a library may not be desirable:\n\nIt is generally _not acceptable_ for a service client (for example, an HTTP client) to log an `error` when a request has failed. End-users may be using the client to probe if an endpoint is even responsive or not, and a failure to respond may be _expected_ behavior. 
Logging errors would only confuse and pollute their logs.\n\nInstead, libraries should either `throw`, or return an `Error` value, so that users of the library have enough information to decide whether to log or ignore it.\n\nIt is even less acceptable for a library to log any successful operations. This leads to flooding server-side systems, especially if, for example, one were to log every successfully handled request. In a server-side application, this can easily flood and overwhelm logging systems when deployed to production, where many end users are connected to the same server. Such issues are rarely found during development, because only a single peer is requesting things from the service-under-test.\n\n#### Examples (of things to avoid)\n\nAvoid using `info` or any higher log level for:\n\n- \"Normal operation\" of the library; that is, there is no need to log \"accepted a request\" on info level, as this is the normal operation of a web service.\n\nAvoid using `error` or `warning`:\n\n- To report errors which the end-user of the library has the means of logging themselves. For example, if a database driver fails to fetch all rows of a query, it should not log an error or warning, but instead return or throw an error on the stream of values (or function, async function, or even the async sequence) that was providing the returned values.\n  - Since the end-user is consuming these values, and has a means of reporting (or swallowing) this error, the library should not log anything on their behalf.\n- Never report as a warning something that is merely information. For example, \"weird header detected\" may look like a good idea to log as a warning at first sight; however, if the \"weird header\" is simply a misconfigured client (or just a \"weird browser\") you may be accidentally completely flooding an end-user's logs with these \"weird header\" warnings (!)\n  - Only log warnings about actionable things which the end-user of your library can do something about. 
Using the \"weird header detected\" log statement as an example: it would not be a good candidate to log as a warning because the server developer has no way to make the users of their service stop sending weird headers, so the server should not be logging this information as a warning.\n- It may be tempting to implement a \"log as warning only once\" technique for per-request style situations which may be almost important enough to be a warning, but should not be logged repeatedly after all. Authors may think of smart techniques to log a warning only once per \"weird header discovered\" and later on log the same issue on a different level, such as trace... Such techniques result in confusing, hard-to-debug logs, where developers of a system unaware of the stateful nature of the logging would be left confused when trying to reproduce the issue.\n  - For example, if a developer spots such a warning in a production system, they may attempt to reproduce it, thinking that it only happens in the production environment. However, if the logging system's log level choice is _stateful_, they may actually be successfully reproducing the issue but never seeing it manifest. For this, and related performance reasons (as implementing \"only once per X\" implies growing storage and additional per-request checks), it is not recommended to apply this pattern.\n\nExceptions to the \"avoid logging warnings\" rule:\n\n- \"Background processes\", such as tasks scheduled on a periodic timer, may not have any other means of communicating a failure or warning to the end user of the library other than through logging.\n  - Consider offering an API that collects errors at runtime, so you can avoid logging errors manually. This can often take the form of a customizable \"on error\" hook that the library accepts when constructing the scheduled job. 
If the handler is not customized, the library can log the errors; but if it is, it is again up to the end-user of the library to decide what to do with them.\n- An exception to the \"log a warning only once\" rule is when things do not happen very frequently. For example, if a library warns about an outdated license or something similar during _its initialization_, this isn't necessarily a bad idea. After all, we'd rather see this warning once during initialization than during every request made to the library. Use your best judgement and consider the developers using your library when designing how often, and from where, to log such information.\n\n### Suggested logging style\n\nWhile libraries are free to use whichever logging message style they choose, here are some best practices to follow if you want users of your libraries to *love* the logs your library produces.\n\nFirstly, it is important to remember that both the message of a log statement as well as the metadata in [swift-log](https://github.com/apple/swift-log) are [autoclosures](https://docs.swift.org/swift-book/LanguageGuide/Closures.html#ID543), which are only invoked if the logger's log level is set such that the message must actually be emitted. As such, messages logged at `trace` do not \"materialize\" their string and metadata representation unless they are actually needed:\n\n```swift\n    public func debug(_ message: @autoclosure () -> Logger.Message,\n                      metadata: @autoclosure () -> Logger.Metadata? = nil,\n                      source: @autoclosure () -> String? = nil,\n                      file: String = #file, function: String = #function, line: UInt = #line) {\n```\n\nAnd a minor yet important hint: avoid inserting newlines and other control characters into log statements (!). 
Many log aggregation systems assume that a single line of logged output is exactly \"one log statement\", which can accidentally break if we log unsanitized, potentially multi-line strings. This isn't a problem for _all_ log backends. For example, some will automatically sanitize and form a JSON payload with `{message: \"...\"}` before emitting it to a backend service collecting the logs, but plain old stream (or file) loggers usually assume that one line equals one log statement. Keeping to one line per statement also makes grepping through logs more reliable.\n\n#### Structured Logging (Semantic Logging)\n\nLibraries may want to embrace the structured logging style, which renders logs in a [semi-structured data format](https://en.wikipedia.org/wiki/Semi-structured_data).\n\nIt is a fantastic pattern which makes it easier and more reliable for automated code to process logged information.\n\nConsider the following \"not structured\" log statement:\n\n```swift\n// NOT structured logging style\nlog.info(\"Accepted connection \\(connection.id) from \\(connection.peer), total: \\(connections.count)\")\n```\n\nIt contains 4 pieces of information:\n\n- We accepted a connection.\n- Its identifier (`connection.id`).\n- It is from this peer.\n- We currently have `connections.count` active connections.\n\nWhile this log statement contains all the useful information that we meant to relay to end users, it is hard to visually and mechanically parse the detailed information it contains. For example, if we know connections start failing around the time when we reach a total of 100 concurrent connections, it is not trivial to find the specific log statement at which we hit this number. 
We would have to `grep 'total: 100'`, for example, but there may be many other `\"total: \"` strings present in our logs.\n\nInstead, we can express the same information using the structured logging pattern, as follows:\n\n```swift\nlog.info(\"Accepted connection\", metadata: [\n  \"connection.id\": \"\\(connection.id)\",\n  \"connection.peer\": \"\\(connection.peer)\",\n  \"connections.total\": \"\\(connections.count)\"\n])\n\n// example output:\n// <date> info [connection.id:?, connection.peer:?, connections.total:?] Accepted connection\n```\n\nDepending on the logging backend, this structured log can be formatted slightly differently on various systems. Even in the simple string representation of such a log, we'd be able to grep for `connections.total: 100` rather than having to guess the correct string.\n\nAlso, since the message now does not contain much \"human-readable wording\", it is less prone to randomly change from \"Accepted\" to \"We have accepted\" or vice versa. This kind of change could break alerting systems which are set up to parse and alert on specific log messages.\n\nStructured logs are very useful in combination with [swift-distributed-tracing](https://github.com/apple/swift-distributed-tracing)'s `LoggingContext`, which automatically populates the metadata with any present trace information. 
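\n\nIn practice this can look as follows (a sketch using [swift-log](https://github.com/apple/swift-log); the `Connection` type and `acceptConnection` function are hypothetical, and we assume the `Logger` was handed to us by a tracing-aware context that already attached the trace metadata):\n\n```swift\nimport Logging // swift-log\n\nstruct Connection {\n    let id: String\n}\n\nfunc acceptConnection(_ connection: Connection, logger: Logger) {\n    // When this logger came from a tracing-aware context, the trace metadata\n    // (e.g. the TraceID) is already attached to it, so this statement\n    // carries it without us doing anything extra.\n    logger.info(\"Accepted connection\", metadata: [\n        \"connection.id\": \"\\(connection.id)\"\n    ])\n}\n```\n\n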
Thanks to this, all logs made in response to some specific request will automatically carry the same TraceID.\n\nYou can see more examples of structured logging, and example implementations thereof, on the following pages:\n\n- <https://tersesystems.com/blog/2020/05/26/why-i-wrote-a-logging-library/>\n- <https://cloud.google.com/logging/docs/structured-logging>\n- <https://stackify.com/what-is-structured-logging-and-why-developers-need-it/>\n- <https://kubernetes.io/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/>\n\n#### Logging with Correlation IDs / Trace IDs\n\nA very common pattern is to log messages with some \"correlation id\". The best approach in general is to use a `LoggingContext` from [swift-distributed-tracing](https://github.com/apple/swift-distributed-tracing), as your library will then be able to be traced and used with correlation contexts regardless of which tracing system the end-user is using (such as OpenTelemetry, Zipkin, X-Ray, and other tracing systems). The concept, though, can be explained well with just a manually logged `requestID`, as we'll do below.\n\nConsider an HTTP client as an example of a library that has a lot of metadata about some request, perhaps something like this:\n\n```swift\nlog.trace(\"Received response\", metadata: [\n   \"id\": \"...\",\n   \"peer.host\": \"...\",\n   \"payload.size\": \"...\",\n   \"headers\": \"...\",\n   \"responseCode\": \"...\",\n   \"responseCode.text\": \"...\",\n])\n```\n\nThe exact metadata does not matter; these are just placeholders in this example. What matters is that there's \"a lot of it\".\n\n> Side note on metadata keys: while there is no single right way to structure metadata keys, we recommend thinking of them as if they were JSON keys: camelCased, `.`-separated identifiers. This allows many log analysis backends to treat them as such a nested structure.\n\nNow, we would like to avoid logging _all_ this information in every single log statement. 
Instead, we can repeatedly log just the `\"id\"` metadata, like this:\n\n```swift\n// ... \nlog.trace(\"Something something...\", metadata: [\"id\": \"...\"])\nlog.trace(\"Finished streaming response\", metadata: [\"id\": \"...\"]) // good, the same ID is propagated\n```\n\nThanks to the correlation ID (or a tracing-provided ID, in which case we'd log as `context.log.trace(\"...\")` since the ID is propagated automatically), we can correlate every subsequent log statement with the initial one. We then know that this `\"Finished streaming response\"` message was about a response whose `responseCode` we can look up from the `\"Received response\"` log message.\n\nThis pattern is somewhat advanced and may not always be the right approach, but consider it in high-performance code where logging the same information repeatedly can be too costly.\n\n##### Things to avoid with Correlation ID logging\n\nWhen logging with correlation contexts, make sure to never \"drop the ID\". It is easiest to get this right when using distributed tracing's `LoggingContext`, since propagating the context ensures the identifiers are carried along; however, the same applies to any kind of correlation identifier.\n\nSpecifically, avoid situations like these:\n\n```\ndebug: connection established [connection-id: 7]\ndebug: connection closed unexpectedly [error: foobar] // BAD, the connection-id was dropped\n```\n\nOn the second line, we don't know which connection had the error since the `connection-id` was dropped. Make sure to audit your logging code to ensure all relevant log statements carry the necessary correlation identifiers.\n\n### Exceptions to the rules\n\nThese are only general guidelines, and there will always be exceptions to these rules and other situations where these suggestions will be broken, for good reason. 
Please use your best judgement: always consider the end-user of a system and how they'll be interacting with your library, and decide case-by-case, depending on the library and situation at hand, how to handle each situation.\n\nHere are a few examples of situations when logging a message on a relatively high level might still be tolerable for a library.\n\nIt's permissible for a library to log at `critical` level right before a _hard_ crash of the process, as a last resort of informing the log collection systems or end-user about additional information detailing the reason for the crash. This should be _in addition to_ the message from a `fatalError` and can lead to an improved diagnosis/debugging experience for end users.\n\nSometimes libraries may be able to detect a harmful misconfiguration of the library, for example, selecting deprecated protocol versions. In such situations it may be useful to inform users in production by issuing a `warning`. However, you should ensure that the warning is not logged repeatedly! For example, it is not acceptable for an HTTP client to log a warning on every single HTTP request because of some misconfiguration of the client. It _may_, however, be acceptable for the client to log such a warning, for example, _once_ at configuration time, if the library has a good way to do this.\n\nSome libraries may implement a \"log this warning only once\", \"log this warning only at startup\", \"log this error only once an hour\", or similar tricks to keep the noise level low but still informative enough to not be missed. This is, however, usually a pattern reserved for stateful, long-running libraries, rather than clients of databases and related persistent stores.\n"
  },
  {
    "path": "docs/linux-perf.md",
    "content": "# Linux `perf`\n\n## `perf`, what's that?\n\nThe Linux [`perf` tool](https://perf.wiki.kernel.org/index.php/Main_Page) is an incredibly powerful tool that can, amongst other things, be used for:\n\n- Sampling CPU-bound processes (or the whole system) to analyse which part of your application is consuming the CPU time\n- Accessing CPU performance counters (PMU)\n- \"User probes\" (uprobes), which trigger, for example, when a certain function in your application runs\n\nIn general, `perf` can count and/or record the call stacks of your threads when a certain event occurs. These events can be triggered by:\n\n- Time (e.g. 1000 times per second), useful for time profiling. For an example use, see the [CPU performance debugging guide](performance.md).\n- System calls, useful to see where your system calls are happening.\n- Various system events, for example if you'd like to know when context switches occur.\n- CPU performance counters, useful if your performance issues can be traced down to micro-architectural details of your CPU (such as branch mispredictions). For an example, see [SwiftNIO's Advanced Performance Analysis guide](https://github.com/apple/swift-nio/blob/main/docs/advanced-performance-analysis.md).\n- and much more\n\n## Getting `perf` to work\n\nUnfortunately, getting `perf` to work depends on your environment. Below, please find a selection of environments and how to get `perf` to work there.\n\n### Installing `perf`\n\nTechnically, `perf` is part of the Linux kernel sources and you'd want a `perf` version that exactly matches your Linux kernel version. In many cases, however, a \"close-enough\" `perf` version will do too. 
If in doubt, use a `perf` version that's slightly older than your kernel over one that's newer.\n\n- Ubuntu\n\n    ```\n    apt-get update && apt-get -y install linux-tools-generic\n    ```\n    \n  See below for more information because Ubuntu packages a different `perf` per kernel version.\n- Debian\n\n    ```\n    apt-get update && apt-get -y install linux-perf\n    ```\n    \n- Fedora/RedHat derivatives\n\n   ```\n   yum install -y perf\n   ```\n   \nYou can confirm that your `perf` installation works using `perf stat -- sleep 0.1` (if you're already `root`) or `sudo perf stat -- sleep 0.1`.\n\n#### `perf` on Ubuntu when you can't match the kernel version\n\nOn Ubuntu (and other distributions that package `perf` per kernel version) you may see an error after installing `linux-tools-generic`. The error message will look similar to\n\n```\n$ perf stat -- sleep 0.1 \nWARNING: perf not found for kernel 5.10.25\n\n  You may need to install the following packages for this specific kernel:\n    linux-tools-5.10.25-linuxkit\n    linux-cloud-tools-5.10.25-linuxkit\n\n  You may also want to install one of the following packages to keep up to date:\n    linux-tools-linuxkit\n    linux-cloud-tools-linuxkit\n```\n\nThe best fix for this is to follow what `perf` says and to install one of the above packages. If you're in a Docker container, this may not be possible because you'd need to match the kernel version (which is especially difficult in Docker for Mac because it uses a VM). 
For example, the suggested `linux-tools-5.10.25-linuxkit` is not actually available.\n\nAs a workaround, you can try one of the following options:\n\n- If you're already `root` and prefer a shell `alias` (only valid in this shell)\n\n    ```\n    alias perf=$(find /usr/lib/linux-tools/*/perf | head -1)\n    ```\n\n- If you're a user and would prefer to link `/usr/local/bin/perf`\n\n    ```\n    sudo ln -s \"$(find /usr/lib/linux-tools/*/perf | head -1)\" /usr/local/bin/perf\n    ```\n\nAfter this, you should be able to use `perf stat -- sleep 0.1` (if you're already `root`) or `sudo perf stat -- sleep 0.1` successfully.\n\n### Bare metal\n\nFor a bare-metal Linux machine, all you need to do is install `perf`, which should then work in full fidelity.\n\n### In Docker (running on bare-metal Linux)\n\nYou will need to launch your container with `docker run --privileged` (don't run this in production) and then you should have full access to `perf` (including the PMU).\n\nTo validate that `perf` works correctly, run for example `perf stat -- sleep 0.1`. Whether you'll see `<not supported>` next to some information will depend on whether you have access to the CPU's performance counters (the PMU). In Docker on bare metal, this should work, i.e. no `<not supported>`s should show up.\n\n### Docker for Mac\n\nDocker for Mac is like Docker on bare metal but with some extra complexity because the Docker containers are actually hosted in a Linux VM. 
So matching the kernel version will be difficult.\n\nIf you follow the above installation instructions, you should nevertheless get `perf` to work, but you won't have access to the CPU's performance counters (the PMU), so you'll see a few events show up as `<not supported>`.\n\n```\n$ perf stat -- sleep 0.1\n\n Performance counter stats for 'sleep 0.1':\n\n              0.44 msec task-clock                #    0.004 CPUs utilized          \n                 1      context-switches          #    0.002 M/sec                  \n                 0      cpu-migrations            #    0.000 K/sec                  \n                57      page-faults               #    0.129 M/sec                  \n   <not supported>      cycles                                                      \n   <not supported>      instructions                                                \n   <not supported>      branches                                                    \n   <not supported>      branch-misses                                               \n\n       0.102869000 seconds time elapsed\n\n       0.000000000 seconds user\n       0.001069000 seconds sys\n```\n\n### In a VM\n\nIn a virtual machine, you would install `perf` just like on bare metal. Either `perf` will work just fine with all its features, or it will behave similarly to what you get on Docker for Mac.\n\nWhat you need your hypervisor to support (& allow) is \"PMU passthrough\" or \"PMU virtualisation\". VMware Fusion does support PMU virtualisation, which they call vPMC (VM settings -> Processors & Memory -> Advanced -> Allow code profiling applications in this VM). 
If you're on a Mac this setting is unfortunately only supported up to including macOS Catalina (and [not on Big Sur](https://kb.vmware.com/s/article/81623)).\n\nIf you use `libvirt` to manage your hypervisor and VMs, you can use `sudo virsh edit your-domain` and replace the `<cpu .../>` XML tag with\n\n    <cpu mode='host-passthrough' check='none'/>\n\nto allow the PMU to be passed through to the guest. For other hypervisors, an internet search will usually reveal how to enable PMU passthrough.\n"
  },
  {
    "path": "docs/llvm-sanitizers.md",
    "content": "# LLVM TSAN / ASAN\n\nFor multithreaded and low-level unsafe interfacing server code, the ability to use LLVM's [ThreadSanitizer](https://clang.llvm.org/docs/ThreadSanitizer.html) and  \n[AddressSanitizer](https://clang.llvm.org/docs/AddressSanitizer.html) can help troubleshoot invalid thread usage and invalid usage/access of memory.\n\nThere is a [blog post](https://swift.org/blog/tsan-support-on-linux/) outlining the usage of TSAN.\n\nThe short story is to pass the swiftc command line option `-sanitize=address` or `-sanitize=thread` to enable the respective tool.\n\nFor Swift Package Manager projects you can use `--sanitize` at the command line, e.g.:\n\n```\nswift build --sanitize=address\n```\n\nor\n\n```\nswift build --sanitize=thread\n```\n\nand it can be used for the tests too:\n\n```\nswift test --sanitize=address\n```\n\nor\n\n```\nswift test --sanitize=thread\n```\n"
  },
  {
    "path": "docs/memory-leaks-and-usage.md",
    "content": "# Debugging Memory Leaks and Usage\n\nThere are many different tools for troubleshooting memory leaks both on Linux and macOS, each with different strengths and ease-of-use. One excellent tool is Xcode's [Memory Graph Debugger](https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/debugging_with_xcode/chapters/special_debugging_workflows.html#//apple_ref/doc/uid/TP40015022-CH9-DontLinkElementID_1).\n[Instruments](https://help.apple.com/instruments/mac/10.0/#/dev022f987b) and `leaks` can also be very useful. If you cannot run or reproduce the problem on macOS, there are a number of server-side alternatives below. \n\n## Example program\n\nThe following program doesn't do anything useful but leaks memory, so it will serve as the example:\n\n```swift\npublic class MemoryLeaker {\n    var closure: () -> Void = { () }\n\n    public init() {}\n\n    public func doNothing() {}\n\n    public func doSomethingThatLeaks() {\n        self.closure = {\n            // This will leak as it'll create a permanent reference cycle:\n            //\n            //     self -> self.closure -> self\n            self.doNothing()\n        }\n    }\n}\n\n@inline(never) // just to be sure to get this in a stack trace\nfunc myFunctionDoingTheAllocation() {\n    let thing = MemoryLeaker()\n    thing.doSomethingThatLeaks()\n}\n\nmyFunctionDoingTheAllocation()\n```\n\n## Debugging leaks with `valgrind`\n\nIf you run your program using\n\n    valgrind --leak-check=full ./test\n\nthen `valgrind` will output\n\n```\n==1== Memcheck, a memory error detector\n==1== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.\n==1== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info\n==1== Command: ./test\n==1==\n==1==\n==1== HEAP SUMMARY:\n==1==     in use at exit: 824 bytes in 4 blocks\n==1==   total heap usage: 5 allocs, 1 frees, 73,528 bytes allocated\n==1==\n==1== 32 bytes in 1 blocks are definitely lost in loss record 1 of 4\n==1== 
   at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)\n==1==    by 0x52076B1: swift_slowAlloc (in /usr/lib/swift/linux/libswiftCore.so)\n==1==    by 0x5207721: swift_allocObject (in /usr/lib/swift/linux/libswiftCore.so)\n==1==    by 0x108E58: $s4test12MemoryLeakerCACycfC (in /tmp/test)\n==1==    by 0x10900E: $s4test28myFunctionDoingTheAllocationyyF (in /tmp/test)\n==1==    by 0x108CA3: main (in /tmp/test)\n==1==\n==1== LEAK SUMMARY:\n==1==    definitely lost: 32 bytes in 1 blocks\n==1==    indirectly lost: 0 bytes in 0 blocks\n==1==      possibly lost: 0 bytes in 0 blocks\n==1==    still reachable: 792 bytes in 3 blocks\n==1==         suppressed: 0 bytes in 0 blocks\n==1== Reachable blocks (those to which a pointer was found) are not shown.\n==1== To see them, rerun with: --leak-check=full --show-leak-kinds=all\n==1==\n==1== For counts of detected and suppressed errors, rerun with: -v\n==1== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)\n```\n\nThe important part is\n\n```\n==1== 32 bytes in 1 blocks are definitely lost in loss record 1 of 4\n==1==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)\n==1==    by 0x52076B1: swift_slowAlloc (in /usr/lib/swift/linux/libswiftCore.so)\n==1==    by 0x5207721: swift_allocObject (in /usr/lib/swift/linux/libswiftCore.so)\n==1==    by 0x108E58: $s4test12MemoryLeakerCACycfC (in /tmp/test)\n==1==    by 0x10900E: $s4test28myFunctionDoingTheAllocationyyF (in /tmp/test)\n==1==    by 0x108CA3: main (in /tmp/test)\n```\n\nwhich can be demangled by pasting it into `swift demangle`:\n\n```\n==1== 32 bytes in 1 blocks are definitely lost in loss record 1 of 4\n==1==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)\n==1==    by 0x52076B1: swift_slowAlloc (in /usr/lib/swift/linux/libswiftCore.so)\n==1==    by 0x5207721: swift_allocObject (in /usr/lib/swift/linux/libswiftCore.so)\n==1==    by 0x108E58: 
test.MemoryLeaker.__allocating_init() -> test.MemoryLeaker (in /tmp/test)\n==1==    by 0x10900E: test.myFunctionDoingTheAllocation() -> () (in /tmp/test)\n==1==    by 0x108CA3: main (in /tmp/test)\n```\n\nSo valgrind is telling us that the allocation that eventually leaked came from `test.myFunctionDoingTheAllocation` calling `test.MemoryLeaker.__allocating_init()`, which is correct.\n\n### Limitations\n\n- `valgrind` doesn't understand the bit packing that is used in many Swift data types (like `String`) or when you create `enum`s with associated values. Therefore `valgrind` sometimes claims a certain allocation was leaked even though it might not have been.\n- `valgrind` will make your program run _very slowly_ (possibly 100x slower), which might stop you from even getting far enough to reproduce the issue.\n\n## Debugging leaks with `Leak Sanitizer`\n\nIf you build your application using\n\n    swift build --sanitize=address\n\nit will be built with [Address Sanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer) enabled. 
Address Sanitizer also automatically tries to find leaked memory blocks, just like `valgrind`.\n\nThe output from the above example program would be\n\n```\n=================================================================\n==478==ERROR: LeakSanitizer: detected memory leaks\n\nDirect leak of 32 byte(s) in 1 object(s) allocated from:\n    #0 0x55f72c21ac8d  (/tmp/test+0x95c8d)\n    #1 0x7f7e44e686b1  (/usr/lib/swift/linux/libswiftCore.so+0x3cb6b1)\n    #2 0x55f72c24b2ce  (/tmp/test+0xc62ce)\n    #3 0x55f72c24a4c3  (/tmp/test+0xc54c3)\n    #4 0x7f7e43aecb96  (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)\n\nSUMMARY: AddressSanitizer: 32 byte(s) leaked in 1 allocation(s).\n```\n\nwhich shows the same information as `valgrind`, unfortunately, however, not symbolicated due to [SR-12601](https://bugs.swift.org/browse/SR-12601).\n\nYou can symbolicate it using `llvm-symbolizer` or `addr2line` (if you have `binutils` installed) like so:\n\n```\n# /tmp/test+0xc62ce\naddr2line -e /tmp/test -a 0xc62ce -ipf | swift demangle\n0x00000000000c62ce: test.myFunctionDoingTheAllocation() -> () at crtstuff.c:?\n```\n\n## Debugging transient memory usage with `heaptrack`\n\n[Heaptrack](https://github.com/KDE/heaptrack) is very useful for analyzing memory leaks/usage with less overhead than `valgrind`, but, more importantly, it also allows for analyzing transient memory usage, which may significantly impact performance by putting too much pressure on the allocator. \n\nIn addition to command-line access, there is a graphical front-end, `heaptrack_gui`.\n\nA key feature is that it allows for diffing between two different runs of your application, making it fairly easy to troubleshoot differences in `malloc` behavior between e.g. 
feature branches and main.\n\nA short how-to on Ubuntu 20.04 (using a different example than above, as we look at transient usage in this example): first, install `heaptrack` with:\n\n```\nsudo apt-get install heaptrack\n```\n\nThen run the binary with `heaptrack` two times — first we do it for `main` to get a baseline:\n```\n> heaptrack .build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet\nheaptrack output will be written to \"/tmp/.nio_alloc_counter_tests_GRusAy/heaptrack.test_1000_autoReadGetAndSet.84341.gz\"\nstarting application, this might take some time...\n...\nheaptrack stats:\n    allocations:              319347\n    leaked allocations:       107\n    temporary allocations:    68\nHeaptrack finished! Now run the following to investigate the data:\n\n  heaptrack --analyze \"/tmp/.nio_alloc_counter_tests_GRusAy/heaptrack.test_1000_autoReadGetAndSet.84341.gz\"\n```\nThen run it a second time for the feature branch:\n```\n> heaptrack .build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet\nheaptrack output will be written to \"/tmp/.nio_alloc_counter_tests_GRusAy/heaptrack.test_1000_autoReadGetAndSet.84372.gz\"\nstarting application, this might take some time...\n...\nheaptrack stats:\n    allocations:              673989\n    leaked allocations:       117\n    temporary allocations:    341011\nHeaptrack finished! 
Now run the following to investigate the data:\n\n  heaptrack --analyze \"/tmp/.nio_alloc_counter_tests_GRusAy/heaptrack.test_1000_autoReadGetAndSet.84372.gz\"\nubuntu@ip-172-31-25-161 /t/.nio_alloc_counter_tests_GRusAy> \n```\nHere we can see that we had 673989 allocations in the feature branch version and 319347 in `main`, so clearly a regression.\n\nFinally, we can analyze the output as a diff of these runs using `heaptrack_print`, piping it through `swift demangle` for readability:\n\n```\nheaptrack_print -T -d heaptrack.test_1000_autoReadGetAndSet.84341.gz heaptrack.test_1000_autoReadGetAndSet.84372.gz | swift demangle\n```\n`-T` gives us the temporary allocations (as in this case it was not a leak but a transient allocation; if you have leaks, remove `-T`).\n\nThe output can be quite long, but as we are looking for transient allocations in this case, scroll down to:\n\n```\nMOST TEMPORARY ALLOCATIONS\n307740 temporary allocations of 290324 allocations in total (106.00%) from\nswift_slowAlloc\n  in /home/ubuntu/bin/usr/lib/swift/linux/libswiftCore.so\n43623 temporary allocations of 44553 allocations in total (97.91%) from:\n    swift_allocObject\n      in /home/ubuntu/bin/usr/lib/swift/linux/libswiftCore.so\n    NIO.ServerBootstrap.(bind0 in _C131C0126670CF68D8B594DDFAE0CE57)(makeServerChannel: (NIO.SelectableEventLoop, NIO.EventLoopGroup) throws -> NIO.ServerSocketChannel, _: (NIO.EventLoop, NIO.ServerSocketChannel) -> NIO.EventLoopFuture<()>) -> NIO.EventLoopFuture<NIO.Channel>\n      at /home/ubuntu/swiftnio/swift-nio/Sources/NIO/Bootstrap.swift:295\n      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet\n    merged NIO.ServerBootstrap.bind(host: Swift.String, port: Swift.Int) -> NIO.EventLoopFuture<NIO.Channel>\n      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet\n    NIO.ServerBootstrap.bind(host: Swift.String, port: Swift.Int) -> 
NIO.EventLoopFuture<NIO.Channel>\n      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet\n    Test_test_1000_autoReadGetAndSet.run(identifier: Swift.String) -> ()\n      at /tmp/.nio_alloc_counter_tests_GRusAy/Sources/Test_test_1000_autoReadGetAndSet/file.swift:24\n      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet\n    main\n      at Sources/bootstrap_test_1000_autoReadGetAndSet/main.c:18\n      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet\n22208 temporary allocations of 22276 allocations in total (99.69%) from:\n    swift_allocObject\n      in /home/ubuntu/bin/usr/lib/swift/linux/libswiftCore.so\n    generic specialization <Swift.UnsafeBufferPointer<Swift.Int8>> of Swift._copyCollectionToContiguousArray<A where A: Swift.Collection>(A) -> Swift.ContiguousArray<A.Element>\n      in /home/ubuntu/bin/usr/lib/swift/linux/libswiftCore.so\n    Swift.String.utf8CString.getter : Swift.ContiguousArray<Swift.Int8>\n      in /home/ubuntu/bin/usr/lib/swift/linux/libswiftCore.so\n    NIO.URing.getEnvironmentVar(Swift.String) -> Swift.String?\n      at /home/ubuntu/swiftnio/swift-nio/Sources/NIO/LinuxURing.swift:291\n      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet\n    NIO.URing._debugPrint(@autoclosure () -> Swift.String) -> ()\n      at /home/ubuntu/swiftnio/swift-nio/Sources/NIO/LinuxURing.swift:297\n...\n22196 temporary allocations of 22276 allocations in total (99.64%) from:\n```\n\nAnd here we could fairly quickly see that the transient extra allocations were due to extra debug printing and querying of environment variables:\n\n```\nNIO.URing.getEnvironmentVar(Swift.String) -> Swift.String?\n  at /home/ubuntu/swiftnio/swift-nio/Sources/NIO/LinuxURing.swift:291\n  in 
/tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet\nNIO.URing._debugPrint(@autoclosure () -> Swift.String) -> ()\n```\n\nAnd this code will be removed before final integration of the feature branch, so the diff will go away.\n"
  },
  {
    "path": "docs/packaging.md",
    "content": "# Packaging Applications for Deployment\n\nOnce an application is built for production, it still needs to be packaged before it can be deployed to servers. There are several strategies for packaging Swift applications for deployment.\n\n## Docker\n\nOne of the most popular ways to package applications these days is using container technologies such as [Docker](https://www.docker.com).\n\nUsing Docker's tooling, we can build and package the application as a Docker image, publish it to a Docker repository, and later launch it directly on a server or on a platform that supports Docker deployments such as [Kubernetes](https://kubernetes.io). Many public cloud providers including AWS, GCP, Azure, IBM and others encourage this kind of deployment.\n\nHere is an example `Dockerfile` that builds and packages the application on top of CentOS:\n\n```Dockerfile\n#------- build -------\nFROM swift:centos8 as builder\n\n# set up the workspace\nRUN mkdir /workspace\nWORKDIR /workspace\n\n# copy the source to the docker image\nCOPY . /workspace\n\nRUN swift build -c release --static-swift-stdlib\n\n#------- package -------\nFROM centos\n# copy executables\nCOPY --from=builder /workspace/.build/release/<executable-name> /\n\n# set the entry point (application name)\nCMD [\"<executable-name>\"]\n```\n\nTo create a local Docker image from the `Dockerfile` use the `docker build` command from the application's source location, e.g.:\n\n```bash\n$ docker build . 
-t <my-app>:<my-app-version>\n```\n\nTo test the local image, use the `docker run` command, e.g.:\n\n```bash\n$ docker run <my-app>:<my-app-version>\n```\n\nFinally, use the `docker push` command to publish the application's Docker image to a Docker repository of your choice, e.g.:\n\n```bash\n$ docker tag <my-app>:<my-app-version> <docker-hub-user>/<my-app>:<my-app-version>\n$ docker push <docker-hub-user>/<my-app>:<my-app-version>\n```\n\nAt this point, the application's Docker image is ready to be deployed to the server hosts (which need to run Docker), or to one of the platforms that supports Docker deployments.\n\nSee [Docker's documentation](https://docs.docker.com/engine/reference/commandline/) for more complete information about Docker.\n\n### Distroless\n\n[Distroless](https://github.com/GoogleContainerTools/distroless) is a project by Google that attempts to create minimal images containing only the application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution.\n\nSince distroless supports Docker and is based on Debian, packaging a Swift application on it is fairly similar to the Docker process above. Here is an example `Dockerfile` that builds and packages the application on top of distroless's C++ base image:\n\n```Dockerfile\n#------- build -------\n# Building using Ubuntu Bionic since it's compatible with the Debian runtime\nFROM swift:bionic as builder\n\n# set up the workspace\nRUN mkdir /workspace\nWORKDIR /workspace\n\n# copy the source to the docker image\nCOPY . 
/workspace\n\nRUN swift build -c release --static-swift-stdlib\n\n#------- package -------\n# Running on distroless C++ since it includes\n# all(*) the runtime dependencies Swift programs need\nFROM gcr.io/distroless/cc-debian10\n# copy executables\nCOPY --from=builder /workspace/.build/release/<executable-name> /\n\n# set the entry point (application name)\nCMD [\"<executable-name>\"]\n```\n\nNote the above uses `gcr.io/distroless/cc-debian10` as the runtime image, which should work for Swift programs that do not use `FoundationNetworking` or `FoundationXML`. In order to provide more complete support, we (the community) could submit a PR to distroless to introduce a base image for Swift that includes `libcurl` and `libxml`, which are required for `FoundationNetworking` and `FoundationXML` respectively.\n\n## Archive (Tarball, ZIP file, etc.)\n\nSince cross-compiling Swift for Linux is not (yet) supported on Mac or Windows, we need to use virtualization technologies like Docker to compile applications that we intend to run on Linux.\n\nThat said, this does not mean we must also package the applications as Docker images in order to deploy them. 
While using Docker images for deployment is convenient and popular, an application can also be packaged using a simple and lightweight archive format like a tarball or ZIP file, then uploaded to the server where it can be extracted and run.\n\nHere is an example of using Docker and `tar` to build and package the application for deployment on Ubuntu servers:\n\nFirst, use the `docker run` command from the application's source location to build it:\n\n```bash\n$ docker run --rm \\\n  -v \"$PWD:/workspace\" \\\n  -w /workspace \\\n  swift:bionic \\\n  /bin/bash -cl \"swift build -c release --static-swift-stdlib\"\n```\n\nNote that we are bind-mounting the source directory so that the build writes the build artifacts to the local drive, from which we will package them later.\n\nNext, we can create a staging area with the application's executable:\n\n```bash\n$ docker run --rm \\\n  -v \"$PWD:/workspace\" \\\n  -w /workspace \\\n  swift:bionic  \\\n  /bin/bash -cl ' \\\n     rm -rf .build/install && mkdir -p .build/install && \\\n     cp -P .build/release/<executable-name> .build/install/'\n```\n\nNote that this command could be combined with the build command above; we separated them to make the example more readable.\n\nFinally, create a tarball from the staging directory:\n\n```bash\n$ tar cvzf <my-app>-<my-app-version>.tar.gz -C .build/install .\n```\n\nWe can test the integrity of the tarball by extracting it to a directory and running the application in a Docker runtime container:\n\n```bash\n$ cd <extracted directory>\n$ docker run -v \"$PWD:/app\" -w /app ubuntu:bionic ./<executable-name>\n```\n\nDeploying the application's tarball to the target server can be done using utilities like `scp`, or, in a more sophisticated setup, using a configuration management system like `chef`, `puppet`, `ansible`, etc.\n\n## Source Distribution\n\nAnother distribution technique popular with dynamic languages like Ruby or JavaScript is distributing the source to the server, then compiling it on the 
server itself.\n\nTo build Swift applications directly on the server, the server must have the correct Swift toolchain installed. [Swift.org](https://swift.org/download/#linux) publishes toolchains for a variety of Linux distributions; make sure to use the one matching your server's Linux version and your desired Swift version.\n\nThe main advantage of this approach is that it is easy. An additional advantage is that the server has the full toolchain (e.g. the debugger), which can help troubleshoot issues \"live\" on the server.\n\nThe main disadvantage of this approach is that the server has the full toolchain (e.g. the compiler), which means a sophisticated attacker can potentially find ways to execute code. They can also potentially gain access to the source code, which might be sensitive. If the application code needs to be cloned from a private or protected repository, the server needs access to credentials, which adds additional attack surface area.\n\nIn most cases, source distribution is not advised due to these security concerns.\n\n## Static linking and Curl/XML\n\n**Note:** if you are compiling with `--static-swift-stdlib` and using Curl with FoundationNetworking or XML with FoundationXML, you must have libcurl and/or libxml2 installed on the target system for it to work."
  },
  {
    "path": "docs/performance.md",
    "content": "# Debugging Performance Issues\n\nFirst of all, it's very important to make sure that you compiled your Swift code in _release mode_. The performance difference between debug and release builds is huge in Swift. You can compile your Swift code in release mode using\n\n    swift build -c release\n    \n## Instruments\n\nIf you can reproduce your performance issue on macOS, you probably want to check out Instrument's [Time Profiler](https://developer.apple.com/videos/play/wwdc2016/418/).\n    \n## Flamegraphs\n\n[Flamegraphs](http://www.brendangregg.com/flamegraphs.html) are a nice way to visualise what stack frames were running for what percentage of the time. That often helps pinpointing the areas of your program that need improvement. Flamegraphs can be created on most platforms, in this document we will focus on Linux.\n\n### Flamegraphs on Linux\n\nTo have something to discuss, let's use a program that has a pretty big performance problem:\n\n```swift\n/* a terrible data structure which has a subset of the operations that Swift's\n * array does:\n *  - retrieving elements by index\n *     --> user's reasonable performance expectation: O(1)   (like Swift's Array)\n *     --> implementation's actual performance:       O(n)\n *  - adding elements\n *     --> user's reasonable performance expectation: amortised O(1)   (like Swift's Array)\n *     --> implementation's actual performance:       O(n)\n *\n * ie. the problem I'm trying to demo here is that this is an implementation\n * where the user would expect (amortised) constant time access but in reality\n * is linear time.\n */\nstruct TerribleArray<T: Comparable> {\n    /* this is a terrible idea: storing the index inside of the array (so we can\n     * waste some performance later ;)\n     */\n    private var storage: Array<(Int, T)> = Array()\n\n    /* oh my */\n    private func maximumIndex() -> Int {\n        return (self.storage.map { $0.0 }.max()) ?? 
-1\n    }\n\n    /* expectation: amortised O(1) but implementation is O(n) */\n    public mutating func append(_ value: T) {\n        let maxIdx = self.maximumIndex()\n        self.storage.append((maxIdx + 1, value))\n        assert(self.storage.count == maxIdx + 2)\n    }\n\n    /* expectation: O(1) but implementation is O(n) */\n    public subscript(index: Int) -> T? {\n        get {\n            return self.storage.filter({ $0.0 == index }).first?.1\n        }\n    }\n}\n\nprotocol FavouriteNumbers {\n    func addFavouriteNumber(_ number: Int)\n    func isFavouriteNumber(_ number: Int) -> Bool\n}\n\npublic class MyFavouriteNumbers: FavouriteNumbers {\n    private var storage: TerribleArray<Int>\n    public init() {\n        self.storage = TerribleArray<Int>()\n    }\n\n    /* - user's expectation: O(n)\n     * - reality: O(n^2) because of TerribleArray */\n    public func isFavouriteNumber(_ number: Int) -> Bool {\n        var idx = 0\n        var found = false\n        while true {\n            if let storageNum = self.storage[idx] {\n                if number == storageNum {\n                    found = true\n                    break\n                }\n            } else {\n                break\n            }\n            idx += 1\n        }\n        return found\n    }\n\n    /* - user's expectation: amortised O(1)\n     * - reality: O(n) because of TerribleArray */\n    public func addFavouriteNumber(_ number: Int) {\n        self.storage.append(number)\n        precondition(self.isFavouriteNumber(number))\n    }\n}\n\nlet x: FavouriteNumbers = MyFavouriteNumbers()\n\nfor f in 0..<2_000 {\n    x.addFavouriteNumber(f)\n}\n```\n\nThe above program contains the `TerribleArray` data structure, which has _O(n)_ appends and not the amortised _O(1)_ that users are used to from `Array`.\n\nWe will assume that you have Linux's `perf` installed and configured; documentation on how to install `perf` can be found in [this guide](linux-perf.md).\n\nLet's assume we have 
compiled the above code using `swift build -c release` into a binary called `./slow`. We also assume that the `https://github.com/brendangregg/FlameGraph` repository is cloned in `~/FlameGraph`:\n\n```\n# Step 1: Record the stack frames with a 99 Hz sampling frequency\nsudo perf record -F 99 --call-graph dwarf -- ./slow\n# Alternatively, to attach to an existing process use\n#     sudo perf record -F 99 --call-graph dwarf -p PID_OF_SLOW\n# or if you don't know the pid, you can try (assuming your binary name is \"slow\")\n#     sudo perf record -F 99 --call-graph dwarf -p $(pgrep slow)\n\n# Step 2: Export the recording into `out.perf`\nsudo perf script > out.perf\n\n# Step 3: Aggregate the recorded stacks and demangle the symbols\n~/FlameGraph/stackcollapse-perf.pl out.perf | swift demangle > out.folded\n\n# Step 4: Export the result into an SVG file\n~/FlameGraph/flamegraph.pl out.folded > out.svg\n```\n\nThe resulting file will look something like:\n\n![](../images/perf-issues-flamegraph.svg)\n\nWe can see that almost all of our runtime is spent in `isFavouriteNumber`, which is invoked from `addFavouriteNumber`. That should be a very good hint to the programmer on where to look for improvements. Perhaps we should use a `Set<Int>` to store the favourite numbers after all; that would tell us whether a number is a favourite in constant time (_O(1)_).\n\n## Alternate `malloc` libraries\n\nFor workloads that put serious pressure on the memory allocation subsystem, a custom `malloc` library may be beneficial. It requires no code changes; the replacement library is interposed, e.g. via an environment variable, before your server starts. It is worth benchmarking with both the default and a custom memory allocator to see how much it helps for the specific workload. There are many `malloc` implementations out there, but a portable and well-performing one is [Microsoft's mimalloc](https://github.com/microsoft/mimalloc). 
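\n\nA quick way to judge whether a different allocator helps is to time the same workload with and without the interposed library. The sketch below is illustrative only; `./myprogram` and the library path are placeholders you will need to adjust for your system:\n\n```bash\n# baseline: run the workload with the default system allocator\ntime ./myprogram\n\n# same workload with mimalloc interposed via LD_PRELOAD\n# (the library path is installation-dependent)\ntime LD_PRELOAD=/usr/lib/libmimalloc.so ./myprogram\n```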
 \n\nTypically these are simply enabled by using `LD_PRELOAD`:\n\n`> LD_PRELOAD=/usr/bin/libmimalloc.so myprogram`\n"
  },
  {
    "path": "docs/setup-and-ide-alternatives.md",
    "content": "## Installing Swift\n\nThe [supported platforms](https://swift.org/platform-support/) for running Swift on the server and the [ready-built tools packages](https://swift.org/download/) are all hosted on swift.org together with installation instructions. There you also can find the [language reference documentation](https://swift.org/documentation/).\n\n## IDEs/Editors with Swift Support\n\nA number of editors you may already be familiar with have support for writing Swift code. Here we provide a non-exhaustive list of such editors and relevant plugins/extensions, sorted alphabetically.\n\n* [Atom IDE support](https://atom.io/packages/ide-swift)\n    * [Atomic Blonde](https://atom.io/packages/atomic-blonde) a SourceKit based syntax highlighter for Atom.\n\n* [CLion](https://www.jetbrains.com/help/clion/swift.html)\n\n* [Emacs plugin](https://github.com/swift-emacs/swift-mode)\n\n* [VIM plugin](https://github.com/keith/swift.vim)\n\n* [Visual Studio Code](https://code.visualstudio.com)\n    * [Swift for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=sswg.swift-lang)\n\n* [Xcode](https://developer.apple.com/xcode/ide/)\n\n## Language Server Protocol (LSP) Support\n\nThe [SourceKit-LSP project](https://github.com/apple/sourcekit-lsp) provides a Swift implementation of the [Language Server Protocol](https://microsoft.github.io/language-server-protocol/), which provides features such as code completion and jump-to-definition in supported editors. \n\nThe project has both an [extensive list of editors that support it](https://github.com/apple/sourcekit-lsp/tree/main/Editors) and setup instructions for those editors, including many of those listed above.\n\n_Do you know about another IDE or IDE plugin that is missing? Please submit a PR to add them here!_\n"
  },
  {
    "path": "docs/testing.md",
    "content": "# Testing \n\nSwiftPM is integrated with [XCTest, Apple’s unit test framework](https://developer.apple.com/documentation/xctest). Running `swift test` from the terminal, or triggering the test action in your IDE (Xcode or similar), will run all of your XCTest test cases. Test results will be displayed in your IDE or printed out to the terminal.\n\nA convenient way to test on Linux is using Docker. For example:\n\n`$ docker run -v \"$PWD:/code\" -w /code swift:latest swift test`\n\nThe above command will run the tests using the latest Swift Docker image, utilizing bind mounts to the sources on your file system.\n\nSwift supports architecture-specific code. By default, Foundation imports architecture-specific libraries like Darwin or Glibc. While developing on macOS, you may end up using APIs that are not available on Linux. Since you are most likely to deploy a cloud service on Linux, it is critical to test on Linux.\n\nA historically important detail about testing for Linux is the `Tests/LinuxMain.swift` file. \n\n- In Swift versions 5.4 and newer tests are automatically discovered on all platforms, no special file or flag needed.\n- In Swift versions >= 5.1 < 5.4, tests can be automaticlaly discovered on Linux using `swift test --enable-test-discovery` flag.\n- In Swift versions older than 5.1 the `Tests/LinuxMain.swift` file provides SwiftPM an index of all the tests it needs to run on Linux and it is critical to keep this file up-to-date as you add more unit tests. To regenerate this file, run `swift test --generate-linuxmain` after adding tests. It is also a good idea to include this command as part of your continuous integration setup.\n\n### Testing for production\n\n- For Swift versions between Swift 5.1 and 5.4, always test with `--enable-test-discovery` to avoid forgetting tests on Linux.\n\n- Make use of the sanitizers. 
Before running code in production, and preferably as a regular part of your CI process, do the following:\n    * Run your test suite with TSan (thread sanitizer): `swift test --sanitize=thread`\n    * Run your test suite with ASan (address sanitizer): `swift test --sanitize=address` and `swift test --sanitize=address -c release -Xswiftc -enable-testing`\n\n- Generally, whilst testing, you may want to build using `swift build --sanitize=thread`. The binary will run slower and is not suitable for production, but you might be able to catch threading issues early, before you deploy your software. Threading issues are often hard to debug and reproduce, and can cause seemingly random problems; TSan helps catch them early.\n"
  },
  {
    "path": "docs/ubuntu.md",
    "content": "# Deploying on Ubuntu\n\nOnce you have your Ubuntu virtual machine ready, you can deploy your Swift app. This guide assumes you have a fresh install with a non-root user named `swift`. It also assumes both `root` and `swift` are accessible via SSH. For information on setting this up, check out the platform guides:\n\n- [DigitalOcean](digital-ocean.md)\n\nThe [packaging](packaging.md) guide provides an overview of available deployment options. This guide takes you through each deployment option step-by-step for Ubuntu specifically. These examples will deploy SwiftNIO's [example HTTP server](https://github.com/apple/swift-nio/tree/master/Sources/NIOHTTP1Server), but you can test with your own project.\n\n- [Binary Deployment](#binary-deployment)\n- [Source Deployment](#source-deployment)\n\n## Binary Deployment\n\nThis section shows you how to build your app locally and deploy just the binary. \n\n### Build Binaries\n\nThe first step is to build your app locally. The easiest way to do this is with Docker. For this example, we'll be deploying SwiftNIO's demo HTTP server. Start by cloning the repository.\n\n```sh\ngit clone https://github.com/apple/swift-nio.git\ncd swift-nio\n```\n\nOnce inside the project folder, use the following command to build the app though Docker and copy all build arifacts into `.build/install`. 
Since this example will be deploying to Ubuntu 18.04, the `-bionic` Docker image is used to build.\n\n```sh\ndocker run --rm \\\n  -v \"$PWD:/workspace\" \\\n  -w /workspace \\\n  swift:5.2-bionic  \\\n  /bin/bash -cl ' \\\n     swift build && \\\n     rm -rf .build/install && mkdir -p .build/install && \\\n     cp -P .build/debug/NIOHTTP1Server .build/install/ && \\\n     cp -P /usr/lib/swift/linux/lib*so* .build/install/'\n```\n\n> Tip: If you are building this project for production, use `swift build -c release`; see [building for production](building.md#building-for-production) for more information.\n\nNotice that Swift's shared libraries are being included. This is important since Swift is not ABI-stable on Linux, meaning Swift programs must run against the shared libraries they were compiled with.\n\nAfter your project is built, use the following command to create an archive for easy transport to the server.\n\n```sh\ntar cvzf hello-world.tar.gz -C .build/install .\n```\n\nNext, use `scp` to copy the archive to the deploy server's home folder.\n\n```sh\nscp hello-world.tar.gz swift@<server_ip>:~/\n```\n\nOnce the copy is complete, log in to the deploy server.\n\n```sh\nssh swift@<server_ip>\n```\n\nCreate a new folder to hold the app binaries and decompress the archive.\n\n```sh\nmkdir hello-world\ntar -xvf hello-world.tar.gz -C hello-world\n```\n\nYou can now start the executable. Supply the desired IP address and port. Binding to port `80` requires sudo, so we use `8080` instead.\n\n[TODO]: <> (Link to Nginx guide once available for serving on port 80)\n\n```sh\n./hello-world/NIOHTTP1Server <server_ip> 8080\n```\n\nYou may need to install additional system packages such as `libxml` or `tzdata` if your app uses Foundation. 
The system dependencies installed by Swift's slim Docker images are a [good reference](https://github.com/apple/swift-docker/blob/master/5.2/ubuntu/18.04/slim/Dockerfile).\n\nFinally, visit your server's IP via a browser or local terminal and you should see a response.\n\n```\n$ curl http://<server_ip>:8080\nHello world!\n```\n\nUse `CTRL+C` to quit the server.\n\nCongratulations on getting your Swift server app running on Ubuntu!\n\n## Source Deployment\n\nThis section shows you how to build and run your project on the deployment server.\n\n## Install Swift\n\nNow that you've created a new Ubuntu server, you can install Swift. You must be logged in as `root` (or a separate user with `sudo` access) to do this.\n\n```sh\nssh root@<server_ip>\n```\n\n### Swift Dependencies\n\nInstall Swift's required dependencies.\n\n```sh\nsudo apt update\nsudo apt install clang libicu-dev build-essential pkg-config\n```\n\n### Download Toolchain\n\nThis guide will install Swift 5.2. Visit the [Swift Downloads](https://swift.org/download/#releases) page for a link to the latest release. Copy the download link for Ubuntu 18.04.\n\n![Download Swift](../images/swift-download-ubuntu-18-copy-link.png)\n\nDownload and decompress the Swift toolchain.\n\n```sh\nwget https://swift.org/builds/swift-5.2-release/ubuntu1804/swift-5.2-RELEASE/swift-5.2-RELEASE-ubuntu18.04.tar.gz\ntar xzf swift-5.2-RELEASE-ubuntu18.04.tar.gz\n```\n\n> Note: Swift's [Using Downloads](https://swift.org/download/#using-downloads) guide includes information on how to verify downloads using PGP signatures.\n\n### Install Toolchain\n\nMove Swift somewhere easy to access. This guide will use `/swift` with each compiler version in a subfolder. 
 \n\n```sh\nsudo mkdir /swift\nsudo mv swift-5.2-RELEASE-ubuntu18.04 /swift/5.2.0\n```\n\nAdd Swift to `/usr/bin` so it can be executed by both the `swift` and `root` users.\n\n```sh\nsudo ln -s /swift/5.2.0/usr/bin/swift /usr/bin/swift\n```\n\nVerify that Swift was installed correctly.\n\n```sh\nswift --version\n```\n\n## Setup Project\n\nNow that Swift is installed, let's clone and compile your project. For this example, we'll be using SwiftNIO's [example HTTP server](https://github.com/apple/swift-nio/tree/master/Sources/NIOHTTP1Server).\n\nFirst, let's install SwiftNIO's system dependencies.\n\n```sh\nsudo apt-get install zlib1g-dev\n```\n\n### Clone & Build\n\nNow that we're done installing things, we can switch to a non-root user to build and run our application.\n\n```sh\nsu swift\ncd ~\n```\n\nClone the project, then use `swift build` to compile it.\n\n```sh\ngit clone https://github.com/apple/swift-nio.git\ncd swift-nio\nswift build\n```\n\n> Tip: If you are building this project for production, use `swift build -c release`; see [building for production](building.md#building-for-production) for more information.\n\n### Run\n\nOnce the project has finished compiling, run it on your server's IP at port `8080`.\n\n```sh\n.build/debug/NIOHTTP1Server <server_ip> 8080\n```\n\nIf you used `swift build -c release`, then you need to run:\n\n```sh\n.build/release/NIOHTTP1Server <server_ip> 8080\n```\n\nVisit your server's IP via a browser or local terminal and you should see a response.\n\n```\n$ curl http://<server_ip>:8080\nHello world!\n```\n\nUse `CTRL+C` to quit the server.\n\nCongratulations on getting your Swift server app running on Ubuntu!\n"
  }
]