Repository: swift-server/guides
Branch: main
Commit: 5df0aca0c765
Files: 20
Total size: 143.4 KB

Directory structure:

gitextract_bt7zr2u5/
├── .gitignore
├── README.md
└── docs/
    ├── allocations.md
    ├── aws-copilot-fargate-vapor-mongo.md
    ├── aws.md
    ├── building.md
    ├── concurrency-adoption-guidelines.md
    ├── deployment.md
    ├── digital-ocean.md
    ├── gcp.md
    ├── heroku.md
    ├── libs/
    │   └── log-levels.md
    ├── linux-perf.md
    ├── llvm-sanitizers.md
    ├── memory-leaks-and-usage.md
    ├── packaging.md
    ├── performance.md
    ├── setup-and-ide-alternatives.md
    ├── testing.md
    └── ubuntu.md

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
.DS_Store

================================================
FILE: README.md
================================================

> **Warning**
> The guides are now part of Swift.org and will continue to be evolved there. This repo is now archived.

# Swift on Server Development Guide

## Introduction

This guide is designed to help teams and individuals running Swift Server applications on Linux and to provide orientation for those who want to start with such development. It focuses on how to build, test, deploy and debug such applications and provides tips in those areas.

## Contents

- [Setup and code editing](docs/setup-and-ide-alternatives.md)
- [Building](docs/building.md)
- [Testing](docs/testing.md)
- [Debugging memory leaks](docs/memory-leaks-and-usage.md)
- [Performance troubleshooting and analysis](docs/performance.md)
- [Debugging multithreading issues and memory checks](docs/llvm-sanitizers.md)
- [Deployment](docs/deployment.md)

### Server-side library development

Server-side libraries should, to the best of their ability, play well with established patterns in the SSWG library ecosystem.
They should also utilize the core observability libraries, such as logging, metrics, and distributed tracing, where applicable. The guidelines below aim to help library developers with some of the typical questions and challenges they might face when designing server-side focused libraries with Swift:

- [SwiftLog: Log level guidelines](docs/libs/log-levels.md)
- [Swift Concurrency Adoption Guidelines](docs/concurrency-adoption-guidelines.md)

_The guide is a community effort, and all are invited to share their tips and know-how. Please provide a PR if you have ideas for improving this guide!_

================================================
FILE: docs/allocations.md
================================================

# Allocations

For high-performance software in Swift, it's often important to understand where your heap allocations are coming from. The next step can then be to reduce the number of allocations your software makes. This is very similar to other performance questions: before you can optimise performance, you need to understand where you spend your resources. And resources can be CPU time, as well as memory, or heap allocations. In this document we will solely focus on the number of heap allocations, not their size.

On macOS, you can use Instruments's "Allocations" instrument. The Allocations instrument shows you two sets of values: the live allocations (i.e. allocated and not freed) as well as the transient allocations (all allocations made). Your production workloads, however, will likely run on Linux, and depending on your setup the number of allocations can differ significantly between macOS and Linux.

## Preparation

To not waste your time, be sure to do any profiling in _release mode_. Swift's optimiser will produce significantly faster code which will also allocate less in release mode. Usually this means you need to run

    swift run -c release

#### Install `perf`

Follow the [installation instructions](linux-perf.md) in the Linux `perf` utility guide.
#### Clone the `FlameGraph` project

To see some pretty graphs, clone the [`FlameGraph`](https://github.com/brendangregg/FlameGraph) repository on the machine/container where you need it. The rest of this guide will assume that it's available at `/FlameGraph`:

```
git clone https://github.com/brendangregg/FlameGraph
```

Tip: With Docker, you may want to bind mount the `FlameGraph` repository into the container using

```
docker run -it --rm \
    --privileged \
    -v "/path/to/FlameGraphOnYourMachine:/FlameGraph:ro" \
    -v "$PWD:$PWD" -w "$PWD" \
    swift:latest
```

or similar.

## Tools

In this guide, we will be using the [Linux `perf`](https://perf.wiki.kernel.org/index.php/Main_Page) tool. If you're struggling to get `perf` to work, have a look at our [information regarding `perf`](linux-perf.md).

If you're running in a Docker container, don't forget that you'll need a privileged container. And generally, you will need `root` access, so you may need to prefix the commands with `sudo`.

## Getting a `perf` user probe

In this guide, we will be counting the number of allocations. Most allocations from a Swift program (on Linux) will be done through the `malloc` function. To get information about when an allocation function is called, we will install a `perf` "user probe" on the allocation functions. Because Swift also uses other allocation functions such as `calloc` and `posix_memalign`, we'll install a user probe for them all. From then on, there will be an event in `perf` that will fire whenever one of the allocation functions is called.
```bash
# figures out the path to libc
libc_path=$(readlink -e /lib64/libc.so.6 /lib/x86_64-linux-gnu/libc.so.6)

# delete all existing user probes on libc (instead of * you can also list them individually)
perf probe --del 'probe_libc:*'

# installs a probe on `malloc`, `calloc`, and `posix_memalign`
perf probe -x "$libc_path" --add malloc --add calloc --add posix_memalign
```

The result (hopefully) looks somewhat like this:

```
Added new events:
  probe_libc:malloc          (on malloc in /usr/lib/x86_64-linux-gnu/libc-2.31.so)
  probe_libc:calloc          (on calloc in /usr/lib/x86_64-linux-gnu/libc-2.31.so)
  probe_libc:posix_memalign  (on posix_memalign in /usr/lib/x86_64-linux-gnu/libc-2.31.so)
[...]
```

What `perf` is telling you here is that it added new events called `probe_libc:malloc`, `probe_libc:calloc`, ... which will fire every time the respective function is called.

Let's confirm that our `probe_libc:malloc` probe actually works by running:

    perf stat -e probe_libc:malloc -- bash -c 'echo Hello World'

which should output something like

```
Hello World

 Performance counter stats for 'bash -c echo Hello World':

              1021      probe_libc:malloc

       0.003840500 seconds time elapsed

       0.000000000 seconds user
       0.003867000 seconds sys
```

which seems to have allocated 1021 times, great. If that probe fired 0 times, something went wrong.
**Demo program source code**

With the following dependencies

```swift
dependencies: [
    .package(url: "https://github.com/swift-server/async-http-client.git", from: "1.3.0"),
    .package(url: "https://github.com/apple/swift-nio.git", from: "2.29.0"),
    .package(url: "https://github.com/apple/swift-log.git", from: "1.4.2"),
],
```

we could write this program

```swift
import AsyncHTTPClient
import NIO
import Logging

let urls = Array(repeating: "http://httpbin.org/get", count: 10)
var logger = Logger(label: "ahc-alloc-demo")

logger.info("running HTTP requests", metadata: ["count": "\(urls.count)"])
MultiThreadedEventLoopGroup.withCurrentThreadAsEventLoop { eventLoop in
    let httpClient = HTTPClient(eventLoopGroupProvider: .shared(eventLoop),
                                backgroundActivityLogger: logger)

    func doRemainingRequests(_ remaining: ArraySlice<String>,
                             overallResult: EventLoopPromise<Void>,
                             eventLoop: EventLoop) {
        var remaining = remaining
        if let first = remaining.popFirst() {
            httpClient.get(url: first, logger: logger).map { [remaining] _ in
                eventLoop.execute { // for shorter stacks
                    doRemainingRequests(remaining,
                                        overallResult: overallResult,
                                        eventLoop: eventLoop)
                }
            }.whenFailure { error in
                overallResult.fail(error)
            }
        } else {
            return overallResult.succeed(())
        }
    }

    let promise = eventLoop.makePromise(of: Void.self)
    // Kick off the process
    doRemainingRequests(urls[...], overallResult: promise, eventLoop: eventLoop)

    promise.futureResult.whenComplete { result in
        switch result {
        case .success:
            logger.info("all HTTP requests succeeded")
        case .failure(let error):
            logger.error("HTTP request failure", metadata: ["error": "\(error)"])
        }

        httpClient.shutdown { maybeError in
            if let error = maybeError {
                logger.error("AHC shutdown failed", metadata: ["error": "\(error)"])
            }
            eventLoop.shutdownGracefully { maybeError in
                if let error = maybeError {
                    logger.error("EventLoop shutdown failed", metadata: ["error": "\(error)"])
                }
            }
        }
    }
}
logger.info("exiting")
```
Assuming you have a program as a Swift package, we should first of all compile it in release mode using `swift build -c release`. Then you should find a binary called `.build/release/your-program-name` which we can then analyse.

### Allocation counts

Before we go into visualising the allocations as a flame graph, let's start with the simplest analysis: getting the total number of allocations

```
perf stat -e 'probe_libc:*' -- .build/release/your-program-name
```

The above command instructs `perf` to run your program and count the number of times the allocation probes were hit. This should be the number of allocations done by your program. The output should look something like

```
 Performance counter stats for '.build/release/your-program-name':

                68      probe_libc:posix_memalign
                35      probe_libc:calloc_1
                 0      probe_libc:calloc
              2977      probe_libc:malloc
[...]
```

In this case, my program allocated 2,977 times through `malloc` and a few more times through the other allocation functions. If you just want to compare the effects of a pull request, you may want to run this `perf stat` command twice. If you would like to find out _where_ your allocations come from, read on.

Please note that in this guide we'll use `-e 'probe_libc:*'` instead of individually listing every event like `-e probe_libc:malloc,probe_libc:calloc,probe_libc:calloc_1,probe_libc:posix_memalign`. This assumes that you have _no other_ `perf` user probes installed. If you do, please specify each event you would like to use individually.

### Collecting the raw data

With `perf`, we can't really create live graphs whilst the program is running. For most analyses, we want to first record some raw data (usually with `perf record`) and later on transform the recorded data into a graph. To get started, let's have `perf` run the program for us and collect the information using the `probe_libc:*` probes we set up before.
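For quick before/after comparisons of a pull request, it can help to reduce the `perf stat` counters to a single total. A minimal shell sketch; the `sample` text stands in for captured `perf stat` stderr, and the numbers are illustrative:

```shell
# Sum all probe_libc counters from captured `perf stat` output.
# In practice, capture stderr of:
#   perf stat -e 'probe_libc:*' -- .build/release/your-program-name
sample='    68  probe_libc:posix_memalign
    35  probe_libc:calloc_1
     0  probe_libc:calloc
  2977  probe_libc:malloc'

# Strip any thousands separators perf may print, then sum the counts.
total=$(printf '%s\n' "$sample" | awk '/probe_libc:/ { gsub(",", "", $1); sum += $1 } END { print sum }')
echo "total allocations: $total"
```

Running this on the output from before and after a change gives two directly comparable totals.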
```
perf record --call-graph dwarf,16384 \
    -m 50000 \
    -e 'probe_libc:*' -- \
    .build/release/your-program-name
```

Let's break down this command a little:

- `perf record` instructs `perf` to `record` data, makes sense.
- `--call-graph dwarf,16384` instructs `perf` to use the [DWARF](http://www.dwarfstd.org) information to create the call graphs. It also sets the maximum stack dump size to 16k which should be enough to get you full stack traces. Unfortunately, using DWARF is rather slow (see below) but it creates the best call graphs for you.
- `-m 50000`: The size of the ring buffer that `perf` uses to buffer. This is given in multiples of `PAGE_SIZE` (usually 4kB) and especially with DWARF this needs to be pretty huge to prevent data loss.
- `-e 'probe_libc:*'`: Record when the `malloc`/`calloc`/... probes fire.

What you want to see is output like this

```
[ perf record: Woken up 2 times to write data ]
[ perf record: Captured and wrote 401.088 MB perf.data (49640 samples) ]
```

If `perf` tells you about "lost chunks" and asks you to "check the IO/CPU overhead", you should jump to the 'Overcoming "lost chunks"' section at the end of this document.

### Flame graphs

After a successful `perf record`, you can invoke the following command line to produce an SVG file with the flame graph

```bash
perf script | \
    /FlameGraph/stackcollapse-perf.pl - | \
    swift demangle --simplified | \
    /FlameGraph/flamegraph.pl --countname allocations \
        --width 1600 > out.svg
```

Let's expand a little on what the above command does:

- It runs `perf script` which dumps the binary information that `perf record` recorded into a textual form.
- Next, we invoke `stackcollapse-perf` on it which transforms the stacks that `perf script` outputs into the right format for flame graphs,
- then we invoke `swift demangle --simplified` which will give us nice symbol names,
- and lastly we create the flame graph itself.

After this command has run (which may run for a while), you should have an SVG file that you can open in your browser. For the above example program, please see an example flame graph below. Note how you can hover over the stack frames and get more information. To focus on a subtree, you can click any stack frame too.

Generally, in flame graphs, the X axis just means "count", it does **not** mean time. In other words, whether a stack appears on the left or the right is not determined by when that stack was live (this is different in flame _charts_). Note that this flame graph is _not_ a CPU flame graph; 1 sample means 1 allocation here and not time spent on the CPU.

Also be aware that stack frames that appear wide don't necessarily allocate directly; it means that they or something they call has allocated a lot. For example, `BaseSocketChannel.readable` is a very wide frame, and yet, it is not a function which allocates directly. However, it calls other functions (such as other parts of SwiftNIO and AsyncHTTPClient) that do allocate a lot. It may take a little while to get familiar with flame graphs but there are great resources available online.

![](../images/perf-malloc-full.svg)

## Allocation flame graphs on macOS

So far, this tutorial focussed on Linux and the `perf` tool. You can however create the same graphs on macOS. The process is fairly similar. First, let's collect the raw data using [DTrace](https://en.wikipedia.org/wiki/DTrace).
```
sudo dtrace -n 'pid$target::malloc:entry,pid$target::posix_memalign:entry,pid$target::calloc:entry,pid$target::malloc_zone_malloc:entry,pid$target::malloc_zone_calloc:entry,pid$target::malloc_zone_memalign:entry { @s[ustack(100)] = count(); } ::END { printa(@s); }' -c .build/release/your-program > raw.stacks
```

Similar to `perf`'s user probes, DTrace also has probes, and the above command instructs DTrace to aggregate the number of calls to the allocation functions `malloc`, `posix_memalign`, `calloc`, and the `malloc_zone_*` equivalents. On Apple platforms, Swift uses a slightly larger number of allocation functions than on Linux, therefore we need to specify a few more functions.

Once we have collected the data, we can also create an SVG file using

```bash
cat raw.stacks |\
    /FlameGraph/stackcollapse.pl - | \
    swift demangle --simplified | \
    /FlameGraph/flamegraph.pl --countname allocations \
        --width 1600 > out.svg
```

which you will notice is very similar to the `perf` invocation. The only differences are:

- We use `cat raw.stacks` instead of `perf script` because we already have the textual data in a file with DTrace.
- Instead of `stackcollapse-perf.pl` (which parses `perf script` output) we use `stackcollapse.pl` (which parses DTrace aggregation output).

## Other `perf` tricks

### Prettifying Swift's allocation pattern

Allocations in Swift usually have a very distinct shape:

- Some code creates for example a class instance (which allocates).
- This calls `swift_allocObject`,
- which calls `swift_slowAlloc`,
- which calls `malloc` (where we have our probe).

To make our flame graphs look nicer, we can apply a small transformation after we have demangled the collapsed stacks:

```
sed -e 's/specialized //g' \
    -e 's/;swift_allocObject;swift_slowAlloc;__libc_malloc/;A/g'
```

which will get rid of `"specialized "` and replaces `swift_allocObject` calling `swift_slowAlloc`, calling `malloc` with just an `A` (for allocation).
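To see the effect, here is the transformation applied to one collapsed-stack line (the folded format that `stackcollapse-perf.pl` emits: a semicolon-separated stack followed by a sample count; the frames and count below are made up for illustration):

```shell
# One illustrative folded-stack line, before prettifying.
line='main;specialized Foo.bar();swift_allocObject;swift_slowAlloc;__libc_malloc 42'

# Apply the same sed transformation as above.
pretty=$(printf '%s\n' "$line" | sed -e 's/specialized //g' \
    -e 's/;swift_allocObject;swift_slowAlloc;__libc_malloc/;A/g')
echo "$pretty"   # prints: main;Foo.bar();A 42
```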
The full command will then look like

```
perf script | \
    /FlameGraph/stackcollapse-perf.pl - | \
    swift demangle --simplified | \
    sed -e 's/specialized //g' \
        -e 's/;swift_allocObject;swift_slowAlloc;__libc_malloc/;A/g' | \
    /FlameGraph/flamegraph.pl --countname allocations --flamechart --hash \
        > out.svg
```

### Overcoming "lost chunks"

When using `perf` with the DWARF call stack unwinding, it is unfortunately easy to run into the following issue

```
[ perf record: Woken up 189 times to write data ]
Warning:
Processed 4346 events and lost 144 chunks!

Check IO/CPU overload!

[ perf record: Captured and wrote 30.868 MB perf.data (3817 samples) ]
```

When `perf` tells you that it lost a number of chunks, it means that it lost data. If `perf` lost data, you have a few options:

- Reduce the amount of work your program is doing. For every allocation, `perf` will need to record a stack trace.
- Reduce the maximum "stack dump" that `perf` records by changing the `--call-graph dwarf` parameter to for example `--call-graph dwarf,2048`. The default is to record a maximum of 4096 bytes which gives you pretty deep stacks; if you don't need that you can reduce the number. The tradeoff is that the flame graph may show you `[unknown]` stack frames which means that there are missing stack frames there. The unit is bytes.
- You can raise the number of the `-m` parameter which is the size of the ring buffer that `perf` uses in memory (in multiples of `PAGE_SIZE`, usually that is 4kB).
- You can give up nice call graphs and replace `--call-graph dwarf` with `--call-graph fp` (`fp` stands for frame pointer).

================================================
FILE: docs/aws-copilot-fargate-vapor-mongo.md
================================================

# Server Side Swift on AWS with Fargate, Vapor, and MongoDB Atlas

This guide illustrates how to deploy a Server-Side Swift workload on AWS. The workload is a REST API for tracking a To Do List.
It uses the [Vapor](https://vapor.codes/) framework to program the API methods. The methods store and retrieve data in a [MongoDB Atlas](https://www.mongodb.com/atlas/database) cloud database. The Vapor application is containerized and deployed to AWS on AWS Fargate using the [AWS Copilot](https://aws.github.io/copilot-cli/) toolkit.

## Architecture

![Architecture](../images/aws/aws-fargate-vapor-mongo.png)

- Amazon API Gateway receives API requests
- API Gateway locates your application containers in AWS Fargate through internal DNS managed by AWS Cloud Map
- API Gateway forwards the requests to the containers
- The containers run the Vapor framework and have methods to GET and POST items
- Vapor stores and retrieves items in a MongoDB Atlas cloud database which runs in a MongoDB managed AWS account

## Prerequisites

To build this sample application, you need:

- [AWS Account](https://console.aws.amazon.com/)
- [MongoDB Atlas Database](https://www.mongodb.com/atlas/database)
- [AWS Copilot](https://aws.github.io/copilot-cli/) - a command-line tool used to create containerized workloads on AWS
- [Docker Desktop](https://www.docker.com/products/docker-desktop/) - to compile your Swift code into a Docker image
- [Vapor](https://vapor.codes/) - to code the REST service
- [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) - install the CLI and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) it with credentials to your AWS account

## Step 1: Create Your Database

If you are new to MongoDB Atlas, follow this [Getting Started Guide](https://www.mongodb.com/docs/atlas/getting-started/). You need to create the following items:

- Atlas Account
- Cluster
- Database Username / Password
- Database
- Collection

In subsequent steps, you provide values from these items to configure the application.

## Step 2: Initialize a New Vapor Project

Create a folder for your project.
```
mkdir todo-app && cd todo-app
```

Initialize a Vapor project named *api*.

```
vapor new api -n
```

## Step 3: Add Project Dependencies

Vapor initializes a *Package.swift* file for the project dependencies. Your project requires an additional library, [MongoDBVapor](https://github.com/mongodb/mongodb-vapor). Add the MongoDBVapor library to the project and target dependencies of your *Package.swift* file. Your updated file should look like this:

**api/Package.swift**

```swift
// swift-tools-version:5.6
import PackageDescription

let package = Package(
    name: "api",
    platforms: [
        .macOS(.v12)
    ],
    dependencies: [
        .package(url: "https://github.com/vapor/vapor", .upToNextMajor(from: "4.7.0")),
        .package(url: "https://github.com/mongodb/mongodb-vapor", .upToNextMajor(from: "1.1.0"))
    ],
    targets: [
        .target(
            name: "App",
            dependencies: [
                .product(name: "Vapor", package: "vapor"),
                .product(name: "MongoDBVapor", package: "mongodb-vapor")
            ],
            swiftSettings: [
                .unsafeFlags(["-cross-module-optimization"], .when(configuration: .release))
            ]
        ),
        .executableTarget(name: "Run", dependencies: [.target(name: "App")]),
        .testTarget(name: "AppTests", dependencies: [
            .target(name: "App"),
            .product(name: "XCTVapor", package: "vapor"),
        ])
    ]
)
```

## Step 4: Update the Dockerfile

You deploy your Swift Server code to AWS Fargate as a Docker image. Vapor generates an initial Dockerfile for your application.
Your application requires a few modifications to this Dockerfile:

- pull the *build* and *run* images from the [Amazon ECR Public Gallery](https://gallery.ecr.aws) container repository
- install *libssl-dev* in the build image
- install *libxml2* and *curl* in the run image

Replace the contents of the Vapor generated Dockerfile with the following code:

**api/Dockerfile**

```Dockerfile
# ================================
# Build image
# ================================
FROM public.ecr.aws/docker/library/swift:5.6.2-focal as build

# Install OS updates
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \
    && apt-get -q update \
    && apt-get -q dist-upgrade -y \
    && apt-get -y install libssl-dev \
    && rm -rf /var/lib/apt/lists/*

# Set up a build area
WORKDIR /build

# First just resolve dependencies.
# This creates a cached layer that can be reused
# as long as your Package.swift/Package.resolved
# files do not change.
COPY ./Package.* ./
RUN swift package resolve

# Copy entire repo into container
COPY . .

# Build everything, with optimizations
RUN swift build -c release --static-swift-stdlib

# Switch to the staging area
WORKDIR /staging

# Copy main executable to staging area
RUN cp "$(swift build --package-path /build -c release --show-bin-path)/Run" ./

# Copy resources bundled by SPM to staging area
RUN find -L "$(swift build --package-path /build -c release --show-bin-path)/" -regex '.*\.resources$' -exec cp -Ra {} ./ \;

# Copy any resources from the public directory and views directory if the directories exist
# Ensure that by default, neither the directory nor any of its contents are writable.
RUN [ -d /build/Public ] && { mv /build/Public ./Public && chmod -R a-w ./Public; } || true
RUN [ -d /build/Resources ] && { mv /build/Resources ./Resources && chmod -R a-w ./Resources; } || true

# ================================
# Run image
# ================================
FROM public.ecr.aws/ubuntu/ubuntu:focal

# Make sure all system packages are up to date, and install only essential packages.
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \
    && apt-get -q update \
    && apt-get -q dist-upgrade -y \
    && apt-get -q install -y \
        ca-certificates \
        tzdata \
        curl \
        libxml2 \
    && rm -r /var/lib/apt/lists/*

# Create a vapor user and group with /app as its home directory
RUN useradd --user-group --create-home --system --skel /dev/null --home-dir /app vapor

# Switch to the new home directory
WORKDIR /app

# Copy built executable and any staged resources from builder
COPY --from=build --chown=vapor:vapor /staging /app

# Ensure all further commands run as the vapor user
USER vapor:vapor

# Let Docker bind to port 8080
EXPOSE 8080

# Start the Vapor service when the image is run, default to listening on 8080 in production environment
ENTRYPOINT ["./Run"]
CMD ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]
```

## Step 5: Update the Vapor Source Code

Vapor also generates the sample files needed to code an API. You must customize these files with code that exposes your To Do List API methods and interacts with your MongoDB database.

The *configure.swift* file initializes an application-wide pool of connections to your MongoDB database. It retrieves the connection string to your MongoDB database from an environment variable at runtime. Replace the contents of the file with the following code:

**api/Sources/App/configure.swift**

```swift
import MongoDBVapor
import Vapor

public func configure(_ app: Application) throws {
    let MONGODB_URI = Environment.get("MONGODB_URI") ??
"" try app.mongoDB.configure(MONGODB_URI) ContentConfiguration.global.use(encoder: ExtendedJSONEncoder(), for: .json) ContentConfiguration.global.use(decoder: ExtendedJSONDecoder(), for: .json) try routes(app) } ``` The *routes.swift* file defines the methods to your API. These include a *POST Item* method to insert a new item and a *GET Items* method to retrieve a list of all existing items. See comments in the code to understand what happens in each section. Replace the contents of the file with the following code: **api/Sources/App/routes.swift** ```swift import Vapor import MongoDBVapor // define the structure of a ToDoItem struct ToDoItem: Content { var _id: BSONObjectID? let name: String var createdOn: Date? } // import the MongoDB database and collection names from environment variables let MONGODB_DATABASE = Environment.get("MONGODB_DATABASE") ?? "" let MONGODB_COLLECTION = Environment.get("MONGODB_COLLECTION") ?? "" // define an extenstion to the Vapor Request object to interact with the database and collection extension Request { var todoCollection: MongoCollection { self.application.mongoDB.client.db(MONGODB_DATABASE).collection(MONGODB_COLLECTION, withType: ToDoItem.self) } } // define the api routes func routes(_ app: Application) throws { // an base level route used for container healthchecks app.get { req in return "OK" } // GET items returns a JSON array of all items in the database app.get("items") { req async throws -> [ToDoItem] in try await req.todoCollection.find().toArray() } // POST item inserts a new item into the database and returns the item as JSON app.post("item") { req async throws -> ToDoItem in var item = try req.content.decode(ToDoItem.self) item.createdOn = Date() let response = try await req.todoCollection.insertOne(item) item._id = response?.insertedID.objectIDValue return item } } ``` The *main.swift* file defines the startup and shutdown code for the application. 
Change the code to include a *defer* statement to close the connection to your MongoDB database when the application ends. Replace the contents of the file with the following code:

**api/Sources/Run/main.swift**

```swift
import App
import Vapor
import MongoDBVapor

var env = try Environment.detect()
try LoggingSystem.bootstrap(from: &env)
let app = Application(env)

try configure(app)

// shutdown and cleanup the MongoDB connection when the application terminates
defer {
    app.mongoDB.cleanup()
    cleanupMongoSwift()
    app.shutdown()
}

try app.run()
```

## Step 6: Initialize AWS Copilot

[AWS Copilot](https://aws.github.io/copilot-cli/) is a command-line utility for generating a containerized application in AWS. You use Copilot to build and deploy your Vapor code as containers in Fargate. Copilot also creates and tracks an AWS Systems Manager secret parameter for the value of your MongoDB connection string. You store this value as a secret as it contains the username and password to your database. You never want to store this in your source code. Finally, Copilot creates an API Gateway to expose a public endpoint for your API.

Initialize a new Copilot application.

```bash
copilot app init todo
```

Add a new Copilot *Backend Service*. The service refers to the Dockerfile of your Vapor project for instructions on how to build the container.

```bash
copilot svc init --name api --svc-type "Backend Service" --dockerfile ./api/Dockerfile
```

Create a Copilot environment for your application. An environment typically aligns to a phase, such as dev, test, or prod. When prompted, select the AWS credentials profile you configured with the AWS CLI.

```bash
copilot env init --name dev --app todo --default-config
```

Deploy the *dev* environment:

```bash
copilot env deploy --name dev
```

## Step 7: Create a Copilot Secret for Database Credentials

Your application requires credentials to authenticate to your MongoDB Atlas database.
You should never store this sensitive information in your source code. Create a Copilot *secret* to store the credentials. This stores the connection string to your MongoDB cluster in an AWS Systems Manager Secret Parameter.

Determine the connection string from the MongoDB Atlas website. Select the *Connect* button on your cluster page and then *Connect your application*.

![Architecture](../images/aws/aws-fargate-vapor-mongo-atlas-connection.png)

Select *Swift version 1.2.0* as the Driver and copy the displayed connection string. It looks something like this:

```bash
mongodb+srv://username:<password>@mycluster.mongodb.net/?retryWrites=true&w=majority
```

The connection string contains your database username and a placeholder for the password. Replace the **\<password\>** section with your database password. Then create a new Copilot secret named MONGODB_URI and save your connection string when prompted for the value.

```bash
copilot secret init --app todo --name MONGODB_URI
```

Fargate injects the secret value as an environment variable into your container at runtime. In Step 5 above, you extracted this value in your *api/Sources/App/configure.swift* file and used it to configure your MongoDB connection.

## Step 8: Configure the Backend Service

Copilot generates a *manifest.yml* file for your application that defines the attributes of your service, such as the Docker image, network, secrets, and environment variables. Change the manifest file generated by Copilot to add the following properties:

- configure a health check for the container image
- add a reference to the MONGODB_URI secret
- configure the service network as *private*
- add environment variables for the MONGODB_DATABASE and MONGODB_COLLECTION

To implement these changes, replace the contents of the *manifest.yml* file with the following code. Update the values of MONGODB_DATABASE and MONGODB_COLLECTION to reflect the names of the database and collection you created in MongoDB Atlas for this application.
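The manifest references the secret by its SSM parameter path. Copilot's documented convention is `/copilot/<app>/<env>/secrets/<name>`, so for this guide's *todo* application and *dev* environment the path resolves as shown in this quick shell illustration (the environment variables are the ones Copilot substitutes at deploy time, set here by hand):

```shell
# Simulate Copilot's deploy-time substitution of the manifest placeholders.
COPILOT_APPLICATION_NAME=todo
COPILOT_ENVIRONMENT_NAME=dev
path="/copilot/${COPILOT_APPLICATION_NAME}/${COPILOT_ENVIRONMENT_NAME}/secrets/MONGODB_URI"
echo "$path"   # /copilot/todo/dev/secrets/MONGODB_URI
```

This is the parameter that `copilot secret init` created in Step 7.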
If you are building this solution on a **Mac M1/M2** machine, uncomment the **platform** property in the manifest.yml file to specify an ARM build. The default value is *linux/x86_64*.

**copilot/api/manifest.yml**

```yaml
# The manifest for the "api" service.
# Read the full specification for the "Backend Service" type at:
#  https://aws.github.io/copilot-cli/docs/manifest/backend-service/

# Your service name will be used in naming your resources like log groups, ECS services, etc.
name: api
type: Backend Service

# Your service is reachable at "http://api.${COPILOT_SERVICE_DISCOVERY_ENDPOINT}:8080" but is not public.

# Configuration for your containers and service.
image:
  # Docker build arguments. For additional overrides: https://aws.github.io/copilot-cli/docs/manifest/backend-service/#image-build
  build: api/Dockerfile
  # Port exposed through your container to route traffic to it.
  port: 8080

healthcheck:
  command: ["CMD-SHELL", "curl -f http://localhost:8080 || exit 1"]
  interval: 10s
  retries: 2
  timeout: 5s
  start_period: 0s

# Mac M1/M2 users - uncomment the following platform line
# the default platform is linux/x86_64
# platform: linux/arm64

cpu: 256       # Number of CPU units for the task.
memory: 512    # Amount of memory in MiB used by the task.
count: 2       # Number of tasks that should be running in your service.
exec: true     # Enable running commands in your container.

# define the network as private. this will place Fargate in private subnets
network:
  vpc:
    placement: private

# Optional fields for more advanced use-cases.
#
# Pass environment variables as key value pairs.
variables:
  MONGODB_DATABASE: home
  MONGODB_COLLECTION: todolist

# Pass secrets from AWS Systems Manager (SSM) Parameter Store.
secrets:
  MONGODB_URI: /copilot/${COPILOT_APPLICATION_NAME}/${COPILOT_ENVIRONMENT_NAME}/secrets/MONGODB_URI

# You can override any of the values defined above by environment.
#environments:
#  test:
#    count: 2               # Number of tasks to run for the "test" environment.
# deployment: # The deployment strategy for the "test" environment. # rolling: 'recreate' # Stops existing tasks before new ones are started for faster deployments. ``` ## Step 9: Create a Copilot Addon Service for your API Gateway Copilot does not have the capability to add an API Gateway to your application. You can, however, add additional AWS resources to your application using [Copilot "Addons"](https://aws.github.io/copilot-cli/docs/developing/additional-aws-resources/#how-to-do-i-add-other-resources). Define an addon by creating an *addons* folder under your Copilot service folder and creating a CloudFormation yaml template to define the services you wish to create. Create a folder for the addon: ```bash mkdir -p copilot/api/addons ``` Create a file to define the API Gateway: ```bash touch copilot/api/addons/apigateway.yml ``` Create a file to pass parameters from the main service into the addon service: ```bash touch copilot/api/addons/addons.parameters.yml ``` Copy the following code into the *addons.parameters.yml* file. It passes the ID of the Cloud Map service into the addon stack. **copilot/api/addons/addons.parameters.yml** ```yaml Parameters: DiscoveryServiceARN: !GetAtt DiscoveryService.Arn ``` Copy the following code into the *addons/apigateway.yml* file. It creates an API Gateway using the DiscoveryServiceARN to integrate with the Cloud Map service Copilot created for your Fargate containers. **copilot/api/addons/apigateway.yml** ```yaml Parameters: App: Type: String Description: Your application's name. Env: Type: String Description: The environment name your service, job, or workflow is being deployed to. Name: Type: String Description: The name of the service, job, or workflow being deployed. DiscoveryServiceARN: Type: String Description: The ARN of the Cloud Map discovery service. 
Resources: ApiVpcLink: Type: AWS::ApiGatewayV2::VpcLink Properties: Name: !Sub "${App}-${Env}-${Name}" SubnetIds: !Split [",", Fn::ImportValue: !Sub "${App}-${Env}-PrivateSubnets"] SecurityGroupIds: - Fn::ImportValue: !Sub "${App}-${Env}-EnvironmentSecurityGroup" ApiGatewayV2Api: Type: "AWS::ApiGatewayV2::Api" Properties: Name: !Sub "${Name}.${Env}.${App}.api" ProtocolType: "HTTP" CorsConfiguration: AllowHeaders: - "*" AllowMethods: - "*" AllowOrigins: - "*" ApiGatewayV2Stage: Type: "AWS::ApiGatewayV2::Stage" Properties: StageName: "$default" ApiId: !Ref ApiGatewayV2Api AutoDeploy: true ApiGatewayV2Integration: Type: "AWS::ApiGatewayV2::Integration" Properties: ApiId: !Ref ApiGatewayV2Api ConnectionId: !Ref ApiVpcLink ConnectionType: "VPC_LINK" IntegrationMethod: "ANY" IntegrationType: "HTTP_PROXY" IntegrationUri: !Sub "${DiscoveryServiceARN}" TimeoutInMillis: 30000 PayloadFormatVersion: "1.0" ApiGatewayV2Route: Type: "AWS::ApiGatewayV2::Route" Properties: ApiId: !Ref ApiGatewayV2Api RouteKey: "$default" Target: !Sub "integrations/${ApiGatewayV2Integration}" ``` ## Step 10: Deploy the Copilot Service When deploying your service, Copilot executes the following actions: - builds your Vapor Docker image - deploys the image to the Amazon Elastic Container Registry (ECR) in your AWS account - creates and deploys an AWS CloudFormation template into your AWS account. CloudFormation creates all the services defined in your application. ```bash copilot svc deploy --name api --app todo --env dev ``` ## Step 11: Configure MongoDB Atlas Network Access MongoDB Atlas uses an IP Access List to restrict access to your database to a specific list of source IP addresses. In your application, traffic from your containers originates from the public IP addresses of the NAT Gateways in your application's network. You must configure MongoDB Atlas to allow traffic from these IP addresses. 
To get the IP address of the NAT Gateways, run the following AWS CLI command: ```bash aws ec2 describe-nat-gateways --filter "Name=tag-key, Values=copilot-application" --query 'NatGateways[?State == `available`].NatGatewayAddresses[].PublicIp' --output table ``` Output: ```bash --------------------- |DescribeNatGateways| +-------------------+ | 1.1.1.1 | | 2.2.2.2 | +-------------------+ ``` Use the IP addresses to create a Network Access rule in your MongoDB Atlas account for each address. ![Architecture](../images/aws/aws-fargate-vapor-mongo-atlas-network-address.png) ## Step 12: Use your API To get the endpoint for your API, use the following AWS CLI command: ```bash aws apigatewayv2 get-apis --query 'Items[?Name==`api.dev.todo.api`].ApiEndpoint' --output table ``` Output: ```bash ------------------------------------------------------------ | GetApis | +----------------------------------------------------------+ | https://[your-api-endpoint] | +----------------------------------------------------------+ ``` Use cURL or a tool such as [Postman](https://www.postman.com/) to interact with your API: Add a To Do List item ```bash curl --request POST 'https://[your-api-endpoint]/item' --header 'Content-Type: application/json' --data-raw '{"name": "my todo item"}' ``` Retrieve To Do List items ```bash curl https://[your-api-endpoint]/items ``` ## Cleanup When finished with your application, use Copilot to delete it. This deletes all the services created in your AWS account. ```bash copilot app delete --name todo ``` ================================================ FILE: docs/aws.md ================================================ # Deploying to AWS on Amazon Linux 2 This guide describes how to launch an AWS instance running Amazon Linux 2 and configure it to run Swift. The approach taken here is a step by step approach through the console. 
This is a great way to learn, but for a more mature approach we recommend using Infrastructure as Code tools such as AWS CloudFormation, with instances created and managed through automated tools such as Auto Scaling Groups. For one approach using those tools see this blog article: https://aws.amazon.com/blogs/opensource/continuous-delivery-with-server-side-swift-on-amazon-linux-2/

## Launch AWS Instance

Use the Service menu to select the EC2 service.

![Select EC2 service](../images/aws/services.png)

Click on "Instances" in the "Instances" menu.

![Select Instances](../images/aws/ec2.png)

Click on "Launch Instance", either on the top of the screen, or if this is the first instance you have created in the region, in the main section of the screen.

![Launch instance](../images/aws/launch-0.png)

Choose an Amazon Machine Image (AMI). In this case the guide is assuming that we will be using Amazon Linux 2, so select that AMI type.

![Choose AMI](../images/aws/launch-1.png)

Choose an instance type. Larger instance types have more memory and CPU, but are more expensive. A smaller instance type will be sufficient to experiment. In this case I have a `t2.micro` instance type selected.

![Choose Instance type](../images/aws/launch-2.png)

Configure instance details. If you want this instance to be directly accessible from the internet, ensure that the subnet you select auto-assigns a public IP. It is assumed that the VPC has internet connectivity, which means that it needs to have an Internet Gateway (IGW) and the correct networking rules, but this is the case for the default VPC. If you wish to set this instance up in a private (non-internet accessible) VPC you will need to set up a bastion host, AWS Systems Manager Session Manager, or some other mechanism to connect to the instance.

![Choose Instance details](../images/aws/launch-3.png)

Add storage. The AWS EC2 launch wizard will suggest some form of storage by default.
For our testing purposes this should be fine, but if you know that you need more storage, or have different storage performance requirements, then you can change the size and volume type here.

![Choose Instance storage](../images/aws/launch-4.png)

Add tags. It is recommended you add as many tags as you need to correctly identify this server later. Especially if you have many servers, it can be difficult to remember which one was used for which purpose. At a very minimum, add a `Name` tag with something memorable.

![Add tags](../images/aws/launch-5.png)

Configure security group. The security group is a stateful firewall that limits the traffic accepted by your instance. It is recommended to limit this as much as possible. In this case we are configuring it to only allow traffic on port 22 (ssh). It is recommended to restrict the source as well. To limit it to your workstation's current IP, click on the dropdown under "Source" and select "My IP".

![Configure security group](../images/aws/launch-6.png)

Launch instance. Click on "Launch", and select a key pair that you will use to connect to the instance. If you already have a keypair that you have used previously, you can reuse it here by selecting "Choose an existing key pair". Otherwise you can create a keypair now by selecting "Create a new key pair".

![Launch instance](../images/aws/launch-7.png)

Wait for the instance to launch. When it is ready it will show as "running" under "Instance state", and "2/2 checks passed" under "Status Checks". Click on the instance to view the details in the bottom pane of the window, and look for the "IPv4 Public IP".

![Wait for instance launch and view details](../images/aws/ec2-list.png)

Connect to the instance. Using the keypair that you used or created in the launch step and the IP from the previous step, run ssh. Be sure to use the `-A` option with ssh so that in a future step we will be able to use the same key to connect to a second instance.
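The full invocation can be sketched as below. The IP address and key file name are hypothetical placeholders; the snippet only prints the command so you can review it before running it against your own instance:

```shell
# Hypothetical values: replace with your instance's public IP and the key
# pair file you selected when launching the instance.
PUBLIC_IP="203.0.113.10"
KEY_FILE="$HOME/.ssh/my-keypair.pem"

# -A forwards your local SSH agent, so the same key can later be reused from
# this instance when connecting to the second (test) instance.
CONNECT_CMD="ssh -A -i $KEY_FILE ec2-user@$PUBLIC_IP"
echo "$CONNECT_CMD"
```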
![Connect to instance](../images/aws/ssh-0.png)

We have two options to compile the binary: either directly on the instance or using Docker. We will go through both options here.

## Compile on instance

There are two alternative ways to compile code on the instance, either by:

- [downloading and using the toolchain directly on the instance](#compile-using-a-downloaded-toolchain),
- or by [using docker, and compiling inside a docker container](#compile-with-docker)

### Compile using a downloaded toolchain

Run the following command in the SSH terminal. Note that there may be a more up to date version of the Swift toolchain. Check https://swift.org/download/#releases for the latest available toolchain URL for Amazon Linux 2.

```
SwiftToolchainUrl="https://swift.org/builds/swift-5.4.1-release/amazonlinux2/swift-5.4.1-RELEASE/swift-5.4.1-RELEASE-amazonlinux2.tar.gz"
sudo yum install ruby binutils gcc git glibc-static gzip libbsd libcurl libedit libicu libsqlite libstdc++-static libuuid libxml2 tar tzdata -y
cd $(mktemp -d)
wget ${SwiftToolchainUrl} -O swift.tar.gz
gunzip < swift.tar.gz | sudo tar -C / -xv --strip-components 1
```

Finally, check that Swift is correctly installed by running the Swift REPL: `swift`.

![Invoke REPL](../images/aws/repl.png)

Let's now download and build a test application. We will use the `--static-swift-stdlib` option so that it can be deployed to a different server without the Swift toolchain installed. These examples will deploy SwiftNIO's [example HTTP server](https://github.com/apple/swift-nio/tree/master/Sources/NIOHTTP1Server), but you can test with your own project.

```
git clone https://github.com/apple/swift-nio.git
cd swift-nio
swift build -v --static-swift-stdlib -c release
```

## Compile with Docker

Ensure that Docker and git are installed on the instance:

```
sudo yum install docker git
sudo usermod -a -G docker ec2-user
sudo systemctl start docker
```

You may have to log out and log back in to be able to use Docker.
Check by running `docker ps`, and ensure that it runs without errors.

Download and compile SwiftNIO's [example HTTP server](https://github.com/apple/swift-nio/tree/master/Sources/NIOHTTP1Server):

```
git clone https://github.com/apple/swift-nio.git
cd swift-nio
docker run --rm -v "$PWD:/workspace" -w /workspace swift:5.4-amazonlinux2 /bin/bash -cl ' \
    swift build -v --static-swift-stdlib -c release'
```

## Test binary

Using the same steps as above, launch a second instance (but don't run any of the bash commands above!). Be sure to use the same SSH keypair. From within the AWS management console, navigate to the EC2 service and find the instance that you just launched. Click on the instance to see the details, and find the internal IP. In my example, the internal IP is `172.31.3.29`.

From the original build instance, copy the binary to the new server instance:

```
scp .build/release/NIOHTTP1Server ec2-user@172.31.3.29:
```

Now connect to the new instance:

```
ssh ec2-user@172.31.3.29
```

From within the new instance, test the Swift binary:

```
./NIOHTTP1Server localhost 8080 &
curl localhost:8080
```

From here, the options are endless and will depend on your application of Swift. If you wish to run a web service, be sure to open the Security Group to the correct port and from the correct source. When you are done testing Swift, shut down the instances to avoid paying for unneeded compute. From the EC2 dashboard, select both instances, select "Actions" from the menu, then select "Instance state" and then finally "Terminate".

![Terminate Instance](../images/aws/terminate.png)

================================================ FILE: docs/building.md ================================================

# Build system

The recommended way to build server applications is with [Swift Package Manager](https://swift.org/package-manager/). SwiftPM provides a cross-platform foundation for building Swift code and works nicely for having one code base that can be edited as well as run on many Swift platforms.
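For readers starting from scratch, the layout SwiftPM expects can be sketched with plain shell commands. The package name `MyServer` and the hello-world program are hypothetical; in practice `swift package init --type executable` generates an equivalent skeleton for you:

```shell
# Sketch of a minimal SwiftPM executable package layout (hypothetical name).
# Normally `swift package init --type executable` creates this for you.
mkdir -p MyServer/Sources/MyServer

cat > MyServer/Package.swift <<'EOF'
// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "MyServer",
    targets: [
        // An executable target; SwiftPM looks for its sources in Sources/MyServer.
        .executableTarget(name: "MyServer")
    ]
)
EOF

echo 'print("Hello, server!")' > MyServer/Sources/MyServer/main.swift

# Show the resulting layout.
find MyServer -type f | sort
```

From inside the `MyServer/` directory, the `swift build` and `swift run` commands discussed in this guide operate on this layout.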
## Building SwiftPM works from the command line and is also integrated within Xcode. You can build your code either by running `swift build` from the terminal, or by triggering the build action in Xcode. ### Docker Usage Swift binaries are architecture-specific, so running the build command on macOS will create a macOS binary, and similarly running the command on Linux will create a Linux binary. Many Swift developers use macOS for development, which enables taking advantage of the great tooling that comes with Xcode. However, most server applications are designed to run on Linux. If you are developing on macOS, Docker is a useful tool for building on Linux and creating Linux binaries. Apple publishes official Swift Docker images to [Docker Hub](https://hub.docker.com/_/swift). For example, to build your application using the latest Swift Docker image: `$ docker run -v "$PWD:/code" -w /code swift:latest swift build` Note, if you want to run the Swift compiler for Intel CPUs on an Apple Silicon (M1) Mac, please add `--platform linux/amd64 -e QEMU_CPU=max` to the command line. For example: `$ docker run -v "$PWD:/code" -w /code --platform linux/amd64 -e QEMU_CPU=max swift:latest swift build` The above command will run the build using the latest Swift Docker image, utilizing bind mounts to the sources on your Mac. ### Debug vs. Release Mode By default, SwiftPM will build a debug version of the application. Note that debug versions are not suitable for running in production as they are significantly slower. To build a release version of your app, run `swift build -c release`. ### Locating Binaries Binary artifacts that can be deployed are found under `.build/x86_64-unknown-linux` on Linux, and `.build/x86_64-apple-macosx` on macOS. SwiftPM can show you the full binary path using `swift build --show-bin-path -c release`. ### Building for production - Build production code in release mode by compiling with `swift build -c release`. 
Running code compiled in debug mode will hurt performance significantly.
- For best performance in Swift 5.2 or later, pass `-Xswiftc -cross-module-optimization` (this won't work in Swift versions before 5.2) - enabling this should be verified with performance tests (as with any optimization change) as it may sometimes cause performance regressions.
- Integrate [`swift-backtrace`](https://github.com/swift-server/swift-backtrace) into your application to make sure backtraces are printed on crash. Backtraces do not work out-of-the-box on Linux, and this library helps to fill the gap. Eventually this will become a language feature and not require a discrete library.

================================================ FILE: docs/concurrency-adoption-guidelines.md ================================================

# Swift Concurrency adoption guidelines for Swift Server Libraries

This writeup attempts to provide a set of guidelines for authors of server-side Swift libraries to follow. Specifically, a lot of the discussion here revolves around what to do about existing APIs and libraries making extensive use of Swift NIO’s `EventLoopFuture` and related types.

Swift Concurrency is a multi-year effort. It is very valuable for the server community to participate in this multi-year adoption of the concurrency features, one by one, and provide feedback while doing so. As such, we should not hold off adopting concurrency features until Swift 6, as we may miss a valuable opportunity to improve the concurrency model. In 2021 we saw structured concurrency and actors arrive with Swift 5.5. Now is a great time to provide APIs using those primitives. In the future we will see fully checked Swift concurrency. This will come with breaking changes. For this reason adopting the new concurrency features can be split into two phases.
## What you can do right now

### API Design

Firstly, existing libraries should strive to add `async` functions where possible to their user-facing “surface” APIs, in addition to existing `*Future`-based APIs. These additive APIs can be gated on the Swift version and can be added without breaking existing users' code, for example like this:

```swift
extension Worker {
    func work() -> EventLoopFuture<Value> { ... }

    #if compiler(>=5.5) && canImport(_Concurrency)
    @available(macOS 12.0, iOS 15.0, watchOS 8.0, tvOS 15.0, *)
    func work() async throws -> Value { ... }
    #endif
}
```

If a function cannot fail but was using futures before, it should not include the `throws` keyword in its new incarnation.

Such adoption can begin immediately, and should not cause any issues to existing users of existing libraries.

### SwiftNIO helper functions

To allow an easy transition to async code, SwiftNIO offers a number of helper methods on `EventLoopFuture` and `-Promise`.

On every `EventLoopFuture` you can call `.get()` to transition the future into an `await`-able invocation. If you want to translate async/await calls to an `EventLoopFuture` we recommend the following pattern:

```swift
#if compiler(>=5.5) && canImport(_Concurrency)
func yourAsyncFunctionConvertedToAFuture(on eventLoop: EventLoop) -> EventLoopFuture<Out> {
    let promise = eventLoop.makePromise(of: Out.self)

    promise.completeWithTask {
        try await yourMethod(yourInputs)
    }

    return promise.futureResult
}
#endif
```

Further helpers exist for `EventLoopGroup`, `Channel`, `ChannelOutboundInvoker` and `ChannelPipeline`.

### `#if` guarding code using Concurrency

In order to have code using concurrency along with code not using concurrency, you may have to `#if` guard certain pieces of code. The correct way to do so is the following:

```swift
#if compiler(>=5.5) && canImport(_Concurrency)
...
#endif
```

Please note that you do _not_ need to _import_ the `_Concurrency` module at all; if it is present, it is imported automatically.
```swift
#if compiler(>=5.5) && canImport(_Concurrency)
// DO NOT DO THIS.
// Instead don't do any import and it'll import automatically when possible.
import _Concurrency
#endif
```

### Sendable Checking

> [SE-0302][SE-0302] introduced the `Sendable` protocol, which is used to indicate which types have values that can safely be copied across actors or, more generally, into any context where a copy of the value might be used concurrently with the original. Applied uniformly to all Swift code, `Sendable` checking eliminates a large class of data races caused by shared mutable state.
>
> -- from [Staging in Sendable checking][sendable-staging], which outlines the `Sendable` adoption plan for Swift 6.

In the future we will see fully checked Swift concurrency. The language features to support this are the `Sendable` protocol and the `@Sendable` keyword for closures. Since sendable checking will break existing Swift code, a new major Swift version is required.

To ease the transition to fully checked Swift code, it is possible to annotate your APIs with the `Sendable` protocol today. You can start adopting `Sendable` and getting appropriate warnings in Swift 5.5 already by passing the `-warn-concurrency` flag. You can do so in SwiftPM for the entire project like so:

```
swift build -Xswiftc -Xfrontend -Xswiftc -warn-concurrency
```

#### Sendable checking today

Sendable checking is currently disabled in Swift 5.5(.0) because it was causing a number of tricky situations that we lacked the tools to resolve. Most of these issues have been resolved on today’s `main` branch of the compiler, and are expected to land in the next Swift 5.5 releases. It may be worthwhile waiting for adoption until the next version(s) after 5.5.0.

For example, one such capability is the ability for tuples of `Sendable` types to conform to `Sendable` as well. We recommend holding off adoption of `Sendable` until this patch lands in Swift 5.5 (which should be relatively soon).
With this change, the difference between Swift 5.5 with `-warn-concurrency` enabled and Swift 6 mode should be very small, and manageable on a case by case basis.

#### Backwards compatibility of declarations and “checked” Swift Concurrency

Adopting Swift Concurrency will progressively cause more warnings, and eventually compile time errors in Swift 6 when sendability checks are violated, marking potentially unsafe code. It may be difficult for a library to maintain a version that is compatible with versions prior to Swift 6 while also fully embracing the new concurrency checks.

For example, it may be necessary to mark generic types as `Sendable`, like so:

```swift
struct Container<Value: Sendable>: Sendable { ... }
```

Here, the `Value` type must be marked `Sendable` for Swift 6’s concurrency checks to work properly with such a container. However, since the `Sendable` type does not exist in Swift prior to Swift 5.5, it would be difficult to maintain a library that supports both Swift 5.4+ as well as Swift 6.

In such situations, it may be helpful to utilize the following trick to be able to share the same `Container` declaration between both Swift versions of the library:

```swift
#if swift(>=5.5) && canImport(_Concurrency)
public typealias MYPREFIX_Sendable = Swift.Sendable
#else
public typealias MYPREFIX_Sendable = Any
#endif
```

> **NOTE:** Yes, we're using `swift(>=5.5)` here, while we're using `compiler(>=5.5)` to guard specific APIs using concurrency features.

The `Any` alias is effectively a no-op when applied as a generic constraint, and thus this way it is possible to keep the same `Container` declaration working across Swift versions.

### Task Local Values and Logging

The newly introduced Task Local Values API ([SE-0311][SE-0311]) allows for implicit carrying of metadata along with `Task` execution. It is a natural fit for tracing and carrying metadata around with task execution, and e.g. including it in log messages.
We are working on adjusting [SwiftLog](https://github.com/apple/swift-log) to become powerful enough to automatically pick up and log specific task local values. This change will be introduced in a source compatible way.

For now libraries should continue using logger metadata, but we expect that in the future a lot of the cases where metadata is manually passed to each log statement can be replaced with setting task local values.

### Preparing for the concept of Deadlines

Deadlines are another feature that closely relates to Swift Concurrency. They were originally pitched during the early versions of the Structured Concurrency proposal and later moved out of it. The Swift team remains interested in introducing deadline concepts to the language, and some preparation for them has already been performed inside the concurrency runtime.

Right now, however, there is no support for deadlines in Swift Concurrency, and it is fine to continue using mechanisms like `NIODeadline` or similar to cancel tasks after some period of time has passed.

Once Swift Concurrency gains deadline support, they will manifest in being able to cancel a task (and its child tasks) once such a deadline (a point in time) has been exceeded. For APIs to be “ready for deadlines” they don’t have to do anything special other than preparing to be able to deal with `Task`s and their cancellation.

### Cooperatively handling Task cancellation

`Task` cancellation exists today in Swift Concurrency and is something that libraries may already handle.
In practice it means that any asynchronous function (or function which is expected to be called from within `Task`s), may use the [`Task.isCancelled`](https://developer.apple.com/documentation/swift/task/3814832-iscancelled) or [`try Task.checkCancellation()`](https://developer.apple.com/documentation/swift/task/3814826-checkcancellation) APIs to check if the task it is executing in was cancelled, and if so, it may cooperatively abort any operation it was currently performing. Cancellation can be useful in long running operations, or before kicking off some expensive operation. For example, an HTTP client MAY check for cancellation before it sends a request — it perhaps does not make sense to send a request if it is known the task awaiting on it does not care for the result anymore after all! Cancellation in general can be understood as “the one waiting for the result of this task is not interested in it anymore”, and it usually is best to throw a “cancelled” error when the cancellation is encountered. However, in some situations returning a “partial” result may also be appropriate (e.g. if a task is collecting many results, it may return those it managed to collect until now, rather than returning none or ignoring the cancellation and collecting all remaining results). ## What to expect with Swift 6 ### Sendable: Global variables & imported code Today, Swift 5.5 does not yet handle global variables at all within its concurrency checking model. This will soon change but the exact semantics are not set in stone yet. In general, avoid using global properties and variables wherever possible to avoid running into issues in the future. Consider deprecating global variables if able to. Some global variables have special properties, such as `errno` which contains the error code of system calls. It is a thread local variable and therefore safe to read from any thread/`Task`. 
We expect to improve the importer to annotate such globals with some kind of “known to be safe” annotation, such that the Swift code using it, even in fully checked concurrency mode won’t complain about it. Having that said, using `errno` and other “thread local” APIs is very error prone in Swift Concurrency because thread-hops may occur at any suspension point, so the following snippet is very likely incorrect: ```swift sys_call(...) await ... let err = errno // BAD, we are most likely on a different thread here (!) ``` Please take care when interacting with any thread-local API from Swift Concurrency. If your library had used thread local storage before, you will want to move them to use [task-local values](https://github.com/apple/swift-evolution/blob/main/proposals/0311-task-locals.md) instead as they work correctly with Swift’s structured concurrency tasks. Another tricky situation is with imported C code. There may be no good way to annotate the imported types as Sendable (or it would be too troublesome to do so by hand). Swift is likely to gain improved support for imported code and potentially allow ignoring some of the concurrency safety checks on imported code. These relaxed semantics for imported code are not implemented yet, but keep it in mind when working with C APIs from Swift and trying to adopt the `-warn-concurrency` mode today. Please file any issues you hit on [bugs.swift.org](https://bugs.swift.org/secure/Dashboard.jspa) so we can inform the development of these checking heuristics based on real issues you hit. ### Custom Executors We expect that Swift Concurrency will allow custom executors in the future. A custom executor would allow the ability to run actors / tasks “on” such executor. It is possible that `EventLoop`s could become such executors, however the custom executor proposal has not been pitched yet. 
While we expect potential performance gains from using custom executors “on the same event loop” by avoiding asynchronous hops between calls to different actors, their introduction will not fundamentally change how NIO libraries are structured. The guidance here will evolve as Swift Evolution proposals for Custom Executors are proposed, but don’t hold off adopting Swift Concurrency until custom executors “land” - it is important to start adoption early. For most code we believe that the gains from adopting Swift Concurrency vastly outweigh the slight performance cost actor-hops might induce.

### Reduce use of SwiftNIO Futures as “Concurrency Library“

SwiftNIO currently provides a number of concurrency types for the Swift on Server ecosystem, most notably `EventLoopFuture`s and `EventLoopPromise`s, which are used widely for asynchronous results. While the SSWG recommended using those at the API level in the past for easier interplay of server libraries, we advise deprecating or removing such APIs once Swift 6 lands. The swift-server ecosystem should go all in on the structured concurrency features the language provides. For this reason, it is crucial to provide async/await APIs today, to give your library users time to adopt the new APIs.

Some NIO types will remain in the public interfaces of Swift on server libraries, however. We expect that networking clients and servers will continue to be initialized with `EventLoopGroup`s. The underlying transport mechanisms (`NIOPosix` and `NIOTransportServices`) should become implementation details, however, and should not be exposed to library adopters.

### SwiftNIO 3

While subject to change, it is likely that SwiftNIO will cut a 3.0 release in the months after Swift 6.0, at which point in time Swift will have enabled “full” `Sendable` checking. You should not expect NIO to suddenly become “more async”; NIO’s inherent design principles are about performing small tasks on the event loop and using Futures for any async operations.
The design of NIO is not expected to change. Channel pipelines are not expected to become "async" in the Swift Concurrency meaning of the word. This is because SwiftNIO is, at its heart, an I/O system, and that poses a challenge for the cooperative, shared thread pool used by Swift Concurrency. This thread pool must not be blocked by any operation, because doing so will starve the pool and prevent further progress of other async tasks. I/O systems, however, must at some point block a thread waiting for more I/O events, either in an I/O syscall or in something like epoll_wait. This is how NIO works: each of the event loop threads ultimately blocks on epoll_wait. We can’t do that inside the cooperative thread pool, as to do so would starve it for other async tasks, so we’d have to do so on a different thread. As such, SwiftNIO should not be used _on_ the cooperative thread pool, but should take ownership and full control of its threads, because it is an I/O system.

It would be possible to make all NIO work happen on the cooperative pool, and thread-hop between each I/O operation and dispatching it onto the async/await pool; however, this is not acceptable for high performance I/O: the context switch for _each I/O operation_ is too expensive. As a result, SwiftNIO is not planning to just adopt Swift Concurrency for the ease of use it brings, because in its specific context the context switches are not an acceptable tradeoff. SwiftNIO could however cooperate with Swift Concurrency with the arrival of "custom executors" in the language runtime; this has not been fully proposed yet, so we are not going to speculate about it too much.

The NIO team will however use the chance to remove deprecated APIs and improve some APIs. The scope of changes should be comparable to the NIO1 → NIO2 version bump. If your SwiftNIO code compiles today without warnings, chances are high that it will continue to work without modifications in NIO3.
After the release of NIO3, NIO2 will see bug fixes only.

### End-user code breakage

It is expected that Swift 6 will break some code. As mentioned, SwiftNIO 3 is also going to be released sometime around Swift 6 dropping. Keeping this in mind, it might be a good idea to align major version releases around the same time, along with updating version requirements to Swift 6 and NIO 3 in your libraries. Neither Swift nor SwiftNIO is planning to do “vast amounts of change”, so adoption should be possible without major pains.

### Guidance for library users

As soon as Swift 6 comes out, we recommend using the latest Swift 6 toolchains, even if using the Swift 5.5.n language mode (which may yield only warnings rather than hard failures on failed Sendability checks). This will result in better warnings and compiler hints than just using a 5.5 toolchain.

[sendable-staging]: https://github.com/DougGregor/swift-evolution/blob/sendable-staging/proposals/nnnn-sendable-staging.md
[SE-0302]: https://github.com/apple/swift-evolution/blob/main/proposals/0302-concurrent-value-and-concurrent-closures.md
[SE-0311]: https://github.com/apple/swift-evolution/blob/main/proposals/0311-task-locals.md

================================================ FILE: docs/deployment.md ================================================

## Deployment to Servers or Public Cloud

The following guides can help with the deployment to public cloud providers:

* [AWS](aws.md)
* [DigitalOcean](digital-ocean.md)
* [Heroku](heroku.md)
* [Kubernetes & Docker](packaging.md#docker)
* [GCP](gcp.md)
* _Have a guide for other popular public clouds like Azure? Add it here!_

If you are deploying to your own servers (e.g. bare metal, VMs or Docker) there are several strategies for packaging Swift applications for deployment; see the [Packaging Guide](packaging.md) for more information.
### Deploying a Debuggable Configuration (Production on Linux)

- If you have `--privileged`/`--security-opt seccomp=unconfined` containers or are running in VMs or even bare metal, you can run your binary with

  ```
  lldb --batch -o "break set -n main --auto-continue 1 -C \"process handle SIGPIPE -s 0\"" -o run -k "image list" -k "register read" -k "bt all" -k "exit 134" ./my-program
  ```

  instead of `./my-program` to get something akin to a 'crash report' on crash.
- If you don't have `--privileged` (or `--security-opt seccomp=unconfined`) containers (meaning you won't be able to use `lldb`), or you don't want to use lldb, consider using a library like [`swift-backtrace`](https://github.com/swift-server/swift-backtrace) to get stack traces on crash.

================================================ FILE: docs/digital-ocean.md ================================================

# Deploying to DigitalOcean

This guide will walk you through setting up an Ubuntu virtual machine on a DigitalOcean [Droplet](https://www.digitalocean.com/products/droplets/). To follow this guide, you will need to have a [DigitalOcean](https://www.digitalocean.com) account with billing configured.

## Create Server

Use the create menu to create a new Droplet.

![Create Droplet](../images/digital-ocean-create-droplet.png)

Under distributions, select Ubuntu 18.04 LTS.

![Ubuntu Distro](../images/digital-ocean-distributions-ubuntu-18.png)

> Note: You may select any version of Linux that Swift supports. You can check which operating systems are officially supported on the [Swift Releases](https://swift.org/download/#releases) page.

After selecting the distribution, choose any plan and datacenter region you prefer. Then set up an SSH key to access the server after it is created. Finally, click Create Droplet and wait for the new server to spin up.

Once the new server is ready, hover over the Droplet's IP address and click copy.
![Droplet List](../images/digital-ocean-droplet-list.png) ## Initial Setup Open your terminal and connect to the server as root using SSH. ```sh ssh root@ ``` DigitalOcean has an in-depth guide for [initial server setup on Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-18-04). This guide will quickly cover the basics. ### Configure Firewall Allow OpenSSH through the firewall and enable it. ```sh ufw allow OpenSSH ufw enable ``` Then enable a non-root accessible HTTP port. ```sh ufw allow 8080 ``` ### Add User Create a new user besides `root` that will be responsible for running your application. This guide uses a non-root user without access to `sudo` for added security. The following guides assume the user is named `swift`. ```sh adduser swift ``` Copy the root user's authorized SSH keys to the newly created user. This will allow you to use SSH (`scp`) as the new user. ```sh rsync --archive --chown=swift:swift ~/.ssh /home/swift ``` Your DigitalOcean virtual machine is now ready. Continue using the [Ubuntu](ubuntu.md) guide. ================================================ FILE: docs/gcp.md ================================================ # Deploying to Google Cloud Platform (GCP) This guide describes how to build and run your Swift Server on serverless architecture with [Google Cloud Build](https://cloud.google.com/build) and [Google Cloud Run](https://cloud.google.com/run). We'll use [Artifact Registry](https://cloud.google.com/artifact-registry/docs/docker/quickstart) to store the Docker images. ## Google Cloud Platform Setup You can read about [Getting Started with GCP](https://cloud.google.com/gcp/getting-started/) in more detail. In order to run Swift Server applications, we need to: - enable [Billing](https://console.cloud.google.com/billing) (requires a credit card). Note that when creating a new account, GCP provides you with $300 of free credit to use in the first 90 days. 
You can follow this guide for free with a new account. Everything in this guide should fall into the "Free Tier" category at GCP (120 build minutes per day, 2 million Cloud Run requests per month; see [Free Tier Usage Limits](https://cloud.google.com/free/docs/gcp-free-tier#free-tier-usage-limits))
- enable the [Cloud Build API](https://console.cloud.google.com/apis/api/cloudbuild.googleapis.com/overview)
- enable the [Cloud Run Admin API](https://console.cloud.google.com/apis/api/run.googleapis.com/overview)
- enable the [Artifact Registry API](https://console.cloud.google.com/apis/api/artifactregistry.googleapis.com/overview)
- [create a Repository in the Artifact Registry](https://console.cloud.google.com/artifacts/create-repo) (Format: Docker, Region: your choice)

## Project Requirements

Please verify that your server listens on `0.0.0.0`, not `127.0.0.1`. It's also recommended to use the environment variable `$PORT` instead of a hard-coded value.

For the workflow to pass, two files are essential, and both need to be in the project root:

1. Dockerfile
2. cloudbuild.yaml

### `Dockerfile`

You should test your Dockerfile with `docker build . -t test` and `docker run -p 8080:8080 test` and make sure it builds and runs locally. The _Dockerfile_ is the same as in the [packaging guide](./packaging.md#docker). Replace `` with your `executableTarget` (e.g. "Server"):

```Dockerfile
#------- build -------
FROM swift:centos as builder
# set up the workspace
RUN mkdir /workspace
WORKDIR /workspace
# copy the source to the docker image
COPY . /workspace
RUN swift build -c release --static-swift-stdlib

#------- package -------
FROM centos:8
# copy executable
COPY --from=builder /workspace/.build/release/ /
# set the entry point (application name)
CMD [""]
```

### `cloudbuild.yaml`

The `cloudbuild.yaml` file contains a set of steps to build the server image directly in the cloud and deploy a new Cloud Run instance after the successful build.
`${_VAR}` are ["substitution variables"](https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values) that are available during build time and can be passed on into the runtime environment in the "deploy" phase. We will set the variables later when we configure the [Build Trigger](#deployment) (Step 5). ```yaml steps: - name: 'gcr.io/cloud-builders/docker' entrypoint: 'bash' args: - '-c' - | docker pull ${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:latest || exit 0 - name: 'gcr.io/cloud-builders/docker' args: - build - -t - ${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:$SHORT_SHA - -t - ${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:latest - . - --cache-from - ${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:latest - name: 'gcr.io/cloud-builders/docker' args: [ 'push', '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:$SHORT_SHA' ] - name: 'gcr.io/cloud-builders/gcloud' args: - run - deploy - swift-service - --image=${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:$SHORT_SHA - --port=8080 - --region=${_REGION} - --memory=512Mi - --platform=managed - --allow-unauthenticated - --min-instances=0 - --max-instances=5 images: - '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:$SHORT_SHA' - '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY_NAME}/${_SERVICE_NAME}:latest' timeout: 1800s ``` ### The steps in detail 1. Pull the latest image from the Artifact Registry to retrieve cached layers 2. Build the image with `$SHORT_SHA` and `latest` tag 3. Push the image to the Artifact Registry 4. Deploy the image to Cloud Run `images` specifies the build images to store in the registry. The default `timeout` is 10 minutes, so we'll need to increase it for Swift builds. 
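Within your server code, the `$PORT` recommendation from the project requirements can be honored with a small sketch like this. It uses Foundation's real `ProcessInfo` API; the `resolvePort` helper name is made up for illustration:

```swift
import Foundation

// Hypothetical helper (name is illustrative): pick the listen port from the
// Cloud Run-provided $PORT variable, falling back to 8080 for local runs.
func resolvePort(environment: [String: String] = ProcessInfo.processInfo.environment) -> Int {
    environment["PORT"].flatMap(Int.init) ?? 8080
}
```

Your server would then bind to `0.0.0.0` on `resolvePort()` rather than a hard-coded port.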
We use `8080` as the default port here, though it's recommended to remove this line and have the server listen on `$PORT`.

## Deployment

![cloud build trigger settings and how to connect a code repository](../images/gcp-connect-repo.png)

Push all files to a remote repository. Cloud Build currently supports GitHub, Bitbucket and GitLab. Head to [Cloud Build Triggers](https://console.cloud.google.com/cloud-build/triggers) and click "Create Trigger":

1. Add a name and description
2. Event: "Push to a branch" is active
3. Source: "Connect New Repository" and authorize with your code provider, and add the repository where your Swift server code is hosted. Note that you need to configure [GitHub](https://cloud.google.com/build/docs/automating-builds/build-repos-from-github), [GitLab](https://cloud.google.com/build/docs/automating-builds/build-repos-from-gitlab) or [Bitbucket](https://cloud.google.com/build/docs/automating-builds/build-repos-from-bitbucket-cloud) to allow GCP access first.
4. Configuration: "Cloud Build configuration file" / Location: Repository
5. Advanced: [Substitution variables](https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values): You need to set the variables for region, repository name and service name here. You can pick a [region of your choice](https://cloud.google.com/about/locations/) (e.g. `us-central1`). All custom variables must start with an underscore (`_REGION`). `_REPOSITORY_NAME` and `_SERVICE_NAME` are up to you. If you use environment variables, for example to connect to a database or 3rd-party services, you can set their values here too.
6. "Create"

As a last step before deploying the new service, go to the [Cloud Build Settings](https://console.cloud.google.com/cloud-build/settings) and make sure "Cloud Run" is enabled. This gives Cloud Build the necessary IAM permissions to deploy Cloud Run services.
![cloud build settings](../images/gcp-cloud-build-settings.png)

In the Trigger overview page, you should see your new "swift-service" trigger. Click "RUN" on the right to start the trigger manually from the `main` branch.

With a simple Hummingbird project the build takes about 7-8 minutes. Vapor takes about 25 minutes on the standard/small build machines, which are fairly slow. "Jordane" from the Vapor Discord community [recommends using `machineType: E2_HIGHCPU_8`](https://discord.com/channels/431917998102675485/447893851374616576/915819735738888222) in the `cloudbuild.yaml` to speed up deployments:

```yaml
options:
  machineType: 'E2_HIGHCPU_8'
```

After a successful build you should see the service URL in the build logs:

![successful build and deployment to cloud run](../images/gcp-cloud-build.png)

You can head over to Cloud Run and see your service running there:

![cloud run overview](../images/gcp-cloud-run.png)

The trigger will deploy every new commit on `main`. You can also enable Pull Request triggers for feature-driven workflows. Cloud Build also allows blue/green builds, auto-scaling and much more. You can now connect your custom domain to the new service and go live.

## Cleanup

- delete the Cloud Run service
- delete the Cloud Build trigger
- delete the Artifact Registry repository

================================================ FILE: docs/heroku.md ================================================

# What is Heroku

Heroku is a popular all-in-one hosting solution; you can find more at [heroku.com](https://heroku.com).

## Signing Up

You'll need a Heroku account. If you don't have one, please sign up here: [https://signup.heroku.com/](https://signup.heroku.com/)

## Installing CLI

Make sure that you've installed the Heroku CLI tool.
### Homebrew

```bash
brew install heroku/brew/heroku
```

### Other Install Options

See alternative install options here: [https://devcenter.heroku.com/articles/heroku-cli#download-and-install](https://devcenter.heroku.com/articles/heroku-cli#download-and-install).

### Logging in

Once you've installed the CLI, log in with the following:

```bash
heroku login
```

### Create an application

Visit [dashboard.heroku.com](https://dashboard.heroku.com) to access your account, and create a new application from the drop-down in the upper right hand corner. Heroku will ask a few questions, such as region and application name; just follow their prompts.

### Project

Today we're going to be hosting SwiftNIO's example HTTP server; you can apply these concepts to your own project. Let's start by cloning NIO:

```bash
git clone https://github.com/apple/swift-nio
```

Make sure to make our newly cloned directory the working directory:

```bash
cd swift-nio
```

By default, Heroku deploys the **master** branch. Always make sure all changes are checked into this branch before pushing.

#### Connect with Heroku

Connect your app with Heroku (replace with your app's name).

```bash
$ heroku git:remote -a your-apps-name-here
```

### Set Stack

As of 13 September 2018, Heroku's default stack is Heroku-18; we need it to be heroku-16 for Swift projects.

```bash
heroku stack:set heroku-16 -a your-apps-name-here
```

### Set Buildpack

Set the buildpack to teach Heroku how to deal with Swift. The vapor-community buildpack is a good buildpack for *any Swift project*: it doesn't install Vapor, and it doesn't have any Vapor-specific setup.

```bash
heroku buildpacks:set vapor/vapor
```

### Swift version file

The buildpack we added looks for a **.swift-version** file to know which version of Swift to use.

```bash
echo "5.2" > .swift-version
```

This creates **.swift-version** with `5.2` as its contents.

### Procfile

Heroku uses the **Procfile** to know how to run your app. This includes the executable name and any arguments necessary.
You'll see `$PORT` below; this allows Heroku to assign a specific port when it launches the app.

```
web: NIOHTTP1Server 0.0.0.0 $PORT
```

You can use this command in the terminal to set the file (note the single quotes, which keep your shell from expanding `$PORT` prematurely):

```bash
echo 'web: NIOHTTP1Server 0.0.0.0 $PORT' > Procfile
```

### Commit changes

We have now added the `.swift-version` file and the `Procfile`; make sure these are committed into master or Heroku will not find them.

### Deploying to Heroku

You're ready to deploy. Run this from the terminal. It may take a while to build; this is normal.

```none
git push heroku master
```

================================================ FILE: docs/libs/log-levels.md ================================================

# Library guidelines: Log Levels

This guide serves as guidelines for library authors with regard to what [SwiftLog](https://github.com/apple/swift-log) log levels are appropriate for use in libraries, and in what situations to use which level. Libraries need to be well-behaved across various use cases, and cannot assume a specific style of logging backend will be used with them. It is up to developers implementing specific applications and systems to configure those specifics of their application, and some may choose to log to disk, some to memory, and some may employ sophisticated log aggregators. In all those cases a library should behave "well", meaning that it should not overwhelm typical ("stdout") log backends by logging too much, nor alert too much by over-using `error` level log statements, etc. This guide also provides general logging style hints for library authors.
## Guidelines for Libraries

SwiftLog defines the following 7 log levels via the [`Logger.Level` enum](https://apple.github.io/swift-log/docs/current/Logging/Structs/Logger/Level.html), ordered from least to most severe:

* `trace`
* `debug`
* `info`
* `notice`
* `warning`
* `error`
* `critical`

Out of those, only the levels _less severe than_ `info` (that is, `trace` and `debug`) are generally okay for libraries to use. In the following section we'll explore how to use them in practice.

### Recommended log levels

It is always fine for a library to log at `trace` and `debug` levels, and these two should be the primary levels any library is logging at.

`trace` is the finest log level, and end-users of a library will not usually use it unless debugging very specific issues. You should consider it a way for library developers to "log everything we could possibly need to diagnose a hard-to-reproduce bug." Unrestricted logging at `trace` level may take a toll on the performance of a system, and developers can assume trace level logging will not be used in production deployments, unless enabled specifically to locate some specific issue.

This is in contrast with `debug`, which some users _may_ choose to run enabled on their production systems.

> Debug level logging should not be "too" noisy. Developers should assume some production deployments may need to (or want to) run with debug level logging enabled.
>
> Debug level logging should not completely undermine the performance of a production system.

As such, `debug` logging should provide a high-value understanding of what is going on in the library for end users, using domain-relevant language. Logging at `debug` level should not be overly noisy or dive deep into internals; this is what `trace` is intended for.

Use the `warning` level sparingly. Whenever possible, prefer to return or throw `Error`s to end users that are descriptive enough so they can inspect them, log them, and figure out the issue.
Potentially, they may then enable debug logging to find out more about the issue.

It is okay to log a `warning` "once", for example on system startup. This may include some one-off "more secure configuration is available, try upgrading to it!" log statement upon a server's startup. You may also log warnings from background processes, which otherwise have no other means of informing the end user about some issue.

Logging at `error` level is similar to warnings: prefer to avoid doing so whenever possible. Instead, report errors via your library's API. For example, it is _not_ a good idea to log "connection failed" from an HTTP client. Perhaps the end-user intended to make this request to a known offline server to _confirm_ it is offline? From their perspective, this connection error is not a "real" error; it is just what they expected -- as such the HTTP client should return or throw such an error, but _not_ log it.

Also note that when you do decide to log an error, you should be mindful of error rates. Will this error potentially be logged for every single operation while some network failure is happening? Some teams and companies have alerting systems set up based on the rate of errors logged in a system, and if it exceeds some threshold it may start calling and paging people in the middle of the night. When logging at error level, consider if the issue indeed is something that should be waking up people at night. You may also want to consider offering configuration in your library: "at what log level should this issue be reported?" This can come in handy in clustered systems which may log network failures themselves, or depend on external systems detecting and reporting them.

Logging at `critical` level is allowed for libraries, however, as the name implies, only in the most critical situations. Most often this implies that the library will *stop functioning* after such a log statement has been issued.
End users can be expected to treat a logged critical error as _very_ important, and they may have set up their systems to page people in the middle of the night to investigate the production system _right now_ when such log statements are detected. So please be careful about logging these kinds of errors.

Some libraries and situations may not be entirely clear with regard to what log level is "best" for them. In such situations, it is sometimes worth allowing the end-users of the library to configure the levels of specific groups of messages. You can see this in action in the Soto library [here](https://github.com/soto-project/soto-core/pull/423/files#diff-4a8ca7e54da5b22287900dd8cf6b47ded38a94194c1f0b544119030c81a2f238R649), where an `Options` object allows end users to configure the level at which requests are logged (`options.requestLogLevel`), which is then used as `log.log(self.options.requestLogLevel)`.

#### Examples

`trace` level logging:

- Could include various additional information about a request, such as various diagnostics about created data structures, the state of caches or similar, which are created in order to serve a request.
- Could include "begin operation" and "end operation" logging statements.

`debug` level logging:

- May include a single log statement for opening a connection, accepting a request, and so on.
- It can include a _high level_ overview of control flow in an operation. For example: "started work, processing step X, made X decision, finished work X, result code 200". This overview may consist of high cardinality structured data.

> You may also want to consider using [swift-distributed-tracing](https://github.com/apple/swift-distributed-tracing) to instrument "begin" and "end" events, as tracing may give you additional insights into your system behavior that you would have missed with just manually analysing log statements.
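The configurable-level pattern described above can be sketched with a minimal stand-in logger. All type names here (`MiniLogger`, `ClientOptions`, `HTTPishClient`) are illustrative, not the real swift-log or Soto APIs:

```swift
// A minimal sketch of the "user-configurable log level" pattern.
enum LogLevel: Int, Comparable {
    case trace, debug, info, notice, warning, error, critical

    static func < (lhs: LogLevel, rhs: LogLevel) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

struct MiniLogger {
    var logLevel: LogLevel = .info
    private(set) var emitted: [String] = []

    // @autoclosure keeps message construction lazy, mirroring swift-log.
    mutating func log(_ level: LogLevel, _ message: @autoclosure () -> String) {
        guard level >= logLevel else { return }
        emitted.append(message())
    }
}

struct ClientOptions {
    // End users can raise this to .info, or lower it to .trace, to choose
    // how loudly each request is reported.
    var requestLogLevel: LogLevel = .debug
}

struct HTTPishClient {
    var options = ClientOptions()
    var logger = MiniLogger()

    mutating func execute(_ path: String) {
        // The library logs at whichever level the user configured.
        logger.log(options.requestLogLevel, "executing request: \(path)")
    }
}
```

With the quiet default, per-request messages stay below `info` and are dropped; a user diagnosing an issue can bump `requestLogLevel` up without touching the library's code.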
### Log levels to avoid

All these rules are only _general_ guidelines, and as such may have exceptions. Consider the following examples and rationale for why logging at high log levels by a library may not be desirable:

It is generally _not acceptable_ for a service client (for example, an HTTP client) to log an `error` when a request has failed. End-users may be using the client to probe if an endpoint is even responsive or not, and a failure to respond may be _expected_ behavior. Logging errors would only confuse and pollute their logs. Instead, libraries should either `throw`, or return an `Error` value that users of the library will have enough knowledge about to decide whether they should log or ignore it.

It is even less acceptable for a library to log any successful operations. This leads to flooding server-side systems, especially if, for example, one were to log every successfully handled request. In a server-side application, this can easily flood and overwhelm logging systems when deployed to production, where many end users are connected to the same server. Such issues are rarely found at development time, because only a single peer is requesting things from the service-under-test.

#### Examples (of things to avoid)

Avoid using `info` or any higher log level for:

- "Normal operation" of the library. That is, there is no need to log "accepted a request" on info level, as this is the normal operation of a web service.

Avoid using `error` or `warning`:

- To report errors which the end-user of the library has the means of logging themselves. For example, if a database driver fails to fetch all rows of a query, it should not log an error or warning, but instead return or throw an error on the stream of values (or function, async function, or even the async sequence) that was providing the returned values.
  - Since the end-user is consuming these values, and has a means of reporting (or swallowing) this error, the library should not log anything on their behalf.
- Never report as a warning something that is merely informational. For example, "weird header detected" may look like a good idea to log as a warning at first sight; however, if the "weird header" is simply a misconfigured client (or just a "weird browser"), you may accidentally be completely flooding an end-user's logs with these "weird header" warnings (!)
  - Only log warnings about actionable things which the end-user of your library can do something about. Using the "weird header detected" log statement as an example: it would not be a good candidate to log as a warning, because the server developer has no way to fix the users of their service to stop sending weird headers, so the server should not be logging this information as a warning.
- It may be tempting to implement a "log as warning only once" technique for per-request style situations which may be almost important enough to be a warning, but should not be logged repeatedly after all. Authors may think of smart techniques to log a warning only once per "weird header discovered" and later on log the same issue on a different level, such as trace... Such techniques result in confusing, hard-to-debug logs, where developers of a system unaware of the stateful nature of the logging would be left confused when trying to reproduce the issue.
  - For example, if a developer spots such a warning in a production system, they may attempt to reproduce it, thinking that it only happens in the production environment. However, if the logging system's log level choice is _stateful_, they may actually be successfully reproducing the issue but never seeing it manifest. For this, and related performance reasons (as implementing "only once per X" implies growing storage and per-request additional checking requirements), it is not recommended to apply this pattern.
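The "return or throw errors rather than log them" guidance above can be sketched like so. Names are illustrative; a real client would perform actual network I/O:

```swift
// A sketch of "throw, don't log": the client surfaces the failure to its
// caller instead of logging it.
struct ConnectionError: Error {
    let host: String
}

struct ProbeClient {
    let reachableHosts: Set<String>

    func connect(to host: String) throws {
        // No `log.error` here: only the caller knows whether a refused
        // connection is a real problem or the expected outcome.
        guard reachableHosts.contains(host) else {
            throw ConnectionError(host: host)
        }
    }
}

// The end-user probes a known-offline host; for them the "failure" is the
// expected result, so they swallow the error rather than log it.
let client = ProbeClient(reachableHosts: ["db.internal"])
var confirmedOffline = false
do {
    try client.connect(to: "old-server.internal")
} catch {
    confirmedOffline = true
}
```

Because the error is a value flowing back to the caller, the decision to log, swallow, or escalate it stays with the application, not the library.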
Exceptions to the "avoid logging warnings" rule:

- "Background processes", such as tasks scheduled on a periodic timer, may not have any other means of communicating a failure or warning to the end user of the library other than through logging.
  - Consider offering an API that collects errors at runtime, so you can avoid logging errors manually. This can often take the form of a customizable "on error" hook that the library accepts when constructing the scheduled job. If the handler is not customized, we can log the errors, but if it is, it again is up to the end-user of the library to decide what to do with them.
- An exception to the "log a warning only once" rule is when things do not happen very frequently. For example, if a library warns about an outdated license or something similar during _its initialization_, this isn't necessarily a bad idea. After all, we'd rather see this warning once during initialization than during every request made to the library. Use your best judgement and consider the developers using your library when designing how often, and from where, to log such information.

### Suggested logging style

While libraries are free to use whichever logging message style they choose, here are some best practices to follow if you want users of your libraries to *love* the logs your library produces.

Firstly, it is important to remember that both the message of a log statement as well as the metadata in [swift-log](https://github.com/apple/swift-log) are [autoclosures](https://docs.swift.org/swift-book/LanguageGuide/Closures.html#ID543), which are only invoked if the logger has a log level set such that it must emit a message for the message given. As such, messages logged at `trace` do not "materialize" their string and metadata representation unless they are actually needed:

```swift
public func debug(_ message: @autoclosure () -> Logger.Message, metadata: @autoclosure () -> Logger.Metadata? = nil, source: @autoclosure () -> String?
= nil, file: String = #file, function: String = #function, line: UInt = #line) {
```

And a minor yet important hint: avoid inserting newlines and other control characters into log statements (!). Many log aggregation systems assume that a single line of logged output is specifically "one log statement", which can accidentally break if we log unsanitized, potentially multi-line strings. This isn't a problem for _all_ log backends. For example, some will automatically sanitize and form a JSON payload with `{message: "..."}` before emitting it to a backend service collecting the logs, but plain old stream (or file) loggers usually assume that one line equals one log statement. Keeping to one line also makes grepping through logs more reliable.

#### Structured Logging (Semantic Logging)

Libraries may want to embrace the structured logging style, which renders logs in a [semi-structured data format](https://en.wikipedia.org/wiki/Semi-structured_data). It is a fantastic pattern which makes it easier and more reliable for automated code to process logged information.

Consider the following "not structured" log statement:

```swift
// NOT structured logging style
log.info("Accepted connection \(connection.id) from \(connection.peer), total: \(connections.count)")
```

It contains 4 pieces of information:

- We accepted a connection.
- This is its string representation.
- It is from this peer.
- We currently have `connections.count` active connections.

While this log statement contains all the useful information we meant to relay to end users, it is hard to visually and mechanically parse the detailed information it contains. For example, if we know connections start failing around the time when we reach a total of 100 concurrent connections, it is not trivial to find the specific log statement at which we hit this number. We would have to `grep 'total: 100'`, for example; however, perhaps there are many other `"total: "` strings present in all of our log systems.
Instead, we can express the same information using the structured logging pattern, as follows:

```swift
log.info("Accepted connection", metadata: [
  "connection.id": "\(connection.id)",
  "connection.peer": "\(connection.peer)",
  "connections.total": "\(connections.count)"
])
// example output:
// info [connection.id:?,connection.peer:?, connections.total:?] Accepted connection
```

This structured log can be formatted, depending on the logging backend, slightly differently on various systems. Even in the simple string representation of such a log, we'd be able to grep for `connections.total: 100` rather than having to guess the correct string.

Also, since the message now does not contain all that much "human readable wording", it is less prone to randomly change from "Accepted" to "We have accepted" or vice versa. This kind of change could break alerting systems which are set up to parse and alert on specific log messages.

Structured logs are very useful in combination with [swift-distributed-tracing](https://github.com/apple/swift-distributed-tracing)'s `LoggingContext`, which automatically populates the metadata with any present trace information. Thanks to this, all logs made in response to some specific request will automatically carry the same TraceID.

You can see more examples of structured logging on the following pages, and example implementations thereof:

-
-
-
-

#### Logging with Correlation IDs / Trace IDs

A very common pattern is to log messages with some "correlation id". The best approach in general here is to use a `LoggingContext` from [swift-distributed-tracing](https://github.com/apple/swift-distributed-tracing), as then your library will be able to be traced and used with correlation contexts regardless of what tracing system the end-user is using (such as OpenTelemetry, Zipkin, X-Ray, and other tracing systems).

The concept, though, can be explained well with just a manually logged `requestID`, which we'll explain below.
Consider an HTTP client as an example of a library that has a lot of metadata about some request, perhaps something like this:

```swift
log.trace("Received response", metadata: [
    "id": "...",
    "peer.host": "...",
    "payload.size": "...",
    "headers": "...",
    "responseCode": "...",
    "responseCode.text": "...",
])
```

The exact metadata does not matter; these are just placeholders in this example. What matters is that there is "a lot of it".

> Side note on metadata keys: while there is no single right way to structure metadata keys, we recommend thinking of them as if they were JSON keys: camelCased and `.`-separated identifiers. This allows many log analysis backends to treat them as a nested structure.

Now, we would like to avoid logging _all_ this information in every single log statement. Instead, we can repeatedly log just the `"id"` metadata, like this:

```swift
// ...
log.trace("Something something...", metadata: ["id": "..."])
log.trace("Finished streaming response", metadata: ["id": "..."]) // good, the same ID is propagated
```

Thanks to the correlation ID (or a tracing-provided ID, in which case we'd log as `context.log.trace("...")` since the ID is propagated automatically), we're able to correlate every log statement that follows the initial one. We then know that this `"Finished streaming response"` message was about a response with a `responseCode` that we can look up from the `"Received response"` log message.

This pattern is somewhat advanced and may not always be the right approach, but consider it in high performance code where logging the same information repeatedly can be too costly.

##### Things to avoid with Correlation ID logging

When logging with correlation contexts, make sure to never "drop the ID". It is easiest to get this right when using distributed tracing's `LoggingContext`, since propagating it ensures the carrying of identifiers; however, the same applies to any kind of correlation identifier.
Specifically, avoid situations like these:

```
debug: connection established [connection-id: 7]
debug: connection closed unexpectedly [error: foobar] // BAD, the connection-id was dropped
```

On the second line, we don't know which connection had the error, since the `connection-id` was dropped. Make sure to audit your logging code to ensure all relevant log statements carry the necessary correlation identifiers.

### Exceptions to the rules

These are only general guidelines, and there will always be exceptions to these rules and other situations where these suggestions will be broken, for good reason. Please use your best judgement: always consider the end user of a system and how they'll be interacting with your library, and decide case by case how to handle each situation.

Here are a few examples of situations where logging a message at a relatively high level might still be tolerable for a library.

It's permissible for a library to log at `critical` level right before a _hard_ crash of the process, as a last resort of informing the log collection systems or end user about additional information detailing the reason for the crash. This should be _in addition to_ the message from a `fatalError` and can lead to an improved diagnosis/debugging experience for end users.

Sometimes libraries may be able to detect a harmful misconfiguration of the library, for example selecting deprecated protocol versions. In such situations it may be useful to inform users in production by issuing a `warning`. However, you should ensure that the warning is not logged repeatedly! For example, it is not acceptable for an HTTP client to log a warning on every single HTTP request made with a misconfigured client. It _may_ be acceptable, however, for the client to log such a warning _once_, for example at configuration time, if the library has a good way to do this.
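Warning only once requires the library to remember that it already did so. A minimal sketch of such a guard; the `RunOnce` type and the TLS warning are hypothetical, not an existing API:

```swift
import Foundation

/// Hypothetical helper: executes its body at most once per instance,
/// no matter how many requests trigger the misconfiguration check.
final class RunOnce {
    private let lock = NSLock()
    private var hasRun = false

    func run(_ body: () -> Void) {
        self.lock.lock()
        defer { self.lock.unlock() }
        guard !self.hasRun else { return }
        self.hasRun = true
        body()
    }
}

let warnOnce = RunOnce()

// On every request, only the first call actually logs:
// warnOnce.run { logger.warning("TLSv1.1 is deprecated, please configure TLSv1.3") }
```

The lock makes the guard safe when the misconfigured code path is hit from multiple threads at once.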
Some libraries may implement a "log this warning only once", "log this warning only at startup", "log this error only once an hour", or similar tricks to keep the noise level low but still informative enough to not be missed. This is, however, usually a pattern reserved for stateful long-running libraries, rather than clients of databases and related persistent stores.

================================================ FILE: docs/linux-perf.md ================================================

# Linux `perf`

## `perf`, what's that?

The Linux [`perf` tool](https://perf.wiki.kernel.org/index.php/Main_Page) is an incredibly powerful tool that can, amongst other things, be used for:

- Sampling CPU-bound processes (or the whole system) to analyse which part of your application is consuming the CPU time
- Accessing CPU performance counters (PMU)
- "User probes" (uprobes) which trigger, for example, when a certain function in your application runs

In general, `perf` can count and/or record the call stacks of your threads when a certain event occurs. These events can be triggered by:

- Time (e.g. 1000 times per second), useful for time profiling. For an example use, see the [CPU performance debugging guide](performance.md).
- System calls, useful to see where your system calls are happening.
- Various system events, for example if you'd like to know when context switches occur.
- CPU performance counters, useful if your performance issues can be traced down to micro-architectural details of your CPU (such as branch mispredictions). For an example, see [SwiftNIO's Advanced Performance Analysis guide](https://github.com/apple/swift-nio/blob/main/docs/advanced-performance-analysis.md).
- and much more

## Getting `perf` to work

Unfortunately, getting `perf` to work depends on your environment. Below, please find a selection of environments and how to get `perf` to work there.
### Installing `perf`

Technically, `perf` is part of the Linux kernel sources and you'd want a `perf` version that exactly matches your Linux kernel version. In many cases, however, a "close enough" `perf` version will do too. If in doubt, prefer a `perf` version that's slightly older than your kernel over one that's newer.

- Ubuntu

  ```
  apt-get update && apt-get -y install linux-tools-generic
  ```

  See below for more information, because Ubuntu packages a different `perf` per kernel version.

- Debian

  ```
  apt-get update && apt-get -y install linux-perf
  ```

- Fedora/RedHat derivatives

  ```
  yum install -y perf
  ```

You can confirm that your `perf` installation works using `perf stat -- sleep 0.1` (if you're already `root`) or `sudo perf stat -- sleep 0.1`.

##### `perf` on Ubuntu when you can't match the kernel version

On Ubuntu (and other distributions that package `perf` per kernel version) you may see an error after installing `linux-tools-generic`. The error message will look similar to

```
$ perf stat -- sleep 0.1
WARNING: perf not found for kernel 5.10.25

  You may need to install the following packages for this specific kernel:
    linux-tools-5.10.25-linuxkit
    linux-cloud-tools-5.10.25-linuxkit

  You may also want to install one of the following packages to keep up to date:
    linux-tools-linuxkit
    linux-cloud-tools-linuxkit
```

The best fix for this is to follow what `perf` says and to install one of the above packages. If you're in a Docker container, this may not be possible because you'd need to match the kernel version (which is especially difficult in Docker for Mac because it uses a VM). For example, the suggested `linux-tools-5.10.25-linuxkit` is not actually available.
As a workaround, you can try one of the following options

- If you're already `root` and prefer a shell `alias` (only valid in this shell)

  ```
  alias perf=$(find /usr/lib/linux-tools/*/perf | head -1)
  ```

- If you're a user and would prefer to link `/usr/local/bin/perf`

  ```
  sudo ln -s "$(find /usr/lib/linux-tools/*/perf | head -1)" /usr/local/bin/perf
  ```

After this, you should be able to use `perf stat -- sleep 0.1` (if you're already `root`) or `sudo perf stat -- sleep 0.1` successfully.

### Bare metal

For a bare metal Linux machine, all you need to do is to install `perf`, which should then work in full fidelity.

### In Docker (running on bare-metal Linux)

You will need to launch your container with `docker run --privileged` (don't run this in production) and then you should have full access to `perf` (including the PMU).

To validate that `perf` works correctly, run for example `perf stat -- sleep 0.1`. Whether you'll see `<not supported>` next to some information will depend on whether you have access to the CPU's performance counters (the PMU). In Docker on bare metal this should work, i.e. no `<not supported>`s should show up.

### Docker for Mac

Docker for Mac is like Docker on bare metal, but with some extra complexity, because the Docker containers are actually hosted in a Linux VM, so matching the kernel version will be difficult. If you follow the above installation instructions, you should nevertheless get `perf` to work, but you won't have access to the CPU's performance counters (the PMU), so you'll see a few events show up as `<not supported>`.

```
$ perf stat -- sleep 0.1

 Performance counter stats for 'sleep 0.1':

              0.44 msec task-clock                #    0.004 CPUs utilized
                 1      context-switches          #    0.002 M/sec
                 0      cpu-migrations            #    0.000 K/sec
                57      page-faults               #    0.129 M/sec
   <not supported>      cycles
   <not supported>      instructions
   <not supported>      branches
   <not supported>      branch-misses

       0.102869000 seconds time elapsed

       0.000000000 seconds user
       0.001069000 seconds sys
```

### In a VM

In a virtual machine, you would install `perf` just like on bare metal.
And either `perf` will work just fine with all its features, or it will look similar to what you get on Docker for Mac. What you need your hypervisor to support (& allow) is "PMU passthrough" or "PMU virtualisation".

VMware Fusion does support PMU virtualisation, which they call vPMC (VM settings -> Processors & Memory -> Advanced -> Allow code profiling applications in this VM). If you're on a Mac, this setting is unfortunately only supported up to and including macOS Catalina (and [not on Big Sur](https://kb.vmware.com/s/article/81623)).

If you use `libvirt` to manage your hypervisor and VMs, you can use `sudo virsh edit your-domain` and replace the `<cpu>` XML tag with `<cpu mode='host-passthrough'/>` to allow the PMU to be passed through to the guest. For other hypervisors, an internet search will usually reveal how to enable PMU passthrough.

================================================ FILE: docs/llvm-sanitizers.md ================================================

# LLVM TSAN / ASAN

For multithreaded and low-level unsafe interfacing server code, the ability to use LLVM's [ThreadSanitizer](https://clang.llvm.org/docs/ThreadSanitizer.html) and [AddressSanitizer](https://clang.llvm.org/docs/AddressSanitizer.html) can help troubleshoot invalid thread usage and invalid usage/access of memory.

There is a [blog post](https://swift.org/blog/tsan-support-on-linux/) outlining the usage of TSAN. The short story is to pass `swiftc` the command line option `-sanitize=address` or `-sanitize=thread` to enable the respective tool.
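As a small illustration of what ThreadSanitizer catches, here is a tiny program of our own (not from the blog post) with an intentional data race; built with the thread sanitizer enabled, running it produces a data-race report pointing at the unsynchronised counter:

```swift
import Dispatch

// Shared counter accessed through a raw pointer: the race is on the memory
// itself, not on a Swift variable.
let counter = UnsafeMutablePointer<Int>.allocate(capacity: 1)
counter.initialize(to: 0)

// 1000 increments racing across a concurrent queue's worker threads;
// `pointee += 1` is a non-atomic read-modify-write.
DispatchQueue.concurrentPerform(iterations: 1_000) { _ in
    counter.pointee += 1
}

print(counter.pointee) // may be less than 1000; TSan reports the race either way
```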
Also, for Swift Package Manager projects you can use `--sanitize` at the command line, e.g.:

```
swift build --sanitize=address
```

or

```
swift build --sanitize=thread
```

and it can be used for the tests too:

```
swift test --sanitize=address
```

or

```
swift test --sanitize=thread
```

================================================ FILE: docs/memory-leaks-and-usage.md ================================================

# Debugging Memory Leaks and Usage

There are many different tools for troubleshooting memory leaks, both on Linux and macOS, each with different strengths and ease of use. One excellent tool is Xcode's [Memory Graph Debugger](https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/debugging_with_xcode/chapters/special_debugging_workflows.html#//apple_ref/doc/uid/TP40015022-CH9-DontLinkElementID_1). [Instruments](https://help.apple.com/instruments/mac/10.0/#/dev022f987b) and `leaks` can also be very useful. If you cannot run or reproduce the problem on macOS, there are a number of server-side alternatives below.

## Example program

The following program doesn't do anything useful, but it leaks memory, so it will serve as the example:

```swift
public class MemoryLeaker {
    var closure: () -> Void = { () }

    public init() {}

    public func doNothing() {}

    public func doSomethingThatLeaks() {
        self.closure = {
            // This will leak as it'll create a permanent reference cycle:
            //
            //     self -> self.closure -> self
            self.doNothing()
        }
    }
}

@inline(never) // just to be sure to get this in a stack trace
func myFunctionDoingTheAllocation() {
    let thing = MemoryLeaker()
    thing.doSomethingThatLeaks()
}

myFunctionDoingTheAllocation()
```

## Debugging leaks with `valgrind`

If you run your program using

    valgrind --leak-check=full ./test

then `valgrind` will output

```
==1== Memcheck, a memory error detector
==1== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==1== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==1== Command: ./test
==1==
==1==
==1== HEAP SUMMARY:
==1==     in use at exit: 824 bytes in 4 blocks
==1==   total heap usage: 5 allocs, 1 frees, 73,528 bytes allocated
==1==
==1== 32 bytes in 1 blocks are definitely lost in loss record 1 of 4
==1==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==1==    by 0x52076B1: swift_slowAlloc (in /usr/lib/swift/linux/libswiftCore.so)
==1==    by 0x5207721: swift_allocObject (in /usr/lib/swift/linux/libswiftCore.so)
==1==    by 0x108E58: $s4test12MemoryLeakerCACycfC (in /tmp/test)
==1==    by 0x10900E: $s4test28myFunctionDoingTheAllocationyyF (in /tmp/test)
==1==    by 0x108CA3: main (in /tmp/test)
==1==
==1== LEAK SUMMARY:
==1==    definitely lost: 32 bytes in 1 blocks
==1==    indirectly lost: 0 bytes in 0 blocks
==1==      possibly lost: 0 bytes in 0 blocks
==1==    still reachable: 792 bytes in 3 blocks
==1==         suppressed: 0 bytes in 0 blocks
==1== Reachable blocks (those to which a pointer was found) are not shown.
==1== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==1==
==1== For counts of detected and suppressed errors, rerun with: -v
==1== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
```

The important part is

```
==1== 32 bytes in 1 blocks are definitely lost in loss record 1 of 4
==1==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==1==    by 0x52076B1: swift_slowAlloc (in /usr/lib/swift/linux/libswiftCore.so)
==1==    by 0x5207721: swift_allocObject (in /usr/lib/swift/linux/libswiftCore.so)
==1==    by 0x108E58: $s4test12MemoryLeakerCACycfC (in /tmp/test)
==1==    by 0x10900E: $s4test28myFunctionDoingTheAllocationyyF (in /tmp/test)
==1==    by 0x108CA3: main (in /tmp/test)
```

which can be demangled by pasting it into `swift demangle`:

```
==1== 32 bytes in 1 blocks are definitely lost in loss record 1 of 4
==1==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==1==    by 0x52076B1: swift_slowAlloc (in /usr/lib/swift/linux/libswiftCore.so)
==1==    by 0x5207721: swift_allocObject (in /usr/lib/swift/linux/libswiftCore.so)
==1==    by 0x108E58: test.MemoryLeaker.__allocating_init() -> test.MemoryLeaker (in /tmp/test)
==1==    by 0x10900E: test.myFunctionDoingTheAllocation() -> () (in /tmp/test)
==1==    by 0x108CA3: main (in /tmp/test)
```

So `valgrind` is telling us that the allocation that eventually leaked is coming from `test.myFunctionDoingTheAllocation` calling `test.MemoryLeaker.__allocating_init()`, which is correct.

### Limitations

- `valgrind` doesn't understand the bit packing that is used in many Swift data types (like `String`) or when you create `enum`s with associated values. Therefore `valgrind` sometimes claims a certain allocation was leaked even though it might not have been.
- `valgrind` will make your program run _very slowly_ (possibly 100x slower), which might stop you from even getting far enough to reproduce the issue.
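For reference, whichever tool found it, the leak in the example program can be fixed by breaking the reference cycle with a capture list, so the closure holds `self` weakly. A sketch of the fixed class (renamed here to `FixedMemoryLeaker` for contrast):

```swift
public class FixedMemoryLeaker {
    var closure: () -> Void = { () }

    public init() {}

    public func doNothing() {}

    public func doSomethingThatDoesNotLeak() {
        self.closure = { [weak self] in
            // no cycle anymore: self -> self.closure -/-> self
            self?.doNothing()
        }
    }
}

// The instance is now freed once the last strong reference goes away:
weak var escaped: FixedMemoryLeaker? = nil
do {
    let thing = FixedMemoryLeaker()
    thing.doSomethingThatDoesNotLeak()
    escaped = thing
}
assert(escaped == nil) // deallocated; valgrind/LeakSanitizer report nothing
```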
## Debugging leaks with `Leak Sanitizer`

If you build your application using

    swift build --sanitize=address

it will be built with [Address Sanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer) enabled. Address Sanitizer also automatically tries to find leaked memory blocks, just like `valgrind`. The output for the above example program would be

```
=================================================================
==478==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 32 byte(s) in 1 object(s) allocated from:
    #0 0x55f72c21ac8d  (/tmp/test+0x95c8d)
    #1 0x7f7e44e686b1  (/usr/lib/swift/linux/libswiftCore.so+0x3cb6b1)
    #2 0x55f72c24b2ce  (/tmp/test+0xc62ce)
    #3 0x55f72c24a4c3  (/tmp/test+0xc54c3)
    #4 0x7f7e43aecb96  (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)

SUMMARY: AddressSanitizer: 32 byte(s) leaked in 1 allocation(s).
```

which shows the same information as `valgrind`, unfortunately however not symbolicated, due to [SR-12601](https://bugs.swift.org/browse/SR-12601). You can symbolicate it using `llvm-symbolizer`, or `addr2line` if you have `binutils` installed, like so:

```
# /tmp/test+0xc62ce
addr2line -e /tmp/test -a 0xc62ce -ipf | swift demangle
0x00000000000c62ce: test.myFunctionDoingTheAllocation() -> () at crtstuff.c:?
```

## Debugging transient memory usage with `heaptrack`

[Heaptrack](https://github.com/KDE/heaptrack) is very useful for analyzing memory leaks/usage with less overhead than `valgrind`. More importantly, it also allows for analyzing transient memory usage, which may significantly impact performance by putting too much pressure on the allocator. In addition to command line access, there is a graphical front-end, `heaptrack_gui`.

A key feature is that it allows for diffing between two different runs of your application, making it fairly easy to troubleshoot differences in `malloc` behavior between e.g. feature branches and main.
A short how-to for Ubuntu 20.04 (using a different example than above, as we look at transient usage here). First, install `heaptrack` with:

```
sudo apt-get install heaptrack
```

Then run the binary with `heaptrack` two times. First we do it for `main` to get a baseline:

```
> heaptrack .build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet
heaptrack output will be written to "/tmp/.nio_alloc_counter_tests_GRusAy/heaptrack.test_1000_autoReadGetAndSet.84341.gz"
starting application, this might take some time...
...
heaptrack stats:
        allocations:            319347
        leaked allocations:     107
        temporary allocations:  68
Heaptrack finished! Now run the following to investigate the data:

  heaptrack --analyze "/tmp/.nio_alloc_counter_tests_GRusAy/heaptrack.test_1000_autoReadGetAndSet.84341.gz"
```

Then run it a second time for the feature branch:

```
> heaptrack .build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet
heaptrack output will be written to "/tmp/.nio_alloc_counter_tests_GRusAy/heaptrack.test_1000_autoReadGetAndSet.84372.gz"
starting application, this might take some time...
...
heaptrack stats:
        allocations:            673989
        leaked allocations:     117
        temporary allocations:  341011
Heaptrack finished! Now run the following to investigate the data:

  heaptrack --analyze "/tmp/.nio_alloc_counter_tests_GRusAy/heaptrack.test_1000_autoReadGetAndSet.84372.gz"
ubuntu@ip-172-31-25-161 /t/.nio_alloc_counter_tests_GRusAy>
```

Here we can see that we had 673989 allocations in the feature branch version and 319347 in `main`, so clearly a regression.

Finally, we can analyze the output as a diff of these runs using `heaptrack_print` and pipe it through `swift demangle` for readability:

```
heaptrack_print -T -d heaptrack.test_1000_autoReadGetAndSet.84341.gz heaptrack.test_1000_autoReadGetAndSet.84372.gz | swift demangle
```

`-T` gives us the temporary allocations (as in this case it was not a leak but a transient allocation; if you have leaks, remove `-T`).
The output can be quite long, but in this case, as we look for transient allocations, scroll down to:

```
MOST TEMPORARY ALLOCATIONS
307740 temporary allocations of 290324 allocations in total (106.00%) from
swift_slowAlloc
  in /home/ubuntu/bin/usr/lib/swift/linux/libswiftCore.so
43623 temporary allocations of 44553 allocations in total (97.91%) from:
    swift_allocObject
      in /home/ubuntu/bin/usr/lib/swift/linux/libswiftCore.so
    NIO.ServerBootstrap.(bind0 in _C131C0126670CF68D8B594DDFAE0CE57)(makeServerChannel: (NIO.SelectableEventLoop, NIO.EventLoopGroup) throws -> NIO.ServerSocketChannel, _: (NIO.EventLoop, NIO.ServerSocketChannel) -> NIO.EventLoopFuture<()>) -> NIO.EventLoopFuture
      at /home/ubuntu/swiftnio/swift-nio/Sources/NIO/Bootstrap.swift:295
      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet
    merged NIO.ServerBootstrap.bind(host: Swift.String, port: Swift.Int) -> NIO.EventLoopFuture
      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet
    NIO.ServerBootstrap.bind(host: Swift.String, port: Swift.Int) -> NIO.EventLoopFuture
      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet
    Test_test_1000_autoReadGetAndSet.run(identifier: Swift.String) -> ()
      at /tmp/.nio_alloc_counter_tests_GRusAy/Sources/Test_test_1000_autoReadGetAndSet/file.swift:24
      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet
    main
      at Sources/bootstrap_test_1000_autoReadGetAndSet/main.c:18
      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet
22208 temporary allocations of 22276 allocations in total (99.69%) from:
    swift_allocObject
      in /home/ubuntu/bin/usr/lib/swift/linux/libswiftCore.so
    generic specialization > of Swift._copyCollectionToContiguousArray(A) -> Swift.ContiguousArray
      in /home/ubuntu/bin/usr/lib/swift/linux/libswiftCore.so
    Swift.String.utf8CString.getter : Swift.ContiguousArray
      in /home/ubuntu/bin/usr/lib/swift/linux/libswiftCore.so
    NIO.URing.getEnvironmentVar(Swift.String) -> Swift.String?
      at /home/ubuntu/swiftnio/swift-nio/Sources/NIO/LinuxURing.swift:291
      in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet
    NIO.URing._debugPrint(@autoclosure () -> Swift.String) -> ()
      at /home/ubuntu/swiftnio/swift-nio/Sources/NIO/LinuxURing.swift:297
...
22196 temporary allocations of 22276 allocations in total (99.64%) from:
```

And here we can fairly quickly see that the transient extra allocations were due to extra debug printing and querying of environment variables:

```
NIO.URing.getEnvironmentVar(Swift.String) -> Swift.String?
  at /home/ubuntu/swiftnio/swift-nio/Sources/NIO/LinuxURing.swift:291
  in /tmp/.nio_alloc_counter_tests_GRusAy/.build/x86_64-unknown-linux-gnu/release/test_1000_autoReadGetAndSet
NIO.URing._debugPrint(@autoclosure () -> Swift.String) -> ()
```

This code will be removed before final integration of the feature branch, so the diff will go away.

================================================ FILE: docs/packaging.md ================================================

# Packaging Applications for Deployment

Once an application is built for production, it still needs to be packaged before it can be deployed to servers. There are several strategies for packaging Swift applications for deployment.

## Docker

One of the most popular ways to package applications these days is using container technologies such as [Docker](https://www.docker.com). Using Docker's tooling, we can build and package the application as a Docker image, publish it to a Docker repository, and later launch it directly on a server or on a platform that supports Docker deployments, such as [Kubernetes](https://kubernetes.io). Many public cloud providers, including AWS, GCP, Azure, IBM and others, encourage this kind of deployment.
Here is an example `Dockerfile` that builds and packages the application on top of CentOS:

```Dockerfile
#------- build -------
FROM swift:centos8 as builder

# set up the workspace
RUN mkdir /workspace
WORKDIR /workspace

# copy the source to the docker image
COPY . /workspace

RUN swift build -c release --static-swift-stdlib

#------- package -------
FROM centos

# copy executables
COPY --from=builder /workspace/.build/release/<executable-name> /

# set the entry point (application name)
CMD ["<executable-name>"]
```

To create a local Docker image from the `Dockerfile`, use the `docker build` command from the application's source location, e.g.:

```bash
$ docker build . -t <image-name>:<tag>
```

To test the local image, use the `docker run` command, e.g.:

```bash
$ docker run <image-name>:<tag>
```

Finally, use the `docker push` command to publish the application's Docker image to a Docker repository of your choice, e.g.:

```bash
$ docker tag <image-name>:<tag> <repository>/<image-name>:<tag>
$ docker push <repository>/<image-name>:<tag>
```

At this point, the application's Docker image is ready to be deployed to the server hosts (which need to run Docker), or to one of the platforms that supports Docker deployments. See [Docker's documentation](https://docs.docker.com/engine/reference/commandline/) for more complete information about Docker.

### Distroless

[Distroless](https://github.com/GoogleContainerTools/distroless) is a project by Google that attempts to create minimal images containing only the application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution. Since distroless supports Docker and is based on Debian, packaging a Swift application on it is fairly similar to the Docker process above.
Here is an example `Dockerfile` that builds and packages the application on top of distroless's C++ base image:

```Dockerfile
#------- build -------
# Building using Ubuntu Bionic since it's compatible with the Debian runtime
FROM swift:bionic as builder

# set up the workspace
RUN mkdir /workspace
WORKDIR /workspace

# copy the source to the docker image
COPY . /workspace

RUN swift build -c release --static-swift-stdlib

#------- package -------
# Running on distroless C++ since it includes
# all(*) the runtime dependencies Swift programs need
FROM gcr.io/distroless/cc-debian10

# copy executables
COPY --from=builder /workspace/.build/release/<executable-name> /

# set the entry point (application name)
CMD ["<executable-name>"]
```

Note the above uses `gcr.io/distroless/cc-debian10` as the runtime image, which should work for Swift programs that do not use `FoundationNetworking` or `FoundationXML`. In order to provide more complete support, we (the community) could put in a PR to distroless to introduce a base image for Swift that includes `libcurl` and `libxml`, which are required for `FoundationNetworking` and `FoundationXML` respectively.

## Archive (Tarball, ZIP file, etc.)

Since cross-compiling Swift for Linux is not (yet) supported on Mac or Windows, we need to use virtualization technologies like Docker to compile applications we are targeting to run on Linux. That said, this does not mean we must also package the applications as Docker images in order to deploy them. While using Docker images for deployment is convenient and popular, an application can also be packaged using a simple and lightweight archive format like a tarball or ZIP file, then uploaded to the server where it can be extracted and run.
Here is an example of using Docker and `tar` to build and package the application for deployment on Ubuntu servers:

First, use the `docker run` command from the application's source location to build it:

```bash
$ docker run --rm \
  -v "$PWD:/workspace" \
  -w /workspace \
  swift:bionic \
  /bin/bash -cl "swift build -c release --static-swift-stdlib"
```

Note we are bind-mounting the source directory so that the build writes the build artifacts to the local drive, from which we will package them later.

Next, we can create a staging area with the application's executable:

```bash
$ docker run --rm \
  -v "$PWD:/workspace" \
  -w /workspace \
  swift:bionic \
  /bin/bash -cl ' \
    rm -rf .build/install && mkdir -p .build/install && \
    cp -P .build/release/<executable-name> .build/install/'
```

Note this command could be combined with the build command above; we separated them to make the example more readable.

Finally, create a tarball from the staging directory:

```bash
$ tar cvzf <app-name>-<version>.tar.gz -C .build/install .
```

We can test the integrity of the tarball by extracting it to a directory and running the application in a Docker runtime container:

```bash
$ cd <extraction-directory>
$ docker run -v "$PWD:/app" -w /app ubuntu:bionic ./<executable-name>
```

Deploying the application's tarball to the target server can be done using utilities like `scp`, or, in a more sophisticated setup, using a configuration management system like `chef`, `puppet`, `ansible`, etc.

## Source Distribution

Another distribution technique, popular with dynamic languages like Ruby or JavaScript, is distributing the source to the server, then compiling it on the server itself.

To build Swift applications directly on the server, the server must have the correct Swift toolchain installed. [Swift.org](https://swift.org/download/#linux) publishes toolchains for a variety of Linux distributions; make sure to use the one matching your server's Linux version and desired Swift version.

The main advantage of this approach is that it is easy.
An additional advantage is that the server has the full toolchain (e.g. the debugger), which can help troubleshoot issues "live" on the server.

The main disadvantage of this approach is that the server has the full toolchain (e.g. the compiler), which means a sophisticated attacker can potentially find ways to execute code. They can also potentially gain access to the source code, which might be sensitive. If the application code needs to be cloned from a private or protected repository, the server needs access to credentials, which adds additional attack surface area. In most cases, source distribution is not advised due to these security concerns.

## Static linking and Curl/XML

**Note:** if you are compiling with `-static-stdlib` and using Curl with `FoundationNetworking` or XML with `FoundationXML`, you must have `libcurl` and/or `libxml2` installed on the target system for it to work.

================================================ FILE: docs/performance.md ================================================

# Debugging Performance Issues

First of all, it's very important to make sure that you compiled your Swift code in _release mode_. The performance difference between debug and release builds is huge in Swift. You can compile your Swift code in release mode using

    swift build -c release

## Instruments

If you can reproduce your performance issue on macOS, you probably want to check out Instruments' [Time Profiler](https://developer.apple.com/videos/play/wwdc2016/418/).

## Flamegraphs

[Flamegraphs](http://www.brendangregg.com/flamegraphs.html) are a nice way to visualise which stack frames were running for what percentage of the time. That often helps pinpoint the areas of your program that need improvement. Flamegraphs can be created on most platforms; in this document we will focus on Linux.
### Flamegraphs on Linux

To have something to discuss, let's use a program that has a pretty big performance problem:

```swift
/* a terrible data structure which has a subset of the operations that Swift's
 * array does:
 *  - retrieving elements by index
 *    --> user's reasonable performance expectation: O(1) (like Swift's Array)
 *    --> implementation's actual performance: O(n)
 *  - adding elements
 *    --> user's reasonable performance expectation: amortised O(1) (like Swift's Array)
 *    --> implementation's actual performance: O(n)
 *
 * ie. the problem I'm trying to demo here is that this is an implementation
 * where the user would expect (amortised) constant time access but in reality
 * is linear time.
 */
struct TerribleArray<T> {
    /* this is a terrible idea: storing the index inside of the array (so we can
     * waste some performance later ;) */
    private var storage: Array<(Int, T)> = Array()

    /* oh my */
    private func maximumIndex() -> Int {
        return (self.storage.map { $0.0 }.max()) ?? -1
    }

    /* expectation: amortised O(1) but implementation is O(n) */
    public mutating func append(_ value: T) {
        let maxIdx = self.maximumIndex()
        self.storage.append((maxIdx + 1, value))
        assert(self.storage.count == maxIdx + 2)
    }

    /* expectation: O(1) but implementation is O(n) */
    public subscript(index: Int) -> T?
    {
        get {
            return self.storage.filter({ $0.0 == index }).first?.1
        }
    }
}

protocol FavouriteNumbers {
    func addFavouriteNumber(_ number: Int)
    func isFavouriteNumber(_ number: Int) -> Bool
}

public class MyFavouriteNumbers: FavouriteNumbers {
    private var storage: TerribleArray<Int>

    public init() {
        self.storage = TerribleArray()
    }

    /* - user's expectation: O(n)
     * - reality: O(n^2) because of TerribleArray */
    public func isFavouriteNumber(_ number: Int) -> Bool {
        var idx = 0
        var found = false
        while true {
            if let storageNum = self.storage[idx] {
                if number == storageNum {
                    found = true
                    break
                }
            } else {
                break
            }
            idx += 1
        }
        return found
    }

    /* - user's expectation: amortised O(1)
     * - reality: O(n) because of TerribleArray */
    public func addFavouriteNumber(_ number: Int) {
        self.storage.append(number)
        precondition(self.isFavouriteNumber(number))
    }
}

let x: FavouriteNumbers = MyFavouriteNumbers()
for f in 0..<2_000 {
    x.addFavouriteNumber(f)
}
```

The above program contains the `TerribleArray` data structure which has _O(n)_ appends and not the amortised _O(1)_ that users are used to from `Array`. We will assume that you have Linux's `perf` installed and configured; documentation on how to install `perf` can be found in [this guide](linux-perf.md). Let's assume we have compiled the above code using `swift build -c release` into a binary called `./slow`.
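Producing that binary could look like the following sketch (the executable name `slow` is an assumption; SwiftPM names the binary after your executable target):

```sh
# Build in release mode so the optimiser is enabled
swift build -c release
# SwiftPM places the binary under .build/release; copy it to the name used below
cp .build/release/slow ./slow
```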
We also assume that the `https://github.com/brendangregg/FlameGraph` repository is cloned in `~/FlameGraph`:

```
# Step 1: Record the stack frames with a 99 Hz sampling frequency
sudo perf record -F 99 --call-graph dwarf -- ./slow

# Alternatively, to attach to an existing process use
# sudo perf record -F 99 --call-graph dwarf -p PID_OF_SLOW
# or if you don't know the pid, you can try (assuming your binary name is "slow")
# sudo perf record -F 99 --call-graph dwarf -p $(pgrep slow)

# Step 2: Export the recording into `out.perf`
sudo perf script > out.perf

# Step 3: Aggregate the recorded stacks and demangle the symbols
~/FlameGraph/stackcollapse-perf.pl out.perf | swift demangle > out.folded

# Step 4: Export the result into an SVG file
~/FlameGraph/flamegraph.pl out.folded > out.svg
```

The resulting file will look something like:

![](../images/perf-issues-flamegraph.svg)

And we can see that almost all of our runtime is spent in `isFavouriteNumber`, which is invoked from `addFavouriteNumber`. That should be a very good hint to the programmer on where to look for improvements. Maybe, after all, we should use a `Set` to store the favourite numbers; that should get us an answer to whether a number is a favourite number in constant time (_O(1)_).

## Alternate `malloc` libraries

For some workloads putting serious pressure on the memory allocation subsystem, it may be beneficial to use a custom `malloc` library. It requires no changes to the code, but the library needs to be interposed, e.g. via an environment variable, before running your server. It is worth benchmarking with both the default and a custom memory allocator to see how much it helps for the specific workload. There are many `malloc` implementations out there, but a portable and well-performing one is [Microsoft's mimalloc](https://github.com/microsoft/mimalloc).
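As a sketch of how you might build it from source (the exact commands and output path are assumptions; consult the mimalloc README for authoritative instructions):

```sh
# Build the mimalloc shared library (requires git, cmake and a C compiler)
git clone https://github.com/microsoft/mimalloc.git
cmake -S mimalloc -B mimalloc/build -DCMAKE_BUILD_TYPE=Release
cmake --build mimalloc/build
# The shared library (e.g. libmimalloc.so) is now in mimalloc/build
```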
Typically these are simply enabled by using `LD_PRELOAD`:

```
LD_PRELOAD=/usr/bin/libmimalloc.so myprogram
```

================================================
FILE: docs/setup-and-ide-alternatives.md
================================================

## Installing Swift

The [supported platforms](https://swift.org/platform-support/) for running Swift on the server and the [ready-built tools packages](https://swift.org/download/) are all hosted on swift.org, together with installation instructions. There you can also find the [language reference documentation](https://swift.org/documentation/).

## IDEs/Editors with Swift Support

A number of editors you may already be familiar with have support for writing Swift code. Here we provide a non-exhaustive list of such editors and relevant plugins/extensions, sorted alphabetically.

* [Atom IDE support](https://atom.io/packages/ide-swift)
* [Atomic Blonde](https://atom.io/packages/atomic-blonde), a SourceKit-based syntax highlighter for Atom
* [CLion](https://www.jetbrains.com/help/clion/swift.html)
* [Emacs plugin](https://github.com/swift-emacs/swift-mode)
* [VIM plugin](https://github.com/keith/swift.vim)
* [Visual Studio Code](https://code.visualstudio.com)
  * [Swift for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=sswg.swift-lang)
* [Xcode](https://developer.apple.com/xcode/ide/)

## Language Server Protocol (LSP) Support

The [SourceKit-LSP project](https://github.com/apple/sourcekit-lsp) provides a Swift implementation of the [Language Server Protocol](https://microsoft.github.io/language-server-protocol/), which provides features such as code completion and jump-to-definition in supported editors. The project has both an [extensive list of editors that support it](https://github.com/apple/sourcekit-lsp/tree/main/Editors) and setup instructions for those editors, including many of those listed above.

_Do you know about another IDE or IDE plugin that is missing?
Please submit a PR to add them here!_

================================================
FILE: docs/testing.md
================================================

# Testing

SwiftPM is integrated with [XCTest, Apple's unit test framework](https://developer.apple.com/documentation/xctest). Running `swift test` from the terminal, or triggering the test action in your IDE (Xcode or similar), will run all of your XCTest test cases. Test results will be displayed in your IDE or printed out to the terminal.

A convenient way to test on Linux is using Docker. For example:

`$ docker run -v "$PWD:/code" -w /code swift:latest swift test`

The above command will run the tests using the latest Swift Docker image, utilizing bind mounts to the sources on your file system.

Swift supports architecture-specific code. By default, Foundation imports architecture-specific libraries like Darwin or Glibc. While developing on macOS, you may end up using APIs that are not available on Linux. Since you are most likely to deploy a cloud service on Linux, it is critical to test on Linux.

A historically important detail about testing for Linux is the `Tests/LinuxMain.swift` file:

- In Swift 5.4 and newer, tests are automatically discovered on all platforms; no special file or flag is needed.
- In Swift versions 5.1 up to (but not including) 5.4, tests can be automatically discovered on Linux using the `swift test --enable-test-discovery` flag.
- In Swift versions older than 5.1, the `Tests/LinuxMain.swift` file provides SwiftPM with an index of all the tests it needs to run on Linux, and it is critical to keep this file up-to-date as you add more unit tests. To regenerate this file, run `swift test --generate-linuxmain` after adding tests. It is also a good idea to include this command as part of your continuous integration setup.

### Testing for production

- For Swift versions between 5.1 and 5.4, always test with `--enable-test-discovery` to avoid forgetting tests on Linux.
- Make use of the sanitizers.
  Before running code in production, and preferably as a regular part of your CI process, do the following:
  * Run your test suite with TSan (thread sanitizer): `swift test --sanitize=thread`
  * Run your test suite with ASan (address sanitizer): `swift test --sanitize=address` and `swift test --sanitize=address -c release -Xswiftc -enable-testing`
- Generally, whilst testing, you may want to build using `swift build --sanitize=thread`. The binary will run slower and is not suitable for production, but you might be able to catch threading issues early, before you deploy your software. Threading issues are often hard to debug and reproduce, and can cause seemingly random problems; TSan helps catch them early.

================================================
FILE: docs/ubuntu.md
================================================

# Deploying on Ubuntu

Once you have your Ubuntu virtual machine ready, you can deploy your Swift app. This guide assumes you have a fresh install with a non-root user named `swift`. It also assumes both `root` and `swift` are accessible via SSH. For information on setting this up, check out the platform guides:

- [DigitalOcean](digital-ocean.md)

The [packaging](packaging.md) guide provides an overview of available deployment options. This guide takes you through each deployment option step-by-step for Ubuntu specifically. These examples will deploy SwiftNIO's [example HTTP server](https://github.com/apple/swift-nio/tree/master/Sources/NIOHTTP1Server), but you can test with your own project.

- [Binary Deployment](#binary-deployment)
- [Source Deployment](#source-deployment)

## Binary Deployment

This section shows you how to build your app locally and deploy just the binary.

### Build Binaries

The first step is to build your app locally. The easiest way to do this is with Docker. For this example, we'll be deploying SwiftNIO's demo HTTP server. Start by cloning the repository.
```sh
git clone https://github.com/apple/swift-nio.git
cd swift-nio
```

Once inside the project folder, use the following command to build the app through Docker and copy all build artifacts into `.build/install`. Since this example will be deploying to Ubuntu 18.04, the `-bionic` Docker image is used to build.

```sh
docker run --rm \
  -v "$PWD:/workspace" \
  -w /workspace \
  swift:5.2-bionic \
  /bin/bash -cl ' \
    swift build && \
    rm -rf .build/install && mkdir -p .build/install && \
    cp -P .build/debug/NIOHTTP1Server .build/install/ && \
    cp -P /usr/lib/swift/linux/lib*so* .build/install/'
```

> Tip: If you are building this project for production, use `swift build -c release`; see [building for production](building.md#building-for-production) for more information.

Notice that Swift's shared libraries are being included. This is important since Swift is not ABI stable on Linux. This means Swift programs must run against the shared libraries they were compiled with.

After your project is built, use the following command to create an archive for easy transport to the server.

```sh
tar cvzf hello-world.tar.gz -C .build/install .
```

Next, use `scp` to copy the archive to the deploy server's home folder.

```sh
scp hello-world.tar.gz swift@<server-ip>:~/
```

Once the copy is complete, log in to the deploy server.

```sh
ssh swift@<server-ip>
```

Create a new folder to hold the app binaries and decompress the archive.

```sh
mkdir hello-world
tar -xvf hello-world.tar.gz -C hello-world
```

You can now start the executable. Supply the desired IP address and port. Binding to port `80` requires sudo, so we use `8080` instead.

[TODO]: <> (Link to Nginx guide once available for serving on port 80)

```sh
./hello-world/NIOHTTP1Server 8080
```

You may need to install additional system libraries like `libxml` or `tzdata` if your app uses Foundation.
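For example, on Ubuntu 18.04 installing common Foundation runtime dependencies might look like this (the package list is an assumption; your app may need more or fewer):

```sh
sudo apt-get update
sudo apt-get install -y libcurl4 libxml2 tzdata
```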
The system dependencies installed by Swift's slim Docker images are a [good reference](https://github.com/apple/swift-docker/blob/master/5.2/ubuntu/18.04/slim/Dockerfile).

Finally, visit your server's IP via browser or local terminal and you should see a response.

```
$ curl http://<server-ip>:8080
Hello world!
```

Use `CTRL+C` to quit the server. Congratulations on getting your Swift server app running on Ubuntu!

## Source Deployment

This section shows you how to build and run your project on the deployment server.

## Install Swift

Now that you've created a new Ubuntu server, you can install Swift. You must be logged in as `root` (or a separate user with `sudo` access) to do this.

```sh
ssh root@<server-ip>
```

### Swift Dependencies

Install Swift's required dependencies.

```sh
sudo apt update
sudo apt install clang libicu-dev build-essential pkg-config
```

### Download Toolchain

This guide will install Swift 5.2. Visit the [Swift Downloads](https://swift.org/download/#releases) page for a link to the latest release. Copy the download link for Ubuntu 18.04.

![Download Swift](../images/swift-download-ubuntu-18-copy-link.png)

Download and decompress the Swift toolchain.

```sh
wget https://swift.org/builds/swift-5.2-release/ubuntu1804/swift-5.2-RELEASE/swift-5.2-RELEASE-ubuntu18.04.tar.gz
tar xzf swift-5.2-RELEASE-ubuntu18.04.tar.gz
```

> Note: Swift's [Using Downloads](https://swift.org/download/#using-downloads) guide includes information on how to verify downloads using PGP signatures.

### Install Toolchain

Move Swift somewhere easy to access. This guide will use `/swift` with each compiler version in a subfolder.

```sh
sudo mkdir /swift
sudo mv swift-5.2-RELEASE-ubuntu18.04 /swift/5.2.0
```

Add Swift to `/usr/bin` so it can be executed by the `swift` and `root` users.

```sh
sudo ln -s /swift/5.2.0/usr/bin/swift /usr/bin/swift
```

Verify that Swift was installed correctly.

```sh
swift --version
```

## Setup Project

Now that Swift is installed, let's clone and compile your project.
For this example, we'll be using SwiftNIO's [example HTTP server](https://github.com/apple/swift-nio/tree/master/Sources/NIOHTTP1Server). First, let's install SwiftNIO's system dependencies.

```sh
sudo apt-get install zlib1g-dev
```

### Clone & Build

Now that we're done installing things, we can switch to a non-root user to build and run our application.

```sh
su swift
cd ~
```

Clone the project, then use `swift build` to compile it.

```sh
git clone https://github.com/apple/swift-nio.git
cd swift-nio
swift build
```

> Tip: If you are building this project for production, use `swift build -c release`; see [building for production](building.md#building-for-production) for more information.

### Run

Once the project has finished compiling, run it on your server's IP at port `8080`.

```sh
.build/debug/NIOHTTP1Server 8080
```

If you used `swift build -c release`, then you need to run:

```sh
.build/release/NIOHTTP1Server 8080
```

Visit your server's IP via browser or local terminal and you should see a response.

```
$ curl http://<server-ip>:8080
Hello world!
```

Use `CTRL+C` to quit the server. Congratulations on getting your Swift server app running on Ubuntu!
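To keep the server running after you log out and across reboots, you could manage it with systemd. A minimal unit-file sketch (the unit name, binary path, and user are assumptions based on this guide's layout):

```
[Unit]
Description=Swift example HTTP server
After=network.target

[Service]
User=swift
ExecStart=/home/swift/swift-nio/.build/release/NIOHTTP1Server 8080
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Save it as e.g. `/etc/systemd/system/hello-world.service`, then enable and start it with `sudo systemctl enable --now hello-world`.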