Repository: BauplanLabs/quack-reduce Branch: main Commit: aac1326941c1 Files: 22 Total size: 42.0 KB Directory structure: gitextract_ete3vhl4/ ├── .gitignore ├── LICENSE ├── README.md └── src/ ├── Makefile ├── benchmark.py ├── dashboard/ │ ├── dashboard.py │ └── dbt/ │ ├── analysis/ │ │ └── .gitkeep │ ├── dbt_project.yml │ ├── macros/ │ │ └── .gitkeep │ ├── models/ │ │ └── taxi/ │ │ ├── top_pickup_locations.sql │ │ └── trips_by_pickup_location.sql │ ├── snapshots/ │ │ └── .gitkeep │ └── tests/ │ └── .gitkeep ├── data/ │ └── .gitkeep ├── local.env ├── package.json ├── quack.py ├── requirements.txt ├── run_me_first.py ├── serverless/ │ ├── Dockerfile │ └── app.py └── serverless.yml ================================================ FILE CONTENTS ================================================ ================================================ FILE: .gitignore ================================================ # Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] *$py.class # C extensions *.so # Distribution / packaging .Python build/ develop-eggs/ dist/ downloads/ eggs/ .eggs/ lib/ lib64/ parts/ sdist/ var/ wheels/ pip-wheel-metadata/ share/python-wheels/ *.egg-info/ .installed.cfg *.egg MANIFEST # PyInstaller # Usually these files are written by a python script from a template # before PyInstaller builds the exe, so as to inject date/other infos into it. *.manifest *.spec # Installer logs pip-log.txt pip-delete-this-directory.txt # Unit test / coverage reports htmlcov/ .tox/ .nox/ .coverage .coverage.* .cache nosetests.xml coverage.xml *.cover *.py,cover .hypothesis/ .pytest_cache/ # Translations *.mo *.pot # Django stuff: *.log local_settings.py db.sqlite3 db.sqlite3-journal # Flask stuff: instance/ .webassets-cache # Scrapy stuff: .scrapy # Sphinx documentation docs/_build/ # PyBuilder target/ # Jupyter Notebook .ipynb_checkpoints # IPython profile_default/ ipython_config.py # pyenv .python-version # pipenv # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. # However, in case of collaboration, if having platform-specific dependencies or dependencies # having no cross-platform support, pipenv may install dependencies that don't work, or not # install all needed dependencies. #Pipfile.lock # PEP 582; used by e.g. github.com/David-OConnor/pyflow __pypackages__/ # Celery stuff celerybeat-schedule celerybeat.pid # SageMath parsed files *.sage.py # Environments .env .venv env/ venv/ ENV/ env.bak/ venv.bak/ # Spyder project settings .spyderproject .spyproject # Rope project settings .ropeproject # mkdocs documentation /site # mypy .mypy_cache/ .dmypy.json dmypy.json # Pyre type checker .pyre/ *.parquet .serverless/ node_modules/ .DS_Store .node-version .python-version ================================================ FILE: LICENSE ================================================ MIT License Copyright (c) 2023 Bauplan Labs Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

================================================ FILE: README.md ================================================

# Quack-reduce

A playground for running duckdb as a stateless query engine over a data lake. The idea is to have a zero-maintenance, [very fast](https://www.loom.com/share/96f1fd938c814d0a825facb215546f03) and almost free data engine for small analytics apps.

This repo is the companion code for this [blog post](https://towardsdatascience.com/a-serverless-query-engine-from-spare-parts-bd6320f10353). Please refer to the blog post for more background information and details on the use case.

## Quick Start ..ε=(。ノ・ω・)ノ

If you have read the [blog post](https://towardsdatascience.com/a-serverless-query-engine-from-spare-parts-bd6320f10353) and already know what we are up to, follow the quick setup steps below to run everything in no time.

### Setup your account

Make sure you have:

- A working AWS account and an access key with [sufficient privileges to deploy a lambda instance](https://www.serverless.com/framework/docs/providers/aws/guide/credentials) -- this could be the `AdministratorAccess` policy in AWS IAM, or something more fine-grained;
- [Docker](https://docs.docker.com/get-docker/) installed and running on your machine;
- Python 3.9+ and Node.js properly installed on your machine;
- A dbt `profiles.yml` file on your local machine to run the dbt project.

In the `src` folder, you should copy `local.env` to `.env` (do *not* commit it) and fill it with proper values:

| value | type | description | example |
|-----------------------|------|-------------------------------------------|--------------------------:|
| AWS_ACCESS_KEY_ID | str | User key for AWS access | AKIAIO... |
| AWS_SECRET_ACCESS_KEY | str | Secret key for AWS access | wJalr/... |
| AWS_DEFAULT_REGION | str | AWS region for the deployment | us-east-1 |
| S3_BUCKET_NAME | str | Bucket to host the data (must be unique) | my-duck-bucket-130dcqda0u |

These variables will be used by the setup script and the runner to communicate with AWS services. Make sure the user has the permissions to:

- create a bucket and upload files to it;
- invoke the lambda below.

### Run the project

From the `src` folder:

>**1. Create the DuckDB Lambda:** run `make nodejs-init` and then `make serverless-deploy`. Note that `src/serverless.yml` is configured to use `arm64`. This does a local Docker build, so if you're on an `x86_64` machine, it will fail: replace `arm64` with `x86_64` as needed. After deployment, you can test that the lambda is working from the [console](https://www.loom.com/share/97785a387af84924b830b9e0f35d8a1e).

>**2. Build the Python env:** run `make python-init`.

>**3. Download the data and upload it to S3:** run `make run_me_first` (check your S3 bucket and make sure you find a `partitioned` folder with [this structure](images/s3.png)).

>**4. Test the serverless query engine:** run `make test`.
>**5. Set up your dbt profile:** to run dbt locally, set up a dbt [profile](https://docs.getdbt.com/docs/core/connection-profiles) named `duckdb-taxi` (see [here](https://github.com/jwills/dbt-duckdb) for examples):

```yaml
# ~/.dbt/profiles.yml
duckdb-taxi:
  outputs:
    dev:
      type: duckdb
      path: ':memory:'
      extensions:
        - httpfs
        - parquet
  target: dev
```

>**6. Run the dbt project:** run `make dbt-run`.

>**7. Run the Analytics app:** run `make dashboard`. Note that every time the input field in the dashboard changes, we run a full round-trip on our engine behind the scenes: it can be *this* fast!

***

## Project Overview

If you want to give it a try, follow the instructions in the `Quick Start` section above to get the system up and running. The rest of the README explores the various components in more detail:

* the lambda function running the queries;
* interactions from a local script;
* a serverless BI application;
* running massively parallel workloads through the engine.

> NOTE: this project (including this README!) is written for pedagogical purposes and is not production-ready (or even well tested!): our main goal is to provide a reference implementation of a few key concepts as a starting point for future projects - so, sorry for being a bit verbose and perhaps pedantic at times.

### Duckdb lambda

The `src/serverless` folder is a standard [serverless](https://www.serverless.com/framework/) project, to build an AWS lambda function with Duckdb on it. It has three main components:

- a Dockerfile, which starts from the public AWS lambda image for Python (`public.ecr.aws/lambda/python:3.9`) and adds the few dependencies we need;
- an `app.py` file, containing the actual code our lambda will execute;
- a `../serverless.yml` file, which ties all these things together in infra-as-code fashion, and allows us to deploy and manage the function from the CLI.

The cloud setup is done for you when you run `make nodejs-init` and `make serverless-deploy` (Step 1 in the setup list above). The first time, deployment will take a while as it needs to create the image, ship it to AWS and [create the stack](images/serverless.png) - note that this is _a "one-off" thing_.

> NOTE: you may get a `403 Forbidden` error when building the image: in our experience, this usually goes away with `aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws`.

### Interacting with the engine

We can use a simple Python script to interact with our engine. First, we can test the system with a hard-coded query. Make sure you run Step 2 and Step 3 in the quick start list (to set up Python and the dataset): now we can verify everything is working with `make test`.

If all looks good, you can run arbitrary queries (replacing `MY_BUCKET_NAME` with your value) just by using the provided `quack.py` script; make sure to manually activate your venv with `source ./.venv/bin/activate`. For example, you can run `python quack.py -q "SELECT pickup_location_id AS location_id, COUNT(*) AS counts FROM read_parquet(['s3://MY_BUCKET_NAME/dataset/taxi_2019_04.parquet']) WHERE pickup_at >= '2019-04-01' AND pickup_at < '2019-04-03' GROUP BY 1 ORDER BY 2 DESC"` to get the most popular pickup location (IDs) for the first few days of April.

Since the amount of data that can be returned by a lambda is limited, the lambda will automatically cap the number of rows if you don't specify a limit in the script. You can get more data back with `python quack.py -q ... -limit 100`, but be mindful of the infrastructure constraints!
### Serverless BI architecture (Optional)

If you want to see how this architecture can bridge the gap between offline pipelines preparing artifacts and real-time querying for BI (or other use cases), you can simulate how a dbt project may prepare a view that is queryable in a dashboard through our engine (check our blog post for some more context on this use case).

The quickest setup is running dbt locally, so you need to set up a dbt [profile](https://docs.getdbt.com/docs/core/connection-profiles) named `duckdb-taxi` (see [here](https://github.com/jwills/dbt-duckdb) for examples):

```yaml
# ~/.dbt/profiles.yml
duckdb-taxi:
  outputs:
    dev:
      type: duckdb
      path: ':memory:'
      extensions:
        - httpfs
        - parquet
  target: dev
```

> NOTE: since we run dbt through `make`, there is no need to add credentials to the profile. If you prefer to run it manually, your dbt profile should look more like this:

```yaml
# ~/.dbt/profiles.yml
duckdb-taxi:
  outputs:
    dev:
      type: duckdb
      path: ':memory:'
      extensions:
        - httpfs
        - parquet
      settings:
        s3_region: us-east-1
        s3_access_key_id: YOUR_S3_USER
        s3_secret_access_key: YOUR_S3_KEY
  target: dev
```

After the dbt setup is completed, you can use the `make` file again to run a "batch pipeline" that produces an artifact in S3 from raw data: just type `make dbt-run` to materialize our view as a parquet file.

> NOTE: different warehouses would need different configurations to export the node to the same location, e.g. [Snowflake](https://docs.snowflake.com/en/user-guide/script-data-load-transform-parquet).

To run the front-end (a dashboard built with streamlit, querying the view we materialized) run `make dashboard`. A page should open in the browser, displaying a chart. You can use the form to interact in real time with the dataset (video [here](https://www.loom.com/share/9d5de3ba822a445d9d117225c1b0307f)), through the serverless infrastructure we built.

### From quack to quack-reduce (Optional)

The stateless execution of SQL over object storage (and therefore, using duckdb not really as a db, but basically as "just" a query engine), coupled with the parallel nature of AWS lambdas, opens up interesting optimization possibilities. In particular, we could rephrase (some) SQL queries through a map-reduce programming pattern *with other SQL queries*, and execute them all at the same time.

To consider a trivial example, a query such as:

`SELECT COUNT(*) FROM myTable WHERE DATE BETWEEN 04/01/2022 AND 04/05/2022`

can be rewritten as the SUM of the results of these smaller queries:

`SELECT COUNT(*) FROM myTable WHERE DATE BETWEEN 04/01/2022 AND 04/02/2022` + `SELECT COUNT(*) FROM myTable WHERE DATE BETWEEN 04/02/2022 AND 04/03/2022` + ...

As the number of files increases (as in a typical hive-partitioned data lake), scanning the object storage (in duckdb syntax, `parquet_scan('folder/', HIVE_PARTITIONING=1)`) may take much longer than reading the _k_ relevant files directly through (ideally) _k_ parallel functions, drastically improving query performance.
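To make the rewrite concrete, here is a minimal, sequential sketch of the idea: it re-uses the `invoke_lambda` helper from `quack.py` and the `partitioned/date=...` layout created by `run_me_first.py`, with the bucket name as a placeholder. The actual, parallel implementation shipped with the repo is `benchmark.py`.

```python
# Sketch only: split a COUNT over a date range into per-day queries,
# send each one to the duckdb lambda, and sum the partial counts locally.
import json

from quack import invoke_lambda  # re-uses the boto3 client set up in quack.py

BUCKET = "MY_BUCKET_NAME"  # placeholder: use your S3_BUCKET_NAME


def count_trips(start_day: int, end_day: int) -> int:
    total = 0
    for day in range(start_day, end_day):  # "map" step: one query per daily partition
        path = f"s3://{BUCKET}/partitioned/date=2019-04-{day:02d}/*.parquet"
        query = f"SELECT COUNT(*) AS counts FROM parquet_scan('{path}')"
        response = invoke_lambda(json.dumps({"q": query, "limit": 1}))
        total += response["data"]["records"][0]["counts"]  # "reduce" step: sum partial counts
    return total


print(count_trips(1, 5))  # trips from 2019-04-01 up to (but not including) 2019-04-05
```

The loop above is sequential for readability: the speed-up comes from firing the per-day lambdas concurrently, which is what `benchmark.py` does with `fastcore.parallel`.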
To test this hypothesis, we built a script that compares the same engine across different deployment patterns - local, remote, etc. You can run the benchmarks with default values with `make benchmark`. The script is minimal, but should be enough to give you a feeling of how the different setups perform compared to each other, and of the trade-offs involved (check the code for how it's built, but don't expect much!). [A typical run](https://www.loom.com/share/18a060b89a6a4f6d814e06ffa2674b13) will result in something like this [table](images/benchmarks.png) (numbers will vary).

Please refer to the blog post for more musings on this opportunity (and the non-trivial associated challenges).

> NOTE: if you have never raised your concurrency limits on AWS lambda, you may need to request an increase in concurrent executions through the console, otherwise AWS will not allow the function to scale out.

## What's next?

If you like what you've seen so far, you may wonder what you could do next! There are a million ways to improve this design, some more obvious than others - as a non-exhaustive list ("left as an exercise to the reader"), this is where we would start:

* if you always query the same table (say, a view for your dashboard), you may want to leverage the `cold` / `warm` pattern in the lambda code to store the table in memory when cold, and read from there (instead of parquet) when warm (see the sketch at the end of this README);
* when you move from one file to multiple files, scanning parquet folders is a huge overhead: wouldn't it be nice to know where to look? While HIVE partitioning is great, modern table formats (e.g. Iceberg) are even better, so you could think of combining their table scan properties with our serverless engine. Performance aside, if you have queried `quack.py`, you know how tedious it is to remember the full file name every time: leveraging catalogs like Iceberg, Glue, Nessie etc. would make the experience more "database-like";
* try out other use cases! For example, consider this recent [event collection](https://github.com/fal-ai/fal-events) platform. If you modify it to a dump-to-s3-then-query pattern (leveraging the engine we built with this repo), you end up with a lambda-only version of the [Snowflake architecture](https://github.com/jacopotagliabue/paas-data-ingestion) we open sourced some time ago - an end-to-end analytics platform running without servers;
* while we now run the query in memory and return a subset of rows from the lambda, this pattern is certainly not perfect: on the one hand, sometimes we may wish to write back the result of a query (dbt-style, so to speak); on the other, even if analytics queries are often aggregates, result tables may still grow big (row-wise): writing them to s3 and having the client stream rows back from there may be a nice feature to add!

## License

All the code is released without warranty, "as is", under an MIT License. This started as a fun weekend project and should be treated with the appropriate sense of humour.
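As a parting illustration of the first item in the "What's next" list above, a hedged sketch of the `cold` / `warm` caching idea could look like the snippet below. This is *not* the repo's `serverless/app.py`: the table name and parquet path are placeholders, and the httpfs credential setup (`SET s3_region`, etc.) is omitted for brevity - see `app.py` for the real connection code.

```python
# Sketch only: cache a frequently-queried table in the warm lambda container.
import duckdb

con = None  # module-level objects survive across warm invocations of the same container


def handler(event, context):
    global con
    if con is None:
        # cold start: pay the S3 read once and keep the table in memory
        con = duckdb.connect(database=':memory:')
        con.execute("LOAD httpfs;")  # credentials / region setup omitted in this sketch
        con.execute(
            "CREATE TABLE hot_view AS "
            "SELECT * FROM read_parquet('s3://MY_BUCKET_NAME/dashboard/my_view.parquet')"
        )
    # warm invocations hit the in-memory table and never touch S3
    df = con.execute(event.get('q', 'SELECT * FROM hot_view LIMIT 10')).df()
    return df.to_dict('records')
```

The trade-off is that the cached table can go stale: you would need to redeploy the function (or add some invalidation logic) whenever the upstream dbt run refreshes the view.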
================================================ FILE: src/Makefile ================================================ include .env # need bash because we use the "source" command (otherwise it fails when make defaults to /bin/sh) SHELL=bash nodejs-init: npm install .PHONY: nodejs-init serverless-deploy: npx serverless deploy .PHONY: serverless-deploy python-init: python3 -m venv ./.venv && source ./.venv/bin/activate && pip install -r requirements.txt .PHONY: python-init run_me_first: source ./.venv/bin/activate && python3 run_me_first.py .PHONY: run_me_first test: source ./.venv/bin/activate && python3 quack.py .PHONY: test test-distinct: source ./.venv/bin/activate && python3 quack.py -q "SELECT pickup_location_id AS location_id, COUNT(*) AS counts FROM read_parquet(['s3://${S3_BUCKET_NAME}/dataset/taxi_2019_04.parquet']) WHERE pickup_at >= '2019-04-01' AND pickup_at < '2019-04-03' GROUP BY 1 ORDER BY 2 DESC" .PHONY: test-distinct benchmark: source ./.venv/bin/activate && python3 benchmark.py .PHONY: benchmark dbt-run: source ./.venv/bin/activate && cd dashboard/dbt && S3_BUCKET_NAME=${S3_BUCKET_NAME} dbt run .PHONY: dbt-run dbt-docs: source ./.venv/bin/activate && cd dashboard/dbt && S3_BUCKET_NAME=${S3_BUCKET_NAME} dbt docs generate && dbt docs serve .PHONY: dbt-docs dashboard: source ./.venv/bin/activate && cd dashboard && streamlit run dashboard.py .PHONY: dashboard ================================================ FILE: src/benchmark.py ================================================ """ This Python script benchmarks the performance of running queries using duckdb as engine and an object storage as data source. The script is minimal and not very configurable, but should be enough to give you a feeling of how the different setups perform and the trade-offs involved. Please note we basically start with 2019-04-01 and based on the -d flag we add days to the date. So by increasing -d you will increase the amount of data to be processed. The map reduce version manually unpacks the queries into queries-by-date and then runs them in parallel. Due to cold start, the first run of the serverless version may be slower than the others, so you should re-run the same script multiple times to get a better idea of the performance. 
""" import os import duckdb import json import statistics import time from fastcore.parallel import parallel from quack import invoke_lambda, display_table from dotenv import load_dotenv from collections import defaultdict from rich.console import Console # get the environment variables from the .env file load_dotenv() def run_benchmarks( bucket: str, repetitions: int, threads: int, days: int, is_debug: bool = False ): test_location_id = 237 execution_times = [] # NOTE: as usual we re-use the same naming convention as in the setup script # and all the others partitioned_dataset_scan = f"s3://{bucket}/partitioned/*/*.parquet" # run the map reduce version repetition_times = [] print("\n====> Running map reduce version") for i in range(repetitions): start_time = time.time() map_reduce_results = run_map_reduce( bucket=bucket, days=days, threads=threads, is_debug=is_debug ) repetition_times.append(time.time() - start_time) time.sleep(3) execution_times.append({ 'type': 'map_reduce', 'mean': round(sum(repetition_times) / len(repetition_times), 3), 'std': round(statistics.stdev(repetition_times), 3), 'test location': map_reduce_results[test_location_id] }) # run the standard serverless version repetition_times = [] print("\n====> Running serverless duckdb") for i in range(repetitions): start_time = time.time() results = run_serverless_lambda( partitioned_dataset_scan=partitioned_dataset_scan, days=days, is_debug=is_debug ) repetition_times.append(time.time() - start_time) time.sleep(3) execution_times.append({ 'type': 'serverless', 'mean': round(sum(repetition_times) / len(repetition_times), 3), 'std': round(statistics.stdev(repetition_times), 3), 'test location': results[test_location_id] }) # run a local db querying the data lake repetition_times = [] print("\n====> Running local duckdb") for i in range(repetitions): start_time = time.time() # just re-use the code inside the lambda without thinking too much ;-) con = duckdb.connect(database=':memory:') con.execute(f""" INSTALL httpfs; LOAD httpfs; SET s3_region='{os.environ.get('AWS_DEFAULT_REGION', 'us-east-1')}'; SET s3_access_key_id='{os.environ['AWS_ACCESS_KEY_ID']}'; SET s3_secret_access_key='{os.environ['AWS_SECRET_ACCESS_KEY']}'; """) local_results = run_local_db( con=con, partitioned_dataset_scan=partitioned_dataset_scan, days=days, is_debug=is_debug ) del con repetition_times.append(time.time() - start_time) time.sleep(3) execution_times.append({ 'type': 'local', 'mean': round(sum(repetition_times) / len(repetition_times), 3), 'std': round(statistics.stdev(repetition_times), 3), 'test location': local_results[test_location_id] }) # make sure the results are the same assert results[test_location_id] == map_reduce_results[test_location_id] == local_results[test_location_id], "The results are not the same!" # display results in a table console = Console() display_table(console, execution_times, title="Benchmarks", color="cyan") # all done, say goodbye print("All done! 
See you, duck cowboy!") return def run_local_db( con, partitioned_dataset_scan: str, days: int, is_debug: bool ): single_query = """ SELECT pickup_location_id AS location_id, COUNT(*) AS counts FROM parquet_scan('{}', HIVE_PARTITIONING=1) WHERE DATE >= '2019-04-01' AND DATE < '2019-04-{}' GROUP BY 1 """.format( partitioned_dataset_scan, "{:02d}".format(1 + days) ) if is_debug: print(single_query) # just re-use the code inside the lambda with no particular changes _df = con.execute(single_query).df() _df = _df.head(1000) records = _df.to_dict('records') return { row['location_id']: row['counts'] for row in records } def run_serverless_lambda( partitioned_dataset_scan: str, days: int, is_debug: bool ): single_query = """ SELECT pickup_location_id AS location_id, COUNT(*) AS counts FROM parquet_scan('{}', HIVE_PARTITIONING=1) WHERE DATE >= '2019-04-01' AND DATE < '2019-04-{}' GROUP BY 1 """.format( partitioned_dataset_scan, "{:02d}".format(1 + days) ) if is_debug: print(single_query) response = invoke_lambda(json.dumps({ 'q': single_query, 'limit': 1000})) if 'errorMessage' in response: print(response['errorMessage']) raise Exception("There was an error in the serverless invocation") records = response['data']['records'] return { row['location_id']: row['counts'] for row in records } def run_map_reduce( bucket: str, days: int, threads: int, is_debug: bool ): query = """ SELECT pickup_location_id AS location_id, COUNT(*) AS counts FROM read_parquet('{}', HIVE_PARTITIONING=1) WHERE DATE >= '2019-04-{}' AND DATE < '2019-04-{}' GROUP BY 1 """.strip() # prepare the queries for the map step queries = prepare_map_queries(query, bucket, days) if is_debug: print(queries[:3]) assert len(queries) == days, "The number of queries is not correct" # run the queries in parallel payloads = [json.dumps({'q': q, 'limit': 1000 }) for q in queries] _results = parallel( invoke_lambda, payloads, n_workers=threads) # check for errors in ANY response if any(['errorMessage' in response for response in _results]): print(next(response['errorMessage'] for response in _results if 'errorMessage' in response)) raise Exception("There was an error in the parallel invocation") # do the "reduce" step in code results = defaultdict(lambda: 0) # loop over the results for response in _results: records = response['data']['records'] for row in records: results[row['location_id']] += row['counts'] return results def prepare_map_queries( query: str, bucket: str, days: int ): # template for parquet scan queries = [] for i in range(1, days + 1): start_day_as_str = "{:02d}".format(i) end_day_as_str = "{:02d}".format(i + 1) parquet_scan = f"s3://{bucket}/partitioned/date=2019-04-{start_day_as_str}/*.parquet" queries.append(query.format(parquet_scan, start_day_as_str, end_day_as_str)) return queries if __name__ == "__main__": # make sure the envs are set assert 'S3_BUCKET_NAME' in os.environ, "Please set the S3_BUCKET_NAME environment variable" assert 'AWS_ACCESS_KEY_ID' in os.environ, "Please set the AWS_ACCESS_KEY_ID environment variable" assert 'AWS_SECRET_ACCESS_KEY' in os.environ, "Please set the AWS_SECRET_ACCESS_KEY environment variable" # get args from command line import argparse parser = argparse.ArgumentParser() parser.add_argument( "-n", type=int, help="number of repetitions", default=3) # note: without reserved concurrency, too much concurrency will cause errors parser.add_argument( "-t", type=int, help="concurrent queries for map reduce", default=20) parser.add_argument( "-d", type=int, help="number of days in April to query", 
default=10) parser.add_argument( "--debug", action="store_true", help="increase output verbosity", default=False) args = parser.parse_args() # run the main function run_benchmarks( bucket=os.environ['S3_BUCKET_NAME'], repetitions=args.n, threads=args.t, days=args.d, is_debug=args.debug ) ================================================ FILE: src/dashboard/dashboard.py ================================================ """ Simple dashboard for the taxi data based on Streamlit. It re-uses (through ugly imports) the code from the quack.py script, and uses seaborn to plot the data. """ import sys import os import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import streamlit as st from dotenv import load_dotenv # get the environment variables from the .env file load_dotenv('../.env') assert 'S3_BUCKET_NAME' in os.environ, "Please set the S3_BUCKET_NAME environment variable" S3_BUCKET_NAME = os.environ['S3_BUCKET_NAME'] # note: this is the same file we exported in the top_pickup_locations.sql query # as part of our data transformation pipeline PARQUET_FILE = f"s3://{S3_BUCKET_NAME}/dashboard/my_view.parquet" # import the querying function from the runner sys.path.insert(0,'..') from quack import fetch_all # build up the dashboard st.markdown("# Trip Dashboard") st.write("This dashboard shows KPIs for our taxi business.") st.header("Top pickup locations (map id) by number of trips") # hardcode the columns COLS = ['PICKUP_LOCATION_ID', 'TRIPS'] # get the total row count query = f"SELECT COUNT(*) AS C FROM read_parquet(['{PARQUET_FILE}'])" df, metadata = fetch_all(query, limit=1, display=False, is_debug=False) st.write(f"Total row count: {df['C'][0]}") # get the interactive chart base_query = f""" SELECT location_id AS {COLS[0]}, counts AS {COLS[1]} FROM read_parquet(['{PARQUET_FILE}']) """.strip() top_k = st.text_input('# of pickup locations', '5') # add a limit to the query based on the user input final_query = "{} LIMIT {};".format(base_query, top_k) df, metadata = fetch_all(final_query, limit=int(top_k), display=False, is_debug=False) # if no error is returned, we plot the data if df is not None: fig = plt.figure(figsize=(10,5)) sns.barplot( x = COLS[0], y = COLS[1], data = df, order=df.sort_values(COLS[1],ascending = False)[COLS[0]]) plt.xticks(rotation=70) plt.tight_layout() st.pyplot(fig) else: st.write("Sorry, something went wrong :-(") # display metadata st.write(f"Roundtrip ms: {metadata['roundtrip_time']}") st.write(f"Query exec.
time ms: {metadata['timeMs']}") st.write(f"Lambda is warm: {metadata['warm']}") ================================================ FILE: src/dashboard/dbt/analysis/.gitkeep ================================================ ================================================ FILE: src/dashboard/dbt/dbt_project.yml ================================================ name: 'taxi_dashboard' version: '1.0.0' config-version: 2 profile: 'duckdb-taxi' source-paths: ["models"] analysis-paths: ["analysis"] test-paths: ["tests"] data-paths: ["data"] macro-paths: ["macros"] snapshot-paths: ["snapshots"] target-path: "target" clean-targets: - "target" - "dbt_modules" models: taxi: foundation: +materialized: view ================================================ FILE: src/dashboard/dbt/macros/.gitkeep ================================================ ================================================ FILE: src/dashboard/dbt/models/taxi/top_pickup_locations.sql ================================================ {{ config(materialized='external', location="s3://{{ env_var('S3_BUCKET_NAME') }}/dashboard/my_view.parquet") }} SELECT location_id, counts FROM {{ ref('trips_by_pickup_location') }} ORDER BY 2 DESC LIMIT 200 ================================================ FILE: src/dashboard/dbt/models/taxi/trips_by_pickup_location.sql ================================================ SELECT pickup_location_id AS location_id, COUNT(*) AS counts FROM read_parquet(['s3://{{ env_var('S3_BUCKET_NAME') }}/dataset/taxi_2019_04.parquet']) GROUP BY 1 ================================================ FILE: src/dashboard/dbt/snapshots/.gitkeep ================================================ ================================================ FILE: src/dashboard/dbt/tests/.gitkeep ================================================ ================================================ FILE: src/data/.gitkeep ================================================ ================================================ FILE: src/local.env ================================================ AWS_ACCESS_KEY_ID= AWS_SECRET_ACCESS_KEY= AWS_DEFAULT_REGION=us-east-1 S3_BUCKET_NAME= ================================================ FILE: src/package.json ================================================ { "dependencies": { "serverless": "^3.30.1", "serverless-iam-roles-per-function": "^3.2.0" } } ================================================ FILE: src/quack.py ================================================ """ Python script to interact with the serverless architecture. Query and limit parameters can be passed through the command line, or the script can be run without parameters to check the status of the lambda (it will return the results from a pre-defined query). Check the README.md for more details. """ import os import time import boto3 import pandas as pd import json from rich.console import Console from rich.table import Table from dotenv import load_dotenv # get the environment variables from the .env file load_dotenv() # we don't allow to display more than 10 rows in the terminal MAX_ROWS_IN_TERMINAL = 10 # instantiate the boto3 client to communicate with the lambda lambda_client = boto3.client('lambda') def invoke_lambda(json_payload_as_str: str): """ Invoke our duckdb lambda function. 
Note that the payload is a string, so the method should be called with json.dumps(payload) """ response = lambda_client.invoke( # the name of the lambda function should match what you have in your console # if you did not change the serverless.yml file, it should be this one: FunctionName='quack-reduce-lambda-dev-duckdb', InvocationType='RequestResponse', LogType='Tail', Payload=json_payload_as_str ) # return the response as a dict return json.loads(response['Payload'].read().decode("utf-8")) def fetch_all( query: str, limit: int, display: bool=False, is_debug: bool=False ) -> pd.DataFrame: """ Get results from the lambda and display them """ if is_debug: print(f"Running query: {query}, with limit: {limit}") # run the query start_time = time.time() response = invoke_lambda(json.dumps({'q': query, 'limit': limit})) roundtrip_time = int((time.time() - start_time) * 1000.0) # check for errors first if 'errorMessage' in response: print(f"Error: {response['errorMessage']}") # just raise an exception for now, as we don't have proper error handling raise Exception(response['errorMessage']) # no error returned, display the results if is_debug: print(f"Debug response: {response}") rows = response['data']['records'] # add the roundtrip time to the metadata response['metadata']['roundtrip_time'] = roundtrip_time # display in the console if required if display: console = Console() display_query_metadata(console, response['metadata']) display_table(console, rows) # return the results as a pandas dataframe and metadata return pd.DataFrame(rows), response['metadata'] def display_query_metadata( console: Console, metadata: dict ): """ Display the metadata returned by the lambda - we receive a dictionary with a few properties (total time, echo of the query, is warm, etc.) """ # NOTE: we cut field values to 50 chars max, to avoid the table getting too wide values = [{ 'Field': k, 'Value': str(v)[:50] } for k, v in metadata.items()] display_table(console, values, title="Metadata", color="cyan") return def display_table( console: Console, rows: list, title: str="My query", color: str="green" ): """ We receive a list of rows, each row is a dict with the column names as keys. We use rich (https://rich.readthedocs.io/en/stable/tables.html) to display a nice table in the terminal. """ # build the table table = Table(title=title) # build the header cols = list(rows[0].keys()) for col in cols: table.add_column(col, justify="left", style=color, no_wrap=True) # add the rows for row in rows[:MAX_ROWS_IN_TERMINAL]: # NOTE: we need to render values as str table.add_row(*[str(row[col]) for col in cols]) # display the table console.print(table) return def runner( bucket: str, query: str=None, limit: int=10, is_debug: bool = False ): """ Run queries against our serverless (and stateless) database. We basically use duckdb not so much as a database, but as an engine, and use object storage to store artifacts, like tables. If query and limit are not specified, we overwrite them with sensible choices. """ # if no query is specified, we run a simple count to verify that the lambda is working if query is None: # NOTE: the file path, after the bucket, should be the same as the one we have # in the run_me_first.py script. If you changed it there, you should change it here
target_file = f"s3://{bucket}/dataset/taxi_2019_04.parquet" query = f"SELECT COUNT(*) AS COUNTS FROM read_parquet(['{target_file}'])" # since this is a test query, we force debug to be True rows, metadata = fetch_all(query, limit, display=True, is_debug=True) else: # run the query as it is rows, metadata = fetch_all(query, limit, display=True, is_debug=is_debug) return if __name__ == "__main__": assert 'S3_BUCKET_NAME' in os.environ, "Please set the S3_BUCKET_NAME environment variable" # get args from command line import argparse # declare basic arguments parser = argparse.ArgumentParser() parser.add_argument( "-q", type=str, help="query", default=None) parser.add_argument( "-limit", type=int, help="max rows to return from the lambda", default=10) parser.add_argument( "--debug", action="store_true", help="increase output verbosity", default=False) args = parser.parse_args() # run the main function runner( bucket=os.environ['S3_BUCKET_NAME'], query=args.q, limit=args.limit, is_debug=args.debug ) ================================================ FILE: src/requirements.txt ================================================ requests==2.28.2 streamlit==1.20.0 python-dotenv==1.0.0 fsspec==2023.4.0 s3fs==2023.4.0 dbt-duckdb==1.4.1 fastcore==1.5.27 boto3==1.26.3 matplotlib==3.6.3 seaborn==0.12.0 ================================================ FILE: src/run_me_first.py ================================================ """ Python script to run a one-time setup for testing the serverless duckdb architecture. Check the README.md for more details and for the prerequisites. """ import os import boto3 import requests import pandas as pd from dotenv import load_dotenv # get the environment variables from the .env file load_dotenv() def download_data(url: str, target_file: str): """ Download a file from a url and save it to a target file. """ r = requests.get(url) with open(target_file, 'wb') as f: f.write(r.content) return True def download_taxi_data(): """ Download the taxi data from the duckdb repo - if the file disappears, you can of course replace it with any other version of the same dataset. """ print('Downloading the taxi dataset') url = 'https://github.com/cwida/duckdb-data/releases/download/v1.0/taxi_2019_04.parquet' file_name = 'data/taxi_2019_04.parquet' download_data(url, file_name) return file_name def upload_file_to_bucket(s3_client, file_name, bucket, object_name=None): """ Upload a file to an S3 bucket. """ from botocore.exceptions import ClientError try: print(f"Uploading {object_name}") response = s3_client.upload_file(file_name, bucket, object_name) except ClientError as e: print(f"Error uploading file {file_name} to bucket {bucket} with error {e}") return False return True def upload_datasets(s3_client, bucket: str, taxi_dataset_path: str): """ Upload the datasets to the bucket, first as one parquet file, then as a directory of parquet files with hive partitioning. """ file_name = os.path.basename(taxi_dataset_path) # upload the file as is, a single parquet file in the dataset/ folder of the target bucket is_uploaded = upload_file_to_bucket( s3_client, taxi_dataset_path, bucket, object_name=f"dataset/{file_name}" ) is_uploaded = upload_partitioned_dataset( bucket, taxi_dataset_path ) return def upload_partitioned_dataset( bucket: str, taxi_dataset_path: str, partition_col: str = 'date' ): """ Use pandas to read the parquet file, then save it again as a directory on our s3 bucket.
The final directory will have a subdirectory for each value of the partition column, and each subdirectory will contain parquet files. """ df = pd.read_parquet(taxi_dataset_path) df[partition_col] = pd.to_datetime(df['pickup_at']).dt.date target_folder = os.path.join('s3://', bucket, 'partitioned') print(f"Saving data with hive partitioning ({partition_col}) in {target_folder}") df.to_parquet(target_folder, partition_cols=[partition_col]) return True def setup_project(): # check vars are ok assert 'S3_BUCKET_NAME' in os.environ, "Please set the S3_BUCKET_NAME environment variable" AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID') AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY') AWS_PROFILE = os.environ.get('AWS_PROFILE') assert (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) or AWS_PROFILE, "Please set the AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY (or the AWS_PROFILE) environment variables" # first download the data taxi_dataset_path = download_taxi_data() # upload the data to the bucket s3_client = boto3.client('s3') upload_datasets( s3_client, os.environ['S3_BUCKET_NAME'], taxi_dataset_path ) # all done print("All done! See you, duck cowboy!") return if __name__ == "__main__": setup_project() ================================================ FILE: src/serverless/Dockerfile ================================================ FROM public.ecr.aws/lambda/python:3.9 # Install pip and other dependencies RUN pip3 install --upgrade pip \ && yum install gcc gcc-c++ -y \ && pip3 install pandas==1.5.3 duckdb==0.7.1 --target "${LAMBDA_TASK_ROOT}" ENV HOME=/home/aws RUN mkdir /home/aws && python3 -c "import duckdb; duckdb.query('INSTALL httpfs;');" COPY app.py ${LAMBDA_TASK_ROOT} # Set the CMD to the lambda handler CMD [ "app.handler" ] ================================================ FILE: src/serverless/app.py ================================================ import uuid import os import time import duckdb import pandas as pd con = None # global conn object - we re-use this across calls DEFAULT_LIMIT = 20 # if we don't specify a limit, we will return at most 20 results def return_duckdb_connection(): """ Return a duckdb connection object """ duckdb_connection = duckdb.connect(database=':memory:') duckdb_connection.execute(f""" LOAD httpfs; SET s3_region='{os.environ['AWS_REGION']}'; SET s3_session_token='{os.environ['AWS_SESSION_TOKEN']}'; """ ) return duckdb_connection def handler(event, context): """ Run a SQL query in a memory db as a serverless function """ is_warm = False # run a timer for info start = time.time() global con if not con: # create a new connection con = return_duckdb_connection() else: # return to the caller the status of the lambda is_warm = True # get the query to be executed from the payload event_query = event.get('q', None) limit = int(event.get('limit', DEFAULT_LIMIT)) results = [] if not event_query: print("No query provided, will return empty results") else: # execute the query and return a pandas dataframe _df = con.execute(event_query).df() # take rows up the limit, to avoid crashing the lambda # by returning too many results _df = _df.head(limit) results = convert_records_to_json(_df) # return response to the client with metadata return wrap_response(start, event_query, results, is_warm) def convert_records_to_json(_df): if len(_df) > 0: # convert timestamp to string to avoid serialization issues cols = [col for col in _df.columns if _df[col].dtype == 'datetime64[ns]'] _df = _df.astype({_: str for _ in cols}) return _df.to_dict('records') def 
wrap_response(start, event_query, results, is_warm): """ Wrap the response in a format that can be used by the client """ return { "metadata": { "timeMs": int((time.time() - start) * 1000.0), "epochMs": int(time.time() * 1000), "eventId": str(uuid.uuid4()), "query": event_query, "warm": is_warm }, "data": { "records": results } } ================================================ FILE: src/serverless.yml ================================================ service: quack-reduce-lambda useDotenv: true provider: name: aws region: ${env:AWS_DEFAULT_REGION, 'us-east-1'} architecture: arm64 memorySize: 3008 timeout: 600 ecr: images: quackimageblog: path: ./serverless platform: linux/arm64 functions: duckdb: ephemeralStorageSize: 3008 image: name: quackimageblog iamRoleStatements: - Effect: Allow Action: - s3:GetObject - s3:PutObject Resource: - Fn::Join: - '' - - Fn::GetAtt: - S3Bucket - Arn - '/*' - Effect: Allow Action: - s3:ListBucket Resource: - Fn::GetAtt: - S3Bucket - Arn resources: Resources: S3Bucket: Type: AWS::S3::Bucket DeletionPolicy: Retain Properties: BucketName: ${env:S3_BUCKET_NAME} plugins: - serverless-iam-roles-per-function