Repository: karpenkovarya/airflow_for_beginners
Branch: master
Commit: 4a53f24feede
Files: 7
Total size: 11.7 KB
Directory structure:
gitextract_979cgzf2/
├── .gitignore
├── LICENSE
├── README.md
├── dags/
│ ├── dags.py
│ ├── email_template.html
│ └── utils.py
└── requirements.txt
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.idea*
*.pid
*.db
*.cfg
logs/
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2019 Varya
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
This is a small example of a workflow built with Apache Airflow.
You can find the slides [here](https://www.slideshare.net/varyakarpenko5/airflow-for-beginners) and watch the talk [here](https://www.youtube.com/watch?v=YWtfU0MQZ_4).
The goal is to set up a data pipeline that delivers a fresh portion of Stack Overflow questions tagged `pandas` to our mailbox daily.
A small Python script could do the job, but for learning purposes we chose to overengineer it.
By writing this workflow we will learn the main concepts of Apache Airflow, such as:
* Operators
* DAG
* Tasks
* Hooks
* Variables
* Connections
* XComs
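The DAG relies on a few of these right away: it reads several Airflow Variables and two Connections (`postgres_connection` and `s3_connection`). As a minimal sketch, the Variables could be seeded with a one-off script like the one below; the endpoint URL and placeholder values are assumptions, and the two Connections still need to be created via the Airflow UI or CLI.

```python
# Hypothetical one-off setup script, run inside the Airflow environment.
# Keys mirror the Variable.get() calls in dags/utils.py; values are placeholders.
from airflow.models import Variable

# Assumption: the Stack Exchange API v2.2 questions endpoint.
Variable.set("STACK_OVERFLOW_QUESTION_URL", "https://api.stackexchange.com/2.2/questions")
Variable.set("TAG", "pandas")
Variable.set("S3_BUCKET", "my-questions-bucket")         # placeholder bucket name
Variable.set("STACK_OVERFLOW_CLIENT_ID", "...")          # from your Stack Apps registration
Variable.set("STACK_OVERFLOW_CLIENT_SECRET", "...")
Variable.set("STACK_OVERFLOW_KEY", "...")
```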
Happy learning 🤓
### Helpful resources
📝 [Apache Airflow Documentation](https://airflow.apache.org/)
#### Apache Airflow tutorials for beginners
📝 [Apache Airflow Tutorial for Data Pipelines](https://blog.godatadriven.com/practical-airflow-tutorial)
📝 [Apache Airflow for the confused](https://medium.com/nyc-planning-digital/apache-airflow-for-the-confused-b588935669df)
📝 [Airflow: Tutorial and Beginners Guide](https://www.polidea.com/blog/apache-airflow-tutorial-and-beginners-guide/)
📝 [ETL Pipelines With Airflow](http://michael-harmon.com/blog/AirflowETL.html)
#### Some more
📰 [ETL best principles](https://gtoonstra.github.io/etl-with-airflow/principles.html)
📰 [Managing Dependencies in Apache Airflow](https://www.astronomer.io/guides/managing-dependencies/)
📝 [Getting Started with Airflow Using Docker](http://www.marknagelberg.com/getting-started-with-airflow-using-docker/)
🎧 [Putting Airflow Into Production](https://overcast.fm/+H1YNx1QJE)
📝 [How to configure SMTP server for apache airflow](https://stackoverflow.com/questions/51829200/how-to-set-up-airflow-send-email)
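For the final `EmailOperator` task to actually send mail, Airflow's SMTP settings must be configured. A minimal sketch of the `[smtp]` section in `airflow.cfg`, assuming a Gmail-style provider (host, port, and credentials are placeholders):

```ini
[smtp]
smtp_host = smtp.gmail.com
smtp_starttls = True
smtp_ssl = False
smtp_user = my_email@mail.com
smtp_password = my_app_password
smtp_port = 587
smtp_mail_from = my_email@mail.com
```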
If you have any questions or would like to get in touch, please drop me a message at `hello@varya.io`.
================================================
FILE: dags/dags.py
================================================
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.email_operator import EmailOperator
from airflow.operators.postgres_operator import PostgresOperator
from airflow.operators.python_operator import PythonOperator
from utils import insert_question_to_db, write_questions_to_s3, render_template
default_args = {
    "owner": "me",
    "depends_on_past": False,
    "start_date": datetime(2019, 10, 9),
    "email": ["my_email@mail.com"],
    "email_on_failure": False,
    "email_on_retry": False,
    "retries": 0,
    "retry_delay": timedelta(minutes=1),
}

# schedule_interval belongs on the DAG itself; inside default_args it is ignored.
with DAG(
    "stack_overflow_questions",
    default_args=default_args,
    schedule_interval="@daily",
) as dag:
    Task_I = PostgresOperator(
        task_id="create_table",
        postgres_conn_id="postgres_connection",
        database="stack_overflow",
        sql="""
            DROP TABLE IF EXISTS public.questions;
            CREATE TABLE public.questions
            (
                title text,
                is_answered boolean,
                link character varying,
                score integer,
                tags text[],
                question_id integer NOT NULL,
                owner_reputation integer
            )
        """,
    )

    Task_II = PythonOperator(
        task_id="insert_question_to_db", python_callable=insert_question_to_db
    )

    Task_III = PythonOperator(
        task_id="write_questions_to_s3", python_callable=write_questions_to_s3
    )

    Task_IV = PythonOperator(
        task_id="render_template",
        python_callable=render_template,
        provide_context=True,
    )
    # Note: provide_context is a PythonOperator flag; passing it to EmailOperator
    # only triggers an "invalid arguments" warning, so it is dropped here.
    Task_V = EmailOperator(
        task_id="send_email",
        to="my_email@mail.com",
        subject="Top questions with tag 'pandas' on {{ ds }}",
        html_content="{{ task_instance.xcom_pull(task_ids='render_template', key='html_content') }}",
    )

    Task_I >> Task_II >> Task_III >> Task_IV >> Task_V
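# A quick way to try any task in isolation, without the scheduler, is the
# Airflow 1.10 CLI's `airflow test` command (the date below is just an example
# execution date):
#   airflow test stack_overflow_questions create_table 2019-10-09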
================================================
FILE: dags/email_template.html
================================================
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>
<ul>
    {% for question in questions %}
    <li>
        <a href="{{question['link']}}">{{question['title'].capitalize()}}</a> tagged: <strong>{{', '.join(question['tags'])}}</strong>
    </li>
    {% endfor %}
</ul>
</body>
</html>
================================================
FILE: dags/utils.py
================================================
import json
import os
from datetime import datetime, timedelta
from typing import Iterator

import requests
from airflow.hooks.S3_hook import S3Hook
from airflow.hooks.postgres_hook import PostgresHook
from airflow.models import Variable
from jinja2 import Environment, FileSystemLoader

# Note: evaluated at import time, so the date reflects when the scheduler parsed
# this file, not the task's execution date.
S3_FILE_NAME = f"{datetime.today().date()}_top_questions.json"
def call_stack_overflow_api() -> Iterator[dict]:
    """ Yield questions created between seven and five days ago, sorted by votes """
    stack_overflow_question_url = Variable.get("STACK_OVERFLOW_QUESTION_URL")
    today = datetime.now()
    start_date = today - timedelta(days=7)
    end_date = today - timedelta(days=5)
    payload = {
        "fromdate": int(start_date.timestamp()),
        "todate": int(end_date.timestamp()),
        "sort": "votes",
        "site": "stackoverflow",
        "order": "desc",
        "tagged": Variable.get("TAG"),
        "client_id": Variable.get("STACK_OVERFLOW_CLIENT_ID"),
        "client_secret": Variable.get("STACK_OVERFLOW_CLIENT_SECRET"),
        "key": Variable.get("STACK_OVERFLOW_KEY"),
    }
    response = requests.get(stack_overflow_question_url, params=payload)
    response.raise_for_status()  # fail fast on API errors instead of parsing an error body
    for question in response.json().get("items", []):
        yield {
            "question_id": question["question_id"],
            "title": question["title"],
            "is_answered": question["is_answered"],
            "link": question["link"],
            "owner_reputation": question["owner"].get("reputation", 0),
            "score": question["score"],
            "tags": question["tags"],
        }
def insert_question_to_db():
    """ Insert the fetched questions into the database """
    insert_question_query = """
        INSERT INTO public.questions (
            question_id,
            title,
            is_answered,
            link,
            owner_reputation,
            score,
            tags)
        VALUES (%s, %s, %s, %s, %s, %s, %s);
        """
    # Create the hook once; opening a new one per row is wasteful.
    pg_hook = PostgresHook(postgres_conn_id="postgres_connection")
    for row in call_stack_overflow_api():
        pg_hook.run(insert_question_query, parameters=tuple(row.values()))
def filter_questions() -> str:
    """
    Read all questions from the database and filter them.
    Returns a JSON string that looks like:
    [
        {
            "title": "Question Title",
            "is_answered": false,
            "link": "https://stackoverflow.com/questions/0000001/...",
            "tags": ["tag_a", "tag_b"],
            "question_id": 0000001
        },
    ]
    """
    columns = ("title", "is_answered", "link", "tags", "question_id")
    filtering_query = """
        SELECT title, is_answered, link, tags, question_id
        FROM public.questions
        WHERE score >= 1 AND owner_reputation > 1000;
        """
    # get_conn() returns a plain psycopg2 connection, not a hook.
    connection = PostgresHook(postgres_conn_id="postgres_connection").get_conn()
    with connection.cursor("serverCursor") as pg_cursor:
        pg_cursor.execute(filtering_query)
        rows = pg_cursor.fetchall()
    results = [dict(zip(columns, row)) for row in rows]
    return json.dumps(results, indent=2)
def write_questions_to_s3():
    """ Upload the filtered questions to S3 as a JSON file """
    hook = S3Hook(aws_conn_id="s3_connection")
    hook.load_string(
        string_data=filter_questions(),
        key=S3_FILE_NAME,
        bucket_name=Variable.get("S3_BUCKET"),
        replace=True,
    )
def render_template(**context):
    """ Render HTML template using questions metadata from S3 bucket """
    hook = S3Hook(aws_conn_id="s3_connection")
    file_content = hook.read_key(
        key=S3_FILE_NAME, bucket_name=Variable.get("S3_BUCKET")
    )
    questions = json.loads(file_content)
    root = os.path.dirname(os.path.abspath(__file__))
    env = Environment(loader=FileSystemLoader(root))
    template = env.get_template("email_template.html")
    html_content = template.render(questions=questions)
    # Push rendered HTML as a string to the Airflow metadata database
    # to make it available for the next task
    task_instance = context["task_instance"]
    task_instance.xcom_push(key="html_content", value=html_content)
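# Note: render_template receives task_instance because the PythonOperator in
# dags.py is created with provide_context=True (Airflow 1.10 behavior); in
# Airflow 2.x the context is injected automatically and the flag was removed.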
================================================
FILE: requirements.txt
================================================
alembic==1.0.11
apache-airflow==1.10.7
apispec==2.0.2
appdirs==1.4.3
attrs==19.1.0
Babel==2.7.0
black==19.3b0
botocore==1.12.207
cached-property==1.5.1
certifi==2019.6.16
chardet==3.0.4
Click==7.0
colorama==0.4.1
colorlog==4.0.2
configparser==3.5.3
croniter==0.3.30
defusedxml==0.6.0
dill==0.2.9
docutils==0.14
dumb-init==1.2.2
Flask==1.1.1
Flask-Admin==1.5.3
Flask-AppBuilder==1.13.1
Flask-Babel==0.12.2
Flask-Caching==1.3.3
Flask-JWT-Extended==3.21.0
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.0
flask-swagger==0.2.13
Flask-WTF==0.14.2
funcsigs==1.0.0
future==0.16.0
gunicorn==19.9.0
idna==2.8
iso8601==0.1.12
itsdangerous==1.1.0
Jinja2==2.10.1
jmespath==0.9.4
json-merge-patch==0.2
jsonschema==3.0.2
lazy-object-proxy==1.4.1
lockfile==0.12.2
Mako==1.1.0
Markdown==2.6.11
MarkupSafe==1.1.1
marshmallow==2.19.5
marshmallow-enum==1.4.1
marshmallow-sqlalchemy==0.17.0
numpy==1.17.0
ordereddict==1.1
pandas==0.25.0
pendulum==1.4.4
prison==0.1.0
psutil==5.6.3
psycopg2==2.7.7
psycopg2-binary==2.8.3
Pygments==2.4.2
PyJWT==1.7.1
pyrsistent==0.15.4
python-daemon==2.1.2
python-dateutil==2.8.0
python-editor==1.0.4
python3-openid==3.1.0
pytz==2019.2
pytzdata==2019.2
PyYAML==5.1.2
requests==2.22.0
setproctitle==1.1.10
six==1.12.0
SQLAlchemy==1.3.6
tabulate==0.8.3
tenacity==4.12.0
termcolor==1.1.0
text-unidecode==1.2
thrift==0.11.0
toml==0.10.0
tzlocal==1.5.1
unicodecsv==0.14.1
urllib3==1.25.3
Werkzeug==0.15.5
WTForms==2.2.1
zope.deprecation==4.4.0