[
  {
    "path": ".claude/commands/review-pending-prs.md",
    "content": "---\ndescription: Review pending PRs against CONTRIBUTING.md acceptance criteria.\nallowed-tools: Bash(gh api:*), Bash(gh pr close:*), Bash(gh pr diff:*), Bash(gh pr edit:*), Bash(gh pr list:*)\n---\n\n## Usage\n\n```\n/review-pending-prs\n```\n\n## Instructions\n\n1. Fetch 10 open PRs with details: `gh pr list --repo vinta/awesome-python --limit 10 --search \"-label:\\\"claude reviewed\\\"\" --json number,title,author,url,body,files,mergeable,mergeStateStatus`\n2. Fetch all PR diffs in parallel: `gh pr diff <number> --repo vinta/awesome-python`\n3. Run quick rejection checks (no API calls needed):\n   - Has merge conflicts? (from `mergeable`/`mergeStateStatus`)\n   - Adds more than one project? (from diff)\n   - Duplicate entry? (from diff - URL already in README)\n   - Not a project submission? (from diff - e.g., random files, contributor list)\n4. For PRs passing quick checks, fetch repo stats: `gh api repos/<owner>/<repo> --jq '{stars: .stargazers_count, created: .created_at, updated: .pushed_at, language: .language, archived: .archived}'`\n5. Review against all criteria in [CONTRIBUTING.md](../../CONTRIBUTING.md)\n6. Present summary table with recommendations\n7. Ask user:\n\n```\nWould you like me to:\n\n1. Close the rejected PRs with comments?\n2. Add \"claude reviewed\" label to the passed PRs?\n3. Do all\n```\n\n## Quick Rejection Checks\n\nCheck these rules first - if any fail, recommend rejection:\n\n- PR has merge conflicts\n- Add more than one project per PR\n- Duplicate of existing entry\n- Placed under an inappropriate category\n- Project is archived or abandoned (no commits in 12+ months)\n- No documentation or unclear use case\n- Less than 100 GitHub stars AND not justified as a hidden gem\n- Too niche — a thin wrapper, single-function utility, or narrow edge-case tool that most Python developers would never need\n\n## Output Format\n\nProvide a simple review:\n\n1. 
**Rejection Check** - table with the above rules and PASS/REJECT\n2. **Recommendation** - PASS or REJECT\n\n## Close PRs\n\nIf user asks to close/reject:\n\n```bash\ngh pr close <number> --repo vinta/awesome-python --comment \"<brief reason>\"\n```\n\n## Mark as Passed\n\n```bash\ngh pr edit <number> --repo vinta/awesome-python --add-label \"claude reviewed\"\n```\n\n## Extra Instructions (If Provided)\n\n$ARGUMENTS\n"
  },
  {
    "path": ".claude/settings.json",
    "content": "{\n  \"permissions\": {\n    \"allow\": [\n      \"Bash(gh api:*)\",\n      \"Bash(gh pr close:*)\",\n      \"Bash(gh pr comment:*)\",\n      \"Bash(gh pr diff:*)\",\n      \"Bash(gh pr edit:*)\",\n      \"Bash(gh pr list:*)\",\n      \"Bash(gh pr view:*)\",\n      \"Bash(gh run list:*)\",\n      \"Bash(gh run rerun:*)\",\n      \"Bash(gh run view:*)\",\n      \"Bash(gh search:*)\"\n    ],\n    \"deny\": []\n  }\n}\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "## Project\n\n[Project Name](url)\n\n## Checklist\n\n- [ ] One project per PR\n- [ ] PR title format: `Add project-name`\n- [ ] Entry format: `* [project-name](url) - Description ending with period.`\n- [ ] Description is concise and short\n\n## Why This Project Is Awesome\n\nWhich criterion does it meet? (pick one)\n\n- [ ] **Industry Standard** - The go-to tool for a specific use case\n- [ ] **Rising Star** - 5000+ stars in < 2 years, significant adoption\n- [ ] **Hidden Gem** - Exceptional quality, solves niche problems elegantly\n\nExplain:\n\n## How It Differs\n\nIf similar entries exist, what makes this one unique?\n"
  },
  {
    "path": ".github/workflows/deploy-website.yml",
    "content": "name: Deploy Website\n\non:\n  push:\n    branches:\n      - master\n  schedule:\n    - cron: \"0 0 * * *\"\n\npermissions:\n  contents: read\n  pages: write\n  id-token: write\n\nconcurrency:\n  group: pages\n  cancel-in-progress: false\n\njobs:\n  build:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v6\n\n      - name: Install uv\n        uses: astral-sh/setup-uv@v7\n        with:\n          enable-cache: true\n\n      - name: Install dependencies\n        run: uv sync --group build\n\n      - name: Restore star data cache\n        id: cache-stars\n        uses: actions/cache/restore@v4\n        with:\n          path: website/data/github_stars.json\n          key: github-stars-${{ github.run_id }}\n          restore-keys: github-stars-\n\n      - name: Fetch GitHub stars\n        id: fetch-stars\n        continue-on-error: true\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n        run: make fetch_github_stars\n\n      - name: Save star data cache\n        if: steps.fetch-stars.outcome == 'success'\n        uses: actions/cache/save@v4\n        with:\n          path: website/data/github_stars.json\n          key: github-stars-${{ github.run_id }}\n\n      - name: Verify star data exists\n        run: |\n          if [ ! -f website/data/github_stars.json ]; then\n            echo \"::error::github_stars.json not found. 
No cache and fetch failed or was skipped.\"\n            exit 1\n          fi\n          echo \"Star data found: $(wc -l < website/data/github_stars.json) lines\"\n\n      - name: Build site\n        run: make build\n\n      - name: Upload artifact\n        uses: actions/upload-pages-artifact@v4\n        with:\n          path: website/output/\n\n  deploy:\n    needs: build\n    runs-on: ubuntu-latest\n    environment:\n      name: github-pages\n      url: https://awesome-python.com/\n    steps:\n      - name: Deploy to GitHub Pages\n        id: deployment\n        uses: actions/deploy-pages@v4\n"
  },
  {
    "path": ".gitignore",
    "content": "# macOS\n.DS_Store\n\n# python\n.venv/\n*.py[co]\n\n# website\nwebsite/output/\nwebsite/data/\n\n# claude code\n.claude/skills/\n.superpowers/\n.gstack/\nskills-lock.json\n"
  },
  {
    "path": "CLAUDE.md",
    "content": "# CLAUDE.md\n\n## Repository Overview\n\nThis is the awesome-python repository - a curated list of Python frameworks, libraries, software and resources. The repository serves as a comprehensive directory about Python ecosystem.\n\n## PR Review Guidelines\n\n**For all PR review tasks, refer to [CONTRIBUTING.md](CONTRIBUTING.md)** which contains:\n\n- Acceptance criteria (Industry Standard, Rising Star, Hidden Gem)\n- Quality requirements\n- Automatic rejection criteria\n- Entry format reference\n- PR description template\n\n## Architecture & Structure\n\nThe repository follows a single-file architecture:\n\n- **README.md**: All content in hierarchical structure (categories, subcategories, entries)\n- **CONTRIBUTING.md**: Submission guidelines and review criteria\n- **sort.py**: Script to enforce alphabetical ordering\n\nEntry format: `* [project-name](url) - Concise description ending with period.`\n\n## Key Considerations\n\n- This is a curated list, not a code project\n- Quality over quantity - only \"awesome\" projects\n- Alphabetical ordering within categories is mandatory\n- README.md is the source of truth for all content\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing\n\n## Quality Requirements\n\nAll submissions must satisfy **ALL** of these:\n\n1. **Python-first**: Primarily written in Python (>50% of codebase)\n2. **Active**: Commits within the last 12 months\n3. **Stable**: Production-ready, not alpha/beta/experimental\n4. **Documented**: Clear README with examples and use cases\n5. **Unique**: Adds distinct value, not \"yet another X\"\n6. **Established**: Repository at least 1 month old\n\n## Acceptance Criteria\n\nYour submission must meet **ONE** of the following criteria:\n\n### 1. Industry Standard\n\n- The go-to tool that almost everyone uses for a specific use case\n- Examples: requests, flask, pandas, numpy\n- Limit: 1-3 tools per category\n\n### 2. Rising Star\n\n- Rapid growth: 5,000+ GitHub stars in less than 2 years\n- Significant community buzz and adoption\n- Solving problems in new or better ways\n- Examples: fastapi, ruff, uv\n\n### 3. Hidden Gem\n\n- Exceptional quality despite fewer stars (100-500 stars preferred; < 100 requires strong justification)\n- Solves niche problems elegantly\n- Strong recommendation from experienced developers\n- **Must demonstrate real-world usage** (not a project published last week)\n- Repository must be at least 6 months old with consistent activity\n- Must include compelling justification in PR description\n\n## Entry Format Reference\n\n**Use GitHub repository URLs** whenever possible. Projects linked to a GitHub repo are ranked higher on [awesome-python.com](https://awesome-python.com/).\n\n### Naming Convention\n\nUse the **PyPI package name** as the display name so developers can copy it directly to `pip install`. Check the canonical name at `https://pypi.org/pypi/{package}/json`. 
If the project is not on PyPI, use the GitHub repository name instead.\n\n### Standard Entry\n\n```markdown\n- [pypi-name](https://github.com/owner/repo) - Description ending with period.\n```\n\n### Standard Library Module\n\n```markdown\n- [module](https://docs.python.org/3/library/module.html) - (Python standard library) Description.\n```\n\n### Fork of Another Project\n\n```markdown\n- [new-name](https://github.com/owner/new-name) - Description ([original-name](original-url) fork).\n```\n\n### Entry with Related Awesome List\n\n```markdown\n- [project](https://github.com/owner/project) - Description.\n  - [awesome-project](https://github.com/someone/awesome-project)\n```\n\n### Subcategory Format\n\n```markdown\n- Subcategory Name\n  - [project](url) - Description.\n```\n\n## Adding a New Section\n\n1. Add section description in italics: `*Libraries for doing X.*`\n2. Add the section under the appropriate thematic group (e.g., **AI & ML**, **Web**, **Data & Science**)\n3. Add the section title to the Table of Contents under its group\n4. Keep entries in alphabetical order within each category\n\n## Review Process\n\nPRs are reviewed by automated tools and maintainers:\n\n1. **Format Check**: Entry follows the correct format\n2. **Category Check**: Placed in the appropriate category/subcategory\n3. **Duplicate Check**: Not already listed or previously rejected\n4. **Activity Check**: Project shows recent activity\n5. 
**Quality Check**: Meets acceptance criteria\n\nSearch previous Pull Requests and Issues before submitting, as yours may be a duplicate.\n\n## Automatic Rejection\n\nPRs will be **closed** if:\n\n- Adding multiple projects in one PR\n- Duplicate of existing entry or recently-closed PR\n- Empty or placeholder PR descriptions\n- Placed under an inappropriate category\n- Project is archived or abandoned (no commits in 12+ months)\n- No documentation or unclear use case\n- Less than 100 GitHub stars without Hidden Gem justification\n- Repository less than 3 months old\n"
  },
  {
    "path": "LICENSE",
    "content": "Creative Commons Attribution 4.0 International License (CC BY 4.0)\n\nhttp://creativecommons.org/licenses/by/4.0/\n"
  },
  {
    "path": "Makefile",
    "content": "-include .env\nexport\n\ninstall:\n\tuv sync\n\nfetch_github_stars:\n\tuv run python website/fetch_github_stars.py\n\ntest:\n\tuv run pytest website/tests/ -v\n\nbuild:\n\tuv run python website/build.py\n\npreview: build\n\t@echo \"Check the website on http://localhost:8000\"\n\tuv run watchmedo shell-command \\\n\t\t--patterns='*.md;*.html;*.css;*.js;*.py' \\\n\t\t--recursive \\\n\t\t--wait --drop \\\n\t\t--command='uv run python website/build.py' \\\n\t\tREADME.md website/templates website/static website/data & \\\n\tpython -m http.server -b 127.0.0.1 -d website/output/ 8000\n"
  },
  {
    "path": "README.md",
    "content": "# Awesome Python\n\nAn opinionated list of awesome Python frameworks, libraries, tools, software and resources.\n\n> The **#10 most-starred repo on GitHub**. Put your product where Python developers discover tools. [Become a sponsor](SPONSORSHIP.md).\n\n# Categories\n\n**AI & ML**\n\n- [AI and Agents](#ai-and-agents)\n- [Deep Learning](#deep-learning)\n- [Machine Learning](#machine-learning)\n- [Natural Language Processing](#natural-language-processing)\n- [Computer Vision](#computer-vision)\n- [Recommender Systems](#recommender-systems)\n\n**Web**\n\n- [Web Frameworks](#web-frameworks)\n- [Web APIs](#web-apis)\n- [Web Servers](#web-servers)\n- [WebSocket](#websocket)\n- [Template Engines](#template-engines)\n- [Web Asset Management](#web-asset-management)\n- [Authentication](#authentication)\n- [Admin Panels](#admin-panels)\n- [CMS](#cms)\n- [Static Site Generators](#static-site-generators)\n\n**HTTP & Scraping**\n\n- [HTTP Clients](#http-clients)\n- [Web Scraping](#web-scraping)\n- [Email](#email)\n\n**Database & Storage**\n\n- [ORM](#orm)\n- [Database Drivers](#database-drivers)\n- [Database](#database)\n- [Caching](#caching)\n- [Search](#search)\n- [Serialization](#serialization)\n\n**Data & Science**\n\n- [Data Analysis](#data-analysis)\n- [Data Validation](#data-validation)\n- [Data Visualization](#data-visualization)\n- [Geolocation](#geolocation)\n- [Science](#science)\n- [Quantum Computing](#quantum-computing)\n\n**Developer Tools**\n\n- [Algorithms and Design Patterns](#algorithms-and-design-patterns)\n- [Interactive Interpreter](#interactive-interpreter)\n- [Code Analysis](#code-analysis)\n- [Testing](#testing)\n- [Debugging Tools](#debugging-tools)\n- [Build Tools](#build-tools)\n- [Documentation](#documentation)\n\n**DevOps**\n\n- [DevOps Tools](#devops-tools)\n- [Distributed Computing](#distributed-computing)\n- [Task Queues](#task-queues)\n- [Job Schedulers](#job-schedulers)\n- [Logging](#logging)\n- [Network 
Virtualization](#network-virtualization)\n\n**CLI & GUI**\n\n- [Command-line Interface Development](#command-line-interface-development)\n- [Command-line Tools](#command-line-tools)\n- [GUI Development](#gui-development)\n\n**Text & Documents**\n\n- [Text Processing](#text-processing)\n- [HTML Manipulation](#html-manipulation)\n- [File Format Processing](#file-format-processing)\n- [File Manipulation](#file-manipulation)\n\n**Media**\n\n- [Image Processing](#image-processing)\n- [Audio & Video Processing](#audio--video-processing)\n- [Game Development](#game-development)\n\n**Python Language**\n\n- [Implementations](#implementations)\n- [Built-in Classes Enhancement](#built-in-classes-enhancement)\n- [Functional Programming](#functional-programming)\n- [Asynchronous Programming](#asynchronous-programming)\n- [Date and Time](#date-and-time)\n\n**Python Toolchain**\n\n- [Environment Management](#environment-management)\n- [Package Management](#package-management)\n- [Package Repositories](#package-repositories)\n- [Distribution](#distribution)\n- [Configuration Files](#configuration-files)\n\n**Security**\n\n- [Cryptography](#cryptography)\n- [Penetration Testing](#penetration-testing)\n\n**Miscellaneous**\n\n- [Hardware](#hardware)\n- [Microsoft Windows](#microsoft-windows)\n- [Miscellaneous](#miscellaneous)\n\n---\n\n**AI & ML**\n\n## AI and Agents\n\n_Libraries for building AI applications, LLM integrations, and autonomous agents._\n\n- Frameworks\n  - [autogen](https://github.com/microsoft/autogen) - A programming framework for building agentic AI applications.\n  - [crewai](https://github.com/crewAIInc/crewAI) - A framework for orchestrating role-playing autonomous AI agents for collaborative task solving.\n  - [dspy](https://github.com/stanfordnlp/dspy) - A framework for programming, not prompting, language models.\n  - [instructor](https://github.com/567-labs/instructor) - A library for extracting structured data from LLMs, powered by Pydantic.\n  - 
[langchain](https://github.com/langchain-ai/langchain) - Building applications with LLMs through composability.\n  - [llama_index](https://github.com/run-llama/llama_index) - A data framework for your LLM application.\n  - [pydantic-ai](https://github.com/pydantic/pydantic-ai) - A Python agent framework for building generative AI applications with structured schemas.\n- Pretrained Models and Inference\n  - [diffusers](https://github.com/huggingface/diffusers) - A library that provides pretrained diffusion models for generating and editing images, audio, and video.\n  - [transformers](https://github.com/huggingface/transformers) - A framework that lets you easily use pretrained transformer models for NLP, vision, and audio tasks.\n  - [vllm](https://github.com/vllm-project/vllm) - A high-throughput and memory-efficient inference and serving engine for LLMs.\n\n## Deep Learning\n\n_Frameworks for Neural Networks and Deep Learning. Also see [awesome-deep-learning](https://github.com/ChristosChristofidis/awesome-deep-learning)._\n\n- [jax](https://github.com/jax-ml/jax) - A library for high-performance numerical computing with automatic differentiation and JIT compilation.\n- [keras](https://github.com/keras-team/keras) - A high-level deep learning library with support for JAX, TensorFlow, and PyTorch backends.\n- [pytorch-lightning](https://github.com/Lightning-AI/pytorch-lightning) - Deep learning framework to train, deploy, and ship AI products Lightning fast.\n- [pytorch](https://github.com/pytorch/pytorch) - Tensors and Dynamic neural networks in Python with strong GPU acceleration.\n- [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) - PyTorch implementations of Stable Baselines (deep) reinforcement learning algorithms.\n- [tensorflow](https://github.com/tensorflow/tensorflow) - The most popular Deep Learning framework created by Google.\n\n## Machine Learning\n\n_Libraries for Machine Learning. 
Also see [awesome-machine-learning](https://github.com/josephmisiti/awesome-machine-learning#python)._\n\n- [catboost](https://github.com/catboost/catboost) - A fast, scalable, high-performance library for gradient boosting on decision trees.\n- [feature_engine](https://github.com/feature-engine/feature_engine) - A sklearn-compatible library with an extensive toolset for feature engineering and selection.\n- [h2o](https://github.com/h2oai/h2o-3) - An open source, fast, and scalable machine learning platform.\n- [lightgbm](https://github.com/lightgbm-org/LightGBM) - A fast, distributed, high-performance gradient boosting framework.\n- [mindsdb](https://github.com/mindsdb/mindsdb) - An open source AI layer for existing databases that lets you develop, train, and deploy state-of-the-art machine learning models using standard queries.\n- [pgmpy](https://github.com/pgmpy/pgmpy) - A Python library for probabilistic graphical models and Bayesian networks.\n- [scikit-learn](https://github.com/scikit-learn/scikit-learn) - The most popular Python library for Machine Learning with extensive documentation and community support.\n- [spark.ml](http://spark.apache.org/docs/latest/ml-guide.html) - [Apache Spark](http://spark.apache.org/)'s scalable Machine Learning library for distributed computing.\n- [xgboost](https://github.com/dmlc/xgboost) - A scalable, portable, and distributed gradient boosting library.\n\n## Natural Language Processing\n\n_Libraries for working with human languages._\n\n- General\n  - [gensim](https://github.com/piskvorky/gensim) - Topic Modeling for Humans.\n  - [nltk](https://github.com/nltk/nltk) - A leading platform for building Python programs to work with human language data.\n  - [spacy](https://github.com/explosion/spaCy) - A library for industrial-strength natural language processing in Python and Cython.\n  - [stanza](https://github.com/stanfordnlp/stanza) - The Stanford NLP Group's official Python library, supporting 60+ languages.\n- 
Chinese\n  - [funnlp](https://github.com/fighting41love/funNLP) - A collection of tools and datasets for Chinese NLP.\n  - [jieba](https://github.com/fxsjy/jieba) - The most popular Chinese text segmentation library.\n\n## Computer Vision\n\n_Libraries for Computer Vision._\n\n- [easyocr](https://github.com/JaidedAI/EasyOCR) - Ready-to-use OCR with 40+ languages supported.\n- [kornia](https://github.com/kornia/kornia/) - Open Source Differentiable Computer Vision Library for PyTorch.\n- [opencv](https://github.com/opencv/opencv-python) - Open Source Computer Vision Library.\n- [pytesseract](https://github.com/madmaze/pytesseract) - A wrapper for [Google Tesseract OCR](https://github.com/tesseract-ocr).\n\n## Recommender Systems\n\n_Libraries for building recommender systems._\n\n- [annoy](https://github.com/spotify/annoy) - Approximate Nearest Neighbors in C++/Python optimized for memory usage.\n- [implicit](https://github.com/benfred/implicit) - A fast Python implementation of collaborative filtering for implicit datasets.\n- [scikit-surprise](https://github.com/NicolasHug/Surprise) - A scikit for building and analyzing recommender systems.\n\n**Web**\n\n## Web Frameworks\n\n_Traditional full stack web frameworks. 
Also see [Web APIs](#web-apis)._\n\n- Synchronous\n  - [bottle](https://github.com/bottlepy/bottle) - A fast and simple micro-framework distributed as a single file with no dependencies.\n  - [django](https://github.com/django/django) - The most popular web framework in Python.\n    - [awesome-django](https://github.com/shahraizali/awesome-django)\n  - [fasthtml](https://github.com/AnswerDotAI/fasthtml) - The fastest way to create an HTML app.\n    - [awesome-fasthtml](https://github.com/amosgyamfi/awesome-fasthtml)\n  - [flask](https://github.com/pallets/flask) - A microframework for Python.\n    - [awesome-flask](https://github.com/humiaozuzu/awesome-flask)\n  - [masonite](https://github.com/MasoniteFramework/masonite) - The modern and developer-centric Python web framework.\n  - [pyramid](https://github.com/Pylons/pyramid) - A small, fast, down-to-earth, open source Python web framework.\n    - [awesome-pyramid](https://github.com/uralbash/awesome-pyramid)\n- Asynchronous\n  - [litestar](https://github.com/litestar-org/litestar) - A production-ready, capable, and extensible ASGI web framework.\n  - [microdot](https://github.com/miguelgrinberg/microdot) - The impossibly small web framework for Python and MicroPython.\n  - [reflex](https://github.com/reflex-dev/reflex) - A framework for building reactive, full-stack web applications entirely with Python.\n  - [robyn](https://github.com/sparckles/Robyn) - A high-performance async Python web framework with a Rust runtime.\n  - [starlette](https://github.com/Kludex/starlette) - A lightweight ASGI framework and toolkit for building high-performance async services.\n  - [tornado](https://github.com/tornadoweb/tornado) - A web framework and asynchronous networking library.\n\n## Web APIs\n\n_Libraries for building RESTful and GraphQL APIs._\n\n- Django\n  - [django-ninja](https://github.com/vitalik/django-ninja) - A fast Django REST framework based on type hints and Pydantic.\n  - 
[django-rest-framework](https://github.com/encode/django-rest-framework) - A powerful and flexible toolkit to build web APIs.\n  - [strawberry-django](https://github.com/strawberry-graphql/strawberry-django) - Strawberry GraphQL integration with Django.\n- Flask\n  - [apiflask](https://github.com/apiflask/apiflask) - A lightweight Python web API framework based on Flask and Marshmallow.\n- Framework Agnostic\n  - [connexion](https://github.com/spec-first/connexion) - A spec-first framework that automatically handles requests based on your OpenAPI specification.\n  - [falcon](https://github.com/falconry/falcon) - A high-performance framework for building cloud APIs and web app backends.\n  - [fastapi](https://github.com/fastapi/fastapi) - A modern, fast web framework for building APIs with standard Python type hints.\n  - [sanic](https://github.com/sanic-org/sanic) - A Python 3.6+ web server and web framework that's written to go fast.\n  - [strawberry](https://github.com/strawberry-graphql/strawberry) - A GraphQL library that leverages Python type annotations for schema definition.\n  - [webargs](https://github.com/marshmallow-code/webargs) - A friendly library for parsing HTTP request arguments with built-in support for popular web frameworks.\n\n## Web Servers\n\n_ASGI and WSGI compatible web servers._\n\n- ASGI\n  - [daphne](https://github.com/django/daphne) - An HTTP, HTTP/2, and WebSocket protocol server for ASGI and ASGI-HTTP.\n  - [granian](https://github.com/emmett-framework/granian) - A Rust HTTP server for Python applications built on top of Hyper and Tokio, supporting WSGI/ASGI/RSGI.\n  - [hypercorn](https://github.com/pgjones/hypercorn) - An ASGI and WSGI server based on Hyper libraries and inspired by Gunicorn.\n  - [uvicorn](https://github.com/Kludex/uvicorn) - A lightning-fast ASGI server implementation, using uvloop and httptools.\n- WSGI\n  - [gunicorn](https://github.com/benoitc/gunicorn) - Pre-forked, ported from Ruby's Unicorn project.\n  - 
[uwsgi](https://github.com/unbit/uwsgi) - A project that aims to develop a full stack for building hosting services, written in C.\n  - [waitress](https://github.com/Pylons/waitress) - Multi-threaded, powers Pyramid.\n- RPC\n  - [grpcio](https://github.com/grpc/grpc) - HTTP/2-based RPC framework with Python bindings, built by Google.\n  - [rpyc](https://github.com/tomerfiliba-org/rpyc) (Remote Python Call) - A transparent and symmetric RPC library for Python.\n\n## WebSocket\n\n_Libraries for working with WebSocket._\n\n- [autobahn-python](https://github.com/crossbario/autobahn-python) - WebSocket & WAMP for Python on Twisted and [asyncio](https://docs.python.org/3/library/asyncio.html).\n- [channels](https://github.com/django/channels) - Developer-friendly asynchrony for Django.\n- [flask-socketio](https://github.com/miguelgrinberg/Flask-SocketIO) - Socket.IO integration for Flask applications.\n- [websockets](https://github.com/python-websockets/websockets) - A library for building WebSocket servers and clients with a focus on correctness and simplicity.\n\n## Template Engines\n\n_Libraries and tools for templating and lexing._\n\n- [jinja](https://github.com/pallets/jinja) - A modern and designer-friendly templating language.\n- [mako](https://github.com/sqlalchemy/mako) - Hyperfast and lightweight templating for the Python platform.\n\n## Web Asset Management\n\n_Tools for managing, compressing and minifying website assets._\n\n- [django-compressor](https://github.com/django-compressor/django-compressor) - Compresses linked and inline JavaScript or CSS into a single cached file.\n- [django-storages](https://github.com/jschneier/django-storages) - A collection of custom storage backends for Django.\n\n## Authentication\n\n_Libraries for implementing authentication schemes._\n\n- OAuth\n  - [authlib](https://github.com/authlib/authlib) - JavaScript Object Signing and Encryption draft implementation.\n  - [django-allauth](https://github.com/pennersr/django-allauth) 
- Authentication app for Django that \"just works.\"\n  - [django-oauth-toolkit](https://github.com/django-oauth/django-oauth-toolkit) - OAuth 2 goodies for Django.\n  - [oauthlib](https://github.com/oauthlib/oauthlib) - A generic and thorough implementation of the OAuth request-signing logic.\n- JWT\n  - [pyjwt](https://github.com/jpadilla/pyjwt) - JSON Web Token implementation in Python.\n- Permissions\n  - [django-guardian](https://github.com/django-guardian/django-guardian) - Implementation of per-object permissions for Django 1.2+.\n  - [django-rules](https://github.com/dfunckt/django-rules) - A tiny but powerful app providing object-level permissions to Django, without requiring a database.\n\n## Admin Panels\n\n_Libraries for administrative interfaces._\n\n- [ajenti](https://github.com/ajenti/ajenti) - The admin panel your servers deserve.\n- [django-grappelli](https://github.com/sehmaschine/django-grappelli) - A jazzy skin for the Django Admin-Interface.\n- [django-unfold](https://github.com/unfoldadmin/django-unfold) - Elevate your Django admin with a stunning modern interface, powerful features, and seamless user experience.\n- [flask-admin](https://github.com/pallets-eco/flask-admin) - Simple and extensible administrative interface framework for Flask.\n- [func-to-web](https://github.com/offerrall/FuncToWeb) - Instantly create web UIs from Python functions using type hints. 
Zero frontend code required.\n- [jet-bridge](https://github.com/jet-admin/jet-bridge) - Admin panel framework for any application with nice UI (ex Jet Django).\n\n## CMS\n\n_Content Management Systems._\n\n- [django-cms](https://github.com/django-cms/django-cms) - The easy-to-use and developer-friendly enterprise CMS powered by Django.\n- [indico](https://github.com/indico/indico) - A feature-rich event management system, made @ [CERN](https://en.wikipedia.org/wiki/CERN).\n- [wagtail](https://github.com/wagtail/wagtail) - A Django content management system.\n\n## Static Site Generators\n\n_A static site generator is software that takes text and templates as input and produces HTML files as output._\n\n- [lektor](https://github.com/lektor/lektor) - An easy-to-use static CMS and blog engine.\n- [nikola](https://github.com/getnikola/nikola) - A static website and blog generator.\n- [pelican](https://github.com/getpelican/pelican) - Static site generator that supports Markdown and reST syntax.\n\n**HTTP & Scraping**\n\n## HTTP Clients\n\n_Libraries for working with HTTP._\n\n- [aiohttp](https://github.com/aio-libs/aiohttp) - Asynchronous HTTP client/server framework for asyncio and Python.\n- [furl](https://github.com/gruns/furl) - A small Python library that makes parsing and manipulating URLs easy.\n- [httpx](https://github.com/encode/httpx) - A next-generation HTTP client for Python.\n- [requests](https://github.com/psf/requests) - HTTP Requests for Humans.\n- [urllib3](https://github.com/urllib3/urllib3) - An HTTP library with thread-safe connection pooling, file post support, and a sanity-friendly API.\n\n## Web Scraping\n\n_Libraries to automate web scraping and extract web content._\n\n- Frameworks\n  - [browser-use](https://github.com/browser-use/browser-use) - Make websites accessible for AI agents with easy browser automation.\n  - [crawl4ai](https://github.com/unclecode/crawl4ai) - An open-source, LLM-friendly web crawler that provides lightning-fast, structured 
data extraction specifically designed for AI agents.\n  - [mechanicalsoup](https://github.com/MechanicalSoup/MechanicalSoup) - A Python library for automating interaction with websites.\n  - [scrapy](https://github.com/scrapy/scrapy) - A fast high-level screen scraping and web crawling framework.\n- Content Extraction\n  - [feedparser](https://github.com/kurtmckee/feedparser) - Universal feed parser.\n  - [html2text](https://github.com/Alir3z4/html2text) - Convert HTML to Markdown-formatted text.\n  - [micawber](https://github.com/coleifer/micawber) - A small library for extracting rich content from URLs.\n  - [sumy](https://github.com/miso-belica/sumy) - A module for automatic summarization of text documents and HTML pages.\n  - [trafilatura](https://github.com/adbar/trafilatura) - A tool for gathering text and metadata from the web, with built-in content filtering.\n\n## Email\n\n_Libraries for sending and parsing email, and mail server management._\n\n- [modoboa](https://github.com/modoboa/modoboa) - A mail hosting and management platform including a modern Web UI.\n- [yagmail](https://github.com/kootenpv/yagmail) - Yet another Gmail/SMTP client.\n\n**Database & Storage**\n\n## ORM\n\n_Libraries that implement Object-Relational Mapping or data mapping techniques._\n\n- Relational Databases\n  - [django.db.models](https://docs.djangoproject.com/en/dev/topics/db/models/) - The Django ORM.\n  - [sqlalchemy](https://github.com/sqlalchemy/sqlalchemy) - The Python SQL Toolkit and Object Relational Mapper.\n    - [awesome-sqlalchemy](https://github.com/dahlia/awesome-sqlalchemy)\n  - [dataset](https://github.com/pudo/dataset) - Store Python dicts in a database - works with SQLite, MySQL, and PostgreSQL.\n  - [peewee](https://github.com/coleifer/peewee) - A small, expressive ORM.\n  - [pony](https://github.com/ponyorm/pony/) - ORM that provides a generator-oriented interface to SQL.\n  - [sqlmodel](https://github.com/fastapi/sqlmodel) - SQLModel is based on Python type 
annotations, and powered by Pydantic and SQLAlchemy.\n  - [tortoise-orm](https://github.com/tortoise/tortoise-orm) - An easy-to-use asyncio ORM inspired by Django, with relations support.\n- NoSQL Databases\n  - [beanie](https://github.com/BeanieODM/beanie) - An asynchronous Python object-document mapper (ODM) for MongoDB.\n  - [mongoengine](https://github.com/MongoEngine/mongoengine) - A Python Object-Document-Mapper for working with MongoDB.\n  - [pynamodb](https://github.com/pynamodb/PynamoDB) - A Pythonic interface for [Amazon DynamoDB](https://aws.amazon.com/dynamodb/).\n\n## Database Drivers\n\n_Libraries for connecting to and operating databases._\n\n- MySQL - [awesome-mysql](https://github.com/shlomi-noach/awesome-mysql)\n  - [mysqlclient](https://github.com/PyMySQL/mysqlclient) - MySQL connector with Python 3 support ([mysql-python](https://sourceforge.net/projects/mysql-python/) fork).\n  - [pymysql](https://github.com/PyMySQL/PyMySQL) - A pure Python MySQL driver compatible with mysql-python.\n- PostgreSQL - [awesome-postgres](https://github.com/dhamaniasad/awesome-postgres)\n  - [psycopg](https://github.com/psycopg/psycopg) - The most popular PostgreSQL adapter for Python.\n- SQLite - [awesome-sqlite](https://github.com/planetopendata/awesome-sqlite)\n  - [sqlite-utils](https://github.com/simonw/sqlite-utils) - Python CLI utility and library for manipulating SQLite databases.\n  - [sqlite3](https://docs.python.org/3/library/sqlite3.html) - (Python standard library) SQLite interface compliant with DB-API 2.0.\n- Other Relational Databases\n  - [clickhouse-driver](https://github.com/mymarilyn/clickhouse-driver) - Python driver with native interface for ClickHouse.\n  - [mssql-python](https://github.com/microsoft/mssql-python) - Official Microsoft driver for SQL Server and Azure SQL, built on ODBC for high performance and low memory usage.\n- NoSQL Databases\n  - [cassandra-driver](https://github.com/apache/cassandra-python-driver) - The Python Driver for 
Apache Cassandra.\n  - [django-mongodb-backend](https://github.com/mongodb/django-mongodb-backend) - Official MongoDB database backend for Django.\n  - [pymongo](https://github.com/mongodb/mongo-python-driver) - The official Python client for MongoDB.\n  - [redis-py](https://github.com/redis/redis-py) - The Python client for Redis.\n\n## Database\n\n_Databases implemented in Python._\n\n- [chromadb](https://github.com/chroma-core/chroma) - An open-source embedding database for building AI applications with embeddings and semantic search.\n- [duckdb](https://github.com/duckdb/duckdb) - An in-process SQL OLAP database management system; optimized for analytics and fast queries, similar to SQLite but for analytical workloads.\n- [pickledb](https://github.com/patx/pickledb) - A simple and lightweight key-value store for Python.\n- [tinydb](https://github.com/msiemens/tinydb) - A tiny, document-oriented database.\n- [ZODB](https://github.com/zopefoundation/ZODB) - A native object database for Python. 
A key-value and object graph database.\n\n## Caching\n\n_Libraries for caching data._\n\n- [cachetools](https://github.com/tkem/cachetools) - Extensible memoizing collections and decorators.\n- [django-cacheops](https://github.com/Suor/django-cacheops) - A slick ORM cache with automatic granular event-driven invalidation.\n- [dogpile.cache](https://github.com/sqlalchemy/dogpile.cache) - A next-generation replacement for Beaker, made by the same authors.\n- [python-diskcache](https://github.com/grantjenks/python-diskcache) - SQLite- and file-backed cache backend with faster lookups than Memcached and Redis.\n\n## Search\n\n_Libraries and software for indexing and performing search queries on data._\n\n- [django-haystack](https://github.com/django-haystack/django-haystack) - Modular search for Django.\n- [elasticsearch-py](https://github.com/elastic/elasticsearch-py) - The official low-level Python client for [Elasticsearch](https://www.elastic.co/products/elasticsearch).\n- [pysolr](https://github.com/django-haystack/pysolr) - A lightweight Python wrapper for [Apache Solr](https://lucene.apache.org/solr/).\n\n## Serialization\n\n_Libraries for serializing complex data types._\n\n- [marshmallow](https://github.com/marshmallow-code/marshmallow) - A lightweight library for converting complex objects to and from simple Python datatypes.\n- [msgpack](https://github.com/msgpack/msgpack-python) - MessagePack serializer implementation for Python.\n- [orjson](https://github.com/ijl/orjson) - Fast, correct JSON library.\n\n**Data & Science**\n\n## Data Analysis\n\n_Libraries for data analysis._\n\n- General\n  - [aws-sdk-pandas](https://github.com/aws/aws-sdk-pandas) - Pandas on AWS.\n  - [datasette](https://github.com/simonw/datasette) - An open source multi-tool for exploring and publishing data.\n  - [desbordante](https://github.com/desbordante/desbordante-core/) - An open source data profiler for complex pattern discovery.\n  - 
[ibis](https://github.com/ibis-project/ibis) - A portable Python dataframe library with a single API for 20+ backends.\n  - [modin](https://github.com/modin-project/modin) - A drop-in pandas replacement that scales workflows by changing a single line of code.\n  - [pandas](https://github.com/pandas-dev/pandas) - A library providing high-performance, easy-to-use data structures and data analysis tools.\n  - [pathway](https://github.com/pathwaycom/pathway) - Real-time data processing framework for Python with reactive dataflows.\n  - [polars](https://github.com/pola-rs/polars) - A fast DataFrame library implemented in Rust with a Python API.\n- Financial Data\n  - [akshare](https://github.com/akfamily/akshare) - A financial data interface library, built for human beings!\n  - [edgartools](https://github.com/dgunning/edgartools) - Library for downloading structured data from SEC EDGAR filings and XBRL financial statements.\n  - [openbb](https://github.com/OpenBB-finance/OpenBB) - A financial data platform for analysts, quants and AI agents.\n  - [yfinance](https://github.com/ranaroussi/yfinance) - Easy Pythonic way to download market and financial data from Yahoo Finance.\n\n## Data Validation\n\n_Libraries for validating data. Used for forms in many cases._\n\n- [cerberus](https://github.com/pyeve/cerberus) - A lightweight and extensible data validation library.\n- [jsonschema](https://github.com/python-jsonschema/jsonschema) - An implementation of [JSON Schema](http://json-schema.org/) for Python.\n- [pandera](https://github.com/unionai-oss/pandera) - A data validation library for dataframes, with support for pandas, polars, and Spark.\n- [pydantic](https://github.com/pydantic/pydantic) - Data validation using Python type hints.\n\n## Data Visualization\n\n_Libraries for visualizing data. 
Also see [awesome-javascript](https://github.com/sorrycc/awesome-javascript#data-visualization)._\n\n- Plotting\n  - [altair](https://github.com/vega/altair) - Declarative statistical visualization library for Python.\n  - [bokeh](https://github.com/bokeh/bokeh) - Interactive Web Plotting for Python.\n  - [bqplot](https://github.com/bqplot/bqplot) - Interactive Plotting Library for the Jupyter Notebook.\n  - [matplotlib](https://github.com/matplotlib/matplotlib) - A Python 2D plotting library.\n  - [plotly](https://github.com/plotly/plotly.py) - Interactive graphing library for Python.\n  - [plotnine](https://github.com/has2k1/plotnine) - A grammar of graphics for Python based on ggplot2.\n  - [pygal](https://github.com/Kozea/pygal) - A Python SVG Charts Creator.\n  - [pyqtgraph](https://github.com/pyqtgraph/pyqtgraph) - Interactive and realtime 2D/3D/Image plotting and science/engineering widgets.\n  - [seaborn](https://github.com/mwaskom/seaborn) - Statistical data visualization using Matplotlib.\n  - [ultraplot](https://github.com/ultraplot/UltraPlot) - Matplotlib wrapper for publication-ready scientific figures with minimal code. 
Includes advanced subplot management, panel layouts, and batteries-included geoscience plotting.\n  - [vispy](https://github.com/vispy/vispy) - High-performance scientific visualization based on OpenGL.\n- Specialized\n  - [cartopy](https://github.com/SciTools/cartopy) - A cartographic python library with matplotlib support.\n  - [pygraphviz](https://github.com/pygraphviz/pygraphviz/) - Python interface to [Graphviz](http://www.graphviz.org/).\n- Dashboards and Apps\n  - [gradio](https://github.com/gradio-app/gradio) - Build and share machine learning apps, all in Python.\n  - [streamlit](https://github.com/streamlit/streamlit) - A framework which lets you build dashboards, generate reports, or create chat apps in minutes.\n\n## Geolocation\n\n_Libraries for geocoding addresses and working with latitudes and longitudes._\n\n- [django-countries](https://github.com/SmileyChris/django-countries) - A Django app that provides a country field for models and forms.\n- [geodjango](https://docs.djangoproject.com/en/dev/ref/contrib/gis/) - A world-class geographic web framework.\n- [geojson](https://github.com/jazzband/geojson) - Python bindings and utilities for GeoJSON.\n- [geopandas](https://github.com/geopandas/geopandas) - Python tools for geographic data (GeoSeries/GeoDataFrame) built on pandas.\n- [geopy](https://github.com/geopy/geopy) - Python Geocoding Toolbox.\n\n## Science\n\n_Libraries for scientific computing. 
Also see [Python-for-Scientists](https://github.com/TomNicholas/Python-for-Scientists)._\n\n- Core\n  - [numba](https://github.com/numba/numba) - Python JIT compiler to LLVM aimed at scientific Python.\n  - [numpy](https://github.com/numpy/numpy) - A fundamental package for scientific computing with Python.\n  - [scipy](https://github.com/scipy/scipy) - A Python-based ecosystem of open-source software for mathematics, science, and engineering.\n  - [statsmodels](https://github.com/statsmodels/statsmodels) - Statistical modeling and econometrics in Python.\n  - [sympy](https://github.com/sympy/sympy) - A Python library for symbolic mathematics.\n- Biology and Chemistry\n  - [biopython](https://github.com/biopython/biopython) - A set of freely available tools for biological computation.\n  - [cclib](https://github.com/cclib/cclib) - A library for parsing and interpreting the results of computational chemistry packages.\n  - [openbabel](https://github.com/openbabel/openbabel) - A chemical toolbox designed to speak the many languages of chemical data.\n  - [rdkit](https://github.com/rdkit/rdkit) - Cheminformatics and Machine Learning Software.\n- Physics and Engineering\n  - [astropy](https://github.com/astropy/astropy) - A community Python library for Astronomy.\n  - [obspy](https://github.com/obspy/obspy) - A Python toolbox for seismology.\n  - [pydy](https://github.com/pydy/pydy) - Short for Python Dynamics, used to assist with workflow in the modeling of dynamic motion.\n  - [PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics) - A compilation of various robotics algorithms with visualizations.\n- Simulation and Modeling\n  - [pathsim](https://github.com/pathsim/pathsim) - A block-based system modeling and simulation framework with a browser-based visual editor.\n  - [pymc](https://github.com/pymc-devs/pymc) - Probabilistic programming and Bayesian modeling in Python.\n  - [simpy](https://gitlab.com/team-simpy/simpy) - A 
process-based discrete-event simulation framework.\n- Other\n  - [colour](https://github.com/colour-science/colour) - Implements a comprehensive number of colour theory transformations and algorithms.\n  - [manim](https://github.com/ManimCommunity/manim) - An animation engine for explanatory math videos.\n  - [networkx](https://github.com/networkx/networkx) - High-productivity software for complex networks.\n  - [shapely](https://github.com/shapely/shapely) - Manipulation and analysis of geometric objects in the Cartesian plane.\n\n## Quantum Computing\n\n_Libraries for quantum computing._\n\n- [Cirq](https://github.com/quantumlib/Cirq) - A Google-developed framework focused on hardware-aware quantum circuit design for NISQ devices.\n- [pennylane](https://github.com/PennyLaneAI/pennylane) - A hybrid quantum-classical machine learning library with automatic differentiation support.\n- [qiskit](https://github.com/Qiskit/qiskit) - An IBM-backed quantum SDK for building, simulating, and running circuits on real quantum hardware.\n- [qutip](https://github.com/qutip/qutip) - Quantum Toolbox in Python.\n\n**Developer Tools**\n\n## Algorithms and Design Patterns\n\n_Python implementations of data structures, algorithms, and design patterns. 
Also see [awesome-algorithms](https://github.com/tayllan/awesome-algorithms)._\n\n- Algorithms\n  - [algorithms](https://github.com/keon/algorithms) - Minimal examples of data structures and algorithms.\n  - [sortedcontainers](https://github.com/grantjenks/python-sortedcontainers) - Fast and pure-Python implementation of sorted collections.\n  - [thealgorithms](https://github.com/TheAlgorithms/Python) - All Algorithms implemented in Python.\n- Design Patterns\n  - [python-cqrs](https://github.com/pypatterns/python-cqrs) - Event-Driven Architecture Framework with CQRS/CQS, Transaction Outbox, Saga orchestration.\n  - [python-patterns](https://github.com/faif/python-patterns) - A collection of design patterns in Python.\n  - [transitions](https://github.com/pytransitions/transitions) - A lightweight, object-oriented finite state machine implementation.\n\n## Interactive Interpreter\n\n_Interactive Python interpreters (REPL)._\n\n- [jupyter](https://github.com/jupyter/notebook) - A rich toolkit to help you make the most out of using Python interactively.\n  - [awesome-jupyter](https://github.com/markusschanta/awesome-jupyter)\n- [marimo](https://github.com/marimo-team/marimo) - A next-gen reactive notebook for transforming data and training models, stored as Git-friendly Python.\n- [ptpython](https://github.com/prompt-toolkit/ptpython) - Advanced Python REPL built on top of the [python-prompt-toolkit](https://github.com/prompt-toolkit/python-prompt-toolkit).\n\n## Code Analysis\n\n_Tools for static analysis, linters, and code quality checkers. 
Also see [awesome-static-analysis](https://github.com/analysis-tools-dev/static-analysis)._\n\n- Code Analysis\n  - [code2flow](https://github.com/scottrogowski/code2flow) - Turn your Python and JavaScript code into DOT flowcharts.\n  - [prospector](https://github.com/prospector-dev/prospector) - A tool to analyze Python code.\n  - [vulture](https://github.com/jendrikseipp/vulture) - A tool for finding and analyzing dead Python code.\n- Code Linters\n  - [bandit](https://github.com/PyCQA/bandit) - A tool designed to find common security issues in Python code.\n  - [flake8](https://github.com/PyCQA/flake8) - A wrapper around `pycodestyle`, `pyflakes` and McCabe.\n    - [awesome-flake8-extensions](https://github.com/DmytroLitvinov/awesome-flake8-extensions)\n  - [pylint](https://github.com/pylint-dev/pylint) - A fully customizable source code analyzer.\n  - [ruff](https://github.com/astral-sh/ruff) - An extremely fast Python linter and code formatter.\n- Code Formatters\n  - [black](https://github.com/psf/black) - The uncompromising Python code formatter.\n  - [isort](https://github.com/PyCQA/isort) - A Python utility / library to sort imports.\n- Static Type Checkers, also see [awesome-python-typing](https://github.com/typeddjango/awesome-python-typing)\n  - [mypy](https://github.com/python/mypy) - An optional static type checker that verifies variable types before runtime.\n  - [pyre-check](https://github.com/facebook/pyre-check) - Performant type checking.\n  - [ty](https://github.com/astral-sh/ty) - An extremely fast Python type checker and language server.\n  - [typeshed](https://github.com/python/typeshed) - Collection of library stubs for Python, with static types.\n- Refactoring\n  - [rope](https://github.com/python-rope/rope) - A Python refactoring library.\n- Static Type Annotations Generators\n  - [monkeytype](https://github.com/Instagram/MonkeyType) - A system for Python that generates static type annotations by collecting runtime types.\n  - 
[pytype](https://github.com/google/pytype) - Pytype checks and infers types for Python code - without requiring type annotations.\n\n## Testing\n\n_Libraries for testing codebases and generating test data._\n\n- Testing Frameworks\n  - [hypothesis](https://github.com/HypothesisWorks/hypothesis) - An advanced, QuickCheck-style property-based testing library.\n  - [pytest](https://github.com/pytest-dev/pytest) - A mature full-featured Python testing tool.\n  - [robotframework](https://github.com/robotframework/robotframework) - A generic test automation framework.\n  - [scanapi](https://github.com/scanapi/scanapi) - Automated Testing and Documentation for your REST API.\n  - [unittest](https://docs.python.org/3/library/unittest.html) - (Python standard library) Unit testing framework.\n- Test Runners\n  - [nox](https://github.com/wntrblm/nox) - Flexible test automation for Python.\n  - [tox](https://github.com/tox-dev/tox) - Auto builds and tests distributions in multiple Python versions.\n- GUI / Web Testing\n  - [locust](https://github.com/locustio/locust) - Scalable user load testing tool written in Python.\n  - [playwright](https://github.com/microsoft/playwright-python) - Python version of the Playwright testing and automation library.\n  - [pyautogui](https://github.com/asweigart/pyautogui) - A cross-platform GUI automation Python module for human beings.\n  - [schemathesis](https://github.com/schemathesis/schemathesis) - A tool for automatic property-based testing of web applications built with Open API / Swagger specifications.\n  - [selenium](https://github.com/SeleniumHQ/selenium) - Python bindings for [Selenium](https://selenium.dev/) [WebDriver](https://selenium.dev/documentation/webdriver/).\n- Mock\n  - [freezegun](https://github.com/spulec/freezegun) - Travel through time by mocking the datetime module.\n  - [mock](https://docs.python.org/3/library/unittest.mock.html) - (Python standard library) A mocking and patching 
library.\n  - [mocket](https://github.com/mindflayer/python-mocket) - A socket mock framework with gevent/asyncio/SSL support.\n  - [responses](https://github.com/getsentry/responses) - A utility library for mocking out the requests Python library.\n  - [vcrpy](https://github.com/kevin1024/vcrpy) - Record and replay HTTP interactions on your tests.\n- Object Factories\n  - [factory_boy](https://github.com/FactoryBoy/factory_boy) - A test fixtures replacement for Python.\n  - [polyfactory](https://github.com/litestar-org/polyfactory) - A mock data generation library with support for classes (a continuation of `pydantic-factories`).\n- Code Coverage\n  - [coverage](https://github.com/nedbat/coveragepy) - Code coverage measurement.\n- Fake Data\n  - [faker](https://github.com/joke2k/faker) - A Python package that generates fake data.\n  - [mimesis](https://github.com/lk-geimfari/mimesis) - A Python library that helps you generate fake data.\n\n## Debugging Tools\n\n_Libraries for debugging code._\n\n- pdb-like Debugger\n  - [ipdb](https://github.com/gotcha/ipdb) - IPython-enabled [pdb](https://docs.python.org/3/library/pdb.html).\n  - [pudb](https://github.com/inducer/pudb) - A full-screen, console-based Python debugger.\n- Tracing\n  - [manhole](https://github.com/ionelmc/python-manhole) - Debug running processes over a UNIX domain socket, with stacktraces for all threads and an interactive prompt.\n  - [python-hunter](https://github.com/ionelmc/python-hunter) - A flexible code tracing toolkit.\n- Profiler\n  - [py-spy](https://github.com/benfred/py-spy) - A sampling profiler for Python programs. 
Written in Rust.\n  - [scalene](https://github.com/plasma-umass/scalene) - A high-performance, high-precision CPU, GPU, and memory profiler for Python.\n- Others\n  - [django-debug-toolbar](https://github.com/django-commons/django-debug-toolbar) - Display various debug information for Django.\n  - [flask-debugtoolbar](https://github.com/pallets-eco/flask-debugtoolbar) - A port of django-debug-toolbar to Flask.\n  - [icecream](https://github.com/gruns/icecream) - Inspect variables, expressions, and program execution with a single, simple function call.\n  - [memory_graph](https://github.com/bterwijn/memory_graph) - Visualize Python data at runtime to debug references, mutability, and aliasing.\n\n## Build Tools\n\n_Compile software from source code._\n\n- [bitbake](https://github.com/openembedded/bitbake) - A make-like build tool for embedded Linux.\n- [doit](https://github.com/pydoit/doit) - A task runner and build tool.\n- [invoke](https://github.com/pyinvoke/invoke) - A tool for managing shell-oriented subprocesses and organizing executable Python code into CLI-invokable tasks.\n- [platformio](https://github.com/platformio/platformio-core) - A console tool to build code with different development platforms.\n- [pybuilder](https://github.com/pybuilder/pybuilder) - A continuous build tool written in pure Python.\n- [scons](https://github.com/SCons/scons) - A software construction tool.\n\n## Documentation\n\n_Libraries for generating project documentation._\n\n- [sphinx](https://github.com/sphinx-doc/sphinx/) - Python Documentation generator.\n  - [awesome-sphinxdoc](https://github.com/ygzgxyz/awesome-sphinxdoc)\n- [diagrams](https://github.com/mingrammer/diagrams) - Diagram as Code.\n- [mkdocs](https://github.com/mkdocs/mkdocs/) - Markdown friendly documentation generator.\n- [pdoc](https://github.com/mitmproxy/pdoc) - Epydoc replacement to auto-generate API documentation for Python libraries.\n\n**DevOps**\n\n## DevOps Tools\n\n_Software and libraries for 
DevOps._\n\n- Cloud Providers\n  - [awscli](https://github.com/aws/aws-cli) - Universal Command Line Interface for Amazon Web Services.\n  - [boto3](https://github.com/boto/boto3) - Python interface to Amazon Web Services.\n- Configuration Management\n  - [ansible](https://github.com/ansible/ansible) - A radically simple IT automation platform.\n  - [cloudinit](https://github.com/canonical/cloud-init) - A multi-distribution package that handles early initialization of a cloud instance.\n  - [openstack](https://www.openstack.org/) - Open source software for building private and public clouds.\n  - [pyinfra](https://github.com/pyinfra-dev/pyinfra) - A versatile CLI tool and Python library to automate infrastructure.\n  - [saltstack](https://github.com/saltstack/salt) - Infrastructure automation and management system.\n- Deployment\n  - [chalice](https://github.com/aws/chalice) - A Python serverless microframework for AWS.\n  - [fabric](https://github.com/fabric/fabric) - A simple, Pythonic tool for remote execution and deployment.\n- Monitoring and Processes\n  - [psutil](https://github.com/giampaolo/psutil) - A cross-platform process and system utilities module.\n  - [sentry-python](https://github.com/getsentry/sentry-python) - Sentry SDK for Python.\n  - [sh](https://github.com/amoffat/sh) - A full-fledged subprocess replacement for Python.\n  - [supervisor](https://github.com/Supervisor/supervisor) - Supervisor process control system for UNIX.\n- Other\n  - [borg](https://github.com/borgbackup/borg) - A deduplicating archiver with compression and encryption.\n  - [chaostoolkit](https://github.com/chaostoolkit/chaostoolkit) - A chaos engineering toolkit and orchestration framework for developers.\n  - [pre-commit](https://github.com/pre-commit/pre-commit) - A framework for managing and maintaining multi-language pre-commit hooks.\n\n## Distributed Computing\n\n_Frameworks and libraries for Distributed Computing._\n\n- Batch Processing\n  - 
[dask](https://github.com/dask/dask) - A flexible parallel computing library for analytic computing.\n  - [joblib](https://github.com/joblib/joblib) - A set of tools to provide lightweight pipelining in Python.\n  - [luigi](https://github.com/spotify/luigi) - A module that helps you build complex pipelines of batch jobs.\n  - [mpi4py](https://github.com/mpi4py/mpi4py) - Python bindings for MPI.\n  - [pyspark](https://github.com/apache/spark) - [Apache Spark](https://spark.apache.org/) Python API.\n  - [ray](https://github.com/ray-project/ray/) - A system for parallel and distributed Python that unifies the machine learning ecosystem.\n\n## Task Queues\n\n_Libraries for working with task queues._\n\n- [celery](https://github.com/celery/celery) - An asynchronous task queue/job queue based on distributed message passing.\n  - [flower](https://github.com/mher/flower) - Real-time monitor and web admin for Celery.\n- [dramatiq](https://github.com/Bogdanp/dramatiq) - A fast and reliable background task processing library for Python 3.\n- [huey](https://github.com/coleifer/huey) - Little multi-threaded task queue.\n- [rq](https://github.com/rq/rq) - Simple job queues for Python.\n\n## Job Schedulers\n\n_Libraries for scheduling jobs._\n\n- [airflow](https://github.com/apache/airflow) - A platform to programmatically author, schedule, and monitor workflows.\n- [apscheduler](https://github.com/agronholm/apscheduler) - A light but powerful in-process task scheduler that lets you schedule functions.\n- [dagster](https://github.com/dagster-io/dagster) - An orchestration platform for the development, production, and observation of data assets.\n- [prefect](https://github.com/PrefectHQ/prefect) - A modern workflow orchestration framework that makes it easy to build, schedule and monitor robust data pipelines.\n- [schedule](https://github.com/dbader/schedule) - Python job scheduling for humans.\n- [SpiffWorkflow](https://github.com/sartography/SpiffWorkflow) - A powerful 
workflow engine implemented in pure Python.\n\n## Logging\n\n_Libraries for generating and working with logs._\n\n- [logging](https://docs.python.org/3/library/logging.html) - (Python standard library) Logging facility for Python.\n- [loguru](https://github.com/Delgan/loguru) - Library which aims to bring enjoyable logging in Python.\n- [structlog](https://github.com/hynek/structlog) - Structured logging made easy.\n\n## Network Virtualization\n\n_Tools and libraries for Virtual Networking and SDN (Software Defined Networking)._\n\n- [mininet](https://github.com/mininet/mininet) - A popular network emulator and API written in Python.\n- [napalm](https://github.com/napalm-automation/napalm) - Cross-vendor API to manipulate network devices.\n- [scapy](https://github.com/secdev/scapy) - A brilliant packet manipulation library.\n\n**CLI & GUI**\n\n## Command-line Interface Development\n\n_Libraries for building command-line applications._\n\n- Command-line Application Development\n  - [argparse](https://docs.python.org/3/library/argparse.html) - (Python standard library) Command-line option and argument parsing.\n  - [cement](https://github.com/datafolklabs/cement) - CLI Application Framework for Python.\n  - [click](https://github.com/pallets/click/) - A package for creating beautiful command line interfaces in a composable way.\n  - [python-fire](https://github.com/google/python-fire) - A library for creating command line interfaces from absolutely any Python object.\n  - [python-prompt-toolkit](https://github.com/prompt-toolkit/python-prompt-toolkit) - A library for building powerful interactive command lines.\n  - [typer](https://github.com/fastapi/typer) - Modern CLI framework that uses Python type hints. 
Built on Click.\n- Terminal Rendering\n  - [alive-progress](https://github.com/rsalmei/alive-progress) - A new kind of progress bar, with real-time throughput, ETA, and very cool animations.\n  - [asciimatics](https://github.com/peterbrittain/asciimatics) - A package to create full-screen text UIs (from interactive forms to ASCII animations).\n  - [colorama](https://github.com/tartley/colorama) - Cross-platform colored terminal text.\n  - [rich](https://github.com/Textualize/rich) - Python library for rich text and beautiful formatting in the terminal. Also provides a great `RichHandler` log handler.\n  - [textual](https://github.com/Textualize/textual) - A framework for building interactive user interfaces that run in the terminal and the browser.\n  - [tqdm](https://github.com/tqdm/tqdm) - Fast, extensible progress bar for loops and CLI.\n\n## Command-line Tools\n\n_Useful CLI-based tools for productivity._\n\n- Productivity Tools\n  - [cookiecutter](https://github.com/cookiecutter/cookiecutter) - A command-line utility that creates projects from cookiecutters (project templates).\n  - [copier](https://github.com/copier-org/copier) - A library and command-line utility for rendering project templates.\n  - [doitlive](https://github.com/sloria/doitlive) - A tool for live presentations in the terminal.\n  - [thefuck](https://github.com/nvbn/thefuck) - Corrects your previous console command.\n  - [tmuxp](https://github.com/tmux-python/tmuxp) - A [tmux](https://github.com/tmux/tmux) session manager.\n  - [xonsh](https://github.com/xonsh/xonsh/) - A Python-powered shell. 
Full-featured and cross-platform.\n  - [yt-dlp](https://github.com/yt-dlp/yt-dlp) - A command-line program to download videos from YouTube and other video sites, a fork of youtube-dl.\n- CLI Enhancements\n  - [httpie](https://github.com/httpie/cli) - A command line HTTP client, a user-friendly cURL replacement.\n  - [iredis](https://github.com/laixintao/iredis) - Redis CLI with autocompletion and syntax highlighting.\n  - [litecli](https://github.com/dbcli/litecli) - SQLite CLI with autocompletion and syntax highlighting.\n  - [mycli](https://github.com/dbcli/mycli) - MySQL CLI with autocompletion and syntax highlighting.\n  - [pgcli](https://github.com/dbcli/pgcli) - PostgreSQL CLI with autocompletion and syntax highlighting.\n\n## GUI Development\n\n_Libraries for working with graphical user interface applications._\n\n- Desktop\n  - [customtkinter](https://github.com/tomschimansky/customtkinter) - A modern and customizable Python UI library based on Tkinter.\n  - [dearpygui](https://github.com/hoffstadt/DearPyGui) - A simple GPU-accelerated Python GUI framework.\n  - [enaml](https://github.com/nucleic/enaml) - Create beautiful user interfaces with a declarative syntax similar to QML.\n  - [kivy](https://github.com/kivy/kivy) - A library for creating NUI applications, running on Windows, Linux, macOS, Android and iOS.\n  - [pyglet](https://github.com/pyglet/pyglet) - A cross-platform windowing and multimedia library for Python.\n  - [pygobject](https://github.com/GNOME/pygobject) - Python Bindings for GLib/GObject/GIO/GTK+ (GTK+3).\n  - [PyQt](https://www.riverbankcomputing.com/static/Docs/PyQt6/) - Python bindings for the [Qt](https://www.qt.io/) cross-platform application and UI framework.\n  - [pyside](https://github.com/pyside/pyside-setup) - The official Python bindings for [Qt](https://www.qt.io/) (Qt for Python); similar to PyQt but with different licensing.\n  - [tkinter](https://docs.python.org/3/library/tkinter.html) - 
(Python standard library) The standard Python interface to the Tcl/Tk GUI toolkit.\n  - [toga](https://github.com/beeware/toga) - A Python native, OS native GUI toolkit.\n  - [wxPython](https://github.com/wxWidgets/Phoenix) - A blending of the wxWidgets C++ class library with Python.\n- Web-based\n  - [flet](https://github.com/flet-dev/flet) - Cross-platform GUI framework for building modern apps in pure Python.\n  - [nicegui](https://github.com/zauberzeug/nicegui) - An easy-to-use, Python-based UI framework, which shows up in your web browser.\n  - [pywebview](https://github.com/r0x0r/pywebview/) - A lightweight cross-platform native wrapper around a webview component.\n- Terminal\n  - [curses](https://docs.python.org/3/library/curses.html) - Built-in wrapper for [ncurses](http://www.gnu.org/software/ncurses/) used to create terminal GUI applications.\n  - [urwid](https://github.com/urwid/urwid) - A library for creating terminal GUI applications with strong support for widgets, events, rich colors, etc.\n- Wrappers\n  - [gooey](https://github.com/chriskiehl/Gooey) - Turn command line programs into a full GUI application with one line.\n\n**Text & Documents**\n\n## Text Processing\n\n_Libraries for parsing and manipulating plain text._\n\n- General\n  - [babel](https://github.com/python-babel/babel) - An internationalization library for Python.\n  - [chardet](https://github.com/chardet/chardet) - Python 2/3 compatible character encoding detector.\n  - [difflib](https://docs.python.org/3/library/difflib.html) - (Python standard library) Helpers for computing deltas.\n  - [ftfy](https://github.com/rspeer/python-ftfy) - Makes Unicode text less broken and more consistent automagically.\n  - [pangu.py](https://github.com/vinta/pangu.py) - Paranoid text spacing.\n  - [pyfiglet](https://github.com/pwaller/pyfiglet) - An implementation of figlet written in Python.\n  - [pypinyin](https://github.com/mozillazg/python-pinyin) - Convert Chinese hanzi (漢字) to pinyin 
(拼音).\n  - [python-slugify](https://github.com/un33k/python-slugify) - A Python slugify library that translates Unicode to ASCII.\n  - [textdistance](https://github.com/life4/textdistance) - Compute distance between sequences with 30+ algorithms.\n  - [unidecode](https://github.com/avian2/unidecode) - ASCII transliterations of Unicode text.\n- Unique identifiers\n  - [sqids](https://github.com/sqids/sqids-python) - A library for generating short unique IDs from numbers.\n  - [shortuuid](https://github.com/skorokithakis/shortuuid) - A generator library for concise, unambiguous and URL-safe UUIDs.\n- Parser\n  - [pygments](https://github.com/pygments/pygments) - A generic syntax highlighter.\n  - [pyparsing](https://github.com/pyparsing/pyparsing) - A general-purpose framework for generating parsers.\n  - [python-nameparser](https://github.com/derek73/python-nameparser) - Parsing human names into their individual components.\n  - [python-phonenumbers](https://github.com/daviddrysdale/python-phonenumbers) - Parsing, formatting, storing and validating international phone numbers.\n  - [python-user-agents](https://github.com/selwin/python-user-agents) - Browser user agent parser.\n  - [sqlparse](https://github.com/andialbrecht/sqlparse) - A non-validating SQL parser.\n\n## HTML Manipulation\n\n_Libraries for working with HTML and XML._\n\n- [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) - Providing Pythonic idioms for iterating, searching, and modifying HTML or XML.\n- [cssutils](https://github.com/jaraco/cssutils) - A CSS library for Python.\n- [justhtml](https://github.com/EmilStenstrom/justhtml/) - A pure Python HTML5 parser that just works.\n- [lxml](https://github.com/lxml/lxml) - A very fast, easy-to-use and versatile library for handling HTML and XML.\n- [markupsafe](https://github.com/pallets/markupsafe) - Implements an XML/HTML/XHTML markup-safe string for Python.\n- [pyquery](https://github.com/gawel/pyquery) - A jQuery-like library for 
parsing HTML.\n- [xmltodict](https://github.com/martinblech/xmltodict) - Makes working with XML feel like you are working with JSON.\n\n## File Format Processing\n\n_Libraries for parsing and manipulating specific text formats._\n\n- General\n  - [docling](https://github.com/docling-project/docling) - Library for converting documents into structured data.\n  - [kreuzberg](https://github.com/kreuzberg-dev/kreuzberg) - High-performance document extraction library with a Rust core, supporting 62+ formats including PDF, Office, images with OCR, HTML, email, and archives.\n  - [pyelftools](https://github.com/eliben/pyelftools) - Parsing and analyzing ELF files and DWARF debugging information.\n  - [tablib](https://github.com/jazzband/tablib) - A module for Tabular Datasets in XLS, CSV, JSON, YAML.\n- Office\n  - [docxtpl](https://github.com/elapouya/python-docx-template) - Editing a docx document with a jinja2 template.\n  - [openpyxl](https://openpyxl.readthedocs.io/en/stable/) - A library for reading and writing Excel 2010 xlsx/xlsm/xltx/xltm files.\n  - [pyexcel](https://github.com/pyexcel/pyexcel) - Providing one API for reading, manipulating and writing csv, ods, xls, xlsx and xlsm files.\n  - [python-docx](https://github.com/python-openxml/python-docx) - Reads, queries and modifies Microsoft Word 2007/2008 docx files.\n  - [python-pptx](https://github.com/scanny/python-pptx) - Python library for creating and updating PowerPoint (.pptx) files.\n  - [xlsxwriter](https://github.com/jmcnamara/XlsxWriter) - A Python module for creating Excel .xlsx files.\n  - [xlwings](https://github.com/ZoomerAnalytics/xlwings) - A BSD-licensed library that makes it easy to call Python from Excel and vice versa.\n- PDF\n  - [pdf_oxide](https://github.com/yfedoseev/pdf_oxide) - A fast PDF library for text extraction, image extraction, and markdown conversion, powered by Rust.\n  - [pdfminer.six](https://github.com/pdfminer/pdfminer.six) - Pdfminer.six is a community-maintained fork of the 
original PDFMiner.\n  - [pikepdf](https://github.com/pikepdf/pikepdf) - A powerful library for reading and editing PDF files, based on qpdf.\n  - [pypdf](https://github.com/py-pdf/pypdf) - A library capable of splitting, merging, cropping, and transforming PDF pages.\n  - [reportlab](https://www.reportlab.com/opensource/) - Allowing rapid creation of rich PDF documents.\n  - [weasyprint](https://github.com/Kozea/WeasyPrint) - A visual rendering engine for HTML and CSS that can export to PDF.\n- Markdown\n  - [markdown-it-py](https://github.com/executablebooks/markdown-it-py) - Markdown parser with 100% CommonMark support, extensions, and syntax plugins.\n  - [markdown](https://github.com/waylan/Python-Markdown) - A Python implementation of John Gruber’s Markdown.\n  - [markitdown](https://github.com/microsoft/markitdown) - Python tool for converting files and office documents to Markdown.\n  - [mistune](https://github.com/lepture/mistune) - A fast and full-featured pure Python Markdown parser.\n- Data Formats\n  - [csvkit](https://github.com/wireservice/csvkit) - Utilities for converting to and working with CSV.\n  - [pyyaml](https://github.com/yaml/pyyaml) - A YAML implementation for Python.\n  - [tomllib](https://docs.python.org/3/library/tomllib.html) - (Python standard library) Parse TOML files.\n\n## File Manipulation\n\n_Libraries for file manipulation._\n\n- [mimetypes](https://docs.python.org/3/library/mimetypes.html) - (Python standard library) Map filenames to MIME types.\n- [pathlib](https://docs.python.org/3/library/pathlib.html) - (Python standard library) A cross-platform, object-oriented path library.\n- [python-magic](https://github.com/ahupp/python-magic) - A Python interface to the libmagic file type identification library.\n- [watchdog](https://github.com/gorakhargosh/watchdog) - API and shell utilities to monitor file system events.\n- [watchfiles](https://github.com/samuelcolvin/watchfiles) - Simple, modern and fast file watching and code 
reload in Python.\n\n**Media**\n\n## Image Processing\n\n_Libraries for manipulating images._\n\n- [pillow](https://github.com/python-pillow/Pillow) - Pillow is the friendly [PIL](http://www.pythonware.com/products/pil/) fork.\n- [pymatting](https://github.com/pymatting/pymatting) - A library for alpha matting.\n- [python-barcode](https://github.com/WhyNotHugo/python-barcode) - Create barcodes in Python with no extra dependencies.\n- [python-qrcode](https://github.com/lincolnloop/python-qrcode) - A pure Python QR Code generator.\n- [pyvips](https://github.com/libvips/pyvips) - A fast image processing library with low memory needs.\n- [scikit-image](https://github.com/scikit-image/scikit-image) - A Python library for (scientific) image processing.\n- [thumbor](https://github.com/thumbor/thumbor) - A smart imaging service. It enables on-demand cropping, resizing and flipping of images.\n- [wand](https://github.com/emcconville/wand) - Python bindings for [MagickWand](http://www.imagemagick.org/script/magick-wand.php), the C API for ImageMagick.\n\n## Audio & Video Processing\n\n_Libraries for manipulating audio, video, and their metadata._\n\n- Audio\n  - [gtts](https://github.com/pndurette/gTTS) - Python library and CLI tool for converting text to speech using Google Translate TTS.\n  - [librosa](https://github.com/librosa/librosa) - Python library for audio and music analysis.\n  - [matchering](https://github.com/sergree/matchering) - A library for automated reference audio mastering.\n  - [pydub](https://github.com/jiaaro/pydub) - Manipulate audio with a simple and easy high-level interface.\n- Video\n  - [moviepy](https://github.com/Zulko/moviepy) - A module for script-based movie editing with many formats, including animated GIFs.\n  - [vidgear](https://github.com/abhiTronix/vidgear) - A powerful multi-threaded video processing framework.\n- Metadata\n  - [beets](https://github.com/beetbox/beets) - A music library manager and [MusicBrainz](https://musicbrainz.org/) 
tagger.\n  - [mutagen](https://github.com/quodlibet/mutagen) - A Python module to handle audio metadata.\n  - [tinytag](https://github.com/devsnd/tinytag) - A library for reading music metadata from MP3, OGG, FLAC and Wave files.\n\n## Game Development\n\n_Awesome game development libraries._\n\n- [arcade](https://github.com/pythonarcade/arcade) - Arcade is a modern Python framework for crafting games with compelling graphics and sound.\n- [panda3d](https://github.com/panda3d/panda3d) - 3D game engine developed by Disney.\n- [py-sdl2](https://github.com/py-sdl/py-sdl2) - A ctypes-based wrapper for the SDL2 library.\n- [pygame](https://github.com/pygame/pygame) - Pygame is a set of Python modules designed for writing games.\n- [pyopengl](https://github.com/mcfletch/pyopengl) - Python ctypes bindings for OpenGL and its related APIs.\n- [renpy](https://github.com/renpy/renpy) - A Visual Novel engine.\n\n**Python Language**\n\n## Implementations\n\n_Implementations of Python._\n\n- [cpython](https://github.com/python/cpython) - Default, most widely used implementation of the Python programming language written in C.\n- [cython](https://github.com/cython/cython) - An optimizing static compiler for Python.\n- [ironpython](https://github.com/IronLanguages/ironpython3) - Implementation of the Python programming language written in C#.\n- [micropython](https://github.com/micropython/micropython) - A lean and efficient Python programming language implementation.\n- [pypy](https://github.com/pypy/pypy) - A very fast and compliant implementation of the Python language.\n\n## Built-in Classes Enhancement\n\n_Libraries for enhancing Python built-in classes._\n\n- [attrs](https://github.com/python-attrs/attrs) - Replacement for `__init__`, `__eq__`, `__repr__`, etc. 
boilerplate in class definitions.\n- [bidict](https://github.com/jab/bidict) - Efficient, Pythonic bidirectional map data structures and related functionality.\n- [box](https://github.com/cdgriffith/Box) - Python dictionaries with advanced dot notation access.\n\n## Functional Programming\n\n_Functional programming with Python._\n\n- [coconut](https://github.com/evhub/coconut) - A variant of Python built for simple, elegant, Pythonic functional programming.\n- [functools](https://docs.python.org/3/library/functools.html) - (Python standard library) Higher-order functions and operations on callable objects.\n- [funcy](https://github.com/Suor/funcy) - A collection of fancy and practical functional tools.\n- [more-itertools](https://github.com/erikrose/more-itertools) - More routines for operating on iterables, beyond `itertools`.\n- [returns](https://github.com/dry-python/returns) - A set of type-safe monads, transformers, and composition utilities.\n- [toolz](https://github.com/pytoolz/toolz) - A collection of functional utilities for iterators, functions, and dictionaries. Also available as [cytoolz](https://github.com/pytoolz/cytoolz/) for Cython-accelerated performance.\n\n## Asynchronous Programming\n\n_Libraries for asynchronous, concurrent and parallel execution. 
Also see [awesome-asyncio](https://github.com/timofurrer/awesome-asyncio)._\n\n- [anyio](https://github.com/agronholm/anyio) - A high-level async concurrency and networking framework that works on top of asyncio or trio.\n- [asyncio](https://docs.python.org/3/library/asyncio.html) - (Python standard library) Asynchronous I/O, event loop, coroutines and tasks.\n- [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html) - (Python standard library) A high-level interface for asynchronously executing callables.\n- [gevent](https://github.com/gevent/gevent) - A coroutine-based Python networking library that uses [greenlet](https://github.com/python-greenlet/greenlet).\n- [multiprocessing](https://docs.python.org/3/library/multiprocessing.html) - (Python standard library) Process-based parallelism.\n- [trio](https://github.com/python-trio/trio) - A friendly library for async concurrency and I/O.\n- [twisted](https://github.com/twisted/twisted) - An event-driven networking engine.\n- [uvloop](https://github.com/MagicStack/uvloop) - Ultra fast asyncio event loop.\n\n## Date and Time\n\n_Libraries for working with dates and times._\n\n- [dateparser](https://github.com/scrapinghub/dateparser) - A Python parser for human-readable dates in dozens of languages.\n- [dateutil](https://github.com/dateutil/dateutil) - Extensions to the standard Python [datetime](https://docs.python.org/3/library/datetime.html) module.\n- [pendulum](https://github.com/python-pendulum/pendulum) - Python datetimes made easy.\n- [zoneinfo](https://docs.python.org/3/library/zoneinfo.html) - (Python standard library) IANA time zone support. 
Brings the [tz database](https://en.wikipedia.org/wiki/Tz_database) into Python.\n\n**Python Toolchain**\n\n## Environment Management\n\n_Libraries for Python version and virtual environment management._\n\n- [pyenv](https://github.com/pyenv/pyenv) - Simple Python version management.\n  - [pyenv-win](https://github.com/pyenv-win/pyenv-win) - pyenv for Windows: simple Python version management.\n- [uv](https://github.com/astral-sh/uv) - An extremely fast Python version, package and project manager, written in Rust.\n- [virtualenv](https://github.com/pypa/virtualenv) - A tool to create isolated Python environments.\n\n## Package Management\n\n_Libraries for package and dependency management._\n\n- [conda](https://github.com/conda/conda/) - Cross-platform, Python-agnostic binary package manager.\n- [pip](https://github.com/pypa/pip) - The package installer for Python.\n- [pipx](https://github.com/pypa/pipx) - Install and run Python applications in isolated environments, like `npx` in Node.js.\n- [poetry](https://github.com/python-poetry/poetry) - Python dependency management and packaging made easy.\n- [uv](https://github.com/astral-sh/uv) - An extremely fast Python version, package and project manager, written in Rust.\n\n## Package Repositories\n\n_Local PyPI repository servers and proxies._\n\n- [bandersnatch](https://github.com/pypa/bandersnatch/) - PyPI mirroring tool provided by the Python Packaging Authority (PyPA).\n- [devpi](https://github.com/devpi/devpi) - PyPI server and packaging/testing/release tool.\n- [warehouse](https://github.com/pypa/warehouse) - Next generation Python Package Repository (PyPI).\n\n## Distribution\n\n_Libraries to create packaged executables for release distribution._\n\n- [cx-Freeze](https://github.com/marcelotduarte/cx_Freeze) - Converts Python scripts into standalone executables and installers for Windows, macOS, and Linux.\n- [Nuitka](https://github.com/Nuitka/Nuitka) - Compiles Python programs into 
high-performance standalone executables (cross-platform, supports all Python versions).\n- [pyarmor](https://github.com/dashingsoft/pyarmor) - A tool to obfuscate Python scripts, bind obfuscated scripts to a fixed machine, or expire obfuscated scripts.\n- [pyinstaller](https://github.com/pyinstaller/pyinstaller) - Converts Python programs into stand-alone executables (cross-platform).\n- [shiv](https://github.com/linkedin/shiv) - A command line utility for building fully self-contained zipapps (PEP 441) with all their dependencies included.\n\n## Configuration Files\n\n_Libraries for storing and parsing configuration options._\n\n- [configparser](https://docs.python.org/3/library/configparser.html) - (Python standard library) INI file parser.\n- [dynaconf](https://github.com/dynaconf/dynaconf) - Dynaconf is a configuration manager with plugins for Django, Flask and FastAPI.\n- [hydra](https://github.com/facebookresearch/hydra) - Hydra is a framework for elegantly configuring complex applications.\n- [python-decouple](https://github.com/HBNetwork/python-decouple) - Strict separation of settings from code.\n- [python-dotenv](https://github.com/theskumar/python-dotenv) - Reads key-value pairs from a `.env` file and sets them as environment variables.\n\n**Security**\n\n## Cryptography\n\n- [cryptography](https://github.com/pyca/cryptography) - A package designed to expose cryptographic primitives and recipes to Python developers.\n- [paramiko](https://github.com/paramiko/paramiko) - The leading native Python SSHv2 protocol library.\n- [pynacl](https://github.com/pyca/pynacl) - Python binding to the Networking and Cryptography (NaCl) library.\n\n## Penetration Testing\n\n_Frameworks and tools for penetration testing._\n\n- [mitmproxy](https://github.com/mitmproxy/mitmproxy) - An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.\n- [setoolkit](https://github.com/trustedsec/social-engineer-toolkit) - A toolkit for 
social engineering.\n- [sherlock](https://github.com/sherlock-project/sherlock) - Hunt down social media accounts by username across social networks.\n- [sqlmap](https://github.com/sqlmapproject/sqlmap) - Automatic SQL injection and database takeover tool.\n\n**Miscellaneous**\n\n## Hardware\n\n_Libraries for programming with hardware._\n\n- [bleak](https://github.com/hbldh/bleak) - A cross-platform Bluetooth Low Energy client for Python using asyncio.\n- [pynput](https://github.com/moses-palmer/pynput) - A library to control and monitor input devices.\n\n## Microsoft Windows\n\n_Python programming on Microsoft Windows._\n\n- [pythonnet](https://github.com/pythonnet/pythonnet) - Python integration with the .NET Common Language Runtime (CLR).\n- [pywin32](https://github.com/mhammond/pywin32) - Python extensions for Windows.\n- [winpython](https://github.com/winpython/winpython) - Portable development environment for Windows 10/11.\n\n## Miscellaneous\n\n_Useful libraries or tools that don't fit in the categories above._\n\n- [blinker](https://github.com/jek/blinker) - A fast Python in-process signal/event dispatching system.\n- [boltons](https://github.com/mahmoud/boltons) - A set of pure-Python utilities.\n- [itsdangerous](https://github.com/pallets/itsdangerous) - Various helpers to pass trusted data to untrusted environments.\n- [tryton](https://github.com/tryton/tryton) - A general-purpose business framework.\n\n# Resources\n\nWhere to discover learning resources or new Python libraries.\n\n## Newsletters\n\n- [Awesome Python Newsletter](http://python.libhunt.com/newsletter)\n- [Pycoder's Weekly](https://pycoders.com/)\n- [Python Tricks](https://realpython.com/python-tricks/)\n- [Python Weekly](https://www.pythonweekly.com/)\n\n## Podcasts\n\n- [Django Chat](https://djangochat.com/)\n- [PyPodcats](https://pypodcats.live)\n- [Python Bytes](https://pythonbytes.fm)\n- [Python Test](https://podcast.pythontest.com/)\n- [Talk Python To Me](https://talkpython.fm/)\n- 
[The Real Python Podcast](https://realpython.com/podcasts/rpp/)\n\n# Contributing\n\nYour contributions are always welcome! Please take a look at the [contribution guidelines](https://github.com/vinta/awesome-python/blob/master/CONTRIBUTING.md) first.\n\n---\n\nIf you have any questions about this opinionated list, do not hesitate to contact [@VintaChen](https://twitter.com/VintaChen) on Twitter.\n"
  },
  {
    "path": "SPONSORSHIP.md",
    "content": "# Sponsor awesome-python\n\n**The #10 most-starred repository on all of GitHub.**\n\nawesome-python is where Python developers go to discover tools. When someone searches Google for \"best Python libraries,\" they land here. When ChatGPT recommends Python tools, it references this list. When developers evaluate frameworks, this is the list they check.\n\nYour sponsorship puts your product in front of developers at the exact moment they're choosing what to use.\n\n## By the Numbers\n\n| Metric       | Value                                                                                                |\n| ------------ | ---------------------------------------------------------------------------------------------------- |\n| Stars        | ![Stars](https://img.shields.io/github/stars/vinta/awesome-python?style=for-the-badge)               |\n| Forks        | ![Forks](https://img.shields.io/github/forks/vinta/awesome-python?style=for-the-badge)               |\n| Watchers     | ![Watchers](https://img.shields.io/github/watchers/vinta/awesome-python?style=for-the-badge)         |\n| Contributors | ![Contributors](https://img.shields.io/github/contributors/vinta/awesome-python?style=for-the-badge) |\n\nTop referrers: GitHub, Google Search, YouTube, Reddit, ChatGPT — developers actively searching for and evaluating Python tools.\n\n## Sponsorship Tiers\n\n### Logo Sponsor — $500/month (2 slots)\n\nYour logo and a one-line description at the top of the README, seen by every visitor.\n\n### Link Sponsor — $150/month (5 slots)\n\nA text link with your product name at the top of the README, right below logo sponsors.\n\n## Past Sponsors\n\n- [Warp](https://www.warp.dev/) - https://github.com/vinta/awesome-python/pull/2766\n\n## Get Started\n\nEmail [vinta.chen@gmail.com](mailto:vinta.chen@gmail.com?subject=awesome-python%20Sponsorship) with your company name and preferred tier. Most sponsors are set up within 24 hours.\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[project]\nname = \"awesome-python\"\nversion = \"0.1.0\"\ndescription = \"An opinionated list of awesome Python frameworks, libraries, software and resources.\"\nauthors = [{ name = \"Vinta Chen\", email = \"vinta.chen@gmail.com\" }]\nreadme = \"README.md\"\nlicense = \"MIT\"\nrequires-python = \">=3.13\"\ndependencies = []\n\n[project.urls]\nHomepage = \"https://awesome-python.com/\"\nRepository = \"https://github.com/vinta/awesome-python\"\n\n[dependency-groups]\nbuild = [\"httpx==0.28.1\", \"jinja2==3.1.6\", \"markdown-it-py==4.0.0\"]\nlint = [\"ruff==0.15.6\"]\ntest = [\"pytest==9.0.2\"]\ndev = [\n    { include-group = \"build\" },\n    { include-group = \"lint\" },\n    { include-group = \"test\" },\n    \"watchdog==6.0.0\",\n]\n\n[tool.pytest.ini_options]\ntestpaths = [\"website/tests\"]\npythonpath = [\"website\"]\n\n[tool.ruff]\nline-length = 200\n"
  },
  {
    "path": "website/build.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Build a single-page HTML site from README.md for the awesome-python website.\"\"\"\n\nimport json\nimport re\nimport shutil\nfrom pathlib import Path\nfrom typing import TypedDict\n\nfrom jinja2 import Environment, FileSystemLoader\nfrom readme_parser import parse_readme, slugify\n\n\ndef group_categories(\n    parsed_groups: list[dict],\n    resources: list[dict],\n) -> list[dict]:\n    \"\"\"Combine parsed groups with resources for template rendering.\"\"\"\n    groups = list(parsed_groups)\n\n    if resources:\n        groups.append(\n            {\n                \"name\": \"Resources\",\n                \"slug\": slugify(\"Resources\"),\n                \"categories\": list(resources),\n            }\n        )\n\n    return groups\n\n\nclass Entry(TypedDict):\n    name: str\n    url: str\n    description: str\n    category: str\n    group: str\n    stars: int | None\n    owner: str | None\n    last_commit_at: str | None\n\n\nclass StarData(TypedDict):\n    stars: int\n    owner: str\n    last_commit_at: str\n    fetched_at: str\n\n\nGITHUB_REPO_URL_RE = re.compile(r\"^https?://github\\.com/([^/]+/[^/]+?)(?:\\.git)?/?$\")\n\n\ndef extract_github_repo(url: str) -> str | None:\n    \"\"\"Extract owner/repo from a GitHub repo URL. Returns None for non-GitHub URLs.\"\"\"\n    m = GITHUB_REPO_URL_RE.match(url)\n    return m.group(1) if m else None\n\n\ndef load_stars(path: Path) -> dict[str, StarData]:\n    \"\"\"Load star data from JSON. Returns empty dict if file doesn't exist or is corrupt.\"\"\"\n    if path.exists():\n        try:\n            return json.loads(path.read_text(encoding=\"utf-8\"))\n        except json.JSONDecodeError:\n            return {}\n    return {}\n\n\ndef sort_entries(entries: list[dict]) -> list[dict]:\n    \"\"\"Sort entries by stars descending, then name ascending. 
No-star entries go last.\"\"\"\n\n    def sort_key(entry: dict) -> tuple[int, int, str]:\n        stars = entry[\"stars\"]\n        name = entry[\"name\"].lower()\n        if stars is None:\n            return (1, 0, name)\n        return (0, -stars, name)\n\n    return sorted(entries, key=sort_key)\n\n\ndef extract_entries(\n    categories: list[dict],\n    groups: list[dict],\n) -> list[dict]:\n    \"\"\"Flatten categories into individual library entries for table display.\n\n    Entries appearing in multiple categories are merged into a single entry\n    with lists of categories and groups.\n    \"\"\"\n    cat_to_group: dict[str, str] = {}\n    for group in groups:\n        for cat in group[\"categories\"]:\n            cat_to_group[cat[\"name\"]] = group[\"name\"]\n\n    seen: dict[str, dict] = {}  # url -> entry\n    entries: list[dict] = []\n    for cat in categories:\n        group_name = cat_to_group.get(cat[\"name\"], \"Other\")\n        for entry in cat[\"entries\"]:\n            url = entry[\"url\"]\n            if url in seen:\n                existing = seen[url]\n                if cat[\"name\"] not in existing[\"categories\"]:\n                    existing[\"categories\"].append(cat[\"name\"])\n                if group_name not in existing[\"groups\"]:\n                    existing[\"groups\"].append(group_name)\n            else:\n                merged = {\n                    \"name\": entry[\"name\"],\n                    \"url\": url,\n                    \"description\": entry[\"description\"],\n                    \"categories\": [cat[\"name\"]],\n                    \"groups\": [group_name],\n                    \"stars\": None,\n                    \"owner\": None,\n                    \"last_commit_at\": None,\n                    \"also_see\": entry[\"also_see\"],\n                }\n                seen[url] = merged\n                entries.append(merged)\n    return entries\n\n\ndef build(repo_root: str) -> None:\n    \"\"\"Main build: 
parse README, render single-page HTML via Jinja2 templates.\"\"\"\n    repo = Path(repo_root)\n    website = repo / \"website\"\n    readme_text = (repo / \"README.md\").read_text(encoding=\"utf-8\")\n\n    subtitle = \"\"\n    for line in readme_text.split(\"\\n\"):\n        stripped = line.strip()\n        if stripped and not stripped.startswith(\"#\"):\n            subtitle = stripped\n            break\n\n    parsed_groups, resources = parse_readme(readme_text)\n\n    categories = [cat for g in parsed_groups for cat in g[\"categories\"]]\n    total_entries = sum(c[\"entry_count\"] for c in categories)\n    groups = group_categories(parsed_groups, resources)\n    entries = extract_entries(categories, groups)\n\n    stars_data = load_stars(website / \"data\" / \"github_stars.json\")\n    for entry in entries:\n        repo_key = extract_github_repo(entry[\"url\"])\n        if repo_key and repo_key in stars_data:\n            sd = stars_data[repo_key]\n            entry[\"stars\"] = sd[\"stars\"]\n            entry[\"owner\"] = sd[\"owner\"]\n            entry[\"last_commit_at\"] = sd.get(\"last_commit_at\", \"\")\n\n    entries = sort_entries(entries)\n\n    env = Environment(\n        loader=FileSystemLoader(website / \"templates\"),\n        autoescape=True,\n    )\n\n    site_dir = website / \"output\"\n    if site_dir.exists():\n        shutil.rmtree(site_dir)\n    site_dir.mkdir(parents=True)\n\n    tpl_index = env.get_template(\"index.html\")\n    (site_dir / \"index.html\").write_text(\n        tpl_index.render(\n            categories=categories,\n            resources=resources,\n            groups=groups,\n            subtitle=subtitle,\n            entries=entries,\n            total_entries=total_entries,\n            total_categories=len(categories),\n        ),\n        encoding=\"utf-8\",\n    )\n\n    static_src = website / \"static\"\n    static_dst = site_dir / \"static\"\n    if static_src.exists():\n        shutil.copytree(static_src, 
static_dst, dirs_exist_ok=True)\n\n    shutil.copy(repo / \"README.md\", site_dir / \"llms.txt\")\n\n    print(f\"Built single page with {len(parsed_groups)} groups, {len(categories)} categories + {len(resources)} resources\")\n    print(f\"Total entries: {total_entries}\")\n    print(f\"Output: {site_dir}\")\n\n\nif __name__ == \"__main__\":\n    build(str(Path(__file__).parent.parent))\n"
  },
  {
    "path": "website/fetch_github_stars.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Fetch GitHub star counts and owner info for all GitHub repos in README.md.\"\"\"\n\nimport json\nimport os\nimport re\nimport sys\nfrom datetime import datetime, timezone\nfrom pathlib import Path\n\nimport httpx\n\nfrom build import extract_github_repo, load_stars\n\nCACHE_MAX_AGE_HOURS = 12\nDATA_DIR = Path(__file__).parent / \"data\"\nCACHE_FILE = DATA_DIR / \"github_stars.json\"\nREADME_PATH = Path(__file__).parent.parent / \"README.md\"\nGRAPHQL_URL = \"https://api.github.com/graphql\"\nBATCH_SIZE = 50\n\n\ndef extract_github_repos(text: str) -> set[str]:\n    \"\"\"Extract unique owner/repo pairs from GitHub URLs in markdown text.\"\"\"\n    repos = set()\n    for url in re.findall(r\"https?://github\\.com/[^\\s)\\]]+\", text):\n        repo = extract_github_repo(url.split(\"#\")[0].rstrip(\"/\"))\n        if repo:\n            repos.add(repo)\n    return repos\n\n\ndef save_cache(cache: dict) -> None:\n    \"\"\"Write the star cache to disk, creating data/ dir if needed.\"\"\"\n    DATA_DIR.mkdir(parents=True, exist_ok=True)\n    CACHE_FILE.write_text(\n        json.dumps(cache, indent=2, ensure_ascii=False) + \"\\n\",\n        encoding=\"utf-8\",\n    )\n\n\ndef build_graphql_query(repos: list[str]) -> str:\n    \"\"\"Build a GraphQL query with aliases for up to 100 repos.\"\"\"\n    if not repos:\n        return \"\"\n    parts = []\n    for i, repo in enumerate(repos):\n        owner, name = repo.split(\"/\", 1)\n        if '\"' in owner or '\"' in name:\n            continue\n        parts.append(\n            f'repo_{i}: repository(owner: \"{owner}\", name: \"{name}\") '\n            f\"{{ stargazerCount owner {{ login }} defaultBranchRef {{ target {{ ... 
on Commit {{ committedDate }} }} }} }}\"\n        )\n    if not parts:\n        return \"\"\n    return \"query { \" + \" \".join(parts) + \" }\"\n\n\ndef parse_graphql_response(\n    data: dict,\n    repos: list[str],\n) -> dict[str, dict]:\n    \"\"\"Parse GraphQL response into {owner/repo: {stars, owner}} dict.\"\"\"\n    result = {}\n    for i, repo in enumerate(repos):\n        node = data.get(f\"repo_{i}\")\n        if node is None:\n            continue\n        default_branch = node.get(\"defaultBranchRef\") or {}\n        target = default_branch.get(\"target\") or {}\n        result[repo] = {\n            \"stars\": node.get(\"stargazerCount\", 0),\n            \"owner\": node.get(\"owner\", {}).get(\"login\", \"\"),\n            \"last_commit_at\": target.get(\"committedDate\", \"\"),\n        }\n    return result\n\n\ndef fetch_batch(\n    repos: list[str], *, client: httpx.Client,\n) -> dict[str, dict]:\n    \"\"\"Fetch star data for a batch of repos via GitHub GraphQL API.\"\"\"\n    query = build_graphql_query(repos)\n    if not query:\n        return {}\n    resp = client.post(GRAPHQL_URL, json={\"query\": query})\n    resp.raise_for_status()\n    result = resp.json()\n    if \"errors\" in result:\n        for err in result[\"errors\"]:\n            print(f\"  Warning: {err.get('message', err)}\", file=sys.stderr)\n    data = result.get(\"data\", {})\n    return parse_graphql_response(data, repos)\n\n\ndef main() -> None:\n    \"\"\"Fetch GitHub stars for all repos in README.md, updating the JSON cache.\"\"\"\n    token = os.environ.get(\"GITHUB_TOKEN\", \"\")\n    if not token:\n        print(\"Error: GITHUB_TOKEN environment variable is required.\", file=sys.stderr)\n        sys.exit(1)\n\n    readme_text = README_PATH.read_text(encoding=\"utf-8\")\n    current_repos = extract_github_repos(readme_text)\n    print(f\"Found {len(current_repos)} GitHub repos in README.md\")\n\n    cache = load_stars(CACHE_FILE)\n    now = 
datetime.now(timezone.utc)\n\n    # Prune entries not in current README\n    pruned = {k: v for k, v in cache.items() if k in current_repos}\n    if len(pruned) < len(cache):\n        print(f\"Pruned {len(cache) - len(pruned)} stale cache entries\")\n    cache = pruned\n\n    # Determine which repos need fetching (missing or stale)\n    to_fetch = []\n    for repo in sorted(current_repos):\n        entry = cache.get(repo)\n        if entry and \"fetched_at\" in entry:\n            fetched = datetime.fromisoformat(entry[\"fetched_at\"])\n            age_hours = (now - fetched).total_seconds() / 3600\n            if age_hours < CACHE_MAX_AGE_HOURS:\n                continue\n        to_fetch.append(repo)\n\n    print(f\"{len(to_fetch)} repos to fetch ({len(current_repos) - len(to_fetch)} cached)\")\n\n    if not to_fetch:\n        save_cache(cache)\n        print(\"Cache is up to date.\")\n        return\n\n    # Fetch in batches\n    fetched_count = 0\n    skipped_repos: list[str] = []\n\n    with httpx.Client(\n        headers={\"Authorization\": f\"bearer {token}\", \"Content-Type\": \"application/json\"},\n        transport=httpx.HTTPTransport(retries=2),\n        timeout=30,\n    ) as client:\n        for i in range(0, len(to_fetch), BATCH_SIZE):\n            batch = to_fetch[i : i + BATCH_SIZE]\n            batch_num = i // BATCH_SIZE + 1\n            total_batches = (len(to_fetch) + BATCH_SIZE - 1) // BATCH_SIZE\n            print(f\"Fetching batch {batch_num}/{total_batches} ({len(batch)} repos)...\")\n\n            try:\n                results = fetch_batch(batch, client=client)\n            except httpx.HTTPStatusError as e:\n                print(f\"HTTP error {e.response.status_code}\", file=sys.stderr)\n                if e.response.status_code == 401:\n                    print(\"Error: Invalid GITHUB_TOKEN.\", file=sys.stderr)\n                    sys.exit(1)\n                print(\"Saving partial cache and exiting.\", file=sys.stderr)\n              
  save_cache(cache)\n                sys.exit(1)\n\n            now_iso = now.isoformat()\n            for repo in batch:\n                if repo in results:\n                    cache[repo] = {\n                        \"stars\": results[repo][\"stars\"],\n                        \"owner\": results[repo][\"owner\"],\n                        \"last_commit_at\": results[repo][\"last_commit_at\"],\n                        \"fetched_at\": now_iso,\n                    }\n                    fetched_count += 1\n                else:\n                    skipped_repos.append(repo)\n\n            # Save after each batch in case of interruption\n            save_cache(cache)\n\n    if skipped_repos:\n        print(f\"Skipped {len(skipped_repos)} repos (deleted/private/renamed)\")\n    print(f\"Done. Fetched {fetched_count} repos, {len(cache)} total cached.\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "website/readme_parser.py",
    "content": "\"\"\"Parse README.md into structured section data using markdown-it-py AST.\"\"\"\n\nfrom __future__ import annotations\n\nimport re\nfrom typing import TypedDict\n\nfrom markdown_it import MarkdownIt\nfrom markdown_it.tree import SyntaxTreeNode\nfrom markupsafe import escape\n\n\nclass AlsoSee(TypedDict):\n    name: str\n    url: str\n\n\nclass ParsedEntry(TypedDict):\n    name: str\n    url: str\n    description: str  # inline HTML, properly escaped\n    also_see: list[AlsoSee]\n\n\nclass ParsedSection(TypedDict):\n    name: str\n    slug: str\n    description: str  # plain text, links resolved to text\n    entries: list[ParsedEntry]\n    entry_count: int\n    preview: str\n    content_html: str  # rendered HTML, properly escaped\n\n\nclass ParsedGroup(TypedDict):\n    name: str\n    slug: str\n    categories: list[ParsedSection]\n\n\n# --- Slugify ----------------------------------------------------------------\n\n_SLUG_NON_ALNUM_RE = re.compile(r\"[^a-z0-9\\s-]\")\n_SLUG_WHITESPACE_RE = re.compile(r\"[\\s]+\")\n_SLUG_MULTI_DASH_RE = re.compile(r\"-+\")\n\n\ndef slugify(name: str) -> str:\n    \"\"\"Convert a category name to a URL-friendly slug.\"\"\"\n    slug = name.lower()\n    slug = _SLUG_NON_ALNUM_RE.sub(\"\", slug)\n    slug = _SLUG_WHITESPACE_RE.sub(\"-\", slug.strip())\n    slug = _SLUG_MULTI_DASH_RE.sub(\"-\", slug)\n    return slug\n\n\n# --- Inline renderers -------------------------------------------------------\n\n\ndef render_inline_html(children: list[SyntaxTreeNode]) -> str:\n    \"\"\"Render inline AST nodes to HTML with proper escaping.\"\"\"\n    parts: list[str] = []\n    for child in children:\n        match child.type:\n            case \"text\":\n                parts.append(str(escape(child.content)))\n            case \"softbreak\":\n                parts.append(\" \")\n            case \"link\":\n                href = str(escape(child.attrGet(\"href\") or \"\"))\n                inner = 
render_inline_html(child.children)\n                parts.append(\n                    f'<a href=\"{href}\" target=\"_blank\" rel=\"noopener\">{inner}</a>'\n                )\n            case \"em\":\n                parts.append(f\"<em>{render_inline_html(child.children)}</em>\")\n            case \"strong\":\n                parts.append(f\"<strong>{render_inline_html(child.children)}</strong>\")\n            case \"code_inline\":\n                parts.append(f\"<code>{escape(child.content)}</code>\")\n            case \"html_inline\":\n                parts.append(str(escape(child.content)))\n    return \"\".join(parts)\n\n\ndef render_inline_text(children: list[SyntaxTreeNode]) -> str:\n    \"\"\"Render inline AST nodes to plain text (links become their text).\"\"\"\n    parts: list[str] = []\n    for child in children:\n        match child.type:\n            case \"text\":\n                parts.append(child.content)\n            case \"softbreak\":\n                parts.append(\" \")\n            case \"code_inline\":\n                parts.append(child.content)\n            case \"em\" | \"strong\" | \"link\":\n                parts.append(render_inline_text(child.children))\n    return \"\".join(parts)\n\n\n# --- AST helpers -------------------------------------------------------------\n\n\ndef _heading_text(node: SyntaxTreeNode) -> str:\n    \"\"\"Extract plain text from a heading node.\"\"\"\n    for child in node.children:\n        if child.type == \"inline\":\n            return render_inline_text(child.children)\n    return \"\"\n\n\ndef _extract_description(nodes: list[SyntaxTreeNode]) -> str:\n    \"\"\"Extract description from the first paragraph if it's a single <em> block.\n\n    Pattern: _Libraries for foo._ -> \"Libraries for foo.\"\n    \"\"\"\n    if not nodes:\n        return \"\"\n    first = nodes[0]\n    if first.type != \"paragraph\":\n        return \"\"\n    for child in first.children:\n        if child.type == \"inline\" and 
len(child.children) == 1:\n            em = child.children[0]\n            if em.type == \"em\":\n                return render_inline_text(em.children)\n    return \"\"\n\n\n# --- Entry extraction --------------------------------------------------------\n\n_DESC_SEP_RE = re.compile(r\"^\\s*[-\\u2013\\u2014]\\s*\")\n\n\ndef _find_child(node: SyntaxTreeNode, child_type: str) -> SyntaxTreeNode | None:\n    \"\"\"Find first direct child of a given type.\"\"\"\n    for child in node.children:\n        if child.type == child_type:\n            return child\n    return None\n\n\ndef _find_inline(node: SyntaxTreeNode) -> SyntaxTreeNode | None:\n    \"\"\"Find the inline node in a list_item's paragraph.\"\"\"\n    para = _find_child(node, \"paragraph\")\n    if para is None:\n        return None\n    return _find_child(para, \"inline\")\n\n\ndef _find_first_link(inline: SyntaxTreeNode) -> SyntaxTreeNode | None:\n    \"\"\"Find the first link node among inline children.\"\"\"\n    for child in inline.children:\n        if child.type == \"link\":\n            return child\n    return None\n\n\ndef _is_leading_link(inline: SyntaxTreeNode, link: SyntaxTreeNode) -> bool:\n    \"\"\"Check if the link is the first child of inline (a real entry, not a subcategory label).\"\"\"\n    return bool(inline.children) and inline.children[0] is link\n\n\ndef _extract_description_html(inline: SyntaxTreeNode, first_link: SyntaxTreeNode) -> str:\n    \"\"\"Extract description HTML from inline content after the first link.\n\n    AST: [link(\"name\"), text(\" - Description.\")]  ->  \"Description.\"\n    The separator (- / en-dash / em-dash) is stripped.\n    \"\"\"\n    link_idx = next((i for i, c in enumerate(inline.children) if c is first_link), None)\n    if link_idx is None:\n        return \"\"\n    desc_children = inline.children[link_idx + 1 :]\n    if not desc_children:\n        return \"\"\n    html = render_inline_html(desc_children)\n    return _DESC_SEP_RE.sub(\"\", html)\n\n\ndef 
_parse_list_entries(bullet_list: SyntaxTreeNode) -> list[ParsedEntry]:\n    \"\"\"Extract entries from a bullet_list AST node.\n\n    Handles three patterns:\n    - Text-only list_item -> subcategory label -> recurse into nested list\n    - Link list_item with nested link-only items -> entry with also_see\n    - Link list_item without nesting -> simple entry\n    \"\"\"\n    entries: list[ParsedEntry] = []\n\n    for list_item in bullet_list.children:\n        if list_item.type != \"list_item\":\n            continue\n\n        inline = _find_inline(list_item)\n        if inline is None:\n            continue\n\n        first_link = _find_first_link(inline)\n\n        if first_link is None or not _is_leading_link(inline, first_link):\n            # Subcategory label (plain text or text-before-link) — recurse into nested list\n            nested = _find_child(list_item, \"bullet_list\")\n            if nested:\n                entries.extend(_parse_list_entries(nested))\n            continue\n\n        # Entry with a link\n        name = render_inline_text(first_link.children)\n        url = first_link.attrGet(\"href\") or \"\"\n        desc_html = _extract_description_html(inline, first_link)\n\n        # Collect also_see from nested bullet_list\n        also_see: list[AlsoSee] = []\n        nested = _find_child(list_item, \"bullet_list\")\n        if nested:\n            for sub_item in nested.children:\n                if sub_item.type != \"list_item\":\n                    continue\n                sub_inline = _find_inline(sub_item)\n                if sub_inline:\n                    sub_link = _find_first_link(sub_inline)\n                    if sub_link:\n                        also_see.append(AlsoSee(\n                            name=render_inline_text(sub_link.children),\n                            url=sub_link.attrGet(\"href\") or \"\",\n                        ))\n\n        entries.append(ParsedEntry(\n            name=name,\n            url=url,\n    
        description=desc_html,\n            also_see=also_see,\n        ))\n\n    return entries\n\n\ndef _parse_section_entries(content_nodes: list[SyntaxTreeNode]) -> list[ParsedEntry]:\n    \"\"\"Extract all entries from a section's content nodes.\"\"\"\n    entries: list[ParsedEntry] = []\n    for node in content_nodes:\n        if node.type == \"bullet_list\":\n            entries.extend(_parse_list_entries(node))\n    return entries\n\n\n# --- Content HTML rendering --------------------------------------------------\n\n\ndef _render_bullet_list_html(\n    bullet_list: SyntaxTreeNode,\n    *,\n    is_sub: bool = False,\n) -> str:\n    \"\"\"Render a bullet_list node to HTML with entry/entry-sub/subcat classes.\"\"\"\n    out: list[str] = []\n\n    for list_item in bullet_list.children:\n        if list_item.type != \"list_item\":\n            continue\n\n        inline = _find_inline(list_item)\n        if inline is None:\n            continue\n\n        first_link = _find_first_link(inline)\n\n        if first_link is None or not _is_leading_link(inline, first_link):\n            # Subcategory label (plain text or text-before-link)\n            label = str(escape(render_inline_text(inline.children)))\n            out.append(f'<div class=\"subcat\">{label}</div>')\n            nested = _find_child(list_item, \"bullet_list\")\n            if nested:\n                out.append(_render_bullet_list_html(nested, is_sub=False))\n            continue\n\n        # Entry with a link\n        name = str(escape(render_inline_text(first_link.children)))\n        url = str(escape(first_link.attrGet(\"href\") or \"\"))\n\n        if is_sub:\n            out.append(f'<div class=\"entry-sub\"><a href=\"{url}\">{name}</a></div>')\n        else:\n            desc = _extract_description_html(inline, first_link)\n            if desc:\n                out.append(\n                    f'<div class=\"entry\"><a href=\"{url}\">{name}</a>'\n                    f'<span 
class=\"sep\">&mdash;</span>{desc}</div>'\n                )\n            else:\n                out.append(f'<div class=\"entry\"><a href=\"{url}\">{name}</a></div>')\n\n        # Nested items under an entry with a link are sub-entries\n        nested = _find_child(list_item, \"bullet_list\")\n        if nested:\n            out.append(_render_bullet_list_html(nested, is_sub=True))\n\n    return \"\\n\".join(out)\n\n\ndef _render_section_html(content_nodes: list[SyntaxTreeNode]) -> str:\n    \"\"\"Render a section's content nodes to HTML.\"\"\"\n    parts: list[str] = []\n    for node in content_nodes:\n        if node.type == \"bullet_list\":\n            parts.append(_render_bullet_list_html(node))\n    return \"\\n\".join(parts)\n\n\n# --- Section splitting -------------------------------------------------------\n\n\ndef _build_section(name: str, body: list[SyntaxTreeNode]) -> ParsedSection:\n    \"\"\"Build a ParsedSection from a heading name and its body nodes.\"\"\"\n    desc = _extract_description(body)\n    content_nodes = body[1:] if desc else body\n    entries = _parse_section_entries(content_nodes)\n    entry_count = len(entries) + sum(len(e[\"also_see\"]) for e in entries)\n    preview = \", \".join(e[\"name\"] for e in entries[:4])\n    content_html = _render_section_html(content_nodes)\n    return ParsedSection(\n        name=name,\n        slug=slugify(name),\n        description=desc,\n        entries=entries,\n        entry_count=entry_count,\n        preview=preview,\n        content_html=content_html,\n    )\n\n\ndef _group_by_h2(\n    nodes: list[SyntaxTreeNode],\n) -> list[ParsedSection]:\n    \"\"\"Group AST nodes into sections by h2 headings.\"\"\"\n    sections: list[ParsedSection] = []\n    current_name: str | None = None\n    current_body: list[SyntaxTreeNode] = []\n\n    def flush() -> None:\n        nonlocal current_name\n        if current_name is None:\n            return\n        sections.append(_build_section(current_name, 
current_body))\n        current_name = None\n\n    for node in nodes:\n        if node.type == \"heading\" and node.tag == \"h2\":\n            flush()\n            current_name = _heading_text(node)\n            current_body = []\n        elif current_name is not None:\n            current_body.append(node)\n\n    flush()\n    return sections\n\n\ndef _is_bold_marker(node: SyntaxTreeNode) -> str | None:\n    \"\"\"Detect a bold-only paragraph used as a group marker.\n\n    Pattern: a paragraph whose only content is **Group Name** (possibly\n    surrounded by empty text nodes in the AST).\n    Returns the group name text, or None if not a group marker.\n    \"\"\"\n    if node.type != \"paragraph\":\n        return None\n    for child in node.children:\n        if child.type != \"inline\":\n            continue\n        # Filter out empty text nodes that markdown-it inserts around strong\n        meaningful = [c for c in child.children if not (c.type == \"text\" and c.content == \"\")]\n        if len(meaningful) == 1 and meaningful[0].type == \"strong\":\n            return render_inline_text(meaningful[0].children)\n    return None\n\n\ndef _parse_grouped_sections(\n    nodes: list[SyntaxTreeNode],\n) -> list[ParsedGroup]:\n    \"\"\"Parse nodes into groups of categories using bold markers as group boundaries.\n\n    Bold-only paragraphs (**Group Name**) delimit groups. H2 headings under each\n    bold marker become categories within that group. 
Categories appearing before\n    any bold marker go into an \"Other\" group.\n    \"\"\"\n    groups: list[ParsedGroup] = []\n    current_group_name: str | None = None\n    current_group_cats: list[ParsedSection] = []\n    current_cat_name: str | None = None\n    current_cat_body: list[SyntaxTreeNode] = []\n\n    def flush_cat() -> None:\n        nonlocal current_cat_name\n        if current_cat_name is None:\n            return\n        current_group_cats.append(_build_section(current_cat_name, current_cat_body))\n        current_cat_name = None\n\n    def flush_group() -> None:\n        nonlocal current_group_name, current_group_cats\n        if not current_group_cats:\n            current_group_name = None\n            current_group_cats = []\n            return\n        name = current_group_name or \"Other\"\n        groups.append(ParsedGroup(\n            name=name,\n            slug=slugify(name),\n            categories=list(current_group_cats),\n        ))\n        current_group_name = None\n        current_group_cats = []\n\n    for node in nodes:\n        bold_name = _is_bold_marker(node)\n        if bold_name is not None:\n            flush_cat()\n            flush_group()\n            current_group_name = bold_name\n            current_cat_body = []\n        elif node.type == \"heading\" and node.tag == \"h2\":\n            flush_cat()\n            current_cat_name = _heading_text(node)\n            current_cat_body = []\n        elif current_cat_name is not None:\n            current_cat_body.append(node)\n\n    flush_cat()\n    flush_group()\n    return groups\n\n\ndef parse_readme(text: str) -> tuple[list[ParsedGroup], list[ParsedSection]]:\n    \"\"\"Parse README.md text into grouped categories and resources.\n\n    Returns (groups, resources) where groups is a list of ParsedGroup dicts\n    containing nested categories, and resources is a flat list of ParsedSection.\n    \"\"\"\n    md = MarkdownIt(\"commonmark\")\n    tokens = md.parse(text)\n    
root = SyntaxTreeNode(tokens)\n    children = root.children\n\n    # Find thematic break (---), # Resources, and # Contributing in one pass\n    hr_idx = None\n    resources_idx = None\n    contributing_idx = None\n    for i, node in enumerate(children):\n        if hr_idx is None and node.type == \"hr\":\n            hr_idx = i\n        elif node.type == \"heading\" and node.tag == \"h1\":\n            text_content = _heading_text(node)\n            if text_content == \"Resources\":\n                resources_idx = i\n            elif text_content == \"Contributing\":\n                contributing_idx = i\n    if hr_idx is None:\n        return [], []\n\n    # Slice into category and resource ranges. Compare against None explicitly:\n    # a truthiness test (`or`) would mistake a heading at index 0 for a missing one.\n    if resources_idx is not None:\n        cat_end = resources_idx\n    elif contributing_idx is not None:\n        cat_end = contributing_idx\n    else:\n        cat_end = len(children)\n    cat_nodes = children[hr_idx + 1 : cat_end]\n\n    res_nodes: list[SyntaxTreeNode] = []\n    if resources_idx is not None:\n        res_end = contributing_idx if contributing_idx is not None else len(children)\n        res_nodes = children[resources_idx + 1 : res_end]\n\n    groups = _parse_grouped_sections(cat_nodes)\n    resources = _group_by_h2(res_nodes)\n\n    return groups, resources\n"
  },
  {
    "path": "website/static/main.js",
    "content": "// State\nvar activeFilter = null; // { type: \"cat\"|\"group\", value: \"...\" }\nvar activeSort = { col: 'stars', order: 'desc' };\nvar searchInput = document.querySelector('.search');\nvar filterBar = document.querySelector('.filter-bar');\nvar filterValue = document.querySelector('.filter-value');\nvar filterClear = document.querySelector('.filter-clear');\nvar noResults = document.querySelector('.no-results');\nvar rows = document.querySelectorAll('.table tbody tr.row');\nvar tags = document.querySelectorAll('.tag');\nvar tbody = document.querySelector('.table tbody');\n\n// Relative time formatting\nfunction relativeTime(isoStr) {\n  var date = new Date(isoStr);\n  var now = new Date();\n  var diffMs = now - date;\n  var diffHours = Math.floor(diffMs / 3600000);\n  var diffDays = Math.floor(diffMs / 86400000);\n  if (diffHours < 1) return 'just now';\n  if (diffHours < 24) return diffHours === 1 ? '1 hour ago' : diffHours + ' hours ago';\n  if (diffDays === 1) return 'yesterday';\n  if (diffDays < 30) return diffDays + ' days ago';\n  var diffMonths = Math.floor(diffDays / 30);\n  if (diffMonths < 12) return diffMonths === 1 ? '1 month ago' : diffMonths + ' months ago';\n  var diffYears = Math.floor(diffDays / 365);\n  return diffYears === 1 ? '1 year ago' : diffYears + ' years ago';\n}\n\n// Format all commit date cells\ndocument.querySelectorAll('.col-commit[data-commit]').forEach(function (td) {\n  var time = td.querySelector('time');\n  if (time) time.textContent = relativeTime(td.dataset.commit);\n});\n\n// Store original row order for sort reset\nrows.forEach(function (row, i) {\n  row._origIndex = i;\n  row._expandRow = row.nextElementSibling;\n});\n\nfunction collapseAll() {\n  var openRows = document.querySelectorAll('.table tbody tr.row.open');\n  openRows.forEach(function (row) {\n    row.classList.remove('open');\n    row.setAttribute('aria-expanded', 'false');\n  });\n}\n\nfunction applyFilters() {\n  var query = searchInput ? 
searchInput.value.toLowerCase().trim() : '';\n  var visibleCount = 0;\n\n  // Collapse all expanded rows on filter/search change\n  collapseAll();\n\n  rows.forEach(function (row) {\n    var show = true;\n\n    // Category/group filter\n    if (activeFilter) {\n      var attr = activeFilter.type === 'cat' ? row.dataset.cats : row.dataset.groups;\n      show = attr ? attr.split('||').indexOf(activeFilter.value) !== -1 : false;\n    }\n\n    // Text search\n    if (show && query) {\n      if (!row._searchText) {\n        var text = row.textContent.toLowerCase();\n        var next = row.nextElementSibling;\n        if (next && next.classList.contains('expand-row')) {\n          text += ' ' + next.textContent.toLowerCase();\n        }\n        row._searchText = text;\n      }\n      show = row._searchText.includes(query);\n    }\n\n    if (row.hidden !== !show) row.hidden = !show;\n\n    if (show) {\n      visibleCount++;\n      var numCell = row.cells[0];\n      if (numCell.textContent !== String(visibleCount)) {\n        numCell.textContent = String(visibleCount);\n      }\n    }\n  });\n\n  if (noResults) noResults.hidden = visibleCount > 0;\n\n  // Update tag highlights\n  tags.forEach(function (tag) {\n    var isActive = activeFilter\n      && tag.dataset.type === activeFilter.type\n      && tag.dataset.value === activeFilter.value;\n    tag.classList.toggle('active', isActive);\n  });\n\n  // Filter bar\n  if (filterBar) {\n    if (activeFilter) {\n      filterBar.hidden = false;\n      if (filterValue) filterValue.textContent = activeFilter.value;\n    } else {\n      filterBar.hidden = true;\n    }\n  }\n\n  updateURL();\n}\n\nfunction updateURL() {\n  var params = new URLSearchParams();\n  var query = searchInput ? searchInput.value.trim() : '';\n  if (query) params.set('q', query);\n  if (activeFilter) {\n    params.set(activeFilter.type === 'cat' ? 
'category' : 'group', activeFilter.value);\n  }\n  if (activeSort.col !== 'stars' || activeSort.order !== 'desc') {\n    params.set('sort', activeSort.col);\n    params.set('order', activeSort.order);\n  }\n  var qs = params.toString();\n  history.replaceState(null, '', qs ? '?' + qs : location.pathname);\n}\n\nfunction getSortValue(row, col) {\n  if (col === 'name') {\n    return row.querySelector('.col-name a').textContent.trim().toLowerCase();\n  }\n  if (col === 'stars') {\n    var text = row.querySelector('.col-stars').textContent.trim().replace(/,/g, '');\n    var num = parseInt(text, 10);\n    return isNaN(num) ? -1 : num;\n  }\n  if (col === 'commit-time') {\n    var attr = row.querySelector('.col-commit').getAttribute('data-commit');\n    return attr ? new Date(attr).getTime() : 0;\n  }\n  return 0;\n}\n\nfunction sortRows() {\n  var arr = Array.prototype.slice.call(rows);\n  if (activeSort) {\n    arr.sort(function (a, b) {\n      var aVal = getSortValue(a, activeSort.col);\n      var bVal = getSortValue(b, activeSort.col);\n      if (activeSort.col === 'name') {\n        var cmp = aVal < bVal ? -1 : aVal > bVal ? 1 : 0;\n        if (cmp === 0) return a._origIndex - b._origIndex;\n        return activeSort.order === 'desc' ? -cmp : cmp;\n      }\n      if (aVal <= 0 && bVal <= 0) return a._origIndex - b._origIndex;\n      if (aVal <= 0) return 1;\n      if (bVal <= 0) return -1;\n      var cmp = aVal - bVal;\n      if (cmp === 0) return a._origIndex - b._origIndex;\n      return activeSort.order === 'desc' ? 
-cmp : cmp;\n    });\n  } else {\n    arr.sort(function (a, b) { return a._origIndex - b._origIndex; });\n  }\n  arr.forEach(function (row) {\n    tbody.appendChild(row);\n    // Guard: a row may lack an expand-row sibling (mirrors the check in applyFilters)\n    if (row._expandRow && row._expandRow.classList.contains('expand-row')) {\n      tbody.appendChild(row._expandRow);\n    }\n  });\n  applyFilters();\n}\n\nfunction updateSortIndicators() {\n  document.querySelectorAll('th[data-sort]').forEach(function (th) {\n    th.classList.remove('sort-asc', 'sort-desc');\n    if (activeSort && th.dataset.sort === activeSort.col) {\n      th.classList.add('sort-' + activeSort.order);\n    }\n  });\n}\n\n// Expand/collapse: event delegation on tbody\nif (tbody) {\n  tbody.addEventListener('click', function (e) {\n    // Don't toggle if clicking a link or tag button\n    if (e.target.closest('a') || e.target.closest('.tag')) return;\n\n    var row = e.target.closest('tr.row');\n    if (!row) return;\n\n    var isOpen = row.classList.contains('open');\n    if (isOpen) {\n      row.classList.remove('open');\n      row.setAttribute('aria-expanded', 'false');\n    } else {\n      row.classList.add('open');\n      row.setAttribute('aria-expanded', 'true');\n    }\n  });\n\n  // Keyboard: Enter or Space on focused .row toggles expand\n  tbody.addEventListener('keydown', function (e) {\n    if (e.key !== 'Enter' && e.key !== ' ') return;\n    var row = e.target.closest('tr.row');\n    if (!row) return;\n    e.preventDefault();\n    row.click();\n  });\n}\n\n// Tag click: filter by category or group\ntags.forEach(function (tag) {\n  tag.addEventListener('click', function (e) {\n    e.preventDefault();\n    var type = tag.dataset.type;\n    var value = tag.dataset.value;\n\n    // Toggle: click same filter again to clear\n    if (activeFilter && activeFilter.type === type && activeFilter.value === value) {\n      activeFilter = null;\n    } else {\n      activeFilter = { type: type, value: value };\n    }\n    applyFilters();\n  });\n});\n\n// Clear filter\nif (filterClear) {\n  filterClear.addEventListener('click', function () {\n    activeFilter = 
null;\n    applyFilters();\n  });\n}\n\n// Column sorting\ndocument.querySelectorAll('th[data-sort]').forEach(function (th) {\n  th.addEventListener('click', function () {\n    var col = th.dataset.sort;\n    var defaultOrder = col === 'name' ? 'asc' : 'desc';\n    var altOrder = defaultOrder === 'asc' ? 'desc' : 'asc';\n    if (activeSort && activeSort.col === col) {\n      if (activeSort.order === defaultOrder) activeSort = { col: col, order: altOrder };\n      else activeSort = { col: 'stars', order: 'desc' };\n    } else {\n      activeSort = { col: col, order: defaultOrder };\n    }\n    sortRows();\n    updateSortIndicators();\n  });\n});\n\n// Search input\nif (searchInput) {\n  var searchTimer;\n  searchInput.addEventListener('input', function () {\n    clearTimeout(searchTimer);\n    searchTimer = setTimeout(applyFilters, 150);\n  });\n\n  // Keyboard shortcuts\n  document.addEventListener('keydown', function (e) {\n    if (e.key === '/' && !['INPUT', 'TEXTAREA', 'SELECT'].includes(document.activeElement.tagName) && !e.ctrlKey && !e.metaKey) {\n      e.preventDefault();\n      searchInput.focus();\n    }\n    if (e.key === 'Escape' && document.activeElement === searchInput) {\n      searchInput.value = '';\n      activeFilter = null;\n      applyFilters();\n      searchInput.blur();\n    }\n  });\n}\n\n// Restore state from URL\n(function () {\n  var params = new URLSearchParams(location.search);\n  var q = params.get('q');\n  var cat = params.get('category');\n  var group = params.get('group');\n  var sort = params.get('sort');\n  var order = params.get('order');\n  if (q && searchInput) searchInput.value = q;\n  if (cat) activeFilter = { type: 'cat', value: cat };\n  else if (group) activeFilter = { type: 'group', value: group };\n  if ((sort === 'name' || sort === 'stars' || sort === 'commit-time') && (order === 'desc' || order === 'asc')) {\n    activeSort = { col: sort, order: order };\n  }\n  if (q || cat || group || sort) {\n    sortRows();\n  }\n  
updateSortIndicators();\n})();\n"
  },
  {
    "path": "website/static/style.css",
    "content": "/* === Reset & Base === */\n*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }\n\n:root {\n  --font-display: Georgia, \"Noto Serif\", \"Times New Roman\", serif;\n  --font-body: -apple-system, BlinkMacSystemFont, \"Segoe UI\", system-ui, sans-serif;\n\n  --text-xs: 0.9375rem;\n  --text-sm: 1rem;\n  --text-base: 1.125rem;\n\n  --bg: oklch(99.5% 0.003 240);\n  --bg-hover: oklch(97% 0.008 240);\n  --text: oklch(15% 0.005 240);\n  --text-secondary: oklch(35% 0.005 240);\n  --text-muted: oklch(50% 0.005 240);\n  --border: oklch(90% 0.005 240);\n  --border-strong: oklch(75% 0.008 240);\n  --border-heavy: oklch(25% 0.01 240);\n  --bg-input: oklch(94.5% 0.035 240);\n  --accent: oklch(42% 0.14 240);\n  --accent-hover: oklch(32% 0.16 240);\n  --accent-light: oklch(97% 0.015 240);\n  --highlight: oklch(93% 0.10 90);\n  --highlight-text: oklch(35% 0.10 90);\n  --tag-text: oklch(45% 0.06 240);\n  --tag-hover-bg: oklch(93% 0.025 240);\n}\n\nhtml { font-size: 16px; }\n\nbody {\n  font-family: var(--font-body);\n  background: var(--bg);\n  color: var(--text);\n  line-height: 1.55;\n  min-height: 100vh;\n  display: flex;\n  flex-direction: column;\n  -webkit-font-smoothing: antialiased;\n  -moz-osx-font-smoothing: grayscale;\n}\n\na { color: var(--accent); text-decoration: none; text-underline-offset: 0.15em; }\na:hover { color: var(--accent-hover); text-decoration: underline; }\n\n/* === Skip Link === */\n.skip-link {\n  position: absolute;\n  left: -9999px;\n  top: 0;\n  padding: 0.5rem 1rem;\n  background: var(--text);\n  color: var(--bg);\n  font-size: var(--text-xs);\n  font-weight: 700;\n  z-index: 200;\n}\n\n.skip-link:focus { left: 0; }\n\n/* === Hero === */\n.hero {\n  max-width: 1400px;\n  margin: 0 auto;\n  padding: 3.5rem 2rem 1.5rem;\n}\n\n.hero-main {\n  display: flex;\n  flex-wrap: wrap;\n  justify-content: space-between;\n  align-items: flex-start;\n  gap: 1rem;\n}\n\n.hero-submit {\n  flex-shrink: 0;\n  padding: 0.4rem 1rem;\n 
 border: 1px solid var(--border-strong);\n  border-radius: 4px;\n  font-size: var(--text-sm);\n  color: var(--text);\n  text-decoration: none;\n  white-space: nowrap;\n  transition: border-color 0.2s, background 0.2s, color 0.2s;\n}\n\n.hero-submit:hover {\n  border-color: var(--accent);\n  background: var(--accent-light);\n  color: var(--accent);\n  text-decoration: none;\n}\n\n.hero-submit:active {\n  transform: scale(0.97);\n}\n\n.hero-submit:focus-visible {\n  outline: 2px solid var(--accent);\n  outline-offset: 2px;\n}\n\n.hero h1 {\n  font-family: var(--font-display);\n  font-size: clamp(2rem, 5vw, 3rem);\n  font-weight: 400;\n  letter-spacing: -0.01em;\n  line-height: 1.1;\n  text-wrap: balance;\n  color: var(--accent);\n  margin-bottom: 0.75rem;\n}\n\n.hero-sub {\n  font-size: var(--text-base);\n  color: var(--text-secondary);\n  line-height: 1.6;\n  margin-bottom: 0.5rem;\n  text-wrap: pretty;\n}\n\n.hero-sub a { color: var(--text-secondary); font-weight: 600; }\n.hero-sub a:hover { color: var(--accent); }\n\n.hero-gh {\n  font-size: var(--text-sm);\n  color: var(--text-muted);\n  font-weight: 500;\n}\n\n.hero-gh:hover { color: var(--accent); }\n\n/* === Controls === */\n.controls {\n  max-width: 1400px;\n  margin: 0 auto;\n  padding: 0 2rem 1rem;\n}\n\n.search-wrap {\n  position: relative;\n  margin-bottom: 0.75rem;\n}\n\n.search-icon {\n  position: absolute;\n  left: 1rem;\n  top: 50%;\n  transform: translateY(-50%);\n  color: var(--text-muted);\n  pointer-events: none;\n}\n\n.search {\n  width: 100%;\n  padding: 0.65rem 1rem 0.65rem 2.75rem;\n  border: 1px solid transparent;\n  border-radius: 4px;\n  background: var(--bg-input);\n  font-family: var(--font-body);\n  font-size: var(--text-sm);\n  color: var(--text);\n  transition: border-color 0.15s, background 0.15s;\n}\n\n.search::placeholder { color: var(--text-muted); }\n\n.search:focus {\n  outline: 2px solid var(--accent);\n  outline-offset: 2px;\n  border-color: var(--accent);\n  background: 
var(--bg);\n}\n\n.filter-bar[hidden] { display: none; }\n\n.filter-bar {\n  display: flex;\n  align-items: center;\n  gap: 0.75rem;\n  padding: 0.5rem 0;\n  font-size: var(--text-sm);\n  color: var(--text-secondary);\n}\n\n.filter-bar strong {\n  color: var(--text);\n}\n\n.filter-clear {\n  background: none;\n  border: 1px solid var(--border);\n  border-radius: 4px;\n  padding: 0.35rem 0.65rem;\n  font-family: inherit;\n  font-size: var(--text-xs);\n  color: var(--text-muted);\n  cursor: pointer;\n  transition: border-color 0.15s, color 0.15s;\n}\n\n.filter-clear:active {\n  transform: scale(0.97);\n}\n\n.filter-clear:hover {\n  border-color: var(--text-muted);\n  color: var(--text);\n}\n\n.filter-clear:focus-visible {\n  outline: 2px solid var(--accent);\n  outline-offset: 2px;\n}\n\n/* === Table === */\n.table-wrap {\n  width: 100%;\n  padding: 0;\n  overflow-x: auto;\n}\n\n.table-wrap:focus {\n  outline: 2px solid var(--accent);\n  outline-offset: -2px;\n}\n\n.table {\n  width: 100%;\n  border-collapse: separate;\n  border-spacing: 0;\n  font-size: var(--text-sm);\n}\n\n.table thead th {\n  text-align: left;\n  font-weight: 700;\n  font-size: var(--text-base);\n  color: var(--text);\n  padding: 0.65rem 0.75rem;\n  border-bottom: 2px solid var(--border-heavy);\n  position: sticky;\n  top: 0;\n  background: var(--bg);\n  z-index: 10;\n  white-space: nowrap;\n}\n\n.table thead th:first-child,\n.table tbody td:first-child {\n  padding-left: max(2rem, calc(50vw - 700px + 2rem));\n}\n\n.table thead th:last-child,\n.table tbody td:last-child {\n  padding-right: max(2rem, calc(50vw - 700px + 2rem));\n}\n\n.table tbody td {\n  padding: 0.7rem 0.75rem;\n  border-bottom: 1px solid var(--border);\n  vertical-align: top;\n  transition: background 0.15s;\n}\n\n.table tbody tr.row:not(.open):hover td {\n  background: var(--bg-hover);\n}\n\n.table tbody tr[hidden] { display: none; }\n\n.col-num {\n  width: 3rem;\n  color: var(--text-muted);\n  font-variant-numeric: 
tabular-nums;\n  text-align: left;\n}\n\n.col-name {\n  width: 30%;\n  overflow-wrap: anywhere;\n}\n\n.col-name > a {\n  font-weight: 500;\n  color: var(--accent);\n  text-decoration: none;\n}\n\n.col-name > a:hover { text-decoration: underline; color: var(--accent-hover); }\n\n/* === Sortable Headers === */\nth[data-sort] {\n  cursor: pointer;\n  user-select: none;\n}\n\nth[data-sort]:hover {\n  color: var(--accent);\n}\n\nth[data-sort]::after {\n  content: \" ▼\";\n  opacity: 0;\n  transition: opacity 0.15s;\n}\n\nth[data-sort=\"name\"]::after {\n  content: \" ▲\";\n}\n\nth[data-sort]:hover::after {\n  opacity: 1;\n}\n\nth[data-sort].sort-desc::after {\n  content: \" ▼\";\n  opacity: 1;\n}\n\nth[data-sort].sort-asc::after {\n  content: \" ▲\";\n  opacity: 1;\n}\n\n/* === Stars Column === */\n.col-stars {\n  width: 5rem;\n  font-variant-numeric: tabular-nums;\n  white-space: nowrap;\n  color: var(--text-secondary);\n  text-align: right;\n}\n\n/* === Arrow Column === */\n.col-arrow {\n  width: 2.5rem;\n  text-align: center;\n}\n\n.arrow {\n  display: inline-block;\n  font-size: 0.8rem;\n  color: var(--accent);\n  transition: transform 0.15s ease;\n}\n\n.row.open .arrow {\n  transform: rotate(90deg);\n}\n\n/* === Row Click === */\n.row { cursor: pointer; }\n.row:active td { background: var(--bg-hover); }\n\n.row:focus-visible td {\n  outline: none;\n  background: var(--bg-hover);\n  box-shadow: inset 2px 0 0 var(--accent);\n}\n\n/* === Expand Row === */\n.expand-row {\n  display: none;\n}\n\n.row.open + .expand-row {\n  display: table-row;\n}\n\n.row.open td {\n  background: var(--accent-light);\n  border-bottom-color: transparent;\n  padding-bottom: 0.1rem;\n}\n\n.expand-row td {\n  padding: 0.15rem 0.75rem 0.75rem;\n  background: var(--accent-light);\n  border-bottom: 1px solid var(--border);\n}\n\n@keyframes expand-in {\n  from {\n    opacity: 0;\n    transform: translateY(-4px);\n  }\n  to {\n    opacity: 1;\n    transform: translateY(0);\n  
}\n}\n\n.expand-content {\n  font-size: var(--text-sm);\n  color: var(--text-secondary);\n  line-height: 1.6;\n  text-wrap: pretty;\n  animation: expand-in 0.2s cubic-bezier(0.25, 1, 0.5, 1);\n}\n\n.expand-tags {\n  display: flex;\n  gap: 0.4rem;\n  margin-bottom: 0.4rem;\n}\n\n.expand-tag {\n  font-size: var(--text-xs);\n  color: var(--tag-text);\n  background: var(--bg);\n  padding: 0.15rem 0.4rem;\n  border-radius: 3px;\n}\n\n.expand-also-see {\n  margin-top: 0.25rem;\n  font-size: var(--text-xs);\n  color: var(--text-muted);\n}\n\n.expand-also-see a {\n  color: var(--accent);\n  text-decoration: none;\n}\n\n.expand-also-see a:hover {\n  text-decoration: underline;\n}\n\n.expand-meta {\n  margin-top: 0.25rem;\n  font-size: var(--text-xs);\n  color: var(--text-muted);\n  font-weight: normal;\n}\n\n.expand-meta a {\n  color: var(--accent);\n  text-decoration: none;\n}\n\n.expand-meta a:hover {\n  text-decoration: underline;\n}\n\n.expand-sep {\n  margin: 0 0.25rem;\n  color: var(--border);\n}\n\n.col-cat {\n  white-space: nowrap;\n}\n\n.col-cat .tag + .tag {\n  margin-left: 0.35rem;\n}\n\n/* === Last Commit Column === */\n.col-commit {\n  width: 9rem;\n  white-space: nowrap;\n  color: var(--text-muted);\n}\n\n/* === Tags === */\n.tag {\n  position: relative;\n  background: var(--accent-light);\n  border: none;\n  font-family: inherit;\n  font-size: var(--text-xs);\n  color: var(--tag-text);\n  cursor: pointer;\n  padding: 0.25rem 0.5rem;\n  border-radius: 3px;\n  white-space: nowrap;\n  transition: background 0.15s, color 0.15s;\n}\n\n/* Expand touch target to 44x44px minimum */\n.tag::after {\n  content: \"\";\n  position: absolute;\n  inset: -0.5rem -0.25rem;\n}\n\n.tag:active {\n  transform: scale(0.95);\n}\n\n.tag:hover {\n  background: var(--tag-hover-bg);\n  color: var(--accent);\n}\n\n.tag:focus-visible {\n  outline: 2px solid var(--accent);\n  outline-offset: 1px;\n}\n\n.tag.active {\n  background: var(--highlight);\n  color: var(--highlight-text);\n  
font-weight: 600;\n}\n\n/* === Noscript === */\n.noscript-msg {\n  text-align: center;\n  padding: 1rem;\n  color: var(--text-muted);\n}\n\n/* === No Results === */\n.no-results {\n  max-width: 1400px;\n  margin: 0 auto;\n  padding: 3rem 2rem;\n  font-size: var(--text-base);\n  color: var(--text-muted);\n  text-align: center;\n}\n\n/* === Footer === */\n.footer {\n  margin-top: auto;\n  border-top: none;\n  width: 100%;\n  padding: 1.25rem 2rem;\n  font-size: var(--text-xs);\n  color: var(--text-muted);\n  background: var(--bg-input);\n  display: flex;\n  align-items: center;\n  justify-content: flex-end;\n  gap: 0.5rem;\n}\n\n.footer a { color: var(--accent); text-decoration: none; }\n.footer a:hover { color: var(--accent-hover); text-decoration: underline; }\n\n.footer-sep { color: var(--border-strong); }\n\n/* === Responsive === */\n@media (max-width: 900px) {\n  .col-commit { display: none; }\n  .tag-group { display: none; }\n  .col-name { width: 50%; }\n}\n\n@media (max-width: 640px) {\n  .hero { padding: 2rem 1.25rem 1rem; }\n  .controls { padding: 0 1.25rem 0.75rem; }\n\n  .table { table-layout: auto; }\n\n  .table thead th,\n  .table tbody td {\n    padding-left: 0.5rem;\n    padding-right: 0.5rem;\n  }\n\n  .table thead th:first-child,\n  .table tbody td:first-child { padding-left: 0.25rem; }\n\n  .table thead th:last-child,\n  .table tbody td:last-child { padding-right: 0.25rem; }\n\n  .table thead th { font-size: var(--text-sm); }\n\n  .col-num { width: 2rem; }\n  .col-stars { width: 4.75rem; }\n  .col-arrow { width: 1.25rem; }\n  .col-cat { display: none; }\n  .col-name {\n    width: auto;\n    white-space: normal;\n  }\n  .footer { padding: 1.25rem; justify-content: center; flex-wrap: wrap; }\n}\n\n/* === Screen Reader Only === */\n.sr-only {\n  position: absolute;\n  width: 1px;\n  height: 1px;\n  padding: 0;\n  margin: -1px;\n  overflow: hidden;\n  clip: rect(0, 0, 0, 0);\n  white-space: nowrap;\n  border: 0;\n}\n\n/* === Reduced Motion === 
*/\n@media (prefers-reduced-motion: reduce) {\n  *, *::before, *::after {\n    animation-duration: 0.01ms !important;\n    animation-iteration-count: 1 !important;\n    transition-duration: 0.01ms !important;\n  }\n}\n"
  },
  {
    "path": "website/templates/base.html",
    "content": "<!doctype html>\n<html lang=\"en\">\n  <head>\n    <meta charset=\"utf-8\" />\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n    <title>{% block title %}Awesome Python{% endblock %}</title>\n    <meta\n      name=\"description\"\n      content=\"{% block description %}An opinionated list of awesome Python frameworks, libraries, software and resources. {{ total_entries }} libraries across {{ categories | length }} categories.{% endblock %}\"\n    />\n    <link rel=\"canonical\" href=\"https://awesome-python.com/\" />\n    <meta property=\"og:type\" content=\"website\" />\n    <meta property=\"og:title\" content=\"Awesome Python\" />\n    <meta\n      property=\"og:description\"\n      content=\"An opinionated list of awesome Python frameworks, libraries, software and resources.\"\n    />\n    <meta property=\"og:url\" content=\"https://awesome-python.com/\" />\n    <meta name=\"twitter:card\" content=\"summary\" />\n    <link rel=\"icon\" href=\"/static/favicon.svg\" type=\"image/svg+xml\" />\n    <link rel=\"stylesheet\" href=\"/static/style.css\" />\n    <script\n      async\n      src=\"https://www.googletagmanager.com/gtag/js?id=G-0LMLYE0HER\"\n    ></script>\n    <script>\n      window.dataLayer = window.dataLayer || [];\n      function gtag() {\n        dataLayer.push(arguments);\n      }\n      gtag(\"js\", new Date());\n      gtag(\"config\", \"G-0LMLYE0HER\");\n    </script>\n  </head>\n  <body>\n    <a href=\"#content\" class=\"skip-link\">Skip to content</a>\n\n    <main id=\"content\">{% block content %}{% endblock %}</main>\n\n    <footer class=\"footer\">\n      <span\n        >Made by\n        <a href=\"https://vinta.ws/\" target=\"_blank\" rel=\"noopener\"\n          >Vinta</a\n        ></span\n      >\n      <span class=\"footer-sep\">/</span>\n      <a href=\"https://github.com/vinta\" target=\"_blank\" rel=\"noopener\"\n        >GitHub</a\n      >\n      <span class=\"footer-sep\">/</span>\n      
<a href=\"https://twitter.com/vinta\" target=\"_blank\" rel=\"noopener\"\n        >Twitter</a\n      >\n    </footer>\n\n    <noscript\n      ><p class=\"noscript-msg\">\n        JavaScript is needed for search and filtering.\n      </p></noscript\n    >\n    <script src=\"/static/main.js\"></script>\n  </body>\n</html>\n"
  },
  {
    "path": "website/templates/index.html",
    "content": "{% extends \"base.html\" %} {% block content %}\n<header class=\"hero\">\n  <div class=\"hero-main\">\n    <div>\n      <h1>Awesome Python</h1>\n      <p class=\"hero-sub\">\n        {{ subtitle }}<br />Maintained by\n        <a href=\"https://github.com/vinta\" target=\"_blank\" rel=\"noopener\"\n          >@vinta</a\n        >\n        and\n        <a\n          href=\"https://github.com/JinyangWang27\"\n          target=\"_blank\"\n          rel=\"noopener\"\n          >@JinyangWang27</a\n        >.\n      </p>\n      <a\n        href=\"https://github.com/vinta/awesome-python\"\n        class=\"hero-gh\"\n        target=\"_blank\"\n        rel=\"noopener\"\n        >awesome-python on GitHub &rarr;</a\n      >\n    </div>\n    <a\n      href=\"https://github.com/vinta/awesome-python/blob/master/CONTRIBUTING.md\"\n      class=\"hero-submit\"\n      target=\"_blank\"\n      rel=\"noopener\"\n      >Submit a Project</a\n    >\n  </div>\n</header>\n\n<h2 class=\"sr-only\">Search and filter</h2>\n<div class=\"controls\">\n  <div class=\"search-wrap\">\n    <svg\n      class=\"search-icon\"\n      width=\"16\"\n      height=\"16\"\n      viewBox=\"0 0 24 24\"\n      fill=\"none\"\n      stroke=\"currentColor\"\n      stroke-width=\"2.5\"\n      stroke-linecap=\"round\"\n      stroke-linejoin=\"round\"\n    >\n      <circle cx=\"11\" cy=\"11\" r=\"8\" />\n      <line x1=\"21\" y1=\"21\" x2=\"16.65\" y2=\"16.65\" />\n    </svg>\n    <input\n      type=\"search\"\n      class=\"search\"\n      placeholder=\"Search {{ entries | length }} libraries across {{ total_categories }} categories...\"\n      aria-label=\"Search libraries\"\n    />\n  </div>\n  <div class=\"filter-bar\" hidden>\n    <span>Showing <strong class=\"filter-value\"></strong></span>\n    <button class=\"filter-clear\" aria-label=\"Clear filter\">\n      &times; Clear\n    </button>\n  </div>\n</div>\n\n<h2 class=\"sr-only\">Results</h2>\n<div class=\"table-wrap\" tabindex=\"0\" 
role=\"region\" aria-label=\"Libraries table\">\n  <table class=\"table\">\n    <thead>\n      <tr>\n        <th class=\"col-num\"><span class=\"sr-only\">#</span></th>\n        <th class=\"col-name\" data-sort=\"name\">Project Name</th>\n        <th class=\"col-stars\" data-sort=\"stars\">GitHub Stars</th>\n        <th class=\"col-commit\" data-sort=\"commit-time\">Last Commit</th>\n        <th class=\"col-cat\">Category</th>\n        <th class=\"col-arrow\"><span class=\"sr-only\">Details</span></th>\n      </tr>\n    </thead>\n    <tbody>\n      {% for entry in entries %}\n      <tr\n        class=\"row\"\n        role=\"button\"\n        data-cats=\"{{ entry.categories | join('||') }}\"\n        data-groups=\"{{ entry.groups | join('||') }}\"\n        tabindex=\"0\"\n        aria-expanded=\"false\"\n        aria-controls=\"expand-{{ loop.index }}\"\n      >\n        <td class=\"col-num\">{{ loop.index }}</td>\n        <td class=\"col-name\">\n          <a href=\"{{ entry.url }}\" target=\"_blank\" rel=\"noopener\"\n            >{{ entry.name }}</a\n          >\n        </td>\n        <td class=\"col-stars\">\n          {% if entry.stars is not none %}{{ \"{:,}\".format(entry.stars) }}{%\n          else %}&mdash;{% endif %}\n        </td>\n        <td\n          class=\"col-commit\"\n          {%\n          if\n          entry.last_commit_at\n          %}data-commit=\"{{ entry.last_commit_at }}\"\n          {%\n          endif\n          %}\n        >\n          {% if entry.last_commit_at %}<time\n            datetime=\"{{ entry.last_commit_at }}\"\n            >{{ entry.last_commit_at[:10] }}</time\n          >{% else %}&mdash;{% endif %}\n        </td>\n        <td class=\"col-cat\">\n          {% for cat in entry.categories %}\n          <button class=\"tag\" data-type=\"cat\" data-value=\"{{ cat }}\">\n            {{ cat }}\n          </button>\n          {% endfor %}\n          <button class=\"tag tag-group\" data-type=\"group\" data-value=\"{{ 
entry.groups[0] }}\">\n            {{ entry.groups[0] }}\n          </button>\n        </td>\n        <td class=\"col-arrow\"><span class=\"arrow\">&rarr;</span></td>\n      </tr>\n      <tr class=\"expand-row\" id=\"expand-{{ loop.index }}\">\n        <td></td>\n        <td colspan=\"4\">\n          <div class=\"expand-content\">\n            {% if entry.description %}\n            <div class=\"expand-desc\">{{ entry.description | safe }}</div>\n            {% endif %} {% if entry.also_see %}\n            <div class=\"expand-also-see\">\n              Also see: {% for see in entry.also_see %}<a\n                href=\"{{ see.url }}\"\n                target=\"_blank\"\n                rel=\"noopener\"\n                >{{ see.name }}</a\n              >{% if not loop.last %}, {% endif %}{% endfor %}\n            </div>\n            {% endif %}\n            <div class=\"expand-meta\">\n              {% if entry.owner %}<a\n                href=\"https://github.com/{{ entry.owner }}\"\n                target=\"_blank\"\n                rel=\"noopener\"\n                >{{ entry.owner }}</a\n              ><span class=\"expand-sep\">/</span>{% endif %}<a\n                href=\"{{ entry.url }}\"\n                target=\"_blank\"\n                rel=\"noopener\"\n                >{{ entry.url | replace(\"https://\", \"\") }}</a\n              >\n            </div>\n          </div>\n        </td>\n        <td></td>\n      </tr>\n      {% endfor %}\n    </tbody>\n  </table>\n</div>\n\n<div class=\"no-results\" hidden>No libraries match your search.</div>\n{% endblock %}\n"
  },
  {
    "path": "website/tests/test_build.py",
    "content": "\"\"\"Tests for the build module.\"\"\"\n\nimport json\nimport shutil\nimport textwrap\nfrom pathlib import Path\n\nfrom build import (\n    build,\n    extract_github_repo,\n    group_categories,\n    load_stars,\n    sort_entries,\n)\nfrom readme_parser import slugify\n\n# ---------------------------------------------------------------------------\n# slugify\n# ---------------------------------------------------------------------------\n\n\nclass TestSlugify:\n    def test_simple(self):\n        assert slugify(\"Admin Panels\") == \"admin-panels\"\n\n    def test_uppercase_acronym(self):\n        assert slugify(\"RESTful API\") == \"restful-api\"\n\n    def test_all_caps(self):\n        assert slugify(\"CMS\") == \"cms\"\n\n    def test_hyphenated_input(self):\n        assert slugify(\"Command-line Tools\") == \"command-line-tools\"\n\n    def test_special_chars(self):\n        assert slugify(\"Editor Plugins and IDEs\") == \"editor-plugins-and-ides\"\n\n    def test_single_word(self):\n        assert slugify(\"Audio\") == \"audio\"\n\n    def test_extra_spaces(self):\n        assert slugify(\"  Date  and  Time  \") == \"date-and-time\"\n\n\n# ---------------------------------------------------------------------------\n# group_categories\n# ---------------------------------------------------------------------------\n\n\nclass TestGroupCategories:\n    def test_appends_resources(self):\n        parsed_groups = [\n            {\"name\": \"G1\", \"slug\": \"g1\", \"categories\": [{\"name\": \"Cat1\"}]},\n        ]\n        resources = [{\"name\": \"Newsletters\", \"slug\": \"newsletters\"}]\n        groups = group_categories(parsed_groups, resources)\n        group_names = [g[\"name\"] for g in groups]\n        assert \"G1\" in group_names\n        assert \"Resources\" in group_names\n\n    def test_no_resources_no_extra_group(self):\n        parsed_groups = [\n            {\"name\": \"G1\", \"slug\": \"g1\", \"categories\": [{\"name\": 
\"Cat1\"}]},\n        ]\n        groups = group_categories(parsed_groups, [])\n        assert len(groups) == 1\n        assert groups[0][\"name\"] == \"G1\"\n\n    def test_preserves_group_order(self):\n        parsed_groups = [\n            {\"name\": \"Second\", \"slug\": \"second\", \"categories\": [{\"name\": \"C2\"}]},\n            {\"name\": \"First\", \"slug\": \"first\", \"categories\": [{\"name\": \"C1\"}]},\n        ]\n        groups = group_categories(parsed_groups, [])\n        assert groups[0][\"name\"] == \"Second\"\n        assert groups[1][\"name\"] == \"First\"\n\n\n# ---------------------------------------------------------------------------\n# build (integration)\n# ---------------------------------------------------------------------------\n\n\nclass TestBuild:\n    def _make_repo(self, tmp_path, readme):\n        (tmp_path / \"README.md\").write_text(readme, encoding=\"utf-8\")\n        tpl_dir = tmp_path / \"website\" / \"templates\"\n        tpl_dir.mkdir(parents=True)\n        (tpl_dir / \"base.html\").write_text(\n            \"<!DOCTYPE html><html lang='en'><head><title>{% block title %}{% endblock %}</title>\"\n            \"<meta name='description' content='{% block description %}{% endblock %}'>\"\n            \"</head><body>{% block content %}{% endblock %}</body></html>\",\n            encoding=\"utf-8\",\n        )\n        (tpl_dir / \"index.html\").write_text(\n            '{% extends \"base.html\" %}{% block content %}'\n            \"{% for group in groups %}\"\n            '<section class=\"group\">'\n            \"<h2>{{ group.name }}</h2>\"\n            \"{% for cat in group.categories %}\"\n            '<div class=\"row\" id=\"{{ cat.slug }}\">'\n            \"<span>{{ cat.name }}</span>\"\n            \"<span>{{ cat.preview }}</span>\"\n            \"<span>{{ cat.entry_count }}</span>\"\n            '<div class=\"row-content\" hidden>{{ cat.content_html | safe }}</div>'\n            \"</div>\"\n            \"{% endfor %}\"\n 
           \"</section>\"\n            \"{% endfor %}\"\n            \"{% endblock %}\",\n            encoding=\"utf-8\",\n        )\n\n    def test_build_creates_single_page(self, tmp_path):\n        readme = textwrap.dedent(\"\"\"\\\n            # Awesome Python\n\n            Intro.\n\n            ---\n\n            **Tools**\n\n            ## Widgets\n\n            _Widget libraries._\n\n            - [w1](https://example.com) - A widget.\n\n            ## Gadgets\n\n            _Gadget tools._\n\n            - [g1](https://example.com) - A gadget.\n\n            # Resources\n\n            Info.\n\n            ## Newsletters\n\n            - [NL](https://example.com)\n\n            # Contributing\n\n            Help!\n        \"\"\")\n        self._make_repo(tmp_path, readme)\n        build(str(tmp_path))\n\n        site = tmp_path / \"website\" / \"output\"\n        assert (site / \"index.html\").exists()\n        # No category sub-pages\n        assert not (site / \"categories\").exists()\n\n    def test_build_cleans_stale_output(self, tmp_path):\n        readme = textwrap.dedent(\"\"\"\\\n            # T\n\n            ---\n\n            ## Only\n\n            - [x](https://x.com) - X.\n\n            # Contributing\n\n            Done.\n        \"\"\")\n        self._make_repo(tmp_path, readme)\n\n        stale = tmp_path / \"website\" / \"output\" / \"categories\" / \"stale\"\n        stale.mkdir(parents=True)\n        (stale / \"index.html\").write_text(\"old\", encoding=\"utf-8\")\n\n        build(str(tmp_path))\n\n        assert not (tmp_path / \"website\" / \"output\" / \"categories\" / \"stale\").exists()\n\n    def test_index_contains_category_names(self, tmp_path):\n        readme = textwrap.dedent(\"\"\"\\\n            # T\n\n            ---\n\n            **Group A**\n\n            ## Alpha\n\n            - [a](https://x.com) - A.\n\n            **Group B**\n\n            ## Beta\n\n            - [b](https://x.com) - B.\n\n            # 
Contributing\n\n            Done.\n        \"\"\")\n        self._make_repo(tmp_path, readme)\n        build(str(tmp_path))\n\n        index_html = (tmp_path / \"website\" / \"output\" / \"index.html\").read_text()\n        assert \"Alpha\" in index_html\n        assert \"Beta\" in index_html\n        assert \"Group A\" in index_html\n        assert \"Group B\" in index_html\n\n    def test_index_contains_preview_text(self, tmp_path):\n        readme = textwrap.dedent(\"\"\"\\\n            # T\n\n            ---\n\n            ## Stuff\n\n            - [django](https://x.com) - A framework.\n            - [flask](https://x.com) - A micro.\n\n            # Contributing\n\n            Done.\n        \"\"\")\n        self._make_repo(tmp_path, readme)\n        build(str(tmp_path))\n\n        index_html = (tmp_path / \"website\" / \"output\" / \"index.html\").read_text()\n        assert \"django\" in index_html\n        assert \"flask\" in index_html\n\n    def test_build_with_stars_sorts_by_stars(self, tmp_path):\n        readme = textwrap.dedent(\"\"\"\\\n            # T\n\n            ---\n\n            ## Stuff\n\n            - [low-stars](https://github.com/org/low) - Low.\n            - [high-stars](https://github.com/org/high) - High.\n            - [no-stars](https://example.com/none) - None.\n\n            # Contributing\n\n            Done.\n        \"\"\")\n        (tmp_path / \"README.md\").write_text(readme, encoding=\"utf-8\")\n\n        # Copy real templates\n        real_tpl = Path(__file__).parent / \"..\" / \"templates\"\n        tpl_dir = tmp_path / \"website\" / \"templates\"\n        shutil.copytree(real_tpl, tpl_dir)\n\n        # Create mock star data\n        data_dir = tmp_path / \"website\" / \"data\"\n        data_dir.mkdir(parents=True)\n        stars = {\n            \"org/high\": {\"stars\": 5000, \"owner\": \"org\", \"fetched_at\": \"2026-01-01T00:00:00+00:00\"},\n            \"org/low\": {\"stars\": 100, \"owner\": \"org\", \"fetched_at\": 
\"2026-01-01T00:00:00+00:00\"},\n        }\n        (data_dir / \"github_stars.json\").write_text(json.dumps(stars), encoding=\"utf-8\")\n\n        build(str(tmp_path))\n\n        html = (tmp_path / \"website\" / \"output\" / \"index.html\").read_text(encoding=\"utf-8\")\n        # Star-sorted: high-stars (5000) before low-stars (100) before no-stars (None)\n        assert html.index(\"high-stars\") < html.index(\"low-stars\")\n        assert html.index(\"low-stars\") < html.index(\"no-stars\")\n        # Formatted star counts\n        assert \"5,000\" in html\n        assert \"100\" in html\n        # Expand content present\n        assert \"expand-content\" in html\n\n\n# ---------------------------------------------------------------------------\n# extract_github_repo\n# ---------------------------------------------------------------------------\n\n\nclass TestExtractGithubRepo:\n    def test_github_url(self):\n        assert extract_github_repo(\"https://github.com/psf/requests\") == \"psf/requests\"\n\n    def test_non_github_url(self):\n        assert extract_github_repo(\"https://foss.heptapod.net/pypy/pypy\") is None\n\n    def test_github_io_url(self):\n        assert extract_github_repo(\"https://user.github.io/proj\") is None\n\n    def test_trailing_slash(self):\n        assert extract_github_repo(\"https://github.com/org/repo/\") == \"org/repo\"\n\n    def test_deep_path(self):\n        assert extract_github_repo(\"https://github.com/org/repo/tree/main\") is None\n\n    def test_dot_git_suffix(self):\n        assert extract_github_repo(\"https://github.com/org/repo.git\") == \"org/repo\"\n\n    def test_org_only(self):\n        assert extract_github_repo(\"https://github.com/org\") is None\n\n\n# ---------------------------------------------------------------------------\n# load_stars\n# ---------------------------------------------------------------------------\n\n\nclass TestLoadStars:\n    def test_returns_empty_when_missing(self, tmp_path):\n       
 result = load_stars(tmp_path / \"nonexistent.json\")\n        assert result == {}\n\n    def test_loads_valid_json(self, tmp_path):\n        data = {\"psf/requests\": {\"stars\": 52467, \"owner\": \"psf\", \"fetched_at\": \"2026-01-01T00:00:00+00:00\"}}\n        f = tmp_path / \"stars.json\"\n        f.write_text(json.dumps(data), encoding=\"utf-8\")\n        result = load_stars(f)\n        assert result[\"psf/requests\"][\"stars\"] == 52467\n\n    def test_returns_empty_on_corrupt_json(self, tmp_path):\n        f = tmp_path / \"stars.json\"\n        f.write_text(\"not json\", encoding=\"utf-8\")\n        result = load_stars(f)\n        assert result == {}\n\n\n# ---------------------------------------------------------------------------\n# sort_entries\n# ---------------------------------------------------------------------------\n\n\nclass TestSortEntries:\n    def test_sorts_by_stars_descending(self):\n        entries = [\n            {\"name\": \"a\", \"stars\": 100, \"url\": \"\"},\n            {\"name\": \"b\", \"stars\": 500, \"url\": \"\"},\n            {\"name\": \"c\", \"stars\": 200, \"url\": \"\"},\n        ]\n        result = sort_entries(entries)\n        assert [e[\"name\"] for e in result] == [\"b\", \"c\", \"a\"]\n\n    def test_equal_stars_sorted_alphabetically(self):\n        entries = [\n            {\"name\": \"beta\", \"stars\": 100, \"url\": \"\"},\n            {\"name\": \"alpha\", \"stars\": 100, \"url\": \"\"},\n        ]\n        result = sort_entries(entries)\n        assert [e[\"name\"] for e in result] == [\"alpha\", \"beta\"]\n\n    def test_no_stars_go_to_bottom(self):\n        entries = [\n            {\"name\": \"no-stars\", \"stars\": None, \"url\": \"\"},\n            {\"name\": \"has-stars\", \"stars\": 50, \"url\": \"\"},\n        ]\n        result = sort_entries(entries)\n        assert [e[\"name\"] for e in result] == [\"has-stars\", \"no-stars\"]\n\n    def test_no_stars_sorted_alphabetically(self):\n        entries = [\n   
         {\"name\": \"zebra\", \"stars\": None, \"url\": \"\"},\n            {\"name\": \"apple\", \"stars\": None, \"url\": \"\"},\n        ]\n        result = sort_entries(entries)\n        assert [e[\"name\"] for e in result] == [\"apple\", \"zebra\"]\n"
  },
  {
    "path": "website/tests/test_fetch_github_stars.py",
    "content": "\"\"\"Tests for fetch_github_stars module.\"\"\"\n\nimport json\nimport os\nimport sys\n\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), \"..\"))\nfrom fetch_github_stars import (\n    build_graphql_query,\n    extract_github_repos,\n    parse_graphql_response,\n    save_cache,\n)\n\n\nclass TestExtractGithubRepos:\n    def test_extracts_owner_repo_from_github_url(self):\n        readme = \"* [requests](https://github.com/psf/requests) - HTTP lib.\"\n        result = extract_github_repos(readme)\n        assert result == {\"psf/requests\"}\n\n    def test_multiple_repos(self):\n        readme = (\n            \"* [requests](https://github.com/psf/requests) - HTTP.\\n\"\n            \"* [flask](https://github.com/pallets/flask) - Micro.\"\n        )\n        result = extract_github_repos(readme)\n        assert result == {\"psf/requests\", \"pallets/flask\"}\n\n    def test_ignores_non_github_urls(self):\n        readme = \"* [pypy](https://foss.heptapod.net/pypy/pypy) - Fast Python.\"\n        result = extract_github_repos(readme)\n        assert result == set()\n\n    def test_ignores_github_io_urls(self):\n        readme = \"* [docs](https://user.github.io/project) - Docs site.\"\n        result = extract_github_repos(readme)\n        assert result == set()\n\n    def test_ignores_github_wiki_and_blob_urls(self):\n        readme = (\n            \"* [wiki](https://github.com/org/repo/wiki) - Wiki.\\n\"\n            \"* [file](https://github.com/org/repo/blob/main/f.py) - File.\"\n        )\n        result = extract_github_repos(readme)\n        assert result == set()\n\n    def test_handles_trailing_slash(self):\n        readme = \"* [lib](https://github.com/org/repo/) - Lib.\"\n        result = extract_github_repos(readme)\n        assert result == {\"org/repo\"}\n\n    def test_deduplicates(self):\n        readme = (\n            \"* [a](https://github.com/org/repo) - A.\\n\"\n            \"* [b](https://github.com/org/repo) - B.\"\n 
       )\n        result = extract_github_repos(readme)\n        assert result == {\"org/repo\"}\n\n    def test_strips_fragment(self):\n        readme = \"* [lib](https://github.com/org/repo#section) - Lib.\"\n        result = extract_github_repos(readme)\n        assert result == {\"org/repo\"}\n\n\nclass TestSaveCache:\n    def test_creates_directory_and_writes_json(self, tmp_path, monkeypatch):\n        data_dir = tmp_path / \"data\"\n        cache_file = data_dir / \"stars.json\"\n        monkeypatch.setattr(\"fetch_github_stars.DATA_DIR\", data_dir)\n        monkeypatch.setattr(\"fetch_github_stars.CACHE_FILE\", cache_file)\n        save_cache({\"a/b\": {\"stars\": 1}})\n        assert cache_file.exists()\n        assert json.loads(cache_file.read_text(encoding=\"utf-8\")) == {\"a/b\": {\"stars\": 1}}\n\n\nclass TestBuildGraphqlQuery:\n    def test_single_repo(self):\n        query = build_graphql_query([\"psf/requests\"])\n        assert \"repository\" in query\n        assert 'owner: \"psf\"' in query\n        assert 'name: \"requests\"' in query\n        assert \"stargazerCount\" in query\n\n    def test_multiple_repos_use_aliases(self):\n        query = build_graphql_query([\"psf/requests\", \"pallets/flask\"])\n        assert \"repo_0:\" in query\n        assert \"repo_1:\" in query\n\n    def test_empty_list(self):\n        query = build_graphql_query([])\n        assert query == \"\"\n\n    def test_skips_repos_with_quotes_in_name(self):\n        query = build_graphql_query(['org/\"bad\"'])\n        assert query == \"\"\n\n    def test_skips_only_bad_repos(self):\n        query = build_graphql_query([\"good/repo\", 'bad/\"repo\"'])\n        assert \"good\" in query\n        assert \"bad\" not in query\n\n\nclass TestParseGraphqlResponse:\n    def test_parses_star_count_and_owner(self):\n        data = {\n            \"repo_0\": {\n                \"stargazerCount\": 52467,\n                \"owner\": {\"login\": \"psf\"},\n            }\n        }\n    
    repos = [\"psf/requests\"]\n        result = parse_graphql_response(data, repos)\n        assert result[\"psf/requests\"][\"stars\"] == 52467\n        assert result[\"psf/requests\"][\"owner\"] == \"psf\"\n\n    def test_skips_null_repos(self):\n        data = {\"repo_0\": None}\n        repos = [\"deleted/repo\"]\n        result = parse_graphql_response(data, repos)\n        assert result == {}\n\n    def test_handles_missing_owner(self):\n        data = {\"repo_0\": {\"stargazerCount\": 100}}\n        repos = [\"org/repo\"]\n        result = parse_graphql_response(data, repos)\n        assert result[\"org/repo\"][\"owner\"] == \"\"\n\n    def test_multiple_repos(self):\n        data = {\n            \"repo_0\": {\"stargazerCount\": 100, \"owner\": {\"login\": \"a\"}},\n            \"repo_1\": {\"stargazerCount\": 200, \"owner\": {\"login\": \"b\"}},\n        }\n        repos = [\"a/x\", \"b/y\"]\n        result = parse_graphql_response(data, repos)\n        assert len(result) == 2\n        assert result[\"a/x\"][\"stars\"] == 100\n        assert result[\"b/y\"][\"stars\"] == 200\n\n\nclass TestMainSkipsFreshCache:\n    \"\"\"Verify that main() skips fetching when all cache entries are fresh.\"\"\"\n\n    def test_skips_fetch_when_cache_is_fresh(self, tmp_path, monkeypatch, capsys):\n        from datetime import datetime, timedelta, timezone\n\n        from fetch_github_stars import main\n\n        # Set up a minimal README with one repo\n        readme = tmp_path / \"README.md\"\n        readme.write_text(\"* [req](https://github.com/psf/requests) - HTTP.\\n\")\n        monkeypatch.setattr(\"fetch_github_stars.README_PATH\", readme)\n\n        # Pre-populate cache with a fresh entry (1 hour ago)\n        data_dir = tmp_path / \"data\"\n        data_dir.mkdir()\n        cache_file = data_dir / \"github_stars.json\"\n        now = datetime.now(timezone.utc)\n        fresh_cache = {\n            \"psf/requests\": {\n                \"stars\": 52000,\n            
    \"owner\": \"psf\",\n                \"last_commit_at\": \"2025-01-01T00:00:00+00:00\",\n                \"fetched_at\": (now - timedelta(hours=1)).isoformat(),\n            }\n        }\n        cache_file.write_text(json.dumps(fresh_cache), encoding=\"utf-8\")\n        monkeypatch.setattr(\"fetch_github_stars.CACHE_FILE\", cache_file)\n        monkeypatch.setattr(\"fetch_github_stars.DATA_DIR\", data_dir)\n        monkeypatch.setenv(\"GITHUB_TOKEN\", \"fake-token\")\n\n        main()\n\n        output = capsys.readouterr().out\n        assert \"0 repos to fetch\" in output\n        assert \"Cache is up to date\" in output\n\n    def test_fetches_when_cache_is_stale(self, tmp_path, monkeypatch, capsys):\n        from datetime import datetime, timedelta, timezone\n        from unittest.mock import MagicMock\n\n        from fetch_github_stars import main\n\n        # Set up a minimal README with one repo\n        readme = tmp_path / \"README.md\"\n        readme.write_text(\"* [req](https://github.com/psf/requests) - HTTP.\\n\")\n        monkeypatch.setattr(\"fetch_github_stars.README_PATH\", readme)\n\n        # Pre-populate cache with a stale entry (24 hours ago)\n        data_dir = tmp_path / \"data\"\n        data_dir.mkdir()\n        cache_file = data_dir / \"github_stars.json\"\n        now = datetime.now(timezone.utc)\n        stale_cache = {\n            \"psf/requests\": {\n                \"stars\": 52000,\n                \"owner\": \"psf\",\n                \"last_commit_at\": \"2025-01-01T00:00:00+00:00\",\n                \"fetched_at\": (now - timedelta(hours=24)).isoformat(),\n            }\n        }\n        cache_file.write_text(json.dumps(stale_cache), encoding=\"utf-8\")\n        monkeypatch.setattr(\"fetch_github_stars.CACHE_FILE\", cache_file)\n        monkeypatch.setattr(\"fetch_github_stars.DATA_DIR\", data_dir)\n        monkeypatch.setenv(\"GITHUB_TOKEN\", \"fake-token\")\n\n        # Mock httpx.Client to avoid real API calls\n        
mock_response = MagicMock()\n        mock_response.json.return_value = {\n            \"data\": {\n                \"repo_0\": {\n                    \"stargazerCount\": 53000,\n                    \"owner\": {\"login\": \"psf\"},\n                    \"defaultBranchRef\": {\"target\": {\"committedDate\": \"2025-06-01T00:00:00Z\"}},\n                }\n            }\n        }\n        mock_response.raise_for_status = MagicMock()\n        mock_client = MagicMock()\n        mock_client.__enter__ = MagicMock(return_value=mock_client)\n        mock_client.__exit__ = MagicMock(return_value=False)\n        mock_client.post.return_value = mock_response\n        monkeypatch.setattr(\"fetch_github_stars.httpx.Client\", lambda **kwargs: mock_client)\n\n        main()\n\n        output = capsys.readouterr().out\n        assert \"1 repos to fetch\" in output\n        assert \"Done. Fetched 1 repos\" in output\n        mock_client.post.assert_called_once()\n"
  },
  {
    "path": "website/tests/test_readme_parser.py",
    "content": "\"\"\"Tests for the readme_parser module.\"\"\"\n\nimport os\nimport textwrap\n\nimport pytest\nfrom markdown_it import MarkdownIt\nfrom markdown_it.tree import SyntaxTreeNode\n\nfrom readme_parser import (\n    _parse_section_entries,\n    _render_section_html,\n    parse_readme,\n    render_inline_html,\n    render_inline_text,\n)\n\n\ndef _parse_inline(md_text: str) -> list[SyntaxTreeNode]:\n    \"\"\"Helper: parse a single paragraph and return its inline children.\"\"\"\n    md = MarkdownIt(\"commonmark\")\n    root = SyntaxTreeNode(md.parse(md_text))\n    # root > paragraph > inline > children\n    return root.children[0].children[0].children\n\n\nclass TestRenderInlineHtml:\n    def test_plain_text_escapes_html(self):\n        children = _parse_inline(\"Hello <world> & friends\")\n        assert render_inline_html(children) == \"Hello &lt;world&gt; &amp; friends\"\n\n    def test_link_with_target(self):\n        children = _parse_inline(\"[name](https://example.com)\")\n        html = render_inline_html(children)\n        assert 'href=\"https://example.com\"' in html\n        assert 'target=\"_blank\"' in html\n        assert 'rel=\"noopener\"' in html\n        assert \">name</a>\" in html\n\n    def test_emphasis(self):\n        children = _parse_inline(\"*italic* text\")\n        assert \"<em>italic</em>\" in render_inline_html(children)\n\n    def test_strong(self):\n        children = _parse_inline(\"**bold** text\")\n        assert \"<strong>bold</strong>\" in render_inline_html(children)\n\n    def test_code_inline(self):\n        children = _parse_inline(\"`some code`\")\n        assert \"<code>some code</code>\" in render_inline_html(children)\n\n    def test_mixed_link_and_text(self):\n        children = _parse_inline(\"See [foo](https://x.com) for details.\")\n        html = render_inline_html(children)\n        assert \"See \" in html\n        assert \">foo</a>\" in html\n        assert \" for details.\" in html\n\n\nclass 
TestRenderInlineText:\n    def test_plain_text(self):\n        children = _parse_inline(\"Hello world\")\n        assert render_inline_text(children) == \"Hello world\"\n\n    def test_link_becomes_text(self):\n        children = _parse_inline(\"See [awesome-algos](https://github.com/x/y).\")\n        assert render_inline_text(children) == \"See awesome-algos.\"\n\n    def test_emphasis_stripped(self):\n        children = _parse_inline(\"*italic* text\")\n        assert render_inline_text(children) == \"italic text\"\n\n    def test_code_inline_kept(self):\n        children = _parse_inline(\"`code` here\")\n        assert render_inline_text(children) == \"code here\"\n\n\nMINIMAL_README = textwrap.dedent(\"\"\"\\\n    # Awesome Python\n\n    Some intro text.\n\n    ---\n\n    ## Alpha\n\n    _Libraries for alpha stuff._\n\n    - [lib-a](https://example.com/a) - Does A.\n    - [lib-b](https://example.com/b) - Does B.\n\n    ## Beta\n\n    _Tools for beta._\n\n    - [lib-c](https://example.com/c) - Does C.\n\n    # Resources\n\n    Where to discover resources.\n\n    ## Newsletters\n\n    - [News One](https://example.com/n1)\n    - [News Two](https://example.com/n2)\n\n    ## Podcasts\n\n    - [Pod One](https://example.com/p1)\n\n    # Contributing\n\n    Please contribute!\n\"\"\")\n\n\nGROUPED_README = textwrap.dedent(\"\"\"\\\n    # Awesome Python\n\n    Some intro text.\n\n    ---\n\n    **Group One**\n\n    ## Alpha\n\n    _Libraries for alpha stuff._\n\n    - [lib-a](https://example.com/a) - Does A.\n    - [lib-b](https://example.com/b) - Does B.\n\n    **Group Two**\n\n    ## Beta\n\n    _Tools for beta._\n\n    - [lib-c](https://example.com/c) - Does C.\n\n    ## Gamma\n\n    - [lib-d](https://example.com/d) - Does D.\n\n    # Resources\n\n    Where to discover resources.\n\n    ## Newsletters\n\n    - [News One](https://example.com/n1)\n\n    # Contributing\n\n    Please contribute!\n\"\"\")\n\n\nclass TestParseReadmeSections:\n    def 
test_ungrouped_categories_go_to_other(self):\n        groups, _ = parse_readme(MINIMAL_README)\n        assert len(groups) == 1\n        assert groups[0][\"name\"] == \"Other\"\n        assert len(groups[0][\"categories\"]) == 2\n\n    def test_ungrouped_category_names(self):\n        groups, _ = parse_readme(MINIMAL_README)\n        cats = groups[0][\"categories\"]\n        assert cats[0][\"name\"] == \"Alpha\"\n        assert cats[1][\"name\"] == \"Beta\"\n\n    def test_resource_count(self):\n        _, resources = parse_readme(MINIMAL_README)\n        assert len(resources) == 2\n\n    def test_category_slugs(self):\n        groups, _ = parse_readme(MINIMAL_README)\n        cats = groups[0][\"categories\"]\n        assert cats[0][\"slug\"] == \"alpha\"\n        assert cats[1][\"slug\"] == \"beta\"\n\n    def test_category_description(self):\n        groups, _ = parse_readme(MINIMAL_README)\n        cats = groups[0][\"categories\"]\n        assert cats[0][\"description\"] == \"Libraries for alpha stuff.\"\n        assert cats[1][\"description\"] == \"Tools for beta.\"\n\n    def test_resource_names(self):\n        _, resources = parse_readme(MINIMAL_README)\n        assert resources[0][\"name\"] == \"Newsletters\"\n        assert resources[1][\"name\"] == \"Podcasts\"\n\n    def test_contributing_skipped(self):\n        groups, resources = parse_readme(MINIMAL_README)\n        all_names = []\n        for g in groups:\n            all_names.extend(c[\"name\"] for c in g[\"categories\"])\n        all_names.extend(r[\"name\"] for r in resources)\n        assert \"Contributing\" not in all_names\n\n    def test_no_separator(self):\n        groups, resources = parse_readme(\"# Just a heading\\n\\nSome text.\\n\")\n        assert groups == []\n        assert resources == []\n\n    def test_no_description(self):\n        readme = textwrap.dedent(\"\"\"\\\n            # Title\n\n            ---\n\n            ## NullDesc\n\n            - [item](https://x.com) - 
Thing.\n\n            # Resources\n\n            ## Tips\n\n            - [tip](https://x.com)\n\n            # Contributing\n\n            Done.\n        \"\"\")\n        groups, resources = parse_readme(readme)\n        cats = groups[0][\"categories\"]\n        assert cats[0][\"description\"] == \"\"\n        assert cats[0][\"entries\"][0][\"name\"] == \"item\"\n\n    def test_description_with_link_stripped(self):\n        readme = textwrap.dedent(\"\"\"\\\n            # T\n\n            ---\n\n            ## Algos\n\n            _Algorithms. Also see [awesome-algos](https://example.com)._\n\n            - [lib](https://x.com) - Lib.\n\n            # Contributing\n\n            Done.\n        \"\"\")\n        groups, _ = parse_readme(readme)\n        cats = groups[0][\"categories\"]\n        assert cats[0][\"description\"] == \"Algorithms. Also see awesome-algos.\"\n\n\nclass TestParseGroupedReadme:\n    def test_group_count(self):\n        groups, _ = parse_readme(GROUPED_README)\n        assert len(groups) == 2\n\n    def test_group_names(self):\n        groups, _ = parse_readme(GROUPED_README)\n        assert groups[0][\"name\"] == \"Group One\"\n        assert groups[1][\"name\"] == \"Group Two\"\n\n    def test_group_slugs(self):\n        groups, _ = parse_readme(GROUPED_README)\n        assert groups[0][\"slug\"] == \"group-one\"\n        assert groups[1][\"slug\"] == \"group-two\"\n\n    def test_group_one_has_one_category(self):\n        groups, _ = parse_readme(GROUPED_README)\n        assert len(groups[0][\"categories\"]) == 1\n        assert groups[0][\"categories\"][0][\"name\"] == \"Alpha\"\n\n    def test_group_two_has_two_categories(self):\n        groups, _ = parse_readme(GROUPED_README)\n        assert len(groups[1][\"categories\"]) == 2\n        assert groups[1][\"categories\"][0][\"name\"] == \"Beta\"\n        assert groups[1][\"categories\"][1][\"name\"] == \"Gamma\"\n\n    def test_resources_still_parsed(self):\n        _, resources = 
parse_readme(GROUPED_README)\n        assert len(resources) == 1\n        assert resources[0][\"name\"] == \"Newsletters\"\n\n    def test_empty_group_skipped(self):\n        readme = textwrap.dedent(\"\"\"\\\n            # T\n\n            ---\n\n            **Empty**\n\n            **HasCats**\n\n            ## Cat\n\n            - [x](https://x.com) - X.\n\n            # Contributing\n\n            Done.\n        \"\"\")\n        groups, _ = parse_readme(readme)\n        assert len(groups) == 1\n        assert groups[0][\"name\"] == \"HasCats\"\n\n    def test_bold_with_extra_text_not_group_marker(self):\n        readme = textwrap.dedent(\"\"\"\\\n            # T\n\n            ---\n\n            **Note:** This is not a group marker.\n\n            ## Cat\n\n            - [x](https://x.com) - X.\n\n            # Contributing\n\n            Done.\n        \"\"\")\n        groups, _ = parse_readme(readme)\n        # \"Note:\" has text after the strong node, so it's not a group marker\n        # Category goes into \"Other\"\n        assert len(groups) == 1\n        assert groups[0][\"name\"] == \"Other\"\n\n    def test_categories_before_any_group_marker(self):\n        readme = textwrap.dedent(\"\"\"\\\n            # T\n\n            ---\n\n            ## Orphan\n\n            - [x](https://x.com) - X.\n\n            **A Group**\n\n            ## Grouped\n\n            - [y](https://x.com) - Y.\n\n            # Contributing\n\n            Done.\n        \"\"\")\n        groups, _ = parse_readme(readme)\n        assert len(groups) == 2\n        assert groups[0][\"name\"] == \"Other\"\n        assert groups[0][\"categories\"][0][\"name\"] == \"Orphan\"\n        assert groups[1][\"name\"] == \"A Group\"\n        assert groups[1][\"categories\"][0][\"name\"] == \"Grouped\"\n\n\ndef _content_nodes(md_text: str) -> list[SyntaxTreeNode]:\n    \"\"\"Helper: parse markdown and return all block nodes.\"\"\"\n    md = MarkdownIt(\"commonmark\")\n    root = 
SyntaxTreeNode(md.parse(md_text))\n    return root.children\n\n\nclass TestParseSectionEntries:\n    def test_flat_entries(self):\n        nodes = _content_nodes(\n            \"- [django](https://example.com/d) - A web framework.\\n\"\n            \"- [flask](https://example.com/f) - A micro framework.\\n\"\n        )\n        entries = _parse_section_entries(nodes)\n        assert len(entries) == 2\n        assert entries[0][\"name\"] == \"django\"\n        assert entries[0][\"url\"] == \"https://example.com/d\"\n        assert \"web framework\" in entries[0][\"description\"]\n        assert entries[0][\"also_see\"] == []\n        assert entries[1][\"name\"] == \"flask\"\n\n    def test_link_only_entry(self):\n        nodes = _content_nodes(\"- [tool](https://x.com)\\n\")\n        entries = _parse_section_entries(nodes)\n        assert len(entries) == 1\n        assert entries[0][\"name\"] == \"tool\"\n        assert entries[0][\"description\"] == \"\"\n\n    def test_subcategorized_entries(self):\n        nodes = _content_nodes(\n            \"- Algorithms\\n\"\n            \"  - [algos](https://x.com/a) - Algo lib.\\n\"\n            \"  - [sorts](https://x.com/s) - Sort lib.\\n\"\n            \"- Design Patterns\\n\"\n            \"  - [patterns](https://x.com/p) - Pattern lib.\\n\"\n        )\n        entries = _parse_section_entries(nodes)\n        assert len(entries) == 3\n        assert entries[0][\"name\"] == \"algos\"\n        assert entries[2][\"name\"] == \"patterns\"\n\n    def test_text_before_link_is_subcategory(self):\n        nodes = _content_nodes(\n            \"- MySQL - [awesome-mysql](http://example.com/awesome-mysql/)\\n\"\n            \"  - [mysqlclient](https://example.com/mysqlclient) - MySQL connector.\\n\"\n            \"  - [pymysql](https://example.com/pymysql) - Pure Python MySQL driver.\\n\"\n        )\n        entries = _parse_section_entries(nodes)\n        # awesome-mysql is a subcategory label, not an entry\n        assert 
len(entries) == 2\n        names = [e[\"name\"] for e in entries]\n        assert \"awesome-mysql\" not in names\n        assert \"mysqlclient\" in names\n        assert \"pymysql\" in names\n\n    def test_also_see_sub_entries(self):\n        nodes = _content_nodes(\n            \"- [asyncio](https://docs.python.org/3/library/asyncio.html) - Async I/O.\\n\"\n            \"  - [awesome-asyncio](https://github.com/timofurrer/awesome-asyncio)\\n\"\n            \"- [trio](https://github.com/python-trio/trio) - Friendly async.\\n\"\n        )\n        entries = _parse_section_entries(nodes)\n        assert len(entries) == 2\n        assert entries[0][\"name\"] == \"asyncio\"\n        assert len(entries[0][\"also_see\"]) == 1\n        assert entries[0][\"also_see\"][0][\"name\"] == \"awesome-asyncio\"\n        assert entries[1][\"name\"] == \"trio\"\n        assert entries[1][\"also_see\"] == []\n\n    def test_entry_count_includes_also_see(self):\n        readme = textwrap.dedent(\"\"\"\\\n            # T\n\n            ---\n\n            ## Async\n\n            - [asyncio](https://x.com) - Async I/O.\n              - [awesome-asyncio](https://y.com)\n            - [trio](https://z.com) - Friendly async.\n\n            # Contributing\n\n            Done.\n        \"\"\")\n        groups, _ = parse_readme(readme)\n        cats = groups[0][\"categories\"]\n        # 2 main entries + 1 also_see = 3\n        assert cats[0][\"entry_count\"] == 3\n\n    def test_preview_first_four_names(self):\n        readme = textwrap.dedent(\"\"\"\\\n            # T\n\n            ---\n\n            ## Libs\n\n            - [alpha](https://x.com) - A.\n            - [beta](https://x.com) - B.\n            - [gamma](https://x.com) - C.\n            - [delta](https://x.com) - D.\n            - [epsilon](https://x.com) - E.\n\n            # Contributing\n\n            Done.\n        \"\"\")\n        groups, _ = parse_readme(readme)\n        cats = groups[0][\"categories\"]\n        assert 
cats[0][\"preview\"] == \"alpha, beta, gamma, delta\"\n\n    def test_description_html_escapes_xss(self):\n        nodes = _content_nodes('- [lib](https://x.com) - A <script>alert(1)</script> lib.\\n')\n        entries = _parse_section_entries(nodes)\n        assert \"<script>\" not in entries[0][\"description\"]\n        assert \"&lt;script&gt;\" in entries[0][\"description\"]\n\n\nclass TestRenderSectionHtml:\n    def test_basic_entry(self):\n        nodes = _content_nodes(\"- [django](https://example.com) - A web framework.\\n\")\n        html = _render_section_html(nodes)\n        assert 'class=\"entry\"' in html\n        assert 'href=\"https://example.com\"' in html\n        assert \"django\" in html\n        assert \"A web framework.\" in html\n\n    def test_subcategory_label(self):\n        nodes = _content_nodes(\n            \"- Synchronous\\n  - [django](https://x.com) - Framework.\\n\"\n        )\n        html = _render_section_html(nodes)\n        assert 'class=\"subcat\"' in html\n        assert \"Synchronous\" in html\n        assert 'class=\"entry\"' in html\n\n    def test_sub_entry(self):\n        nodes = _content_nodes(\n            \"- [django](https://x.com) - Framework.\\n\"\n            \"  - [awesome-django](https://y.com)\\n\"\n        )\n        html = _render_section_html(nodes)\n        assert 'class=\"entry-sub\"' in html\n        assert \"awesome-django\" in html\n\n    def test_link_only_entry(self):\n        nodes = _content_nodes(\"- [tool](https://x.com)\\n\")\n        html = _render_section_html(nodes)\n        assert 'class=\"entry\"' in html\n        assert 'href=\"https://x.com\"' in html\n        assert \"tool\" in html\n\n    def test_xss_escaped_in_name(self):\n        nodes = _content_nodes('- [<img onerror=alert(1)>](https://x.com) - Bad.\\n')\n        html = _render_section_html(nodes)\n        # The raw tag must never appear unescaped in the rendered output.\n        assert \"<img\" not in html\n\n    def test_xss_escaped_in_subcat(self):\n        nodes = _content_nodes(\"- 
<script>alert(1)</script>\\n\")\n        html = _render_section_html(nodes)\n        assert \"<script>\" not in html\n\n\nclass TestParseRealReadme:\n    @pytest.fixture(autouse=True)\n    def load_readme(self):\n        readme_path = os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"README.md\")\n        with open(readme_path, encoding=\"utf-8\") as f:\n            self.readme_text = f.read()\n        self.groups, self.resources = parse_readme(self.readme_text)\n        self.cats = [c for g in self.groups for c in g[\"categories\"]]\n\n    def test_at_least_11_groups(self):\n        assert len(self.groups) >= 11\n\n    def test_first_group_is_ai_ml(self):\n        assert self.groups[0][\"name\"] == \"AI & ML\"\n\n    def test_at_least_76_categories(self):\n        assert len(self.cats) >= 76\n\n    def test_resources_has_newsletters_and_podcasts(self):\n        names = [r[\"name\"] for r in self.resources]\n        assert \"Newsletters\" in names\n        assert \"Podcasts\" in names\n\n    def test_contributing_not_in_results(self):\n        all_names = [c[\"name\"] for c in self.cats] + [r[\"name\"] for r in self.resources]\n        assert \"Contributing\" not in all_names\n\n    def test_first_category_is_ai_and_agents(self):\n        assert self.cats[0][\"name\"] == \"AI and Agents\"\n        assert self.cats[0][\"slug\"] == \"ai-and-agents\"\n\n    def test_web_apis_slug(self):\n        slugs = [c[\"slug\"] for c in self.cats]\n        assert \"web-apis\" in slugs\n\n    def test_descriptions_extracted(self):\n        ai = next(c for c in self.cats if c[\"name\"] == \"AI and Agents\")\n        assert \"AI applications\" in ai[\"description\"]\n\n    def test_entry_counts_nonzero(self):\n        for cat in self.cats:\n            assert cat[\"entry_count\"] > 0, f\"{cat['name']} has 0 entries\"\n\n    def test_previews_nonempty(self):\n        for cat in self.cats:\n            assert cat[\"preview\"], f\"{cat['name']} has empty preview\"\n\n    def 
test_content_html_nonempty(self):\n        for cat in self.cats:\n            assert cat[\"content_html\"], f\"{cat['name']} has empty content_html\"\n\n    def test_algorithms_has_subcategories(self):\n        algos = next(c for c in self.cats if c[\"name\"] == \"Algorithms and Design Patterns\")\n        assert 'class=\"subcat\"' in algos[\"content_html\"]\n\n    def test_async_has_also_see(self):\n        async_cat = next(c for c in self.cats if c[\"name\"] == \"Asynchronous Programming\")\n        asyncio_entry = next(e for e in async_cat[\"entries\"] if e[\"name\"] == \"asyncio\")\n        assert len(asyncio_entry[\"also_see\"]) >= 1\n        assert asyncio_entry[\"also_see\"][0][\"name\"] == \"awesome-asyncio\"\n\n    def test_description_links_stripped_to_text(self):\n        algos = next(c for c in self.cats if c[\"name\"] == \"Algorithms and Design Patterns\")\n        assert \"awesome-algorithms\" in algos[\"description\"]\n        assert \"https://\" not in algos[\"description\"]\n\n    def test_miscellaneous_in_own_group(self):\n        misc_group = next((g for g in self.groups if g[\"name\"] == \"Miscellaneous\"), None)\n        assert misc_group is not None\n        assert any(c[\"name\"] == \"Miscellaneous\" for c in misc_group[\"categories\"])\n"
  }
]