[
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: '[BUG] '\nlabels: bug\nassignees: ''\n---\n\n**Describe the bug**\nA clear and concise description of what the bug is.\n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Use app ID '...'\n2. Call method '....'\n3. See error\n\n**Expected behavior**\nA clear and concise description of what you expected to happen.\n\n**Code Example**\n```python\nfrom gplay_scraper import GPlayScraper\nscraper = GPlayScraper()\n# Your code here\n```\n\n**Error Output**\n```\nPaste the full error message here\n```\n\n**Environment:**\n - OS: [e.g. Windows 10, macOS, Linux]\n - Python version: [e.g. 3.8.5]\n - Library version: [e.g. 1.0.2]\n\n**Additional context**\nAdd any other context about the problem here."
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: '[FEATURE] '\nlabels: enhancement\nassignees: ''\n---\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\n**Use Case**\nDescribe how this feature would be used:\n```python\n# Example of how the new feature would work\nscraper = GPlayScraper()\nresult = scraper.new_method(app_id)\n```\n\n**Additional context**\nAdd any other context or screenshots about the feature request here."
  },
  {
    "path": ".github/pull_request_template.md",
    "content": "# Pull Request\n\n## Description\nBrief description of changes made.\n\n## Type of Change\n- [ ] Bug fix (non-breaking change which fixes an issue)\n- [ ] New feature (non-breaking change which adds functionality)\n- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)\n- [ ] Documentation update\n\n## Testing\n- [ ] I have tested my changes locally\n- [ ] I have added tests for new functionality\n- [ ] All existing tests pass\n\n## Code Quality\n- [ ] My code follows the project's style guidelines\n- [ ] I have performed a self-review of my own code\n- [ ] I have commented my code, particularly in hard-to-understand areas\n- [ ] I have made corresponding changes to the documentation\n\n## Related Issues\nFixes #(issue number)\n\n## Additional Notes\nAny additional information about the changes."
  },
  {
    "path": ".github/workflows/docs.yml",
    "content": "name: Build and Deploy Documentation\n\non:\n  push:\n    branches: [ main ]\n\npermissions:\n  contents: write\n\njobs:\n  docs:\n    runs-on: ubuntu-latest\n    steps:\n    - name: Checkout\n      uses: actions/checkout@v4\n    \n    - name: Set up Python\n      uses: actions/setup-python@v5\n      with:\n        python-version: '3.11'\n    \n    - name: Install dependencies\n      run: |\n        python -m pip install --upgrade pip\n        pip install -r docs/requirements.txt\n    \n    - name: Build documentation\n      run: |\n        cd docs\n        sphinx-build -b html . _build/html\n        touch _build/html/.nojekyll\n    \n    - name: Deploy to GitHub Pages\n      uses: peaceiris/actions-gh-pages@v4\n      if: github.ref == 'refs/heads/main'\n      with:\n        github_token: ${{ secrets.GITHUB_TOKEN }}\n        publish_dir: ./docs/_build/html\n        force_orphan: true"
  },
  {
    "path": ".github/workflows/test.yml",
    "content": "name: Tests\n\non:\n  push:\n    branches: [ main, develop ]\n  pull_request:\n    branches: [ main ]\n\njobs:\n  test:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        python-version: [\"3.8\", \"3.9\", \"3.10\", \"3.11\", \"3.12\"]\n      fail-fast: false\n\n    steps:\n    - uses: actions/checkout@v4\n    \n    - name: Set up Python ${{ matrix.python-version }}\n      uses: actions/setup-python@v4\n      with:\n        python-version: ${{ matrix.python-version }}\n    \n    - name: Install dependencies\n      run: |\n        python -m pip install --upgrade pip\n        pip install -r requirements.txt\n        pip install pytest pytest-cov\n    \n    - name: Run package and basic functionality tests\n      run: |\n        python -m unittest tests.test_package tests.test_basic -v\n    \n    - name: Run network-dependent tests (optional)\n      continue-on-error: true\n      timeout-minutes: 15\n      run: |\n        echo \"Running network-dependent tests with delays (failures expected due to rate limiting)...\"\n        python -m unittest tests.test_app_methods -v || echo \"App methods test completed\"\n        python -m unittest tests.test_search_methods -v || echo \"Search methods test completed\"\n        python -m unittest tests.test_reviews_methods -v || echo \"Reviews methods test completed\"\n        python -m unittest tests.test_developer_methods -v || echo \"Developer methods test completed\"\n        python -m unittest tests.test_list_methods -v || echo \"List methods test completed\"\n        python -m unittest tests.test_similar_methods -v || echo \"Similar methods test completed\"\n        python -m unittest tests.test_suggest_methods -v || echo \"Suggest methods test completed\""
  },
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n!docs/.nojekyll\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\nPipfile.lock\n\n# PEP 582\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# IDE\n.vscode/\n.idea/\n*.swp\n*.swo\n\n# OS\n.DS_Store\nThumbs.db\n\n# Project specific\nbackup/\ntemp/\n*.tmp\n\n# Publishing scripts (local use only)\npublish_to_github.bat\npublish_to_github.sh\npublish_to_pypi.bat\npublish_to_pypi.sh\nupdate_github.bat\nupdate_github.sh\nupdate_pypi.bat\nupdate_pypi.sh\n\n# Test and debug files\nxx.py\nx_fallback.py\ntest_all_methods.py\ndebug_limbo*.py\nlimbo_ds5_raw.txt\nappbrain_scraper.py\n\n# Documentation folders (local use only)\nwiki/\ncommunity/\n\n# Chrome 
extensions\nchrome-extension/\nchrome-extension-new/\nfirefox-extensions/\nfirefox-extension-new/\nedge-extensions/\nedge-extension-new/\nopera-extensions/\nopera-extension-new/\nwebstore-upload/\nwebstore-upload-new/\nwebstore-upload-chrome/\nwebstore-upload-firefox/\nwebstore-upload-edge/\nwebstore-upload-opera/\nwebstore-upload-chrome-new/\nwebstore-upload-firefox-new/\nwebstore-upload-edge-new/\nwebstore-upload-opera-new/\nbuild-extensions/\nbuild-extension/\nbuild-extension-chrome/\nbuild-extension-firefox/\nbuild-extension-edge/\nbuild-extension-opera/\nbuild-extension-chrome-new/\nbuild-extension-firefox-new/\nbuild-extension-edge-new/\nbuild-extension-opera-new/\ndist-extensions/\ndist-extension/\ndist-extension-chrome/\ndist-extension-firefox/\ndist-extension-edge/\ndist-extension-opera/\ndist-extension-chrome-new/\ndist-extension-firefox-new/\ndist-extension-edge-new/\ndist-extension-opera-new/\nrelease/\nrelease-chrome/\nrelease-firefox/\nrelease-edge/\nrelease-opera/\nrelease-chrome-new/\nrelease-firefox-new/\nrelease-edge-new/\nrelease-opera-new/\ntemp-extensions/\ntemp-extension/\ntemp-extension-chrome/\ntemp-extension-firefox/\ntemp-extension-edge/\ntemp-extension-opera/\ntemp-extension-chrome-new/\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\n## [1.0.6] - 2025-11-16\n\n### Bug Fixes\n\n- **Reviews Pagination Fix**: Fixed critical issue when requesting more reviews than available\n  - Resolved 'NoneType' object is not subscriptable error\n  - Improved token extraction logic for empty review responses\n  - Now gracefully returns available reviews instead of crashing\n  - Enhanced error handling in ReviewsScraper and ReviewsParser\n- **Empty Response Handling**: Better handling of apps with limited reviews\n  - Safe bounds checking for pagination tokens\n  - Proper null checking for empty data structures\n  - Graceful degradation when no more reviews are available\n\n### Acknowledgments\n\n- Thanks to [@PhamDinhThienVu](https://github.com/PhamDinhThienVu) for reporting the reviews pagination bug\n\n## [1.0.5] - 2025-10-18\n\n### New Features\n\n- **Publisher Country Detection**: Added `publisherCountry` field to app data\n  - Automatically detects developer's country from phone number and address\n  - Uses international phone prefixes and address parsing\n  - Returns country names like \"United States\", \"Germany\", \"Japan\", etc.\n  - Handles multiple countries when phone and address differ (e.g., \"United States/Germany\")\n\n### Removed Features\n\n- **Removed updatedTimestamp**: Removed deprecated timestamp field that was causing confusion\n\n\n### Bug Fixes\n\n- **Enhanced Error Handling**: Improved error handling and retry mechanisms\n  - Better HTTP client fallback when requests fail\n  - More robust JSON parsing with multiple fallback strategies\n  - Improved handling of network timeouts and connection errors\n- **Retry Mechanism**: Fixed automatic retry logic for failed requests\n  - Exponential backoff for rate limiting\n  - Automatic HTTP client switching on failures\n  - Better error recovery for temporary network issues\n- **General Bug Fixes**: Fixed various edge cases and improved 
stability\n  - Better handling of malformed JSON responses\n  - Improved data extraction for apps with missing fields\n  - Enhanced Unicode handling for international app data\n\n## [1.0.4] - 2025-10-16\n\n### New Features\n\n- **Assets Parameter**: Added configurable image sizes for all app methods\n  - `SMALL` (512px width)\n  - `MEDIUM` (1024px width) - Default\n  - `LARGE` (2048px width)\n  - `ORIGINAL` (Maximum size)\n  - Available in all app methods: `app_analyze()`, `app_get_field()`, `app_get_fields()`, `app_print_field()`, `app_print_fields()`, `app_print_all()`\n  - Affects icon, headerImage, screenshots, and videoImage URLs\n\n### Bug Fixes\n\n- **Release Date Fallback**: Fixed missing release dates when using language/country parameters\n  - Added automatic fallback request without `hl`/`gl` parameters when release date is null\n  - Ensures release date extraction for apps in all regions\n- **Path Resolution**: Fixed various path-related issues in data extraction\n- **Image URL Processing**: Improved image URL formatting with proper size parameters\n\n### Usage Examples\n\n```python\n# Use different asset sizes\ndata = scraper.app_analyze(\"com.whatsapp\", assets=\"LARGE\")\nicon = scraper.app_get_field(\"com.whatsapp\", \"icon\", assets=\"SMALL\")\nscraper.app_print_all(\"com.whatsapp\", assets=\"ORIGINAL\")\n```\n\n## [1.0.3] - 2025-10-15\n\n### New Features\n\n- **Enhanced Search Pagination**: Now able to fetch unlimited search results (300+) with automatic pagination, not limited to 50 results anymore\n- **Improved Search Performance**: Optimized search result fetching with better token handling and batch processing\n\n### Bug Fixes & Code Quality Improvements\n\n- **Code Review**: Addressed security vulnerabilities and code quality issues\n- **Error Handling**: Improved error handling patterns across all modules\n- **Performance**: Optimized JSON parsing and HTTP client fallback logic\n- **Security**: Fixed potential SSRF and injection 
vulnerabilities\n- **Maintainability**: Enhanced code readability and documentation\n\n## [1.0.2] - 2025-01-15\n\n### Major Release - Complete Library Redesign 🚀\n\nThis version represents a complete rewrite of GPlay Scraper with a focus on modularity, extensibility, and comprehensive data extraction across all Google Play Store features.\n\n### New Features\n\n#### 7 Method Types with 42 Functions\n\n- **App Methods** - Extract 65+ data fields from any app (ratings, installs, pricing, permissions, screenshots, etc.)\n- **Search Methods** - Search Google Play Store apps with comprehensive filtering and pagination\n- **Reviews Methods** - Extract user reviews with ratings, timestamps, helpful votes, and detailed feedback\n- **Developer Methods** - Get all apps published by a specific developer using developer ID\n- **List Methods** - Access top charts (TOP_FREE, TOP_PAID, TOP_GROSSING) by category with 54 categories\n- **Similar Methods** - Find similar/competitor apps for market research and competitive analysis\n- **Suggest Methods** - Get search suggestions and autocomplete for ASO keyword research\n\nEach method type includes 6 functions:\n- `analyze()` - Get all data as dictionary/list\n- `get_field()` - Get single field value\n- `get_fields()` - Get multiple fields as dictionary\n- `print_field()` - Print single field to console\n- `print_fields()` - Print multiple fields to console\n- `print_all()` - Print all data as formatted JSON\n\n#### 7 HTTP Clients with Automatic Fallback\n\n- **requests** (default) - Standard Python HTTP library, reliable and well-tested\n- **curl_cffi** - Browser impersonation with TLS fingerprinting, best for avoiding detection\n- **tls_client** - Custom TLS fingerprinting, good for bypassing restrictions\n- **httpx** - Modern async-capable HTTP client with HTTP/2 support\n- **urllib3** - Low-level HTTP client with connection pooling\n- **cloudscraper** - Cloudflare bypass capabilities\n- **aiohttp** - Async HTTP client for 
high-performance concurrent requests\n\nAutomatic fallback system tries clients in order until one succeeds, ensuring maximum reliability.\n\n#### Multi-Language & Multi-Region Support\n\n- Support for 100+ languages (en, es, fr, de, ja, ko, zh, ar, etc.)\n- Support for 150+ countries (us, gb, ca, au, in, br, jp, etc.)\n- Get localized app data, reviews, and search results\n- Region-specific pricing and availability information\n\n#### Comprehensive Data Extraction\n\n- **65+ App Fields**: title, developer, ratings, installs, price, screenshots, permissions, release date, update date, size, version, content rating, privacy policy, and more\n- **Review Data**: user name, rating, review text, timestamp, app version, helpful votes, developer reply\n- **Search Results**: app ID, title, developer, rating, price, icon, screenshots, description snippet\n- **Developer Portfolio**: all apps from a developer with complete metadata\n- **Top Charts**: ranked lists with install counts, ratings, and trending data\n- **Similar Apps**: competitor analysis with relevance scoring\n- **Search Suggestions**: popular keywords and autocomplete terms\n\n#### Enhanced Architecture\n\n- **Modular Design**: Separate classes for methods, scrapers, and parsers\n- **Core Modules**: `gplay_methods.py`, `gplay_scraper.py`, `gplay_parser.py`\n- **HTTP Client Abstraction**: `HttpClient` class with pluggable client support\n- **Element Specs**: Reusable CSS selector specifications for data extraction\n- **Helper Utilities**: Text processing, date parsing, JSON cleaning, age calculation\n- **Exception Hierarchy**: 6 custom exception types for specific error scenarios\n\n#### Documentation & Testing\n\n- **Comprehensive Docstrings**: All 42 methods, 7 scrapers, 7 parsers, and utility functions documented\n- **Sphinx Documentation**: Professional HTML documentation with examples, API reference, and guides\n- **HTTP Clients Guide**: Detailed documentation on when and how to use each HTTP client\n- 
**Fields Reference**: Complete reference of all 65+ fields, categories, and parameters\n- **Unit Tests**: Complete test coverage for all 7 method types\n- **Examples**: Real-world usage examples for each method type\n\n#### Configuration & Customization\n\n- **Configurable Parameters**: Language, country, count, sort order, collection type\n- **Rate Limiting**: Built-in delays to prevent blocking (configurable)\n- **Error Handling**: Graceful fallbacks and informative error messages\n- **Logging**: Detailed logging for debugging and monitoring\n- **Timeout Control**: Configurable request timeouts\n- **Retry Logic**: Automatic retries with exponential backoff\n\n### Breaking Changes\n\n- Complete API redesign - not backward compatible with v1.0.1\n- Method names changed from `get_app_details()` to `app_analyze()`\n- New parameter structure for all methods\n- HTTP client must be specified or uses automatic fallback\n- Exception types renamed and reorganized\n\n### Migration Guide\n\nOld (v1.0.1):\n```python\nscraper = GPlayScraper()\ndata = scraper.get_app_details(\"com.whatsapp\")\n```\n\nNew (v1.0.2):\n```python\nscraper = GPlayScraper()\ndata = scraper.app_analyze(\"com.whatsapp\")\n```\n\n### Performance Improvements\n\n- Faster JSON parsing with optimized regex patterns\n- Reduced memory usage with streaming parsers\n- Better caching of HTTP client instances\n- Parallel request support with async clients\n\n### Bug Fixes\n\n- Fixed JSON parsing for apps with special characters in descriptions\n- Fixed review extraction for apps with no reviews\n- Fixed developer ID extraction from developer pages\n- Fixed category parsing for apps in multiple categories\n- Fixed price parsing for apps with regional pricing\n- Fixed screenshot URL extraction for apps with video previews\n\n## [1.0.1] - 2025-10-07\n\n### Added\n- **Paid App Support**: Fixed JSON parsing issues for paid apps with malformed data structures\n- **Reviews Extraction**: Successfully extracts user 
reviews for both free and paid apps\n- **Organized Output**: Restructured JSON output with logical field grouping:\n  - Basic Information\n  - Category & Genre\n  - Release & Updates\n  - Media Content\n  - Install Statistics\n  - Ratings & Reviews\n  - Advertising\n  - Technical Details\n  - Content Rating\n  - Privacy & Security\n  - Pricing & Monetization\n  - Developer Information\n  - ASO Analysis\n- **Enhanced JSON Parser**: Bracket-matching algorithm for complex nested structures\n- **Original Price Field**: Added `originalPrice` field for sale price tracking\n\n### Fixed\n- **JSON Parsing Errors**: Resolved \"Expecting ',' delimiter\" errors for paid apps\n- **Reviews Data**: Fixed empty reviews arrays by implementing alternative parsing methods\n- **Malformed Data Handling**: Improved handling of unquoted keys and malformed JSON from Play Store\n\n### Improved\n- **Error Handling**: Better fallback mechanisms for JSON parsing failures\n- **Data Extraction**: More robust extraction for apps with complex pricing structures\n- **Code Organization**: Cleaner separation of parsing logic and error recovery\n\n## [1.0.0] - 2025-10-06\n\n### Added\n- Initial release of GPlay Scraper\n- Complete Google Play Store app data extraction\n- ASO (App Store Optimization) analysis\n- Modular architecture with separate core modules\n- Support for 60+ data fields including:\n  - Basic app information\n  - Install statistics and metrics\n  - Ratings and reviews data\n  - Technical specifications\n  - Developer information\n  - Media content (screenshots, videos, icons)\n  - Pricing and monetization details\n  - ASO keyword analysis\n- Multiple access methods:\n  - `analyze()` - Complete app analysis\n  - `get_field()` - Single field retrieval\n  - `get_fields()` - Multiple field retrieval\n  - `print_field()` - Direct field printing\n  - `print_fields()` - Multiple field printing\n  - `print_all()` - Complete data printing\n- Comprehensive documentation and examples\n- Error 
handling and logging\n- Rate limiting considerations\n- Cross-platform compatibility\n\n### Features\n- Web scraping of Google Play Store pages\n- JSON data extraction and parsing\n- Automatic install metrics calculation\n- Keyword frequency analysis\n- Readability scoring\n- Review data extraction\n- Image URL processing\n- Date parsing and age calculation"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participation in our\ncommunity a harassment-free experience for everyone, regardless of age, body\nsize, visible or invisible disability, ethnicity, sex characteristics, gender\nidentity and expression, level of experience, education, socio-economic status,\nnationality, personal appearance, race, religion, or sexual identity\nand orientation.\n\nWe pledge to act and interact in ways that contribute to an open, welcoming,\ndiverse, inclusive, and healthy community.\n\n## Our Standards\n\nExamples of behavior that contributes to a positive environment for our\ncommunity include:\n\n* Demonstrating empathy and kindness toward other people\n* Being respectful of differing opinions, viewpoints, and experiences\n* Giving and gracefully accepting constructive feedback\n* Accepting responsibility and apologizing to those affected by our mistakes,\n  and learning from the experience\n* Focusing on what is best not just for us as individuals, but for the\n  overall community\n\nExamples of unacceptable behavior include:\n\n* The use of sexualized language or imagery, and sexual attention or\n  advances of any kind\n* Trolling, insulting or derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or email\n  address, without their explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\n  professional setting\n\n## Enforcement Responsibilities\n\nCommunity leaders are responsible for clarifying and enforcing our standards of\nacceptable behavior and will take appropriate and fair corrective action in\nresponse to any behavior that they deem inappropriate, threatening, offensive,\nor harmful.\n\n## Scope\n\nThis Code of Conduct applies within all community spaces, and also applies when\nan individual is officially 
representing the community in public spaces.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported to the community leaders responsible for enforcement through GitHub Issues.\n\nAll complaints will be reviewed and investigated promptly and fairly.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage],\nversion 2.0, available at\nhttps://www.contributor-covenant.org/version/2/0/code_of_conduct.html.\n\n[homepage]: https://www.contributor-covenant.org"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing to GPlay Scraper\n\nThank you for your interest in contributing! \n\n## Development Setup\n\n1. Fork the repository\n2. Clone your fork: `git clone https://github.com/yourusername/gplay-scraper.git`\n3. Install in development mode: `pip install -e .`\n4. Install dev dependencies: `pip install pytest`\n\n## Running Tests\n\n```bash\npython -m pytest tests/ -v\n```\n\n## Code Style\n\n- Follow PEP 8\n- Add docstrings to new functions\n- Include type hints where appropriate\n\n## Submitting Changes\n\n1. Create a feature branch: `git checkout -b feature-name`\n2. Make your changes\n3. Add tests for new functionality\n4. Run tests to ensure they pass\n5. Submit a pull request\n\n## Reporting Issues\n\nPlease use GitHub Issues to report bugs or request features."
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2025 Mohammed Cha\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "MANIFEST.in",
    "content": "include README.md\ninclude LICENSE\ninclude requirements.txt\ninclude CHANGELOG.md\ninclude CONTRIBUTING.md\ninclude SECURITY.md\nrecursive-include examples *.py\nrecursive-include tests *.py"
  },
  {
    "path": "README/APP_METHODS.md",
    "content": "# App Methods\n\nExtract detailed information about individual Google Play Store apps.\n\n## Quick Start\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\n\n# Get all data\ndata = scraper.app_analyze(\"com.whatsapp\")\nprint(data['title'], data['score'], data['installs'])\n\n# Get specific fields\ntitle = scraper.app_get_field(\"com.whatsapp\", \"title\")\nprint(title)  # WhatsApp Messenger\n\n# Get multiple fields\ninfo = scraper.app_get_fields(\"com.whatsapp\", [\"title\", \"score\", \"developer\"])\nprint(info)\n```\n\n---\n\n## HTTP Clients\n\nThe library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.\n\n### Supported Clients\n1. **requests** (default) - Standard Python HTTP library\n2. **curl_cffi** - cURL with browser impersonation\n3. **tls_client** - Advanced TLS fingerprinting\n4. **httpx** - Modern async-capable HTTP client\n5. **urllib3** - Low-level HTTP client\n6. **cloudscraper** - Cloudflare bypass\n7. 
**aiohttp** - Async HTTP client\n\n### Usage\n\n```python\n# Default (tries requests first, then others)\nscraper = GPlayScraper()\n\n# Specify a client\nscraper = GPlayScraper(http_client=\"curl_cffi\")\nscraper = GPlayScraper(http_client=\"tls_client\")\nscraper = GPlayScraper(http_client=\"httpx\")\n```\n\n### Installation\n\n```bash\n# Default\npip install requests\n\n# Advanced clients (optional)\npip install curl-cffi\npip install tls-client\npip install httpx\npip install urllib3\npip install cloudscraper\npip install aiohttp\n```\n\n**Note:** The library automatically falls back to available clients if your preferred one fails.\n\n---\n\n## Methods\n\n### `app_analyze(app_id, lang='en', country='us', assets=None)`\nReturns all 65+ fields as a dictionary.\n\n```python\ndata = scraper.app_analyze(\"com.whatsapp\")\n# Returns: {'appId': 'com.whatsapp', 'title': 'WhatsApp Messenger', ...}\n\n# With custom image sizes\ndata = scraper.app_analyze(\"com.whatsapp\", assets=\"LARGE\")\n# Returns same data but with larger image URLs (2048px)\n```\n\n### `app_get_field(app_id, field, lang='en', country='us', assets=None)`\nReturns a single field value.\n\n```python\nscore = scraper.app_get_field(\"com.whatsapp\", \"score\")\n# Returns: 4.2\n\n# Get high-quality icon\nicon = scraper.app_get_field(\"com.whatsapp\", \"icon\", assets=\"ORIGINAL\")\n# Returns: URL with maximum image quality\n```\n\n### `app_get_fields(app_id, fields, lang='en', country='us', assets=None)`\nReturns multiple fields as a dictionary.\n\n```python\ndata = scraper.app_get_fields(\"com.whatsapp\", [\"title\", \"score\", \"installs\"])\n# Returns: {'title': 'WhatsApp Messenger', 'score': 4.2, 'installs': '5,000,000,000+'}\n\n# Get media with custom sizes\nmedia = scraper.app_get_fields(\"com.whatsapp\", [\"icon\", \"screenshots\"], assets=\"SMALL\")\n# Returns: Media URLs with 512px width\n```\n\n### `app_print_field(app_id, field, lang='en', country='us', assets=None)`\nPrints a single field to 
console.\n\n```python\nscraper.app_print_field(\"com.whatsapp\", \"title\")\n# Output: title: WhatsApp Messenger\n\n# Print large icon URL\nscraper.app_print_field(\"com.whatsapp\", \"icon\", assets=\"LARGE\")\n# Output: icon: https://...=w2048\n```\n\n### `app_print_fields(app_id, fields, lang='en', country='us', assets=None)`\nPrints multiple fields to console.\n\n```python\nscraper.app_print_fields(\"com.whatsapp\", [\"title\", \"score\"])\n# Output:\n# title: WhatsApp Messenger\n# score: 4.2\n\n# Print media with original quality\nscraper.app_print_fields(\"com.whatsapp\", [\"icon\", \"screenshots\"], assets=\"ORIGINAL\")\n# Output: URLs with maximum image quality\n```\n\n### `app_print_all(app_id, lang='en', country='us', assets=None)`\nPrints all fields as formatted JSON.\n\n```python\nscraper.app_print_all(\"com.whatsapp\")\n# Output: Full JSON with all 65+ fields\n\n# Print with high-quality images\nscraper.app_print_all(\"com.whatsapp\", assets=\"LARGE\")\n# Output: Full JSON with 2048px image URLs\n```\n\n---\n\n## Available Fields (65+)\n\n### Basic Information\n- `appId` - Package name (e.g., \"com.whatsapp\")\n- `title` - App name\n- `summary` - Short description\n- `description` - Full description\n- `appUrl` - Play Store URL\n\n### Ratings & Reviews\n- `score` - Average rating (1-5)\n- `ratings` - Total number of ratings\n- `reviews` - Total number of reviews\n- `histogram` - Rating distribution [1★, 2★, 3★, 4★, 5★]\n\n### Install Metrics\n- `installs` - Install range (e.g., \"10,000,000+\")\n- `minInstalls` - Minimum installs\n- `realInstalls` - Estimated real installs\n- `dailyInstalls` - Estimated daily installs\n- `monthlyInstalls` - Estimated monthly installs\n- `minDailyInstalls` - Minimum daily installs\n- `realDailyInstalls` - Real estimated daily installs\n- `minMonthlyInstalls` - Minimum monthly installs\n- `realMonthlyInstalls` - Real estimated monthly installs\n\n### Pricing\n- `price` - Price in currency (0 if free)\n- `currency` - 
Currency code (e.g., \"USD\")\n- `free` - Boolean, true if free\n- `offersIAP` - Has in-app purchases\n- `inAppProductPrice` - IAP price range\n- `sale` - Currently on sale\n- `originalPrice` - Original price if on sale\n\n### Media\n- `icon` - App icon URL\n- `headerImage` - Header image URL\n- `screenshots` - List of screenshot URLs\n- `video` - Promo video URL\n- `videoImage` - Video thumbnail URL\n\n### Developer\n- `developer` - Developer name\n- `developerId` - Developer ID\n- `developerEmail` - Contact email\n- `developerWebsite` - Website URL\n- `developerAddress` - Physical address\n- `developerPhone` - Contact phone\n- `privacyPolicy` - Privacy policy URL\n- `publisherCountry` - Developer's country\n\n### Category\n- `genre` - Primary category (e.g., \"Communication\")\n- `genreId` - Category ID (e.g., \"COMMUNICATION\")\n- `categories` - List of categories\n\n### Technical\n- `version` - Current version\n- `androidVersion` - Required Android version\n- `minAndroidApi` - Minimum API level\n- `maxAndroidApi` - Maximum API level\n- `appBundle` - App bundle name\n\n### Dates\n- `released` - Release date (e.g., \"Feb 24, 2009\")\n- `appAgeDays` - Age in days\n- `lastUpdated` - Last update date\n\n### Content\n- `contentRating` - Age rating (e.g., \"Everyone\")\n- `contentRatingDescription` - Rating description\n- `whatsNew` - Recent changes list\n- `permissions` - Required permissions dict\n- `dataSafety` - Data safety info list\n\n### Advertising\n- `adSupported` - Contains ads\n- `containsAds` - Shows advertisements\n\n### Availability\n- `available` - App is available\n\n---\n\n## Practical Examples\n\n### Competitive Analysis\n```python\napps = [\"com.whatsapp\", \"com.telegram\", \"com.viber\"]\nfor app_id in apps:\n    data = scraper.app_get_fields(app_id, [\"title\", \"score\", \"realInstalls\"])\n    print(f\"{data['title']}: {data['score']}★ - {data['realInstalls']:,} installs\")\n```\n\n### Monitor App Updates\n```python\napp_id = 
\"com.whatsapp\"\ndata = scraper.app_get_fields(app_id, [\"version\", \"lastUpdated\", \"whatsNew\"])\nprint(f\"Version: {data['version']}\")\nprint(f\"Updated: {data['lastUpdated']}\")\nprint(f\"Changes: {data['whatsNew']}\")\n```\n\n### Extract Developer Info\n```python\napp_id = \"com.whatsapp\"\ndev_info = scraper.app_get_fields(app_id, [\n    \"developer\", \"developerEmail\", \"developerWebsite\"\n])\nprint(dev_info)\n```\n\n### Get High-Quality Media\n```python\napp_id = \"com.whatsapp\"\n# Get original quality images\nmedia = scraper.app_get_fields(app_id, [\"icon\", \"screenshots\"], assets=\"ORIGINAL\")\nprint(f\"Icon: {media['icon']}\")  # Maximum quality\nprint(f\"Screenshots: {len(media['screenshots'])} images\")\n\n# Get small thumbnails for faster loading\nthumbnails = scraper.app_get_fields(app_id, [\"icon\", \"headerImage\"], assets=\"SMALL\")\nprint(f\"Small icon: {thumbnails['icon']}\")  # 512px\n```\n\n### Check Monetization\n```python\napp_id = \"com.whatsapp\"\nmoney = scraper.app_get_fields(app_id, [\n    \"free\", \"price\", \"offersIAP\", \"containsAds\"\n])\nprint(f\"Free: {money['free']}\")\nprint(f\"Has IAP: {money['offersIAP']}\")\nprint(f\"Has Ads: {money['containsAds']}\")\n```\n\n---\n\n## Parameters\n\n### Initialization\n- `http_client` (str, optional) - HTTP client to use: \"requests\", \"curl_cffi\", \"tls_client\", \"httpx\", \"urllib3\", \"cloudscraper\", \"aiohttp\" (default: \"requests\")\n\n### Method Parameters\n- `app_id` (str, required) - App package name from Play Store URL\n- `lang` (str, optional) - Language code (default: 'en')\n- `country` (str, optional) - Country code (default: 'us')\n- `assets` (str, optional) - Image size: 'SMALL', 'MEDIUM', 'LARGE', 'ORIGINAL' (default: 'MEDIUM')\n- `field` (str) - Single field name\n- `fields` (List[str]) - List of field names\n\n### Assets Parameter (Image Sizes)\n- **SMALL** - 512px width (`w512`)\n- **MEDIUM** - 1024px width (`w1024`) - Default\n- **LARGE** - 2048px width 
(`w2048`)\n- **ORIGINAL** - Maximum size (`w9999`)\n\nAffects these fields: `icon`, `headerImage`, `screenshots`, `videoImage`\n\n```python\n# Different image qualities\nsmall_icon = scraper.app_get_field(\"com.whatsapp\", \"icon\", assets=\"SMALL\")\n# Returns: https://...=w512\n\nlarge_icon = scraper.app_get_field(\"com.whatsapp\", \"icon\", assets=\"LARGE\")\n# Returns: https://...=w2048\n\noriginal_icon = scraper.app_get_field(\"com.whatsapp\", \"icon\", assets=\"ORIGINAL\")\n# Returns: https://...=w9999\n```\n\n### Finding App IDs\nFrom Play Store URL: `https://play.google.com/store/apps/details?id=com.whatsapp`  \nThe app_id is: `com.whatsapp`\n\n### Language & Country Codes\n- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.\n- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.\n\n---\n\n## When to Use Each Method\n\n- **`app_analyze()`** - Need all data for comprehensive analysis\n- **`app_get_field()`** - Need just one specific value\n- **`app_get_fields()`** - Need several specific fields (more efficient than multiple get_field calls)\n- **`app_print_field()`** - Quick debugging/console output\n- **`app_print_fields()`** - Quick debugging of multiple values\n- **`app_print_all()`** - Explore available data structure\n\n---\n\n## Advanced Features\n\n### Rate Limiting\nBuilt-in rate limiting (1 second delay between requests) prevents blocking.\n\n### Error Handling\n```python\nfrom gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError\n\nscraper = GPlayScraper()\n\ntry:\n    data = scraper.app_analyze(\"invalid.app.id\")\nexcept AppNotFoundError:\n    print(\"App not found\")\nexcept NetworkError:\n    print(\"Network error occurred\")\n```\n\n### Multi-Region Data\n```python\n# Get data from different regions\nus_data = scraper.app_analyze(\"com.whatsapp\", country=\"us\")\nuk_data = scraper.app_analyze(\"com.whatsapp\", country=\"gb\")\njp_data = scraper.app_analyze(\"com.whatsapp\", 
country=\"jp\", lang=\"ja\")\n```\n"
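The `histogram` field described above pairs naturally with `ratings` for a quick quality check. A minimal post-processing sketch, assuming a dictionary shaped like an `app_analyze()` result; the sample values and helper names here are illustrative, not a live response or part of the library API:

```python
# Illustrative slice of an app_analyze() result; a real call needs network access.
sample = {
    "title": "Example App",
    "ratings": 1000,
    "histogram": [50, 30, 70, 250, 600],  # counts for 1-5 stars
}

def weighted_score(histogram):
    """Recompute the average rating from the 1-5 star histogram."""
    total = sum(histogram)
    if total == 0:
        return None
    return sum(star * count for star, count in enumerate(histogram, start=1)) / total

def star_percentages(histogram):
    """Share of ratings per star, as percentages."""
    total = sum(histogram) or 1
    return [round(100 * count / total, 1) for count in histogram]

print(f"{sample['title']}: {weighted_score(sample['histogram']):.2f}★")
print(star_percentages(sample["histogram"]))
```

Recomputing the average from the histogram is a useful cross-check against the `score` field, which Google rounds to one decimal place.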
  },
  {
    "path": "README/DEVELOPER_METHODS.md",
    "content": "# Developer Methods\n\nGet all apps published by a specific developer on Google Play Store.\n\n## Quick Start\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\n\n# Get all apps from a developer\napps = scraper.developer_analyze(\"5700313618786177705\")\nfor app in apps:\n    print(f\"{app['title']}: {app['score']}★\")\n\n# Get specific fields\ntitles = scraper.developer_get_field(\"5700313618786177705\", \"title\")\nprint(titles)\n\n# Get multiple fields\napps = scraper.developer_get_fields(\"5700313618786177705\", [\"title\", \"score\", \"free\"])\nprint(apps)\n```\n\n---\n\n## HTTP Clients\n\nThe library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.\n\n### Supported Clients\n1. **requests** (default) - Standard Python HTTP library\n2. **curl_cffi** - cURL with browser impersonation\n3. **tls_client** - Advanced TLS fingerprinting\n4. **httpx** - Modern async-capable HTTP client\n5. **urllib3** - Low-level HTTP client\n6. **cloudscraper** - Cloudflare bypass\n7. 
**aiohttp** - Async HTTP client\n\n### Usage\n\n```python\n# Default (tries requests first, then others)\nscraper = GPlayScraper()\n\n# Specify a client\nscraper = GPlayScraper(http_client=\"curl_cffi\")\nscraper = GPlayScraper(http_client=\"tls_client\")\nscraper = GPlayScraper(http_client=\"httpx\")\n```\n\n### Installation\n\n```bash\n# Default\npip install requests\n\n# Advanced clients (optional)\npip install curl-cffi\npip install tls-client\npip install httpx\npip install urllib3\npip install cloudscraper\npip install aiohttp\n```\n\n**Note:** The library automatically falls back to available clients if your preferred one fails.\n\n---\n\n## Methods\n\n### `developer_analyze(dev_id, count=100, lang='en', country='us')`\nReturns all apps from a developer as a list of dictionaries.\n\n```python\napps = scraper.developer_analyze(\"5700313618786177705\", count=50)\n# Returns: [{'appId': '...', 'title': '...', 'score': 4.5, ...}, ...]\n```\n\n### `developer_get_field(dev_id, field, count=100, lang='en', country='us')`\nReturns a specific field from all developer apps.\n\n```python\ntitles = scraper.developer_get_field(\"5700313618786177705\", \"title\")\n# Returns: ['App 1', 'App 2', 'App 3', ...]\n```\n\n### `developer_get_fields(dev_id, fields, count=100, lang='en', country='us')`\nReturns multiple fields from all developer apps.\n\n```python\napps = scraper.developer_get_fields(\"5700313618786177705\", [\"title\", \"score\", \"free\"])\n# Returns: [{'title': 'App 1', 'score': 4.5, 'free': True}, ...]\n```\n\n### `developer_print_field(dev_id, field, count=100, lang='en', country='us')`\nPrints a specific field from all developer apps.\n\n```python\nscraper.developer_print_field(\"5700313618786177705\", \"title\")\n# Output:\n# 1. title: App 1\n# 2. title: App 2\n# 3. 
title: App 3\n```\n\n### `developer_print_fields(dev_id, fields, count=100, lang='en', country='us')`\nPrints multiple fields from all developer apps.\n\n```python\nscraper.developer_print_fields(\"5700313618786177705\", [\"title\", \"score\"])\n# Output:\n# 1. title: App 1, score: 4.5\n# 2. title: App 2, score: 4.2\n```\n\n### `developer_print_all(dev_id, count=100, lang='en', country='us')`\nPrints all data for all developer apps as formatted JSON.\n\n```python\nscraper.developer_print_all(\"5700313618786177705\")\n# Output: Full JSON array with all apps\n```\n\n---\n\n## Available Fields\n\n- `appId` - App package name (e.g., \"com.example.app\")\n- `title` - App name\n- `description` - App description\n- `icon` - App icon URL\n- `url` - Play Store URL\n- `developer` - Developer name\n- `score` - Average rating (1-5)\n- `scoreText` - Rating as text (e.g., \"4.5\")\n- `currency` - Price currency (e.g., \"USD\")\n- `price` - App price (0 if free)\n- `free` - Boolean, true if free\n\n---\n\n## Practical Examples\n\n### Analyze Developer Portfolio\n```python\ndev_id = \"5700313618786177705\"\napps = scraper.developer_analyze(dev_id)\n\nprint(f\"Total apps: {len(apps)}\")\nprint(f\"Average rating: {sum(a['score'] for a in apps if a['score']) / len(apps):.2f}\")\nprint(f\"Free apps: {sum(1 for a in apps if a['free'])}\")\nprint(f\"Paid apps: {sum(1 for a in apps if not a['free'])}\")\n```\n\n### Find Top-Rated Apps\n```python\ndev_id = \"5700313618786177705\"\napps = scraper.developer_get_fields(dev_id, [\"title\", \"score\"])\n\n# Sort by rating\ntop_apps = sorted(apps, key=lambda x: x['score'] or 0, reverse=True)[:5]\nfor i, app in enumerate(top_apps, 1):\n    print(f\"{i}. 
{app['title']}: {app['score']}★\")\n```\n\n### Compare Free vs Paid Apps\n```python\ndev_id = \"5700313618786177705\"\napps = scraper.developer_get_fields(dev_id, [\"title\", \"free\", \"price\", \"score\"])\n\nfree_apps = [a for a in apps if a['free']]\npaid_apps = [a for a in apps if not a['free']]\n\nprint(f\"Free apps: {len(free_apps)} (avg rating: {sum(a['score'] or 0 for a in free_apps)/len(free_apps):.2f})\")\nprint(f\"Paid apps: {len(paid_apps)} (avg rating: {sum(a['score'] or 0 for a in paid_apps)/len(paid_apps):.2f})\")\n```\n\n### Export Developer Apps\n```python\nimport json\n\ndev_id = \"5700313618786177705\"\napps = scraper.developer_analyze(dev_id)\n\nwith open('developer_apps.json', 'w') as f:\n    json.dump(apps, f, indent=2)\n\nprint(f\"Exported {len(apps)} apps to developer_apps.json\")\n```\n\n---\n\n## Parameters\n\n### Initialization\n- `http_client` (str, optional) - HTTP client to use: \"requests\", \"curl_cffi\", \"tls_client\", \"httpx\", \"urllib3\", \"cloudscraper\", \"aiohttp\" (default: \"requests\")\n\n### Method Parameters\n- `dev_id` (str, required) - Developer ID (numeric or string)\n- `count` (int, optional) - Maximum number of apps to return (default: 100)\n- `lang` (str, optional) - Language code (default: 'en')\n- `country` (str, optional) - Country code (default: 'us')\n- `field` (str) - Single field name\n- `fields` (List[str]) - List of field names\n\n### Finding Developer IDs\n\n**Method 1: From Developer Page URL**\n- Numeric ID: `https://play.google.com/store/apps/dev?id=5700313618786177705`\n  - Developer ID: `5700313618786177705`\n\n- String ID: `https://play.google.com/store/apps/developer?id=Google+LLC`\n  - Developer ID: `Google+LLC` or `Google LLC`\n\n**Method 2: From App Page**\n1. Go to any app by the developer\n2. Click on the developer name\n3. 
Extract ID from the URL\n\n### Language & Country Codes\n- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.\n- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.\n\n---\n\n## When to Use Each Method\n\n- **`developer_analyze()`** - Need complete data for all apps\n- **`developer_get_field()`** - Need just one field from all apps\n- **`developer_get_fields()`** - Need specific fields from all apps (more efficient)\n- **`developer_print_field()`** - Quick debugging/console output\n- **`developer_print_fields()`** - Quick debugging of multiple fields\n- **`developer_print_all()`** - Explore available data structure\n\n---\n\n## Advanced Features\n\n### Rate Limiting\nBuilt-in rate limiting (1 second delay between requests) prevents blocking.\n\n### Error Handling\n```python\nfrom gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError\n\nscraper = GPlayScraper()\n\ntry:\n    apps = scraper.developer_analyze(\"invalid_dev_id\")\nexcept AppNotFoundError:\n    print(\"Developer not found\")\nexcept NetworkError:\n    print(\"Network error occurred\")\n```\n\n### Multi-Region Data\n```python\n# Get developer apps from different regions\nus_apps = scraper.developer_analyze(\"5700313618786177705\", country=\"us\")\nuk_apps = scraper.developer_analyze(\"5700313618786177705\", country=\"gb\")\njp_apps = scraper.developer_analyze(\"5700313618786177705\", country=\"jp\", lang=\"ja\")\n```\n\n### Pagination\n```python\n# Get first 50 apps\napps_batch1 = scraper.developer_analyze(\"5700313618786177705\", count=50)\n\n# Get more apps (library handles this automatically up to count limit)\napps_all = scraper.developer_analyze(\"5700313618786177705\", count=200)\n```\n"
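The portfolio examples above average `score` values directly, but unrated apps report `None` and a developer may have no paid apps at all, which makes the naive `sum(...) / len(...)` pattern fragile. A defensive summary helper, sketched over a hypothetical list shaped like `developer_get_fields(dev_id, ["title", "score", "free"])` output; the sample data and function names are illustrative:

```python
# Illustrative list shaped like developer_get_fields(dev_id, ["title", "score", "free"]).
apps = [
    {"title": "App A", "score": 4.5, "free": True},
    {"title": "App B", "score": None, "free": True},  # unrated apps report None
    {"title": "App C", "score": 3.5, "free": False},
]

def safe_average(values):
    """Average that skips None values and returns None for empty input."""
    rated = [v for v in values if v is not None]
    return sum(rated) / len(rated) if rated else None

def portfolio_summary(apps):
    """Counts and average rating for a developer's app list."""
    free = [a for a in apps if a["free"]]
    paid = [a for a in apps if not a["free"]]
    return {
        "total": len(apps),
        "free": len(free),
        "paid": len(paid),
        "avg_score": safe_average(a["score"] for a in apps),
    }

print(portfolio_summary(apps))
```

The same `safe_average` guard applies to the free-vs-paid comparison, which would otherwise raise `ZeroDivisionError` for an all-free portfolio.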
  },
  {
    "path": "README/LIST_METHODS.md",
    "content": "# List Methods\n\nGet top charts from Google Play Store (top free, top paid, top grossing).\n\n## Quick Start\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\n\n# Get top free apps\ntop_free = scraper.list_analyze(\"TOP_FREE\", \"GAME\", count=50)\nfor app in top_free[:10]:\n    print(f\"{app['title']}: {app['installs']} installs\")\n\n# Get specific fields\ntitles = scraper.list_get_field(\"TOP_FREE\", \"title\", \"APPLICATION\")\nprint(titles)\n\n# Get multiple fields\napps = scraper.list_get_fields(\"TOP_PAID\", [\"title\", \"price\", \"score\"], \"GAME\")\nprint(apps)\n```\n\n---\n\n## HTTP Clients\n\nThe library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.\n\n### Supported Clients\n1. **requests** (default) - Standard Python HTTP library\n2. **curl_cffi** - cURL with browser impersonation\n3. **tls_client** - Advanced TLS fingerprinting\n4. **httpx** - Modern async-capable HTTP client\n5. **urllib3** - Low-level HTTP client\n6. **cloudscraper** - Cloudflare bypass\n7. 
**aiohttp** - Async HTTP client\n\n### Usage\n\n```python\n# Default (tries requests first, then others)\nscraper = GPlayScraper()\n\n# Specify a client\nscraper = GPlayScraper(http_client=\"curl_cffi\")\nscraper = GPlayScraper(http_client=\"tls_client\")\nscraper = GPlayScraper(http_client=\"httpx\")\n```\n\n### Installation\n\n```bash\n# Default\npip install requests\n\n# Advanced clients (optional)\npip install curl-cffi\npip install tls-client\npip install httpx\npip install urllib3\npip install cloudscraper\npip install aiohttp\n```\n\n**Note:** The library automatically falls back to available clients if your preferred one fails.\n\n---\n\n## Methods\n\n### `list_analyze(collection='TOP_FREE', category='APPLICATION', count=100, lang='en', country='us')`\nReturns top chart apps as a list of dictionaries.\n\n```python\napps = scraper.list_analyze(\"TOP_FREE\", \"GAME\", count=50)\n# Returns: [{'appId': '...', 'title': '...', 'installs': '...', ...}, ...]\n```\n\n### `list_get_field(collection, field, category='APPLICATION', count=100, lang='en', country='us')`\nReturns a specific field from all chart apps.\n\n```python\ntitles = scraper.list_get_field(\"TOP_FREE\", \"title\", \"APPLICATION\")\n# Returns: ['App 1', 'App 2', 'App 3', ...]\n```\n\n### `list_get_fields(collection, fields, category='APPLICATION', count=100, lang='en', country='us')`\nReturns multiple fields from all chart apps.\n\n```python\napps = scraper.list_get_fields(\"TOP_PAID\", [\"title\", \"price\", \"score\"], \"GAME\")\n# Returns: [{'title': 'App 1', 'price': 4.99, 'score': 4.5}, ...]\n```\n\n### `list_print_field(collection, field, category='APPLICATION', count=100, lang='en', country='us')`\nPrints a specific field from all chart apps.\n\n```python\nscraper.list_print_field(\"TOP_FREE\", \"title\", \"APPLICATION\", count=20)\n# Output:\n# 1. title: App 1\n# 2. title: App 2\n# 3. 
title: App 3\n```\n\n### `list_print_fields(collection, fields, category='APPLICATION', count=100, lang='en', country='us')`\nPrints multiple fields from all chart apps.\n\n```python\nscraper.list_print_fields(\"TOP_FREE\", [\"title\", \"score\"], \"GAME\", count=20)\n# Output:\n# 1. title: App 1, score: 4.5\n# 2. title: App 2, score: 4.2\n```\n\n### `list_print_all(collection='TOP_FREE', category='APPLICATION', count=100, lang='en', country='us')`\nPrints all data for all chart apps as formatted JSON.\n\n```python\nscraper.list_print_all(\"TOP_FREE\", \"GAME\", count=50)\n# Output: Full JSON array with all apps\n```\n\n---\n\n## Available Fields\n\n- `appId` - App package name (e.g., \"com.example.app\")\n- `title` - App name\n- `description` - App description\n- `icon` - App icon URL\n- `screenshots` - List of screenshot URLs\n- `url` - Play Store URL\n- `developer` - Developer name\n- `genre` - App category\n- `score` - Average rating (1-5)\n- `scoreText` - Rating as text (e.g., \"4.5\")\n- `installs` - Install count (e.g., \"10,000,000+\")\n- `currency` - Price currency (e.g., \"USD\")\n- `price` - App price (0 if free)\n- `free` - Boolean, true if free\n\n---\n\n## Collection Types\n\n### Available Collections\n- **`TOP_FREE`** - Top free apps (most popular free apps)\n- **`TOP_PAID`** - Top paid apps (most popular paid apps)\n- **`TOP_GROSSING`** - Top grossing apps (highest revenue apps)\n\n---\n\n## Categories\n\n### App Categories (36)\n- `APPLICATION` - All apps (default)\n- `ANDROID_WEAR` - Android Wear apps\n- `ART_AND_DESIGN` - Art & design\n- `AUTO_AND_VEHICLES` - Auto & vehicles\n- `BEAUTY` - Beauty\n- `BOOKS_AND_REFERENCE` - Books & reference\n- `BUSINESS` - Business\n- `COMICS` - Comics\n- `COMMUNICATION` - Communication\n- `DATING` - Dating\n- `EDUCATION` - Education\n- `ENTERTAINMENT` - Entertainment\n- `EVENTS` - Events\n- `FINANCE` - Finance\n- `FOOD_AND_DRINK` - Food & drink\n- `HEALTH_AND_FITNESS` - Health & fitness\n- `HOUSE_AND_HOME` - 
House & home\n- `LIBRARIES_AND_DEMO` - Libraries & demo\n- `LIFESTYLE` - Lifestyle\n- `MAPS_AND_NAVIGATION` - Maps & navigation\n- `MEDICAL` - Medical\n- `MUSIC_AND_AUDIO` - Music & audio\n- `NEWS_AND_MAGAZINES` - News & magazines\n- `PARENTING` - Parenting\n- `PERSONALIZATION` - Personalization\n- `PHOTOGRAPHY` - Photography\n- `PRODUCTIVITY` - Productivity\n- `SHOPPING` - Shopping\n- `SOCIAL` - Social\n- `SPORTS` - Sports\n- `TOOLS` - Tools\n- `TRAVEL_AND_LOCAL` - Travel & local\n- `VIDEO_PLAYERS` - Video players & editors\n- `WATCH_FACE` - Watch faces\n- `WEATHER` - Weather\n- `FAMILY` - Family\n\n### Game Categories (18)\n- `GAME` - All games\n- `GAME_ACTION` - Action games\n- `GAME_ADVENTURE` - Adventure games\n- `GAME_ARCADE` - Arcade games\n- `GAME_BOARD` - Board games\n- `GAME_CARD` - Card games\n- `GAME_CASINO` - Casino games\n- `GAME_CASUAL` - Casual games\n- `GAME_EDUCATIONAL` - Educational games\n- `GAME_MUSIC` - Music games\n- `GAME_PUZZLE` - Puzzle games\n- `GAME_RACING` - Racing games\n- `GAME_ROLE_PLAYING` - Role playing games\n- `GAME_SIMULATION` - Simulation games\n- `GAME_SPORTS` - Sports games\n- `GAME_STRATEGY` - Strategy games\n- `GAME_TRIVIA` - Trivia games\n- `GAME_WORD` - Word games\n\n---\n\n## Practical Examples\n\n### Top Free Games Analysis\n```python\ntop_games = scraper.list_analyze(\"TOP_FREE\", \"GAME\", count=100)\n\nprint(f\"Total games: {len(top_games)}\")\nprint(f\"Average rating: {sum(a['score'] for a in top_games if a['score']) / len(top_games):.2f}\")\nprint(f\"\\nTop 5 games:\")\nfor i, game in enumerate(top_games[:5], 1):\n    print(f\"{i}. 
{game['title']} - {game['score']}★ - {game['installs']} installs\")\n```\n\n### Compare Free vs Paid Apps\n```python\ntop_free = scraper.list_get_fields(\"TOP_FREE\", [\"title\", \"score\", \"installs\"], \"APPLICATION\", count=50)\ntop_paid = scraper.list_get_fields(\"TOP_PAID\", [\"title\", \"score\", \"price\"], \"APPLICATION\", count=50)\n\nfree_avg = sum(a['score'] or 0 for a in top_free) / len(top_free)\npaid_avg = sum(a['score'] or 0 for a in top_paid) / len(top_paid)\n\nprint(f\"Top Free Apps - Avg Rating: {free_avg:.2f}\")\nprint(f\"Top Paid Apps - Avg Rating: {paid_avg:.2f}\")\n```\n\n### Find Highest Grossing Apps\n```python\ntop_grossing = scraper.list_get_fields(\"TOP_GROSSING\", [\"title\", \"developer\", \"genre\"], \"APPLICATION\", count=20)\n\nprint(\"Top 10 Highest Grossing Apps:\")\nfor i, app in enumerate(top_grossing[:10], 1):\n    print(f\"{i}. {app['title']} by {app['developer']} ({app['genre']})\")\n```\n\n### Category Comparison\n```python\ncategories = [\"GAME\", \"SOCIAL\", \"PRODUCTIVITY\", \"ENTERTAINMENT\"]\n\nfor category in categories:\n    apps = scraper.list_get_fields(\"TOP_FREE\", [\"title\", \"score\"], category, count=10)\n    avg_score = sum(a['score'] or 0 for a in apps) / len(apps)\n    print(f\"{category}: {avg_score:.2f}★ average\")\n```\n\n### Game Genre Analysis\n```python\ngame_genres = [\"GAME_ACTION\", \"GAME_PUZZLE\", \"GAME_CASUAL\", \"GAME_STRATEGY\"]\n\nfor genre in game_genres:\n    games = scraper.list_get_fields(\"TOP_FREE\", [\"title\", \"score\", \"installs\"], genre, count=5)\n    print(f\"\\n{genre}:\")\n    for i, game in enumerate(games, 1):\n        print(f\"  {i}. 
{game['title']} - {game['score']}★\")\n```\n\n### Export Top Charts\n```python\nimport json\n\ntop_free = scraper.list_analyze(\"TOP_FREE\", \"GAME\", count=100)\n\nwith open('top_free_games.json', 'w') as f:\n    json.dump(top_free, f, indent=2)\n\nprint(f\"Exported {len(top_free)} games to top_free_games.json\")\n```\n\n### Track Chart Positions\n```python\nimport time\nimport json\nfrom datetime import datetime\n\ndef track_charts():\n    snapshot = {\n        \"timestamp\": datetime.now().isoformat(),\n        \"top_free\": scraper.list_get_fields(\"TOP_FREE\", [\"title\", \"score\"], \"GAME\", count=10),\n        \"top_paid\": scraper.list_get_fields(\"TOP_PAID\", [\"title\", \"price\"], \"GAME\", count=10)\n    }\n    \n    with open(f'charts_{datetime.now().strftime(\"%Y%m%d\")}.json', 'w') as f:\n        json.dump(snapshot, f, indent=2)\n    \n    print(f\"Snapshot saved at {snapshot['timestamp']}\")\n\ntrack_charts()\n```\n\n---\n\n## Parameters\n\n### Initialization\n- `http_client` (str, optional) - HTTP client to use: \"requests\", \"curl_cffi\", \"tls_client\", \"httpx\", \"urllib3\", \"cloudscraper\", \"aiohttp\" (default: \"requests\")\n\n### Method Parameters\n- `collection` (str) - Chart type: \"TOP_FREE\", \"TOP_PAID\", \"TOP_GROSSING\" (default: \"TOP_FREE\")\n- `category` (str, optional) - Category filter (default: \"APPLICATION\")\n- `count` (int, optional) - Maximum number of apps to return (default: 100)\n- `lang` (str, optional) - Language code (default: 'en')\n- `country` (str, optional) - Country code (default: 'us')\n- `field` (str) - Single field name\n- `fields` (List[str]) - List of field names\n\n### Language & Country Codes\n- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.\n- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.\n\n---\n\n## When to Use Each Method\n\n- **`list_analyze()`** - Need complete data for all chart apps\n- **`list_get_field()`** - Need just one field from 
all apps\n- **`list_get_fields()`** - Need specific fields from all apps (more efficient)\n- **`list_print_field()`** - Quick debugging/console output\n- **`list_print_fields()`** - Quick debugging of multiple fields\n- **`list_print_all()`** - Explore available data structure\n\n---\n\n## Advanced Features\n\n### Rate Limiting\nBuilt-in rate limiting (1 second delay between requests) prevents blocking.\n\n### Error Handling\n```python\nfrom gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError\n\nscraper = GPlayScraper()\n\ntry:\n    apps = scraper.list_analyze(\"INVALID_COLLECTION\", \"GAME\")\nexcept AppNotFoundError:\n    print(\"Collection not found\")\nexcept NetworkError:\n    print(\"Network error occurred\")\n```\n\n### Multi-Region Charts\n```python\n# Get charts from different regions\nus_charts = scraper.list_analyze(\"TOP_FREE\", \"GAME\", country=\"us\")\nuk_charts = scraper.list_analyze(\"TOP_FREE\", \"GAME\", country=\"gb\")\njp_charts = scraper.list_analyze(\"TOP_FREE\", \"GAME\", country=\"jp\", lang=\"ja\")\n\nprint(f\"US Top Game: {us_charts[0]['title']}\")\nprint(f\"UK Top Game: {uk_charts[0]['title']}\")\nprint(f\"JP Top Game: {jp_charts[0]['title']}\")\n```\n\n### Batch Analysis\n```python\n# Analyze multiple collections at once\ncollections = [\"TOP_FREE\", \"TOP_PAID\", \"TOP_GROSSING\"]\nresults = {}\n\nfor collection in collections:\n    apps = scraper.list_get_fields(collection, [\"title\", \"score\"], \"GAME\", count=10)\n    results[collection] = apps\n    print(f\"{collection}: {len(apps)} apps retrieved\")\n```\n"
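One question the chart methods above make easy to answer is overlap between collections, for example which top-free games also appear in the top-grossing chart. A minimal sketch over two hypothetical snapshots shaped like `list_get_fields(collection, ["appId", "title"], "GAME")` output; the app IDs and helper name are illustrative, not real chart data:

```python
# Illustrative snapshots shaped like list_get_fields(collection, ["appId", "title"], "GAME").
top_free = [
    {"appId": "com.example.puzzle", "title": "Puzzle Pro"},
    {"appId": "com.example.racer", "title": "Speed Racer"},
    {"appId": "com.example.farm", "title": "Farm Story"},
]
top_grossing = [
    {"appId": "com.example.farm", "title": "Farm Story"},
    {"appId": "com.example.castle", "title": "Castle Clash"},
]

def chart_overlap(chart_a, chart_b):
    """App IDs ranked in both charts, preserving chart_a order."""
    ids_b = {app["appId"] for app in chart_b}
    return [app["appId"] for app in chart_a if app["appId"] in ids_b]

shared = chart_overlap(top_free, top_grossing)
print(f"{len(shared)} app(s) appear in both charts: {shared}")
```

Free apps that also rank in TOP_GROSSING are strong in-app-purchase monetizers, which makes this overlap a quick revenue-model signal.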
  },
  {
    "path": "README/README.md",
    "content": "# GPlay Scraper Documentation\n\nComplete documentation for all 7 method types in GPlay Scraper.\n\n## 📚 Method Documentation\n\n### [App Methods](APP_METHODS.md)\nExtract comprehensive app data with 65+ fields including ratings, installs, pricing, screenshots, permissions, and technical details.\n\n**Key Features:**\n- 65+ data fields per app\n- Basic info, ratings, installs, pricing\n- Media content (screenshots, videos, icons)\n- Technical specs (version, size, Android version)\n- Developer information and contact details\n\n**Use Cases:** App analysis, competitive research, market intelligence, data collection\n\n---\n\n### [Search Methods](SEARCH_METHODS.md)\nSearch Google Play Store apps by keyword with filtering and pagination.\n\n**Key Features:**\n- Search by keyword, app name, or category\n- Filter and paginate results\n- Get app titles, developers, ratings, prices\n- Multi-language and multi-region support\n\n**Use Cases:** App discovery, market research, competitor analysis, trend tracking\n\n---\n\n### [Reviews Methods](REVIEWS_METHODS.md)\nExtract user reviews with ratings, timestamps, and detailed feedback for sentiment analysis.\n\n**Key Features:**\n- Get reviews with ratings (1-5 stars)\n- Review text, timestamps, app versions\n- Reviewer names and helpful vote counts\n- Sort by newest, relevant, or highest rated\n\n**Use Cases:** Sentiment analysis, user feedback, app improvement, competitive monitoring\n\n---\n\n### [Developer Methods](DEVELOPER_METHODS.md)\nGet all apps published by a specific developer using their developer ID.\n\n**Key Features:**\n- Complete app portfolio for any developer\n- Track developer's app performance\n- Analyze ratings and install counts\n- Monitor developer's market presence\n\n**Use Cases:** Developer research, portfolio analysis, competitive intelligence, market tracking\n\n---\n\n### [List Methods](LIST_METHODS.md)\nAccess Google Play Store top charts including top free, top paid, and top 
grossing apps by category.\n\n**Key Features:**\n- Top free, top paid, top grossing charts\n- 54 categories (36 app + 18 game)\n- Ranked lists with install counts and ratings\n- Trending apps and market leaders\n\n**Use Cases:** Market trends, category analysis, competitive benchmarking, app discovery\n\n---\n\n### [Similar Methods](SIMILAR_METHODS.md)\nFind apps similar to a reference app for competitive analysis and market research.\n\n**Key Features:**\n- Discover competitor apps\n- Find similar/related apps\n- Get titles, developers, ratings, pricing\n- Competitive analysis and positioning\n\n**Use Cases:** Competitive analysis, market research, app discovery, positioning strategy\n\n---\n\n### [Suggest Methods](SUGGEST_METHODS.md)\nGet search suggestions and autocomplete from Google Play Store for keyword discovery and ASO.\n\n**Key Features:**\n- Autocomplete suggestions\n- Popular search terms\n- Nested keyword discovery\n- Multi-language support\n\n**Use Cases:** Keyword research, ASO optimization, content strategy, market insights\n\n---\n\n## 🚀 Quick Start\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\n\n# App Methods\nscraper.app_print_all(\"com.whatsapp\")\n\n# Search Methods\nscraper.search_print_all(\"fitness tracker\", count=20)\n\n# Reviews Methods\nscraper.reviews_print_all(\"com.whatsapp\", count=100, sort=\"NEWEST\")\n\n# Developer Methods\nscraper.developer_print_all(\"5700313618786177705\", count=50)\n\n# List Methods\nscraper.list_print_all(\"TOP_FREE\", \"GAME\", count=50)\n\n# Similar Methods\nscraper.similar_print_all(\"com.whatsapp\", count=30)\n\n# Suggest Methods\nscraper.suggest_print_all(\"photo editor\", count=10)\n```\n\n## 📖 Method Pattern\n\nEach method type follows the same pattern with 6 functions:\n\n- **`analyze()`** - Get all data as dictionary/list\n- **`get_field()`** - Get single field value\n- **`get_fields()`** - Get multiple fields as dictionary\n- **`print_field()`** - Print single 
field to console\n- **`print_fields()`** - Print multiple fields to console\n- **`print_all()`** - Print all data as formatted JSON\n\n## 🌍 Multi-Language & Multi-Region\n\nAll methods support multi-language and multi-region parameters:\n\n```python\n# Get data in Spanish from Spain\nscraper.app_analyze(\"com.whatsapp\", lang=\"es\", country=\"es\")\n\n# Get data in Japanese from Japan\nscraper.search_analyze(\"game\", count=20, lang=\"ja\", country=\"jp\")\n\n# Get data in French from France\nscraper.reviews_analyze(\"com.whatsapp\", count=50, lang=\"fr\", country=\"fr\")\n```\n\n**Supported:**\n- **Languages:** 100+ (en, es, fr, de, ja, ko, zh, ar, pt, ru, etc.)\n- **Countries:** 150+ (us, gb, ca, au, in, br, jp, kr, de, fr, etc.)\n\n## 🔧 HTTP Clients\n\nAll methods support 7 HTTP clients with automatic fallback:\n\n```python\n# Default (requests)\nscraper = GPlayScraper()\n\n# Specify client\nscraper = GPlayScraper(http_client=\"curl_cffi\")\nscraper = GPlayScraper(http_client=\"tls_client\")\nscraper = GPlayScraper(http_client=\"httpx\")\n```\n\n**Available Clients:**\n1. **requests** (default) - Standard Python HTTP library\n2. **curl_cffi** - Browser impersonation with TLS fingerprinting\n3. **tls_client** - Custom TLS fingerprinting\n4. **httpx** - Modern async-capable HTTP client\n5. **urllib3** - Low-level HTTP client\n6. **cloudscraper** - Cloudflare bypass capabilities\n7. 
**aiohttp** - Async HTTP client\n\n## 📊 What Can You Scrape?\n\n### App Data (65+ Fields)\n- Basic: title, developer, description, category, genre\n- Ratings: score, ratings count, histogram\n- Installs: install count ranges, statistics\n- Pricing: free/paid, price, in-app purchases\n- Media: icon, screenshots, video, header image\n- Technical: version, size, Android version, dates\n- Content: age rating, privacy policy, contact info\n- Features: permissions, what's new, website\n\n### Search & Discovery\n- Search apps by keyword\n- Get search suggestions\n- Find similar/competitor apps\n- Access top charts by category\n\n### Developer Intelligence\n- Complete app portfolio\n- Performance tracking\n- Market presence analysis\n\n### User Reviews\n- Reviews with ratings and text\n- Timestamps and app versions\n- Reviewer names and votes\n- Filter by sort options\n\n### Market Research\n- Multi-language support (100+ languages)\n- Multi-region data (150+ countries)\n- Localized pricing and availability\n- Competitive analysis\n\n## 🎯 Use Cases\n\n**Market Research**\n- Analyze competitor apps\n- Track market trends\n- Identify opportunities\n- Benchmark performance\n\n**App Development**\n- Monitor user feedback\n- Track app performance\n- Analyze competitors\n- Optimize app store presence\n\n**Data Analysis**\n- Collect app data for research\n- Sentiment analysis from reviews\n- Market intelligence reports\n- Machine learning datasets\n\n**Business Intelligence**\n- Competitive monitoring\n- Market positioning\n- Trend analysis\n- Strategic planning\n\n## 📄 License\n\nThis project is licensed under the MIT License.\n\n---\n\n**For detailed documentation on each method type, click the links above.**\n"
  },
  {
    "path": "README/REVIEWS_METHODS.md",
    "content": "# Reviews Methods\n\nExtract user reviews from Google Play Store apps with ratings, content, and metadata.\n\n## Quick Start\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\n\n# Get reviews\nreviews = scraper.reviews_analyze(\"com.whatsapp\", count=100, sort=\"NEWEST\")\nfor review in reviews[:5]:\n    print(f\"{review['userName']}: {review['score']}★\")\n    print(f\"  {review['content'][:100]}...\")\n\n# Get specific fields\nscores = scraper.reviews_get_field(\"com.whatsapp\", \"score\", count=100)\nprint(f\"Average: {sum(scores)/len(scores):.2f}★\")\n\n# Get multiple fields\nreviews = scraper.reviews_get_fields(\"com.whatsapp\", [\"userName\", \"score\", \"content\"], count=50)\nprint(reviews)\n```\n\n---\n\n## HTTP Clients\n\nThe library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.\n\n### Supported Clients\n1. **requests** (default) - Standard Python HTTP library\n2. **curl_cffi** - cURL with browser impersonation\n3. **tls_client** - Advanced TLS fingerprinting\n4. **httpx** - Modern async-capable HTTP client\n5. **urllib3** - Low-level HTTP client\n6. **cloudscraper** - Cloudflare bypass\n7. 
**aiohttp** - Async HTTP client\n\n### Usage\n\n```python\n# Default (tries requests first, then others)\nscraper = GPlayScraper()\n\n# Specify a client\nscraper = GPlayScraper(http_client=\"curl_cffi\")\nscraper = GPlayScraper(http_client=\"tls_client\")\nscraper = GPlayScraper(http_client=\"httpx\")\n```\n\n### Installation\n\n```bash\n# Default\npip install requests\n\n# Advanced clients (optional)\npip install curl-cffi\npip install tls-client\npip install httpx\npip install urllib3\npip install cloudscraper\npip install aiohttp\n```\n\n**Note:** The library automatically falls back to available clients if your preferred one fails.\n\n---\n\n## Methods\n\n### `reviews_analyze(app_id, count=100, lang='en', country='us', sort='NEWEST')`\nReturns reviews as a list of dictionaries.\n\n```python\nreviews = scraper.reviews_analyze(\"com.whatsapp\", count=100, sort=\"NEWEST\")\n# Returns: [{'reviewId': '...', 'userName': '...', 'score': 5, 'content': '...', ...}, ...]\n```\n\n### `reviews_get_field(app_id, field, count=100, lang='en', country='us', sort='NEWEST')`\nReturns a specific field from all reviews.\n\n```python\nscores = scraper.reviews_get_field(\"com.whatsapp\", \"score\", count=100)\n# Returns: [5, 4, 5, 3, 4, ...]\n```\n\n### `reviews_get_fields(app_id, fields, count=100, lang='en', country='us', sort='NEWEST')`\nReturns multiple fields from all reviews.\n\n```python\nreviews = scraper.reviews_get_fields(\"com.whatsapp\", [\"userName\", \"score\", \"content\"], count=50)\n# Returns: [{'userName': 'John', 'score': 5, 'content': 'Great app!'}, ...]\n```\n\n### `reviews_print_field(app_id, field, count=100, lang='en', country='us', sort='NEWEST')`\nPrints a specific field from all reviews.\n\n```python\nscraper.reviews_print_field(\"com.whatsapp\", \"content\", count=20)\n# Output:\n# 1. content: Great app!\n# 2. content: Love it\n# 3. 
content: Needs improvement\n```\n\n### `reviews_print_fields(app_id, fields, count=100, lang='en', country='us', sort='NEWEST')`\nPrints multiple fields from all reviews.\n\n```python\nscraper.reviews_print_fields(\"com.whatsapp\", [\"userName\", \"score\"], count=20)\n# Output:\n# userName: John, score: 5\n# userName: Jane, score: 4\n```\n\n### `reviews_print_all(app_id, count=100, lang='en', country='us', sort='NEWEST')`\nPrints all review data as formatted JSON.\n\n```python\nscraper.reviews_print_all(\"com.whatsapp\", count=50)\n# Output: Full JSON array with all reviews\n```\n\n---\n\n## Available Fields\n\n- `reviewId` - Unique review ID\n- `userName` - Reviewer name\n- `userImage` - Reviewer avatar URL\n- `score` - Review rating (1-5 stars)\n- `content` - Review text/comment\n- `thumbsUpCount` - Number of helpful votes\n- `appVersion` - App version reviewed\n- `at` - Review timestamp (ISO 8601 format)\n\n---\n\n## Sort Options\n\n- **`NEWEST`** (default) - Most recent reviews first\n- **`RELEVANT`** - Most relevant/helpful reviews\n- **`RATING`** - Sorted by rating (highest/lowest)\n\n---\n\n## Practical Examples\n\n### Sentiment Analysis\n```python\nreviews = scraper.reviews_get_fields(\"com.whatsapp\", [\"score\", \"content\"], count=200)\n\n# Rating distribution\nrating_dist = {1: 0, 2: 0, 3: 0, 4: 0, 5: 0}\nfor review in reviews:\n    rating_dist[review['score']] += 1\n\nprint(\"Rating Distribution:\")\nfor rating, count in rating_dist.items():\n    print(f\"{rating}★: {'█' * count} ({count})\")\n\n# Average rating\navg = sum(r['score'] for r in reviews) / len(reviews)\nprint(f\"\\nAverage: {avg:.2f}★\")\n```\n\n### Find Common Issues\n```python\nreviews = scraper.reviews_get_fields(\"com.whatsapp\", [\"score\", \"content\"], count=100, sort=\"RATING\")\n\n# Get low-rated reviews\nlow_rated = [r for r in reviews if r['score'] <= 2]\n\nprint(f\"Found {len(low_rated)} low-rated reviews:\")\nfor review in low_rated[:10]:\n    print(f\"- 
{review['content'][:100]}...\")\n```\n\n### Track Review Trends\n```python\nfrom datetime import datetime\n\nreviews = scraper.reviews_get_fields(\"com.whatsapp\", [\"at\", \"score\"], count=500, sort=\"NEWEST\")\n\n# Group by month\nmonthly_scores = {}\nfor review in reviews:\n    date = datetime.fromisoformat(review['at'])\n    month_key = date.strftime(\"%Y-%m\")\n    \n    if month_key not in monthly_scores:\n        monthly_scores[month_key] = []\n    monthly_scores[month_key].append(review['score'])\n\n# Calculate monthly averages\nfor month, scores in sorted(monthly_scores.items()):\n    avg = sum(scores) / len(scores)\n    print(f\"{month}: {avg:.2f}★ ({len(scores)} reviews)\")\n```\n\n### Compare App Versions\n```python\nreviews = scraper.reviews_get_fields(\"com.whatsapp\", [\"appVersion\", \"score\"], count=300)\n\n# Group by version\nversion_scores = {}\nfor review in reviews:\n    version = review['appVersion'] or \"Unknown\"\n    if version not in version_scores:\n        version_scores[version] = []\n    version_scores[version].append(review['score'])\n\n# Show version ratings\nfor version, scores in sorted(version_scores.items()):\n    if len(scores) >= 5:  # Only versions with 5+ reviews\n        avg = sum(scores) / len(scores)\n        print(f\"v{version}: {avg:.2f}★ ({len(scores)} reviews)\")\n```\n\n### Export Reviews to CSV\n```python\nimport csv\n\nreviews = scraper.reviews_analyze(\"com.whatsapp\", count=500)\n\nwith open('reviews.csv', 'w', newline='', encoding='utf-8') as f:\n    writer = csv.DictWriter(f, fieldnames=['userName', 'score', 'content', 'at', 'appVersion'])\n    writer.writeheader()\n    \n    for review in reviews:\n        writer.writerow({\n            'userName': review['userName'],\n            'score': review['score'],\n            'content': review['content'],\n            'at': review['at'],\n            'appVersion': review['appVersion']\n        })\n\nprint(f\"Exported {len(reviews)} reviews to 
reviews.csv\")\n```\n\n### Identify Top Reviewers\n```python\nreviews = scraper.reviews_get_fields(\"com.whatsapp\", [\"userName\", \"thumbsUpCount\"], count=200)\n\n# Sort by helpful votes\ntop_reviewers = sorted(reviews, key=lambda x: x['thumbsUpCount'] or 0, reverse=True)[:10]\n\nprint(\"Top 10 Most Helpful Reviewers:\")\nfor i, review in enumerate(top_reviewers, 1):\n    print(f\"{i}. {review['userName']}: {review['thumbsUpCount']} helpful votes\")\n```\n\n### Monitor Recent Feedback\n```python\nimport time\nfrom datetime import datetime\n\ndef monitor_reviews(app_id, interval=3600):\n    \"\"\"Check for new reviews every hour\"\"\"\n    last_check = datetime.now()\n    \n    while True:\n        reviews = scraper.reviews_get_fields(app_id, [\"at\", \"score\", \"content\"], count=50, sort=\"NEWEST\")\n        \n        new_reviews = [r for r in reviews if datetime.fromisoformat(r['at']) > last_check]\n        \n        if new_reviews:\n            print(f\"\\n{len(new_reviews)} new reviews:\")\n            for review in new_reviews:\n                print(f\"- {review['score']}★: {review['content'][:80]}...\")\n        \n        last_check = datetime.now()\n        time.sleep(interval)\n\n# Run monitor (Ctrl+C to stop)\n# monitor_reviews(\"com.whatsapp\")\n```\n\n### Keyword Analysis\n```python\nfrom collections import Counter\nimport re\n\nreviews = scraper.reviews_get_field(\"com.whatsapp\", \"content\", count=500)\n\n# Extract words\nwords = []\nfor content in reviews:\n    if content:\n        words.extend(re.findall(r'\\b\\w+\\b', content.lower()))\n\n# Remove common words\nstop_words = {'the', 'a', 'an', 'and', 'or', 'but', 'is', 'are', 'was', 'were', 'in', 'on', 'at', 'to', 'for'}\nfiltered_words = [w for w in words if w not in stop_words and len(w) > 3]\n\n# Top keywords\ntop_keywords = Counter(filtered_words).most_common(20)\nprint(\"Top Keywords in Reviews:\")\nfor word, count in top_keywords:\n    print(f\"{word}: {count}\")\n```\n\n---\n\n## 
Parameters\n\n### Initialization\n- `http_client` (str, optional) - HTTP client to use: \"requests\", \"curl_cffi\", \"tls_client\", \"httpx\", \"urllib3\", \"cloudscraper\", \"aiohttp\" (default: \"requests\")\n\n### Method Parameters\n- `app_id` (str, required) - App package name\n- `count` (int, optional) - Maximum number of reviews to return (default: 100)\n- `lang` (str, optional) - Language code (default: 'en')\n- `country` (str, optional) - Country code (default: 'us')\n- `sort` (str, optional) - Sort order: \"NEWEST\", \"RELEVANT\", \"RATING\" (default: \"NEWEST\")\n- `field` (str) - Single field name\n- `fields` (List[str]) - List of field names\n\n### Language & Country Codes\n- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.\n- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.\n\n---\n\n## When to Use Each Method\n\n- **`reviews_analyze()`** - Need complete review data for analysis\n- **`reviews_get_field()`** - Need just one field (e.g., all scores)\n- **`reviews_get_fields()`** - Need specific fields (more efficient)\n- **`reviews_print_field()`** - Quick debugging/console output\n- **`reviews_print_fields()`** - Quick debugging of multiple fields\n- **`reviews_print_all()`** - Explore available data structure\n\n---\n\n## Advanced Features\n\n### Rate Limiting\nBuilt-in rate limiting (a 1-second delay between requests) helps avoid being blocked.\n\n### Batch Fetching\nReviews are fetched in batches of 50. 
The library automatically handles pagination.\n\n```python\n# Fetch 500 reviews (10 batches of 50)\nreviews = scraper.reviews_analyze(\"com.whatsapp\", count=500)\nprint(f\"Fetched {len(reviews)} reviews\")\n```\n\n### Error Handling\n```python\nfrom gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError\n\nscraper = GPlayScraper()\n\ntry:\n    reviews = scraper.reviews_analyze(\"invalid.app.id\")\nexcept AppNotFoundError:\n    print(\"App not found\")\nexcept NetworkError:\n    print(\"Network error occurred\")\n```\n\n### Multi-Region Reviews\n```python\n# Get reviews from different regions\nus_reviews = scraper.reviews_analyze(\"com.whatsapp\", country=\"us\", count=100)\nuk_reviews = scraper.reviews_analyze(\"com.whatsapp\", country=\"gb\", count=100)\njp_reviews = scraper.reviews_analyze(\"com.whatsapp\", country=\"jp\", lang=\"ja\", count=100)\n\nprint(f\"US avg: {sum(r['score'] for r in us_reviews)/len(us_reviews):.2f}★\")\nprint(f\"UK avg: {sum(r['score'] for r in uk_reviews)/len(uk_reviews):.2f}★\")\nprint(f\"JP avg: {sum(r['score'] for r in jp_reviews)/len(jp_reviews):.2f}★\")\n```\n\n### Sort Comparison\n```python\n# Compare different sort orders\nnewest = scraper.reviews_get_fields(\"com.whatsapp\", [\"score\"], count=100, sort=\"NEWEST\")\nrelevant = scraper.reviews_get_fields(\"com.whatsapp\", [\"score\"], count=100, sort=\"RELEVANT\")\nrating = scraper.reviews_get_fields(\"com.whatsapp\", [\"score\"], count=100, sort=\"RATING\")\n\nprint(f\"Newest avg: {sum(r['score'] for r in newest)/len(newest):.2f}★\")\nprint(f\"Relevant avg: {sum(r['score'] for r in relevant)/len(relevant):.2f}★\")\nprint(f\"Rating avg: {sum(r['score'] for r in rating)/len(rating):.2f}★\")\n```\n"
  },
  {
    "path": "README/SEARCH_METHODS.md",
    "content": "# Search Methods\n\nSearch for apps on Google Play Store by keyword, app name, or category.\n\n## Quick Start\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\n\n# Search for apps\nresults = scraper.search_analyze(\"social media\", count=20)\nfor app in results:\n    print(f\"{app['title']}: {app['score']}★ by {app['developer']}\")\n\n# Get specific fields\ntitles = scraper.search_get_field(\"fitness tracker\", \"title\")\nprint(titles)\n\n# Get multiple fields\napps = scraper.search_get_fields(\"photo editor\", [\"title\", \"score\", \"free\"])\nprint(apps)\n```\n\n---\n\n## HTTP Clients\n\nThe library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.\n\n### Supported Clients\n1. **requests** (default) - Standard Python HTTP library\n2. **curl_cffi** - cURL with browser impersonation\n3. **tls_client** - Advanced TLS fingerprinting\n4. **httpx** - Modern async-capable HTTP client\n5. **urllib3** - Low-level HTTP client\n6. **cloudscraper** - Cloudflare bypass\n7. 
**aiohttp** - Async HTTP client\n\n### Usage\n\n```python\n# Default (tries requests first, then others)\nscraper = GPlayScraper()\n\n# Specify a client\nscraper = GPlayScraper(http_client=\"curl_cffi\")\nscraper = GPlayScraper(http_client=\"tls_client\")\nscraper = GPlayScraper(http_client=\"httpx\")\n```\n\n### Installation\n\n```bash\n# Default\npip install requests\n\n# Advanced clients (optional)\npip install curl-cffi\npip install tls-client\npip install httpx\npip install urllib3\npip install cloudscraper\npip install aiohttp\n```\n\n**Note:** The library automatically falls back to available clients if your preferred one fails.\n\n---\n\n## Methods\n\n### `search_analyze(query, count=100, lang='en', country='us')`\nReturns search results as a list of dictionaries.\n\n```python\nresults = scraper.search_analyze(\"social media\", count=20)\n# Returns: [{'appId': '...', 'title': '...', 'score': 4.5, ...}, ...]\n```\n\n### `search_get_field(query, field, count=100, lang='en', country='us')`\nReturns a specific field from all search results.\n\n```python\ntitles = scraper.search_get_field(\"fitness tracker\", \"title\")\n# Returns: ['App 1', 'App 2', 'App 3', ...]\n```\n\n### `search_get_fields(query, fields, count=100, lang='en', country='us')`\nReturns multiple fields from all search results.\n\n```python\napps = scraper.search_get_fields(\"photo editor\", [\"title\", \"score\", \"free\"])\n# Returns: [{'title': 'App 1', 'score': 4.5, 'free': True}, ...]\n```\n\n### `search_print_field(query, field, count=100, lang='en', country='us')`\nPrints a specific field from all search results.\n\n```python\nscraper.search_print_field(\"social media\", \"title\", count=10)\n# Output:\n# 0. title: App 1\n# 1. title: App 2\n# 2. 
title: App 3\n```\n\n### `search_print_fields(query, fields, count=100, lang='en', country='us')`\nPrints multiple fields from all search results.\n\n```python\nscraper.search_print_fields(\"social media\", [\"title\", \"score\"], count=10)\n# Output:\n# 0. title: App 1, score: 4.5\n# 1. title: App 2, score: 4.2\n```\n\n### `search_print_all(query, count=100, lang='en', country='us')`\nPrints all data for all search results as formatted JSON.\n\n```python\nscraper.search_print_all(\"social media\", count=20)\n# Output: Full JSON array with all search results\n```\n\n---\n\n## Available Fields\n\n- `appId` - App package name (e.g., \"com.example.app\")\n- `title` - App name\n- `description` - App description/summary\n- `icon` - App icon URL\n- `url` - Play Store URL\n- `developer` - Developer name\n- `score` - Average rating (1-5)\n- `scoreText` - Rating as text (e.g., \"4.5\")\n- `currency` - Price currency (e.g., \"USD\")\n- `price` - App price (0 if free)\n- `free` - Boolean, true if free\n\n---\n\n## Practical Examples\n\n### Find Top-Rated Apps\n```python\nresults = scraper.search_get_fields(\"productivity\", [\"title\", \"score\", \"developer\"], count=50)\n\n# Filter high-rated apps\ntop_rated = [app for app in results if app['score'] and app['score'] >= 4.5]\ntop_rated.sort(key=lambda x: x['score'], reverse=True)\n\nprint(\"Top-Rated Productivity Apps:\")\nfor i, app in enumerate(top_rated[:10], 1):\n    print(f\"{i}. 
{app['title']}: {app['score']}★ by {app['developer']}\")\n```\n\n### Compare Free vs Paid Apps\n```python\nresults = scraper.search_get_fields(\"photo editor\", [\"title\", \"free\", \"price\", \"score\"], count=50)\n\nfree_apps = [app for app in results if app['free']]\npaid_apps = [app for app in results if not app['free']]\n\nfree_avg = sum(app['score'] or 0 for app in free_apps) / len(free_apps) if free_apps else 0\npaid_avg = sum(app['score'] or 0 for app in paid_apps) / len(paid_apps) if paid_apps else 0\n\nprint(f\"Free apps: {len(free_apps)} (avg: {free_avg:.2f}★)\")\nprint(f\"Paid apps: {len(paid_apps)} (avg: {paid_avg:.2f}★)\")\n```\n\n### Market Research\n```python\nkeywords = [\"fitness\", \"meditation\", \"diet\", \"sleep tracker\"]\n\nfor keyword in keywords:\n    results = scraper.search_get_fields(keyword, [\"title\", \"score\"], count=10)\n    avg_score = sum(app['score'] or 0 for app in results) / len(results)\n    print(f\"{keyword}: {len(results)} apps, avg {avg_score:.2f}★\")\n```\n\n### Find Competitors\n```python\nquery = \"task manager\"\nresults = scraper.search_get_fields(query, [\"title\", \"developer\", \"score\", \"free\"], count=30)\n\nprint(f\"Competitors for '{query}':\")\nfor i, app in enumerate(results[:15], 1):\n    price = \"Free\" if app['free'] else f\"${app.get('price', 'N/A')}\"\n    print(f\"{i}. 
{app['title']} by {app['developer']} - {app['score']}★ ({price})\")\n```\n\n### Export Search Results\n```python\nimport json\n\nquery = \"language learning\"\nresults = scraper.search_analyze(query, count=100)\n\nwith open(f'search_{query.replace(\" \", \"_\")}.json', 'w') as f:\n    json.dump(results, f, indent=2)\n\nprint(f\"Exported {len(results)} results for '{query}'\")\n```\n\n### Multi-Keyword Search\n```python\nkeywords = [\"vpn\", \"proxy\", \"security\"]\nall_results = {}\n\nfor keyword in keywords:\n    results = scraper.search_get_fields(keyword, [\"appId\", \"title\", \"score\"], count=20)\n    all_results[keyword] = results\n    print(f\"{keyword}: {len(results)} apps found\")\n\n# Find apps appearing in multiple searches\napp_ids = {}\nfor keyword, results in all_results.items():\n    for app in results:\n        app_id = app['appId']\n        if app_id not in app_ids:\n            app_ids[app_id] = {'title': app['title'], 'keywords': []}\n        app_ids[app_id]['keywords'].append(keyword)\n\n# Apps in multiple categories\nmulti_category = {aid: data for aid, data in app_ids.items() if len(data['keywords']) > 1}\nprint(f\"\\nApps in multiple categories: {len(multi_category)}\")\nfor app_id, data in list(multi_category.items())[:5]:\n    print(f\"- {data['title']}: {', '.join(data['keywords'])}\")\n```\n\n### Analyze Developer Presence\n```python\nfrom collections import Counter\n\nquery = \"puzzle game\"\nresults = scraper.search_get_field(query, \"developer\", count=100)\n\n# Count apps per developer\ndeveloper_counts = Counter(results)\ntop_developers = developer_counts.most_common(10)\n\nprint(f\"Top Developers in '{query}':\")\nfor developer, count in top_developers:\n    print(f\"{developer}: {count} apps\")\n```\n\n### Price Range Analysis\n```python\nquery = \"premium photo editor\"\nresults = scraper.search_get_fields(query, [\"title\", \"price\", \"free\"], count=50)\n\npaid_apps = [app for app in results if not app['free'] and 
app['price']]\n\nif paid_apps:\n    prices = [app['price'] for app in paid_apps]\n    print(f\"Price Analysis for '{query}':\")\n    print(f\"  Min: ${min(prices):.2f}\")\n    print(f\"  Max: ${max(prices):.2f}\")\n    print(f\"  Avg: ${sum(prices)/len(prices):.2f}\")\n    print(f\"  Total paid apps: {len(paid_apps)}\")\n```\n\n---\n\n## Parameters\n\n### Initialization\n- `http_client` (str, optional) - HTTP client to use: \"requests\", \"curl_cffi\", \"tls_client\", \"httpx\", \"urllib3\", \"cloudscraper\", \"aiohttp\" (default: \"requests\")\n\n### Method Parameters\n- `query` (str, required) - Search keyword or phrase\n- `count` (int, optional) - Maximum number of results to return (default: 100)\n- `lang` (str, optional) - Language code (default: 'en')\n- `country` (str, optional) - Country code (default: 'us')\n- `field` (str) - Single field name\n- `fields` (List[str]) - List of field names\n\n### Search Query Tips\n- Use specific keywords: \"fitness tracker\" vs \"fitness\"\n- Try app categories: \"puzzle game\", \"photo editor\"\n- Search by functionality: \"vpn\", \"password manager\"\n- Use brand names: \"google\", \"microsoft\"\n- Combine terms: \"free music player\"\n\n### Language & Country Codes\n- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.\n- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.\n\n---\n\n## When to Use Each Method\n\n- **`search_analyze()`** - Need complete data for all search results\n- **`search_get_field()`** - Need just one field from all results\n- **`search_get_fields()`** - Need specific fields from all results (more efficient)\n- **`search_print_field()`** - Quick debugging/console output\n- **`search_print_fields()`** - Quick debugging of multiple fields\n- **`search_print_all()`** - Explore available data structure\n\n---\n\n## Advanced Features\n\n### Rate Limiting\nBuilt-in rate limiting (1 second delay between requests) prevents blocking.\n\n### Error 
Handling\n```python\nfrom gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError\n\nscraper = GPlayScraper()\n\ntry:\n    results = scraper.search_analyze(\"\")\nexcept ValueError:\n    print(\"Query cannot be empty\")\nexcept NetworkError:\n    print(\"Network error occurred\")\n```\n\n### Multi-Region Search\n```python\n# Search in different regions\nus_results = scraper.search_analyze(\"vpn\", country=\"us\", count=20)\nuk_results = scraper.search_analyze(\"vpn\", country=\"gb\", count=20)\njp_results = scraper.search_analyze(\"vpn\", country=\"jp\", lang=\"ja\", count=20)\n\nprint(f\"US: {len(us_results)} results\")\nprint(f\"UK: {len(uk_results)} results\")\nprint(f\"JP: {len(jp_results)} results\")\n```\n\n### Pagination\n```python\n# Get more results\nresults_20 = scraper.search_analyze(\"game\", count=20)\nresults_50 = scraper.search_analyze(\"game\", count=50)\nresults_100 = scraper.search_analyze(\"game\", count=100)\n\nprint(f\"20 results: {len(results_20)}\")\nprint(f\"50 results: {len(results_50)}\")\nprint(f\"100 results: {len(results_100)}\")\n```\n\n### Search Result Filtering\n```python\nresults = scraper.search_analyze(\"music player\", count=50)\n\n# Filter by rating\nhigh_rated = [app for app in results if app['score'] and app['score'] >= 4.0]\n\n# Filter by price\nfree_apps = [app for app in results if app['free']]\n\n# Filter by developer\ngoogle_apps = [app for app in results if 'google' in app['developer'].lower()]\n\nprint(f\"High rated: {len(high_rated)}\")\nprint(f\"Free: {len(free_apps)}\")\nprint(f\"Google: {len(google_apps)}\")\n```\n"
  },
  {
    "path": "README/SIMILAR_METHODS.md",
    "content": "# Similar Methods\n\nFind similar and related apps on Google Play Store based on a reference app.\n\n## Quick Start\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\n\n# Get similar apps\nsimilar = scraper.similar_analyze(\"com.whatsapp\", count=20)\nfor app in similar:\n    print(f\"{app['title']}: {app['score']}★ by {app['developer']}\")\n\n# Get specific fields\ntitles = scraper.similar_get_field(\"com.whatsapp\", \"title\")\nprint(titles)\n\n# Get multiple fields\napps = scraper.similar_get_fields(\"com.whatsapp\", [\"title\", \"score\", \"free\"])\nprint(apps)\n```\n\n---\n\n## HTTP Clients\n\nThe library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.\n\n### Supported Clients\n1. **requests** (default) - Standard Python HTTP library\n2. **curl_cffi** - cURL with browser impersonation\n3. **tls_client** - Advanced TLS fingerprinting\n4. **httpx** - Modern async-capable HTTP client\n5. **urllib3** - Low-level HTTP client\n6. **cloudscraper** - Cloudflare bypass\n7. 
**aiohttp** - Async HTTP client\n\n### Usage\n\n```python\n# Default (tries requests first, then others)\nscraper = GPlayScraper()\n\n# Specify a client\nscraper = GPlayScraper(http_client=\"curl_cffi\")\nscraper = GPlayScraper(http_client=\"tls_client\")\nscraper = GPlayScraper(http_client=\"httpx\")\n```\n\n### Installation\n\n```bash\n# Default\npip install requests\n\n# Advanced clients (optional)\npip install curl-cffi\npip install tls-client\npip install httpx\npip install urllib3\npip install cloudscraper\npip install aiohttp\n```\n\n**Note:** The library automatically falls back to available clients if your preferred one fails.\n\n---\n\n## Methods\n\n### `similar_analyze(app_id, count=100, lang='en', country='us')`\nReturns similar apps as a list of dictionaries.\n\n```python\nsimilar = scraper.similar_analyze(\"com.whatsapp\", count=20)\n# Returns: [{'appId': '...', 'title': '...', 'score': 4.5, ...}, ...]\n```\n\n### `similar_get_field(app_id, field, count=100, lang='en', country='us')`\nReturns a specific field from all similar apps.\n\n```python\ntitles = scraper.similar_get_field(\"com.whatsapp\", \"title\")\n# Returns: ['App 1', 'App 2', 'App 3', ...]\n```\n\n### `similar_get_fields(app_id, fields, count=100, lang='en', country='us')`\nReturns multiple fields from all similar apps.\n\n```python\napps = scraper.similar_get_fields(\"com.whatsapp\", [\"title\", \"score\", \"free\"])\n# Returns: [{'title': 'App 1', 'score': 4.5, 'free': True}, ...]\n```\n\n### `similar_print_field(app_id, field, count=100, lang='en', country='us')`\nPrints a specific field from all similar apps.\n\n```python\nscraper.similar_print_field(\"com.whatsapp\", \"title\", count=10)\n# Output:\n# 1. title: App 1\n# 2. title: App 2\n# 3. 
title: App 3\n```\n\n### `similar_print_fields(app_id, fields, count=100, lang='en', country='us')`\nPrints multiple fields from all similar apps.\n\n```python\nscraper.similar_print_fields(\"com.whatsapp\", [\"title\", \"score\"], count=10)\n# Output:\n# 1. title: App 1, score: 4.5\n# 2. title: App 2, score: 4.2\n```\n\n### `similar_print_all(app_id, count=100, lang='en', country='us')`\nPrints all data for all similar apps as formatted JSON.\n\n```python\nscraper.similar_print_all(\"com.whatsapp\", count=20)\n# Output: Full JSON array with all similar apps\n```\n\n---\n\n## Available Fields\n\n- `appId` - App package name (e.g., \"com.example.app\")\n- `title` - App name\n- `description` - App description\n- `icon` - App icon URL\n- `url` - Play Store URL\n- `developer` - Developer name\n- `score` - Average rating (1-5)\n- `scoreText` - Rating as text (e.g., \"4.5\")\n- `currency` - Price currency (e.g., \"USD\")\n- `price` - App price (0 if free)\n- `free` - Boolean, true if free\n\n---\n\n## Practical Examples\n\n### Competitive Analysis\n```python\napp_id = \"com.whatsapp\"\nsimilar = scraper.similar_get_fields(app_id, [\"title\", \"score\", \"developer\"], count=30)\n\nprint(f\"Competitors of {app_id}:\")\nfor i, app in enumerate(similar[:10], 1):\n    print(f\"{i}. 
{app['title']}: {app['score']}★ by {app['developer']}\")\n\n# Calculate average competitor rating\navg_score = sum(app['score'] or 0 for app in similar) / len(similar)\nprint(f\"\\nAverage competitor rating: {avg_score:.2f}★\")\n```\n\n### Find Better Alternatives\n```python\napp_id = \"com.example.app\"\nmy_score = scraper.app_get_field(app_id, \"score\") or 0  # default to 0 if the app has no rating\nsimilar = scraper.similar_get_fields(app_id, [\"title\", \"score\", \"url\"], count=50)\n\n# Find apps with higher ratings\nbetter_apps = [app for app in similar if app['score'] and app['score'] > my_score]\nbetter_apps.sort(key=lambda x: x['score'], reverse=True)\n\nprint(f\"Apps rated higher than {app_id} ({my_score}★):\")\nfor app in better_apps[:10]:\n    print(f\"- {app['title']}: {app['score']}★\")\n```\n\n### Market Positioning\n```python\napp_id = \"com.whatsapp\"\nsimilar = scraper.similar_get_fields(app_id, [\"title\", \"free\", \"price\", \"score\"], count=50)\n\nfree_apps = [app for app in similar if app['free']]\npaid_apps = [app for app in similar if not app['free']]\n\nprint(f\"Market Analysis for {app_id}:\")\nprint(f\"  Free competitors: {len(free_apps)}\")\nprint(f\"  Paid competitors: {len(paid_apps)}\")\nif free_apps:\n    print(f\"  Free avg rating: {sum(a['score'] or 0 for a in free_apps)/len(free_apps):.2f}★\")\nif paid_apps:\n    print(f\"  Paid avg rating: {sum(a['score'] or 0 for a in paid_apps)/len(paid_apps):.2f}★\")\n```\n\n### Developer Overlap Analysis\n```python\nfrom collections import Counter\n\napp_id = \"com.whatsapp\"\nsimilar = scraper.similar_get_field(app_id, \"developer\", count=50)\n\n# Count apps per developer\ndeveloper_counts = Counter(similar)\ntop_developers = developer_counts.most_common(5)\n\nprint(f\"Top developers in similar apps to {app_id}:\")\nfor developer, count in top_developers:\n    print(f\"{developer}: {count} apps\")\n```\n\n### Export Similar Apps\n```python\nimport json\n\napp_id = \"com.whatsapp\"\nsimilar = scraper.similar_analyze(app_id, count=50)\n\nwith 
open(f'similar_to_{app_id}.json', 'w') as f:\n    json.dump(similar, f, indent=2)\n\nprint(f\"Exported {len(similar)} similar apps to similar_to_{app_id}.json\")\n```\n\n### Compare Multiple Apps\n```python\napps_to_compare = [\"com.whatsapp\", \"org.telegram.messenger\", \"com.viber.voip\"]\nall_similar = {}\n\nfor app_id in apps_to_compare:\n    similar = scraper.similar_get_fields(app_id, [\"appId\", \"title\"], count=20)\n    all_similar[app_id] = [app['appId'] for app in similar]\n    print(f\"{app_id}: {len(similar)} similar apps\")\n\n# Find common competitors\ncommon = set(all_similar[apps_to_compare[0]])\nfor app_id in apps_to_compare[1:]:\n    common &= set(all_similar[app_id])\n\nprint(f\"\\nCommon competitors: {len(common)}\")\nfor app_id in list(common)[:5]:\n    title = scraper.app_get_field(app_id, \"title\")\n    print(f\"- {title}\")\n```\n\n### Feature Gap Analysis\n```python\napp_id = \"com.whatsapp\"\nsimilar = scraper.similar_get_fields(app_id, [\"title\", \"score\"], count=30)\n\n# Get top-rated competitors\ntop_competitors = sorted(similar, key=lambda x: x['score'] or 0, reverse=True)[:5]\n\nprint(f\"Top-rated competitors of {app_id}:\")\nfor i, app in enumerate(top_competitors, 1):\n    print(f\"{i}. 
{app['title']}: {app['score']}★\")\n    # You can then analyze these apps individually for features\n```\n\n### Price Comparison\n```python\napp_id = \"com.example.paidapp\"\nmy_price = scraper.app_get_field(app_id, \"price\")\nsimilar = scraper.similar_get_fields(app_id, [\"title\", \"price\", \"free\"], count=50)\n\npaid_similar = [app for app in similar if not app['free'] and app['price']]\n\nif paid_similar:\n    prices = [app['price'] for app in paid_similar]\n    print(f\"Price Comparison:\")\n    print(f\"  Your app: ${my_price:.2f}\")\n    print(f\"  Competitor min: ${min(prices):.2f}\")\n    print(f\"  Competitor max: ${max(prices):.2f}\")\n    print(f\"  Competitor avg: ${sum(prices)/len(prices):.2f}\")\n```\n\n---\n\n## Parameters\n\n### Initialization\n- `http_client` (str, optional) - HTTP client to use: \"requests\", \"curl_cffi\", \"tls_client\", \"httpx\", \"urllib3\", \"cloudscraper\", \"aiohttp\" (default: \"requests\")\n\n### Method Parameters\n- `app_id` (str, required) - App package name to find similar apps for\n- `count` (int, optional) - Maximum number of similar apps to return (default: 100)\n- `lang` (str, optional) - Language code (default: 'en')\n- `country` (str, optional) - Country code (default: 'us')\n- `field` (str) - Single field name\n- `fields` (List[str]) - List of field names\n\n### Language & Country Codes\n- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.\n- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.\n\n---\n\n## When to Use Each Method\n\n- **`similar_analyze()`** - Need complete data for all similar apps\n- **`similar_get_field()`** - Need just one field from all similar apps\n- **`similar_get_fields()`** - Need specific fields from all similar apps (more efficient)\n- **`similar_print_field()`** - Quick debugging/console output\n- **`similar_print_fields()`** - Quick debugging of multiple fields\n- **`similar_print_all()`** - Explore available data 
structure\n\n---\n\n## Use Cases\n\n### Competitive Intelligence\n- Identify direct competitors\n- Monitor competitor ratings and pricing\n- Track market positioning\n- Discover new entrants in your category\n\n### Market Research\n- Understand market landscape\n- Analyze pricing strategies\n- Identify market gaps\n- Study successful competitors\n\n### Product Development\n- Find feature inspiration\n- Identify differentiation opportunities\n- Benchmark against competitors\n- Discover user expectations\n\n### Marketing Strategy\n- Identify target audience overlap\n- Study competitor positioning\n- Find partnership opportunities\n- Analyze market trends\n\n---\n\n## Advanced Features\n\n### Rate Limiting\nBuilt-in rate limiting (1 second delay between requests) prevents blocking.\n\n### Error Handling\n```python\nfrom gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError\n\nscraper = GPlayScraper()\n\ntry:\n    similar = scraper.similar_analyze(\"invalid.app.id\")\nexcept AppNotFoundError:\n    print(\"App not found or no similar apps available\")\nexcept NetworkError:\n    print(\"Network error occurred\")\n```\n\n### Multi-Region Similar Apps\n```python\n# Get similar apps from different regions\nus_similar = scraper.similar_analyze(\"com.whatsapp\", country=\"us\", count=20)\nuk_similar = scraper.similar_analyze(\"com.whatsapp\", country=\"gb\", count=20)\njp_similar = scraper.similar_analyze(\"com.whatsapp\", country=\"jp\", lang=\"ja\", count=20)\n\nprint(f\"US similar apps: {len(us_similar)}\")\nprint(f\"UK similar apps: {len(uk_similar)}\")\nprint(f\"JP similar apps: {len(jp_similar)}\")\n```\n\n### Filtering Results\n```python\nsimilar = scraper.similar_analyze(\"com.whatsapp\", count=50)\n\n# Filter by rating\nhigh_rated = [app for app in similar if app['score'] and app['score'] >= 4.0]\n\n# Filter by price\nfree_apps = [app for app in similar if app['free']]\n\n# Filter by developer\nexclude_dev = [app for app in similar if app['developer'] != 
\"WhatsApp LLC\"]\n\nprint(f\"High rated: {len(high_rated)}\")\nprint(f\"Free: {len(free_apps)}\")\nprint(f\"Other developers: {len(exclude_dev)}\")\n```\n\n### Batch Analysis\n```python\n# Analyze similar apps for multiple apps\napps = [\"com.whatsapp\", \"org.telegram.messenger\", \"com.viber.voip\"]\nresults = {}\n\nfor app_id in apps:\n    similar = scraper.similar_get_fields(app_id, [\"title\", \"score\"], count=10)\n    results[app_id] = similar\n    print(f\"{app_id}: {len(similar)} similar apps found\")\n```\n"
  },
  {
    "path": "README/SUGGEST_METHODS.md",
    "content": "# Suggest Methods\n\nGet search suggestions and autocomplete from Google Play Store for keyword discovery and ASO.\n\n## Quick Start\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\n\n# Get suggestions\nsuggestions = scraper.suggest_analyze(\"video\", count=5)\nprint(suggestions)\n# ['video player', 'video editor', 'video downloader', 'video maker', 'video call']\n\n# Get nested suggestions\nnested = scraper.suggest_nested(\"video\", count=3)\nfor term, suggestions in nested.items():\n    print(f\"{term}: {suggestions}\")\n```\n\n---\n\n## HTTP Clients\n\nThe library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.\n\n### Supported Clients\n1. **requests** (default) - Standard Python HTTP library\n2. **curl_cffi** - cURL with browser impersonation\n3. **tls_client** - Advanced TLS fingerprinting\n4. **httpx** - Modern async-capable HTTP client\n5. **urllib3** - Low-level HTTP client\n6. **cloudscraper** - Cloudflare bypass\n7. 
**aiohttp** - Async HTTP client\n\n### Usage\n\n```python\n# Default (tries requests first, then others)\nscraper = GPlayScraper()\n\n# Specify a client\nscraper = GPlayScraper(http_client=\"curl_cffi\")\nscraper = GPlayScraper(http_client=\"tls_client\")\nscraper = GPlayScraper(http_client=\"httpx\")\n```\n\n### Installation\n\n```bash\n# Default\npip install requests\n\n# Advanced clients (optional)\npip install curl-cffi\npip install tls-client\npip install httpx\npip install urllib3\npip install cloudscraper\npip install aiohttp\n```\n\n**Note:** The library automatically falls back to available clients if your preferred one fails.\n\n---\n\n## Methods\n\n### `suggest_analyze(term, count=5, lang='en', country='us')`\nReturns search suggestions as a list of strings.\n\n```python\nsuggestions = scraper.suggest_analyze(\"video\", count=5)\n# Returns: ['video player', 'video editor', 'video downloader', 'video maker', 'video call']\n```\n\n### `suggest_nested(term, count=5, lang='en', country='us')`\nReturns nested suggestions (suggestions for each suggestion).\n\n```python\nnested = scraper.suggest_nested(\"video\", count=3)\n# Returns: {\n#   'video player': ['video player hd', 'video player all format', 'video player pro'],\n#   'video editor': ['video editor pro', 'video editor free', 'video editor app'],\n#   'video downloader': ['video downloader for facebook', 'video downloader hd', ...]\n# }\n```\n\n### `suggest_print_all(term, count=5, lang='en', country='us')`\nPrints suggestions as formatted JSON.\n\n```python\nscraper.suggest_print_all(\"video\", count=5)\n# Output: [\"video player\", \"video editor\", \"video downloader\", \"video maker\", \"video call\"]\n```\n\n### `suggest_print_nested(term, count=5, lang='en', country='us')`\nPrints nested suggestions as formatted JSON.\n\n```python\nscraper.suggest_print_nested(\"video\", count=3)\n# Output: Full JSON object with nested suggestions\n```\n\n---\n\n## Return Formats\n\n### Simple Suggestions 
(List)\n```python\n['video player', 'video editor', 'video downloader', 'video maker', 'video call']\n```\n\n### Nested Suggestions (Dictionary)\n```python\n{\n  'video player': ['video player hd', 'video player all format', 'video player pro'],\n  'video editor': ['video editor pro', 'video editor free', 'video editor app'],\n  'video downloader': ['video downloader for facebook', 'video downloader hd']\n}\n```\n\n---\n\n## Practical Examples\n\n### Autocomplete Feature\n```python\ndef autocomplete(user_input):\n    \"\"\"Provide autocomplete suggestions as user types\"\"\"\n    if len(user_input) < 2:\n        return []\n    \n    suggestions = scraper.suggest_analyze(user_input, count=10)\n    return suggestions\n\n# Usage\nprint(autocomplete(\"gam\"))  # ['game', 'games', 'gaming', ...]\nprint(autocomplete(\"photo\"))  # ['photo editor', 'photo collage', ...]\n```\n\n### Keyword Research\n```python\nbase_keywords = [\"fitness\", \"workout\", \"exercise\"]\nall_keywords = set()\n\nfor keyword in base_keywords:\n    suggestions = scraper.suggest_analyze(keyword, count=10)\n    all_keywords.update(suggestions)\n    print(f\"{keyword}: {len(suggestions)} suggestions\")\n\nprint(f\"\\nTotal unique keywords: {len(all_keywords)}\")\nprint(\"Sample keywords:\", list(all_keywords)[:10])\n```\n\n### Deep Keyword Mining\n```python\nterm = \"photo editor\"\nnested = scraper.suggest_nested(term, count=5)\n\nprint(f\"Keyword tree for '{term}':\")\nfor parent, children in nested.items():\n    print(f\"\\n{parent}:\")\n    for child in children:\n        print(f\"  - {child}\")\n```\n\n### ASO Keyword Discovery\n```python\nimport json\n\ndef discover_keywords(seed_term, depth=2):\n    \"\"\"Discover keywords with specified depth\"\"\"\n    keywords = {}\n    \n    # Level 1\n    level1 = scraper.suggest_analyze(seed_term, count=10)\n    keywords[seed_term] = level1\n    \n    if depth > 1:\n        # Level 2\n        for term in level1[:5]:  # Limit to avoid too many 
requests\n            level2 = scraper.suggest_analyze(term, count=5)\n            keywords[term] = level2\n    \n    return keywords\n\nkeywords = discover_keywords(\"game\", depth=2)\nprint(json.dumps(keywords, indent=2))\n```\n\n### Trending Search Terms\n```python\ncategories = [\"game\", \"social\", \"productivity\", \"photo\", \"music\"]\ntrending = {}\n\nfor category in categories:\n    suggestions = scraper.suggest_analyze(category, count=5)\n    trending[category] = suggestions\n    print(f\"{category}: {', '.join(suggestions[:3])}...\")\n```\n\n### Long-Tail Keywords\n```python\nshort_term = \"vpn\"\nsuggestions = scraper.suggest_analyze(short_term, count=10)\n\n# Filter for long-tail (3+ words)\nlong_tail = [s for s in suggestions if len(s.split()) >= 3]\n\nprint(f\"Long-tail keywords for '{short_term}':\")\nfor keyword in long_tail:\n    print(f\"- {keyword}\")\n```\n\n### Competitor Keyword Analysis\n```python\ncompetitor_apps = [\"whatsapp\", \"telegram\", \"signal\"]\nall_suggestions = {}\n\nfor app in competitor_apps:\n    suggestions = scraper.suggest_analyze(app, count=10)\n    all_suggestions[app] = suggestions\n    print(f\"{app}: {len(suggestions)} suggestions\")\n\n# Find common keywords\ncommon = set(all_suggestions[competitor_apps[0]])\nfor app in competitor_apps[1:]:\n    common &= set(all_suggestions[app])\n\nprint(f\"\\nCommon keywords: {common}\")\n```\n\n### Export Keyword Map\n```python\nimport json\n\nterm = \"fitness\"\nnested = scraper.suggest_nested(term, count=10)\n\nwith open(f'keywords_{term}.json', 'w') as f:\n    json.dump(nested, f, indent=2)\n\nprint(f\"Exported keyword map for '{term}'\")\nprint(f\"Total parent keywords: {len(nested)}\")\nprint(f\"Total child keywords: {sum(len(v) for v in nested.values())}\")\n```\n\n### Search Volume Estimation\n```python\nterm = \"photo editor\"\nsuggestions = scraper.suggest_analyze(term, count=20)\n\n# Suggestions appear in order of popularity (roughly)\nprint(f\"Top suggestions for 
'{term}' (by estimated popularity):\")\nfor i, suggestion in enumerate(suggestions[:10], 1):\n    print(f\"{i}. {suggestion}\")\n```\n\n---\n\n## Parameters\n\n### Initialization\n- `http_client` (str, optional) - HTTP client to use: \"requests\", \"curl_cffi\", \"tls_client\", \"httpx\", \"urllib3\", \"cloudscraper\", \"aiohttp\" (default: \"requests\")\n\n### Method Parameters\n- `term` (str, required) - Search term or keyword\n- `count` (int, optional) - Number of suggestions to return (default: 5, max: ~10)\n- `lang` (str, optional) - Language code (default: 'en')\n- `country` (str, optional) - Country code (default: 'us')\n\n### Search Term Tips\n- Use partial words: \"gam\" → \"game\", \"games\", \"gaming\"\n- Try categories: \"fitness\", \"photo\", \"music\"\n- Test variations: \"vpn\", \"vpn free\", \"vpn app\"\n- Use brand names: \"whatsapp\", \"instagram\"\n- Combine terms: \"photo editor free\"\n\n### Language & Country Codes\n- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.\n- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.\n\n---\n\n## When to Use Each Method\n\n- **`suggest_analyze()`** - Get simple list of suggestions for autocomplete or keyword research\n- **`suggest_nested()`** - Deep keyword mining with two levels of suggestions\n- **`suggest_print_all()`** - Quick debugging/console output of suggestions\n- **`suggest_print_nested()`** - Quick debugging/console output of nested suggestions\n\n---\n\n## Use Cases\n\n### App Store Optimization (ASO)\n- Discover high-traffic keywords\n- Find long-tail keyword opportunities\n- Analyze competitor keywords\n- Optimize app title and description\n\n### Market Research\n- Identify trending search terms\n- Understand user search behavior\n- Discover niche markets\n- Track keyword trends over time\n\n### Content Strategy\n- Generate content ideas\n- Find related topics\n- Optimize metadata\n- Improve discoverability\n\n### Competitive Analysis\n- 
Discover competitor keywords\n- Find keyword gaps\n- Identify market opportunities\n- Track competitor positioning\n\n---\n\n## Advanced Features\n\n### Rate Limiting\nBuilt-in rate limiting (1 second delay between requests) prevents blocking.\n\n### Error Handling\n```python\nfrom gplay_scraper import GPlayScraper, NetworkError\n\nscraper = GPlayScraper()\n\ntry:\n    suggestions = scraper.suggest_analyze(\"\")\nexcept ValueError:\n    print(\"Term cannot be empty\")\nexcept NetworkError:\n    print(\"Network error occurred\")\n```\n\n### Multi-Region Suggestions\n```python\n# Get suggestions from different regions\nus_suggestions = scraper.suggest_analyze(\"game\", country=\"us\")\nuk_suggestions = scraper.suggest_analyze(\"game\", country=\"gb\")\njp_suggestions = scraper.suggest_analyze(\"game\", country=\"jp\", lang=\"ja\")\n\nprint(f\"US: {us_suggestions[:3]}\")\nprint(f\"UK: {uk_suggestions[:3]}\")\nprint(f\"JP: {jp_suggestions[:3]}\")\n```\n\n### Batch Processing\n```python\nterms = [\"fitness\", \"diet\", \"workout\", \"yoga\", \"meditation\"]\nall_suggestions = {}\n\nfor term in terms:\n    suggestions = scraper.suggest_analyze(term, count=10)\n    all_suggestions[term] = suggestions\n    print(f\"{term}: {len(suggestions)} suggestions\")\n\n# Find overlapping keywords\nall_keywords = set()\nfor suggestions in all_suggestions.values():\n    all_keywords.update(suggestions)\n\nprint(f\"\\nTotal unique keywords: {len(all_keywords)}\")\n```\n\n### Recursive Keyword Expansion\n```python\ndef expand_keywords(term, max_depth=2, current_depth=0):\n    \"\"\"Recursively expand keywords\"\"\"\n    if current_depth >= max_depth:\n        return []\n    \n    suggestions = scraper.suggest_analyze(term, count=5)\n    all_keywords = suggestions.copy()\n    \n    if current_depth < max_depth - 1:\n        for suggestion in suggestions[:2]:  # Limit to avoid explosion\n            child_keywords = expand_keywords(suggestion, max_depth, current_depth + 1)\n            
all_keywords.extend(child_keywords)\n    \n    return all_keywords\n\nkeywords = expand_keywords(\"game\", max_depth=2)\nprint(f\"Expanded to {len(set(keywords))} unique keywords\")\n```\n\n### Suggestion Filtering\n```python\nterm = \"game\"\nsuggestions = scraper.suggest_analyze(term, count=20)\n\n# Filter by length\nshort = [s for s in suggestions if len(s.split()) <= 2]\nlong = [s for s in suggestions if len(s.split()) > 2]\n\n# Filter by keyword\nfree_games = [s for s in suggestions if 'free' in s.lower()]\n\nprint(f\"Short keywords: {len(short)}\")\nprint(f\"Long keywords: {len(long)}\")\nprint(f\"Free games: {len(free_games)}\")\n```\n"
  },
  {
    "path": "README.md",
    "content": "# Google Play Scraper - Python Library 📱\n\n[![PyPI version](https://badge.fury.io/py/gplay-scraper.svg)](https://badge.fury.io/py/gplay-scraper)\n[![Python](https://img.shields.io/badge/python-3.7+-blue.svg)](https://www.python.org/downloads/)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![Documentation](https://img.shields.io/badge/docs-available-brightgreen.svg)](https://mohammedcha.github.io/gplay-scraper/)\n[![Downloads](https://pepy.tech/badge/gplay-scraper)](https://pepy.tech/project/gplay-scraper)\n[![GitHub stars](https://img.shields.io/github/stars/Mohammedcha/gplay-scraper.svg)](https://github.com/Mohammedcha/gplay-scraper/stargazers)\n[![GitHub issues](https://img.shields.io/github/issues/Mohammedcha/gplay-scraper.svg)](https://github.com/Mohammedcha/gplay-scraper/issues)\n\n<div align=\"center\">\n  <img src=\"https://github.com/Mohammedcha/gplay-scraper/blob/main/assets/gplay-scraper.png\" alt=\"GPlay Scraper\">\n</div>\n\n**GPlay Scraper** is a powerful Python library for extracting comprehensive data from the Google Play Store. Built for developers, data analysts, and researchers, it provides easy access to app information, user reviews, search results, top charts, and market intelligence—all without requiring API keys.\n\n## 🐛 Found a Bug? Help Us Improve!\n\n**We value your feedback!** If you encounter any bugs, errors, or have suggestions for improvements, please [open an issue](https://github.com/Mohammedcha/gplay-scraper/issues). 
Contributors who report bugs or suggest features will be acknowledged in our [Contributors section](#-contributors) 🙏\n\n## 🎯 What Can You Scrape?\n\n**App Data (65+ Fields)**\n- Basic info: title, developer, description, category, genre\n- Ratings & reviews: score, ratings count, histogram, user reviews\n- Install metrics: install count ranges, download statistics\n- Pricing: free/paid status, price, in-app purchases, currency\n- Media: icon, screenshots, video, header image URLs\n- Technical: version, size, Android version, release date, last update\n- Content: age rating, privacy policy, developer contact info\n- Features: permissions, what's new, developer website\n\n**Search & Discovery**\n- Search apps by keyword with filtering and pagination\n- Get search suggestions and autocomplete terms\n- Find similar/competitor apps for any app\n- Access top charts (free, paid, grossing) across 54 categories\n\n**Developer Intelligence**\n- Get complete app portfolio for any developer\n- Track developer's app performance and ratings\n- Analyze developer's market presence\n\n**User Reviews**\n- Extract reviews with ratings, text, and timestamps\n- Get reviewer names and helpful vote counts\n- Filter by newest, most relevant, or highest rated\n- Track app versions mentioned in reviews\n\n**Market Research**\n- Multi-language support (100+ languages)\n- Multi-region data (150+ countries)\n- Localized pricing and availability\n- Competitive analysis and benchmarking\n\n## 🆕 **What's New in v1.0.6** \n\n**✅ Critical Bug Fixes:**\n- **Reviews Pagination Fix** - Fixed critical issue when requesting more reviews than available\n- **NoneType Error Resolution** - Resolved 'NoneType' object is not subscriptable error in reviews\n- **Empty Response Handling** - Better handling of apps with limited reviews\n- **Token Extraction Logic** - Improved pagination token handling for empty responses\n- **Graceful Degradation** - Now returns available reviews instead of crashing\n\n**✅ 
Enhanced Reliability:**\n- **Safe Bounds Checking** - Added proper bounds checking for pagination tokens\n- **Null Checking** - Enhanced null checking for empty data structures\n- **Error Recovery** - Improved error handling in ReviewsScraper and ReviewsParser\n- **Stability Improvements** - Better handling of edge cases in reviews extraction\n\n**🙏 Acknowledgments:**\n- Thanks to [@PhamDinhThienVu](https://github.com/PhamDinhThienVu) for reporting the reviews pagination bug\n\n**✅ 7 Method Types:**\n- **App Methods** - Extract 65+ data fields from any app (ratings, installs, pricing, permissions, etc.)\n- **Search Methods** - Search Google Play Store apps with comprehensive filtering\n- **Reviews Methods** - Extract user reviews with ratings, timestamps, and detailed feedback\n- **Developer Methods** - Get all apps published by a specific developer\n- **List Methods** - Access top charts (top free, top paid, top grossing) by category\n- **Similar Methods** - Find similar/competitor apps for market research\n- **Suggest Methods** - Get search suggestions and autocomplete for ASO\n\n## ⚡ Key Features\n\n**Powerful & Flexible**\n- **7 HTTP clients with automatic fallback** - requests, curl_cffi, tls_client, httpx, urllib3, cloudscraper, aiohttp\n- **42 functions across 7 method types** - analyze(), get_field(), get_fields(), print_field(), print_fields(), print_all()\n- **No API keys required** - Direct scraping from Google Play Store\n- **Multi-language & multi-region** - 100+ languages, 150+ countries\n\n**Reliable & Safe**\n- **Built-in rate limiting** - Prevents blocking with automatic delays\n- **Automatic HTTP client fallback** - Ensures maximum reliability\n- **Error handling** - Graceful failures with informative messages\n- **Retry logic** - Automatic retries for failed requests\n\n**Developer Friendly**\n- **Simple API** - Intuitive method names and parameters\n- **Comprehensive documentation** - Examples for every use case\n- **Type hints** - Full IDE 
autocomplete support\n- **Flexible output** - Get data as dict/list or print as JSON\n\n## 📋 Requirements\n\n- Python 3.7+\n- requests (default HTTP client)\n- Optional: curl-cffi, tls-client, httpx, urllib3, cloudscraper, aiohttp (for advanced HTTP clients)\n\n## 🚀 Installation\n\n```bash\n# Install from PyPI\npip install gplay-scraper\n\n# Or install in development mode\npip install -e .\n```\n\n## 📖 Quick Start\n\n```python\nfrom gplay_scraper import GPlayScraper\n\n# Initialize with HTTP client (curl_cffi recommended for best performance)\nscraper = GPlayScraper(http_client=\"curl_cffi\")\n\n# Get app details with different image sizes\napp_id = \"com.whatsapp\"\nscraper.app_print_all(app_id, lang=\"en\", country=\"us\", assets=\"LARGE\")\n\n# Get high-quality app data\ndata = scraper.app_analyze(app_id, assets=\"ORIGINAL\")  # Maximum image quality\nicon_small = scraper.app_get_field(app_id, \"icon\", assets=\"SMALL\")  # 512px icon\n\n# Print specific fields with custom image sizes\nscraper.app_print_field(app_id, \"icon\", assets=\"LARGE\")  # Print large icon URL\nscraper.app_print_fields(app_id, [\"icon\", \"screenshots\"], assets=\"ORIGINAL\")  # Print multiple fields\n\n# Search for apps\nscraper.search_print_all(\"social media\", count=10, lang=\"en\", country=\"us\")\n\n# Get reviews\nscraper.reviews_print_all(app_id, count=50, sort=\"NEWEST\", lang=\"en\", country=\"us\")\n\n# Get developer apps\nscraper.developer_print_all(\"5700313618786177705\", count=20, lang=\"en\", country=\"us\")\n\n# Get top charts\nscraper.list_print_all(\"TOP_FREE\", \"GAME\", count=20, lang=\"en\", country=\"us\")\n\n# Get similar apps\nscraper.similar_print_all(app_id, count=30, lang=\"en\", country=\"us\")\n\n# Get search suggestions\nscraper.suggest_print_all(\"fitness\", count=5, lang=\"en\", country=\"us\")\n```\n\n## 🎯 7 Method Types\n\nGPlay Scraper provides 7 method types with 42 functions to interact with Google Play Store data:\n\n### 1. 
[App Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/APP_METHODS.md) - Extract app details (65+ fields)\n### 2. [Search Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SEARCH_METHODS.md) - Search for apps by keyword\n### 3. [Reviews Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/REVIEWS_METHODS.md) - Get user reviews and ratings\n### 4. [Developer Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/DEVELOPER_METHODS.md) - Get all apps from a developer\n### 5. [List Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/LIST_METHODS.md) - Get top charts (free, paid, grossing)\n### 6. [Similar Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SIMILAR_METHODS.md) - Find similar/related apps\n### 7. [Suggest Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SUGGEST_METHODS.md) - Get search suggestions/autocomplete\n\nEach method type has 6 functions:\n- `analyze()` - Get all data as dictionary/list\n- `get_field()` - Get single field value\n- `get_fields()` - Get multiple fields\n- `print_field()` - Print single field to console\n- `print_fields()` - Print multiple fields to console\n- `print_all()` - Print all data as JSON\n\n## 🎯 Method Examples\n\n### 1. 
[App Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/APP_METHODS.md) - Get App Details\nExtract comprehensive information about any app including ratings, installs, pricing, and 65+ data fields.\n\n📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/APP_METHODS.md)**\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper(http_client=\"curl_cffi\")\n\n# Print all app data as JSON\nscraper.app_print_all(\"com.whatsapp\", lang=\"en\", country=\"us\")\n```\n\n**What you get:** Complete app profile with title, developer, ratings, install counts, pricing, screenshots, permissions, and more.\n\n📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/app_example.json)**\n\n---\n\n### 2. [Search Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SEARCH_METHODS.md) - Find Apps by Keyword\nSearch the Play Store by keyword, app name, or category to discover apps.\n\n📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SEARCH_METHODS.md)**\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper(http_client=\"curl_cffi\")\n\n# Print all search results as JSON\nscraper.search_print_all(\"fitness tracker\", count=20, lang=\"en\", country=\"us\")\n```\n\n**What you get:** List of apps matching your search with titles, developers, ratings, prices, and Play Store URLs.\n\n📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/search_example.json)**\n\n---\n\n### 3. 
[Reviews Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/REVIEWS_METHODS.md) - Extract User Reviews\nGet user reviews with ratings, comments, timestamps, and helpful votes for sentiment analysis.\n\n📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/REVIEWS_METHODS.md)**\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper(http_client=\"curl_cffi\")\n\n# Print all reviews as JSON\nscraper.reviews_print_all(\"com.whatsapp\", count=100, sort=\"NEWEST\", lang=\"en\", country=\"us\")\n```\n\n**What you get:** User reviews with names, ratings (1-5 stars), review text, timestamps, app versions, and helpful vote counts.\n\n📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/reviews_example.json)**\n\n---\n\n### 4. [Developer Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/DEVELOPER_METHODS.md) - Get Developer's Apps\nRetrieve all apps published by a specific developer using their developer ID.\n\n📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/DEVELOPER_METHODS.md)**\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper(http_client=\"curl_cffi\")\n\n# Print all developer apps as JSON\nscraper.developer_print_all(\"5700313618786177705\", count=50, lang=\"en\", country=\"us\")\n```\n\n**What you get:** Complete portfolio of apps from a developer with titles, ratings, prices, and descriptions.\n\n📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/developer_example.json)**\n\n---\n\n### 5. 
[List Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/LIST_METHODS.md) - Get Top Charts\nAccess Play Store top charts including top free, top paid, and top grossing apps by category.\n\n📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/LIST_METHODS.md)**\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper(http_client=\"curl_cffi\")\n\n# Print top free games as JSON\nscraper.list_print_all(\"TOP_FREE\", \"GAME\", count=50, lang=\"en\", country=\"us\")\n```\n\n**What you get:** Top-ranked apps with titles, developers, ratings, install counts, prices, and screenshots.\n\n📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/list_example.json)**\n\n---\n\n### 6. [Similar Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SIMILAR_METHODS.md) - Find Related Apps\nDiscover apps similar to a reference app for competitive analysis and market research.\n\n📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SIMILAR_METHODS.md)**\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper(http_client=\"curl_cffi\")\n\n# Print similar apps as JSON\nscraper.similar_print_all(\"com.whatsapp\", count=30, lang=\"en\", country=\"us\")\n```\n\n**What you get:** List of similar/competitor apps with titles, developers, ratings, and pricing information.\n\n📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/similar_example.json)**\n\n---\n\n### 7. 
[Suggest Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SUGGEST_METHODS.md) - Get Search Suggestions\nGet autocomplete suggestions and keyword ideas for ASO and market research.\n\n📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SUGGEST_METHODS.md)**\n\n```python\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper(http_client=\"curl_cffi\")\n\n# Print search suggestions as JSON\nscraper.suggest_print_all(\"photo editor\", count=10, lang=\"en\", country=\"us\")\n```\n\n**What you get:** List of popular search terms related to your keyword for ASO and keyword research.\n\n📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/suggest_example.json)**\n\n---\n\n## 🤝 Contributing\n\n1. Fork the repository\n2. Create your feature branch\n3. Make your changes\n4. Test thoroughly\n5. Submit a pull request\n\n## 📄 License\n\nThis project is licensed under the MIT License.\n\n## 🙏 Contributors\n\nSpecial thanks to developers who helped improve this library:\n\n- [@PhamDinhThienVu](https://github.com/PhamDinhThienVu) - Reported reviews pagination bug (v1.0.6)\n- [@elmissouri16](https://github.com/elmissouri16) - Suggested multiple HTTP clients support (v1.0.3)\n\n---\n\n**Happy Analyzing! 🚀**"
  },
  {
    "path": "SECURITY.md",
    "content": "# Security Policy\n\n## Supported Versions\n\n| Version | Supported          |\n| ------- | ------------------ |\n| 1.0.x   | :white_check_mark: |\n\n## Reporting a Vulnerability\n\nIf you discover a security vulnerability, please report it privately using the \"Report a vulnerability\" button on the repository's Security tab (GitHub Security Advisories).\n\nPlease do not report security vulnerabilities through public GitHub issues.\n\nWe will respond to security reports within 48 hours."
  },
  {
    "path": "build_docs.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Build documentation using Sphinx.\"\"\"\n\nimport os\nimport sys\nimport subprocess\nfrom pathlib import Path\n\ndef install_docs_requirements():\n    \"\"\"Install documentation requirements.\"\"\"\n    print(\"[INFO] Installing documentation requirements...\")\n    try:\n        subprocess.run([\n            sys.executable, \"-m\", \"pip\", \"install\", \n            \"-r\", \"docs/requirements.txt\"\n        ], check=True)\n        print(\"[OK] Documentation requirements installed!\")\n    except subprocess.CalledProcessError as e:\n        print(f\"[ERROR] Failed to install requirements: {e}\")\n        return False\n    return True\n\ndef build_html_docs():\n    \"\"\"Build HTML documentation.\"\"\"\n    print(\"[INFO] Building HTML documentation...\")\n    \n    docs_dir = Path(\"docs\")\n    build_dir = docs_dir / \"_build\" / \"html\"\n    \n    try:\n        # Change to docs directory\n        os.chdir(docs_dir)\n        \n        # Build documentation\n        subprocess.run([\n            \"sphinx-build\", \"-b\", \"html\", \".\", \"_build/html\"\n        ], check=True)\n        \n        print(\"[OK] Documentation built successfully!\")\n        print(f\"[INFO] Open: {Path('_build/html/index.html').absolute()}\")\n        return True\n        \n    except subprocess.CalledProcessError as e:\n        print(f\"[ERROR] Failed to build documentation: {e}\")\n        return False\n    except FileNotFoundError:\n        print(\"[ERROR] Sphinx not found. 
Run 'pip install -r docs/requirements.txt' and retry.\")\n        return False\n\ndef main():\n    \"\"\"Main function to build documentation.\"\"\"\n    print(\"=== GPlay Scraper Documentation Builder ===\\n\")\n    \n    # Install requirements\n    if not install_docs_requirements():\n        return 1\n    \n    # Build documentation\n    if not build_html_docs():\n        return 1\n    \n    print(\"\\n[SUCCESS] Documentation build complete!\")\n    print(\"[INFO] Open docs/_build/html/index.html in your browser\")\n    return 0\n\nif __name__ == \"__main__\":\n    sys.exit(main())"
  },
  {
    "path": "docs/README.md",
    "content": "# GPlay Scraper Documentation\n\n## Build Documentation\n\n```bash\ncd docs\npip install -r requirements.txt\nsphinx-build -b html . _build/html\n```\n\n## Open Documentation\n\n```bash\nstart _build/html/index.html  # Windows\nopen _build/html/index.html   # Mac\nxdg-open _build/html/index.html  # Linux\n```\n\n## Live Reload\n\n```bash\npip install sphinx-autobuild\nsphinx-autobuild . _build/html\n```\n"
  },
  {
    "path": "docs/api/app.rst",
    "content": "App Methods\n===========\n\nExtract comprehensive app data with 57 fields including install analytics, ratings, pricing, and developer information.\n\nOverview\n--------\n\nThe App methods provide access to detailed information about any Google Play Store app. All methods return data in JSON format with 57 fields.\n\nAvailable Methods\n-----------------\n\n* ``app_analyze()`` - Get all 57 fields\n* ``app_get_field()`` - Get single field value\n* ``app_get_fields()`` - Get multiple field values\n* ``app_print_field()`` - Print single field\n* ``app_print_fields()`` - Print multiple fields\n* ``app_print_all()`` - Print all data as JSON\n\napp_analyze()\n-------------\n\nGet complete app data with all 57 fields.\n\n**Signature:**\n\n.. code-block:: python\n\n   app_analyze(app_id, lang='en', country='', assets=None)\n\n**Parameters:**\n\n* ``app_id`` (str, required) - Google Play app ID (e.g., 'com.whatsapp')\n* ``lang`` (str, optional) - Language code (default: 'en')\n* ``country`` (str, optional) - Country code (default: '')\n* ``assets`` (str, optional) - Image size: 'SMALL', 'MEDIUM', 'LARGE', 'ORIGINAL'\n\n**Returns:**\n\nDictionary with 57 fields\n\n**Example:**\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   app = scraper.app_analyze('com.whatsapp')\n   \n   print(app['title'])              # WhatsApp Messenger\n   print(app['developer'])          # WhatsApp LLC\n   print(app['score'])              # 4.2189474\n   print(app['realInstalls'])       # 10931553905\n   print(app['dailyInstalls'])      # 1815870\n   print(app['publisherCountry'])   # United States\n\n**Multi-language Example:**\n\n.. 
code-block:: python\n\n   # Get app data in Spanish\n   app_es = scraper.app_analyze('com.whatsapp', lang='es')\n   print(app_es['description'])  # Description in Spanish\n   \n   # Get app data for UK region\n   app_uk = scraper.app_analyze('com.whatsapp', country='gb')\n\n**Image Size Example:**\n\n.. code-block:: python\n\n   # Get large images\n   app = scraper.app_analyze('com.whatsapp', assets='LARGE')\n   print(app['icon'])  # URL with =w2048 parameter\n\napp_get_field()\n---------------\n\nGet a single field value from app data.\n\n**Signature:**\n\n.. code-block:: python\n\n   app_get_field(app_id, field, lang='en', country='', assets=None)\n\n**Parameters:**\n\n* ``app_id`` (str, required) - Google Play app ID\n* ``field`` (str, required) - Field name to retrieve\n* ``lang`` (str, optional) - Language code\n* ``country`` (str, optional) - Country code\n* ``assets`` (str, optional) - Image size\n\n**Returns:**\n\nValue of the requested field (type depends on field)\n\n**Example:**\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   # Get title\n   title = scraper.app_get_field('com.whatsapp', 'title')\n   print(title)  # \"WhatsApp Messenger\"\n   \n   # Get score\n   score = scraper.app_get_field('com.whatsapp', 'score')\n   print(score)  # 4.2189474\n   \n   # Get daily installs\n   daily = scraper.app_get_field('com.whatsapp', 'dailyInstalls')\n   print(f\"{daily:,}\")  # 1,815,870\n\napp_get_fields()\n----------------\n\nGet multiple field values from app data.\n\n**Signature:**\n\n.. code-block:: python\n\n   app_get_fields(app_id, fields, lang='en', country='', assets=None)\n\n**Parameters:**\n\n* ``app_id`` (str, required) - Google Play app ID\n* ``fields`` (list, required) - List of field names\n* ``lang`` (str, optional) - Language code\n* ``country`` (str, optional) - Country code\n* ``assets`` (str, optional) - Image size\n\n**Returns:**\n\nDictionary with requested fields and values\n\n**Example:**\n\n.. 
code-block:: python\n\n   scraper = GPlayScraper()\n   \n   fields = ['title', 'developer', 'score', 'realInstalls', 'dailyInstalls']\n   data = scraper.app_get_fields('com.whatsapp', fields)\n   \n   print(data)\n   # {\n   #     'title': 'WhatsApp Messenger',\n   #     'developer': 'WhatsApp LLC',\n   #     'score': 4.2189474,\n   #     'realInstalls': 10931553905,\n   #     'dailyInstalls': 1815870\n   # }\n\napp_print_field()\n-----------------\n\nPrint a single field value to console.\n\n**Signature:**\n\n.. code-block:: python\n\n   app_print_field(app_id, field, lang='en', country='', assets=None)\n\n**Returns:**\n\nNone (prints to console)\n\n**Example:**\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   scraper.app_print_field('com.whatsapp', 'title')\n   # Output: title: WhatsApp Messenger\n\napp_print_fields()\n------------------\n\nPrint multiple field values to console.\n\n**Signature:**\n\n.. code-block:: python\n\n   app_print_fields(app_id, fields, lang='en', country='', assets=None)\n\n**Returns:**\n\nNone (prints to console)\n\n**Example:**\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   fields = ['title', 'score', 'dailyInstalls']\n   scraper.app_print_fields('com.whatsapp', fields)\n\napp_print_all()\n---------------\n\nPrint all 57 fields as formatted JSON to console.\n\n**Signature:**\n\n.. code-block:: python\n\n   app_print_all(app_id, lang='en', country='', assets=None)\n\n**Returns:**\n\nNone (prints to console)\n\n**Example:**\n\n.. 
code-block:: python\n\n   scraper = GPlayScraper()\n   scraper.app_print_all('com.whatsapp')\n   # Outputs all 57 fields as formatted JSON\n\nAvailable Fields\n----------------\n\nThe App methods return 57 fields organized into categories:\n\n**Basic Information (5 fields)**\n\n* ``appId`` - Package identifier\n* ``title`` - App name\n* ``summary`` - Short description\n* ``description`` - Full description\n* ``appUrl`` - Play Store URL\n\n**Category (4 fields)**\n\n* ``genre`` - Primary category\n* ``genreId`` - Category ID\n* ``categories`` - All categories (array)\n* ``available`` - Availability status (boolean)\n\n**Release & Updates (3 fields)**\n\n* ``released`` - Release date\n* ``appAgeDays`` - Days since release (computed)\n* ``lastUpdated`` - Last update date\n\n**Media (5 fields)**\n\n* ``icon`` - App icon URL\n* ``headerImage`` - Header image URL\n* ``screenshots`` - Screenshot URLs (array)\n* ``video`` - Promotional video URL\n* ``videoImage`` - Video thumbnail URL\n\n**Install Statistics (9 fields)**\n\n* ``installs`` - Install range string\n* ``minInstalls`` - Minimum installs\n* ``realInstalls`` - Exact install count\n* ``dailyInstalls`` - Average daily installs (computed)\n* ``minDailyInstalls`` - Min daily installs (computed)\n* ``realDailyInstalls`` - Real daily installs (computed)\n* ``monthlyInstalls`` - Average monthly installs (computed)\n* ``minMonthlyInstalls`` - Min monthly installs (computed)\n* ``realMonthlyInstalls`` - Real monthly installs (computed)\n\n**Ratings (4 fields)**\n\n* ``score`` - Average rating (0-5)\n* ``ratings`` - Total number of ratings\n* ``reviews`` - Total number of reviews\n* ``histogram`` - Rating distribution [1★, 2★, 3★, 4★, 5★]\n\n**Ads (2 fields)**\n\n* ``adSupported`` - Supports ads (boolean)\n* ``containsAds`` - Contains ads (boolean)\n\n**Technical (7 fields)**\n\n* ``version`` - Current version\n* ``androidVersion`` - Minimum Android version\n* ``maxAndroidApi`` - Maximum Android API level\n
* ``minAndroidApi`` - Minimum Android API level\n* ``appBundle`` - Bundle identifier\n* ``contentRating`` - Age rating\n* ``contentRatingDescription`` - Rating description\n\n**Updates (1 field)**\n\n* ``whatsNew`` - Changelog (array)\n\n**Privacy (2 fields)**\n\n* ``permissions`` - Required permissions (object)\n* ``dataSafety`` - Data safety info (array)\n\n**Pricing (7 fields)**\n\n* ``price`` - App price\n* ``currency`` - Currency code\n* ``free`` - Is free (boolean)\n* ``offersIAP`` - Has in-app purchases (boolean)\n* ``inAppProductPrice`` - IAP price range\n* ``sale`` - On sale (boolean)\n* ``originalPrice`` - Original price if on sale\n\n**Developer (8 fields)**\n\n* ``developer`` - Developer name\n* ``developerId`` - Developer ID\n* ``developerEmail`` - Contact email\n* ``developerWebsite`` - Website URL\n* ``developerAddress`` - Physical address\n* ``developerPhone`` - Contact phone\n* ``publisherCountry`` - Publisher country (computed)\n* ``privacyPolicy`` - Privacy policy URL\n\nCommon Use Cases\n----------------\n\nApp Analytics\n^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   app = scraper.app_analyze('com.whatsapp')\n   \n   print(f\"App: {app['title']}\")\n   print(f\"Total Installs: {app['realInstalls']:,}\")\n   print(f\"Daily Installs: {app['dailyInstalls']:,}\")\n   print(f\"Monthly Installs: {app['monthlyInstalls']:,}\")\n   print(f\"Age: {app['appAgeDays']} days\")\n   print(f\"Rating: {app['score']}/5 ({app['ratings']:,} ratings)\")\n\nCompetitor Comparison\n^^^^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   apps = ['com.whatsapp', 'org.telegram.messenger', 'org.thoughtcrime.securesms']\n   \n   for app_id in apps:\n       app = scraper.app_analyze(app_id)\n       print(f\"{app['title']}\")\n       print(f\"  Installs: {app['realInstalls']:,}\")\n       print(f\"  Rating: {app['score']}/5\")\n       print(f\"  Daily Growth: {app['dailyInstalls']:,}\")\n\nMarket Research\n^^^^^^^^^^^^^^^\n\n
.. code-block:: python\n\n   # Get key metrics for analysis\n   fields = [\n       'title', 'developer', 'score', 'ratings',\n       'realInstalls', 'dailyInstalls', 'free', 'price'\n   ]\n   \n   apps = ['com.app1', 'com.app2', 'com.app3']\n   \n   for app_id in apps:\n       data = scraper.app_get_fields(app_id, fields)\n       print(data)\n\nSee Also\n--------\n\n* :doc:`../fields` - Complete field reference\n* :doc:`../examples` - More practical examples\n* :doc:`../configuration` - Configuration options\n"
  },
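The app.rst entry above describes ``dailyInstalls`` and ``monthlyInstalls`` as values computed from the raw install count and the app's age. A minimal sketch of that arithmetic over a mocked ``app_analyze()`` result (the field names come from the docs; ``derive_install_metrics`` and its 30-day-month rounding are assumptions, not library code):

```python
# Sketch of the computed install metrics described in app.rst.
# Field names (realInstalls, appAgeDays, ...) are from the docs;
# this helper and its rounding are assumptions, not library internals.

def derive_install_metrics(app: dict) -> dict:
    """Derive daily/monthly install averages from raw app fields."""
    age_days = max(app["appAgeDays"], 1)  # guard against a same-day release
    daily = app["realInstalls"] // age_days
    return {
        "dailyInstalls": daily,
        "monthlyInstalls": daily * 30,  # assumes a 30-day month
    }

# Mocked subset of an app_analyze() result
mock_app = {"realInstalls": 9_000_000, "appAgeDays": 300}
metrics = derive_install_metrics(mock_app)
print(metrics)  # {'dailyInstalls': 30000, 'monthlyInstalls': 900000}
```

The same shape works on a live result, since ``app_analyze()`` returns a plain dictionary.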
  {
    "path": "docs/api/developer.rst",
    "content": "Developer Methods\n=================\n\nGet all apps from a specific developer or company.\n\nOverview\n--------\n\nThe Developer methods allow you to find all apps published by a developer, returning 11 fields per app.\n\nAvailable Methods\n-----------------\n\n* ``developer_analyze()`` - Get all developer apps\n* ``developer_get_field()`` - Get single field from all apps\n* ``developer_get_fields()`` - Get multiple fields from all apps\n* ``developer_print_field()`` - Print single field\n* ``developer_print_fields()`` - Print multiple fields\n* ``developer_print_all()`` - Print all apps as JSON\n\ndeveloper_analyze()\n-------------------\n\nGet all apps from a developer.\n\n**Signature:**\n\n.. code-block:: python\n\n   developer_analyze(dev_id, count=100, lang='en', country='')\n\n**Parameters:**\n\n* ``dev_id`` (str, required) - Developer name or numeric ID\n* ``count`` (int, optional) - Number of apps (default: 100)\n* ``lang`` (str, optional) - Language code\n* ``country`` (str, optional) - Country code\n\n**Returns:**\n\nList of dictionaries, each with 11 fields\n\n**Example:**\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   \n   # Using developer name\n   apps = scraper.developer_analyze('Google LLC')\n   \n   for app in apps:\n       print(f\"{app['title']}\")\n       print(f\"  Developer: {app['developer']}\")\n       print(f\"  Rating: {app['score']}/5\")\n       print(f\"  Free: {app['free']}\")\n\n**Using Numeric Developer ID:**\n\n.. 
code-block:: python\n\n   # Using numeric ID\n   apps = scraper.developer_analyze('5700313618786177705')\n\nAvailable Fields\n----------------\n\nEach app contains 11 fields:\n\n* ``appId`` - Package identifier\n* ``title`` - App name\n* ``url`` - Play Store URL\n* ``icon`` - App icon URL\n* ``developer`` - Developer name\n* ``description`` - App description\n* ``score`` - Average rating (0-5)\n* ``scoreText`` - Rating as text\n* ``price`` - App price\n* ``free`` - Is free (boolean)\n* ``currency`` - Currency code\n\nCommon Use Cases\n----------------\n\nPortfolio Analysis\n^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   apps = scraper.developer_analyze('Google LLC')\n   \n   # Calculate average rating\n   avg_rating = sum(app['score'] for app in apps) / len(apps)\n   \n   # Count free vs paid\n   free_count = sum(1 for app in apps if app['free'])\n   paid_count = len(apps) - free_count\n   \n   print(f\"Total apps: {len(apps)}\")\n   print(f\"Average rating: {avg_rating:.2f}/5\")\n   print(f\"Free apps: {free_count}, Paid apps: {paid_count}\")\n\nCompetitive Analysis\n^^^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   developers = ['Google LLC', 'Microsoft Corporation', 'Meta Platforms, Inc.']\n   \n   for dev in developers:\n       apps = scraper.developer_analyze(dev)\n       high_rated = [app for app in apps if app['score'] >= 4.5]\n       \n       print(f\"\\n{dev}:\")\n       print(f\"  Total apps: {len(apps)}\")\n       print(f\"  High-rated apps (4.5+): {len(high_rated)}\")\n\nSee Also\n--------\n\n* :doc:`app` - Get detailed app information\n* :doc:`similar` - Find similar apps\n"
  },
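The portfolio-analysis loop in developer.rst above can be condensed into a reusable helper. This is a sketch over mocked ``developer_analyze()`` output using the documented 11-field shape; ``summarize_portfolio`` is a hypothetical name, not part of the library:

```python
# Hypothetical helper summarizing a developer_analyze() result list
# (fields 'score' and 'free' per developer.rst).

def summarize_portfolio(apps: list) -> dict:
    """Count free vs paid apps and compute the average rating."""
    free = sum(1 for a in apps if a["free"])
    avg = sum(a["score"] for a in apps) / len(apps) if apps else 0.0
    return {
        "total": len(apps),
        "free": free,
        "paid": len(apps) - free,
        "avg_score": round(avg, 2),
    }

# Mocked developer_analyze() output
mock_apps = [
    {"title": "App A", "score": 4.5, "free": True},
    {"title": "App B", "score": 3.9, "free": False},
]
print(summarize_portfolio(mock_apps))
# {'total': 2, 'free': 1, 'paid': 1, 'avg_score': 4.2}
```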
  {
    "path": "docs/api/list.rst",
    "content": "List Methods\n============\n\nGet top charts (top free, top paid, top grossing apps).\n\nOverview\n--------\n\nThe List methods access Play Store top charts, returning 14 fields per app.\n\nlist_analyze()\n--------------\n\n**Signature:**\n\n.. code-block:: python\n\n   list_analyze(collection='TOP_FREE', category='APPLICATION', \n                count=100, lang='en', country='')\n\n**Parameters:**\n\n* ``collection`` - 'TOP_FREE', 'TOP_PAID', 'TOP_GROSSING'\n* ``category`` - App category (default: 'APPLICATION')\n* ``count`` - Number of apps (default: 100, max: ~500)\n\n**Example:**\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   \n   # Top free games\n   top_free = scraper.list_analyze('TOP_FREE', category='GAME', count=100)\n   \n   for i, app in enumerate(top_free, 1):\n       print(f\"{i}. {app['title']} - {app['score']}/5\")\n\nAvailable Fields\n----------------\n\n14 fields including: title, appId, url, icon, screenshots, developer, genre, installs, description, score, scoreText, price, free, currency\n\nCollections\n-----------\n\n* **TOP_FREE** - Top free apps\n* **TOP_PAID** - Top paid apps\n* **TOP_GROSSING** - Highest earning apps\n\nApp Categories (36)\n-------------------\n\n* ``APPLICATION`` - All apps (default)\n* ``ANDROID_WEAR`` - Android Wear apps\n* ``ART_AND_DESIGN`` - Art & design\n* ``AUTO_AND_VEHICLES`` - Auto & vehicles\n* ``BEAUTY`` - Beauty\n* ``BOOKS_AND_REFERENCE`` - Books & reference\n* ``BUSINESS`` - Business\n* ``COMICS`` - Comics\n* ``COMMUNICATION`` - Communication\n* ``DATING`` - Dating\n* ``EDUCATION`` - Education\n* ``ENTERTAINMENT`` - Entertainment\n* ``EVENTS`` - Events\n* ``FINANCE`` - Finance\n* ``FOOD_AND_DRINK`` - Food & drink\n* ``HEALTH_AND_FITNESS`` - Health & fitness\n* ``HOUSE_AND_HOME`` - House & home\n* ``LIBRARIES_AND_DEMO`` - Libraries & demo\n* ``LIFESTYLE`` - Lifestyle\n* ``MAPS_AND_NAVIGATION`` - Maps & navigation\n* ``MEDICAL`` - 
Medical\n* ``MUSIC_AND_AUDIO`` - Music & audio\n* ``NEWS_AND_MAGAZINES`` - News & magazines\n* ``PARENTING`` - Parenting\n* ``PERSONALIZATION`` - Personalization\n* ``PHOTOGRAPHY`` - Photography\n* ``PRODUCTIVITY`` - Productivity\n* ``SHOPPING`` - Shopping\n* ``SOCIAL`` - Social\n* ``SPORTS`` - Sports\n* ``TOOLS`` - Tools\n* ``TRAVEL_AND_LOCAL`` - Travel & local\n* ``VIDEO_PLAYERS`` - Video players & editors\n* ``WATCH_FACE`` - Watch faces\n* ``WEATHER`` - Weather\n* ``FAMILY`` - Family\n\nGame Categories (18)\n---------------------\n\n* ``GAME`` - All games\n* ``GAME_ACTION`` - Action games\n* ``GAME_ADVENTURE`` - Adventure games\n* ``GAME_ARCADE`` - Arcade games\n* ``GAME_BOARD`` - Board games\n* ``GAME_CARD`` - Card games\n* ``GAME_CASINO`` - Casino games\n* ``GAME_CASUAL`` - Casual games\n* ``GAME_EDUCATIONAL`` - Educational games\n* ``GAME_MUSIC`` - Music games\n* ``GAME_PUZZLE`` - Puzzle games\n* ``GAME_RACING`` - Racing games\n* ``GAME_ROLE_PLAYING`` - Role playing games\n* ``GAME_SIMULATION`` - Simulation games\n* ``GAME_SPORTS`` - Sports games\n* ``GAME_STRATEGY`` - Strategy games\n* ``GAME_TRIVIA`` - Trivia games\n* ``GAME_WORD`` - Word games\n\nExample\n-------\n\n.. code-block:: python\n\n   # Top paid communication apps\n   top_paid = scraper.list_analyze('TOP_PAID', \n       category='COMMUNICATION', \n       count=50)\n"
  },
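Since ``list_analyze()`` in list.rst above only accepts the three documented collection constants, a small guard can catch typos before a network round-trip is wasted. ``validate_collection`` is a hypothetical helper written for illustration, not a library function:

```python
# Guard against typo'd collection constants before calling list_analyze().
# validate_collection is a hypothetical helper, not part of the library.

VALID_COLLECTIONS = {"TOP_FREE", "TOP_PAID", "TOP_GROSSING"}

def validate_collection(collection: str) -> str:
    if collection not in VALID_COLLECTIONS:
        raise ValueError(
            f"unknown collection {collection!r}; "
            f"expected one of {sorted(VALID_COLLECTIONS)}"
        )
    return collection

print(validate_collection("TOP_FREE"))  # TOP_FREE
try:
    validate_collection("TOP_FRE")  # typo
except ValueError as exc:
    print(exc)
```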
  {
    "path": "docs/api/reviews.rst",
    "content": "Reviews Methods\n===============\n\nExtract user reviews and ratings with sorting options.\n\nOverview\n--------\n\nThe Reviews methods allow you to get user reviews with 8 fields per review, including content, rating, and metadata.\n\nAvailable Methods\n-----------------\n\n* ``reviews_analyze()`` - Get all reviews\n* ``reviews_get_field()`` - Get single field from all reviews\n* ``reviews_get_fields()`` - Get multiple fields from all reviews\n* ``reviews_print_field()`` - Print single field\n* ``reviews_print_fields()`` - Print multiple fields\n* ``reviews_print_all()`` - Print all reviews as JSON\n\nreviews_analyze()\n-----------------\n\nGet user reviews with all details.\n\n**Signature:**\n\n.. code-block:: python\n\n   reviews_analyze(app_id, count=100, sort='NEWEST', lang='en', country='')\n\n**Parameters:**\n\n* ``app_id`` (str, required) - Google Play app ID\n* ``count`` (int, optional) - Number of reviews (default: 100, max: ~1000+)\n* ``sort`` (str, optional) - Sort order: 'NEWEST', 'RELEVANT', 'RATING' (default: 'NEWEST')\n* ``lang`` (str, optional) - Language code\n* ``country`` (str, optional) - Country code\n\n**Returns:**\n\nList of dictionaries, each with 8 fields\n\n**Example:**\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   \n   # Get newest reviews\n   reviews = scraper.reviews_analyze('com.whatsapp', count=50, sort='NEWEST')\n   \n   for review in reviews:\n       print(f\"{review['userName']}: {review['score']}/5\")\n       print(f\"Date: {review['at']}\")\n       print(f\"Content: {review['content'][:100]}...\")\n       print(f\"Helpful: {review['thumbsUpCount']} people\")\n\nSort Options\n^^^^^^^^^^^^\n\n* **NEWEST** - Most recent reviews first (default)\n* **RELEVANT** - Most helpful/relevant reviews\n* **RATING** - Sorted by rating score\n\n.. 
code-block:: python\n\n   # Get most relevant reviews\n   reviews = scraper.reviews_analyze('com.whatsapp', \n       count=100, \n       sort='RELEVANT')\n   \n   # Get reviews sorted by rating\n   reviews = scraper.reviews_analyze('com.whatsapp', \n       count=100, \n       sort='RATING')\n\nreviews_get_field()\n-------------------\n\nGet single field from all reviews.\n\n**Example:**\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   # Get all usernames\n   usernames = scraper.reviews_get_field('com.whatsapp', 'userName', count=10)\n   \n   # Get all scores\n   scores = scraper.reviews_get_field('com.whatsapp', 'score', count=100)\n\nreviews_get_fields()\n--------------------\n\nGet multiple fields from all reviews.\n\n**Example:**\n\n.. code-block:: python\n\n   fields = ['userName', 'score', 'content', 'thumbsUpCount']\n   reviews = scraper.reviews_get_fields('com.whatsapp', fields, count=50)\n\nAvailable Fields\n----------------\n\nEach review contains 8 fields:\n\n* ``reviewId`` - Unique review identifier\n* ``userName`` - Reviewer's name\n* ``userImage`` - Reviewer's profile image URL\n* ``content`` - Review text\n* ``score`` - Rating (1-5)\n* ``thumbsUpCount`` - Number of helpful votes\n* ``at`` - Review date (ISO format)\n* ``appVersion`` - App version reviewed\n\nCommon Use Cases\n----------------\n\nMonitor Negative Reviews\n^^^^^^^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   reviews = scraper.reviews_analyze('com.myapp', count=100, sort='NEWEST')\n   negative = [r for r in reviews if r['score'] <= 2]\n   \n   print(f\"Found {len(negative)} negative reviews\")\n   for review in negative:\n       print(f\"{review['userName']}: {review['score']}/5\")\n       print(f\"  {review['content']}\")\n\nSentiment Analysis\n^^^^^^^^^^^^^^^^^^\n\n.. 
code-block:: python\n\n   reviews = scraper.reviews_analyze('com.whatsapp', count=500)\n   \n   # Count by rating\n   rating_counts = {1: 0, 2: 0, 3: 0, 4: 0, 5: 0}\n   for review in reviews:\n       rating_counts[review['score']] += 1\n   \n   print(\"Rating distribution:\")\n   for rating, count in rating_counts.items():\n       print(f\"  {rating}★: {count} reviews\")\n\nFind Helpful Reviews\n^^^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   reviews = scraper.reviews_analyze('com.whatsapp', \n       count=100, \n       sort='RELEVANT')\n   \n   # Get most helpful\n   most_helpful = sorted(reviews, \n       key=lambda x: x['thumbsUpCount'], \n       reverse=True)[:10]\n   \n   for review in most_helpful:\n       print(f\"{review['thumbsUpCount']} helpful votes\")\n       print(f\"  {review['content'][:100]}...\")\n\nTrack Version Feedback\n^^^^^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   reviews = scraper.reviews_analyze('com.myapp', count=200)\n   \n   # Group by app version\n   by_version = {}\n   for review in reviews:\n       version = review['appVersion']\n       if version not in by_version:\n           by_version[version] = []\n       by_version[version].append(review)\n   \n   # Analyze each version\n   for version, version_reviews in by_version.items():\n       avg_score = sum(r['score'] for r in version_reviews) / len(version_reviews)\n       print(f\"Version {version}: {avg_score:.2f}/5 ({len(version_reviews)} reviews)\")\n\nSee Also\n--------\n\n* :doc:`app` - Get app information\n* :doc:`../examples` - More examples\n"
  },
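The sentiment-analysis example in reviews.rst above can be generalized: build the rating distribution and the share of 1-2 star reviews in one pass over ``reviews_analyze()`` output. A sketch over mocked review dicts (only the documented ``score`` field is used; ``review_stats`` is a hypothetical helper):

```python
# Hypothetical one-pass review summary over reviews_analyze() output.
from collections import Counter

def review_stats(reviews: list) -> dict:
    """Rating distribution plus the fraction of 1-2 star reviews."""
    dist = Counter(r["score"] for r in reviews)
    negative = sum(n for score, n in dist.items() if score <= 2)
    return {
        "distribution": dict(dist),
        "negative_share": negative / len(reviews) if reviews else 0.0,
    }

# Mocked subset of reviews_analyze() output
mock_reviews = [{"score": 5}, {"score": 1}, {"score": 4}, {"score": 2}]
print(review_stats(mock_reviews))
```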
  {
    "path": "docs/api/search.rst",
    "content": "Search Methods\n==============\n\nSearch for apps on Google Play Store by keyword and get results with 11 fields per app.\n\nOverview\n--------\n\nThe Search methods allow you to find apps by keyword, similar to using the Play Store search bar. Results include basic app information with 11 fields.\n\nAvailable Methods\n-----------------\n\n* ``search_analyze()`` - Search and get all results\n* ``search_get_field()`` - Get single field from all results\n* ``search_get_fields()`` - Get multiple fields from all results\n* ``search_print_field()`` - Print single field\n* ``search_print_fields()`` - Print multiple fields\n* ``search_print_all()`` - Print all results as JSON\n\nsearch_analyze()\n----------------\n\nSearch for apps and get complete results.\n\n**Signature:**\n\n.. code-block:: python\n\n   search_analyze(query, count=100, lang='en', country='')\n\n**Parameters:**\n\n* ``query`` (str, required) - Search query string\n* ``count`` (int, optional) - Number of results to return (default: 100, max: ~250)\n* ``lang`` (str, optional) - Language code (default: 'en')\n* ``country`` (str, optional) - Country code (default: '')\n\n**Returns:**\n\nList of dictionaries, each with 11 fields\n\n**Example:**\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   results = scraper.search_analyze('messaging', count=10)\n   \n   for app in results:\n       print(f\"{app['title']} by {app['developer']}\")\n       print(f\"  Rating: {app['score']}/5\")\n       print(f\"  Free: {app['free']}\")\n       print(f\"  URL: {app['url']}\")\n\n**Pagination Example:**\n\n.. code-block:: python\n\n   # Get top 100 results\n   results = scraper.search_analyze('games', count=100)\n   print(f\"Found {len(results)} games\")\n\nsearch_get_field()\n------------------\n\nGet a single field from all search results.\n\n**Signature:**\n\n.. 
code-block:: python\n\n   search_get_field(query, field, count=100, lang='en', country='')\n\n**Parameters:**\n\n* ``query`` (str, required) - Search query\n* ``field`` (str, required) - Field name to retrieve\n* ``count`` (int, optional) - Number of results\n* ``lang`` (str, optional) - Language code\n* ``country`` (str, optional) - Country code\n\n**Returns:**\n\nList of field values from all results\n\n**Example:**\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   # Get all titles\n   titles = scraper.search_get_field('messaging', 'title', count=10)\n   print(titles)\n   # ['WhatsApp Messenger', 'Telegram', 'Signal', ...]\n   \n   # Get all ratings\n   scores = scraper.search_get_field('messaging', 'score', count=10)\n   print(scores)\n   # [4.2, 4.3, 4.5, ...]\n\nsearch_get_fields()\n-------------------\n\nGet multiple fields from all search results.\n\n**Signature:**\n\n.. code-block:: python\n\n   search_get_fields(query, fields, count=100, lang='en', country='')\n\n**Parameters:**\n\n* ``query`` (str, required) - Search query\n* ``fields`` (list, required) - List of field names\n* ``count`` (int, optional) - Number of results\n* ``lang`` (str, optional) - Language code\n* ``country`` (str, optional) - Country code\n\n**Returns:**\n\nList of dictionaries with requested fields\n\n**Example:**\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   fields = ['title', 'developer', 'score', 'free']\n   results = scraper.search_get_fields('games', fields, count=5)\n   \n   for app in results:\n       print(f\"{app['title']} - {app['score']}/5 - Free: {app['free']}\")\n\nsearch_print_field()\n--------------------\n\nPrint single field from all search results.\n\n**Signature:**\n\n.. code-block:: python\n\n   search_print_field(query, field, count=100, lang='en', country='')\n\n**Returns:**\n\nNone (prints to console)\n\nsearch_print_fields()\n---------------------\n\nPrint multiple fields from all search results.\n\n**Signature:**\n\n.. 
code-block:: python\n\n   search_print_fields(query, fields, count=100, lang='en', country='')\n\n**Returns:**\n\nNone (prints to console)\n\nsearch_print_all()\n------------------\n\nPrint all search results as JSON.\n\n**Signature:**\n\n.. code-block:: python\n\n   search_print_all(query, count=100, lang='en', country='')\n\n**Returns:**\n\nNone (prints to console)\n\nAvailable Fields\n----------------\n\nEach search result contains 11 fields:\n\n* ``title`` - App name\n* ``appId`` - Package identifier\n* ``url`` - Play Store URL\n* ``icon`` - App icon URL\n* ``developer`` - Developer name\n* ``summary`` - Short description\n* ``score`` - Average rating (0-5)\n* ``scoreText`` - Rating as text (e.g., \"4.2★\")\n* ``price`` - App price\n* ``free`` - Is free (boolean)\n* ``currency`` - Currency code\n\nCommon Use Cases\n----------------\n\nFind Apps by Category\n^^^^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   # Find fitness apps\n   fitness_apps = scraper.search_analyze('fitness tracker', count=20)\n   \n   # Filter by rating\n   high_rated = [app for app in fitness_apps if app['score'] >= 4.5]\n   \n   for app in high_rated:\n       print(f\"{app['title']}: {app['score']}/5\")\n\nMarket Research\n^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   # Research competitors\n   results = scraper.search_analyze('photo editor', count=50)\n   \n   # Analyze free vs paid\n   free_apps = [app for app in results if app['free']]\n   paid_apps = [app for app in results if not app['free']]\n   \n   print(f\"Free apps: {len(free_apps)}\")\n   print(f\"Paid apps: {len(paid_apps)}\")\n   \n   # Average ratings\n   avg_free = sum(app['score'] for app in free_apps) / len(free_apps)\n   avg_paid = sum(app['score'] for app in paid_apps) / len(paid_apps)\n   \n   print(f\"Average free app rating: {avg_free:.2f}\")\n   print(f\"Average paid app rating: {avg_paid:.2f}\")\n\nDiscovery\n^^^^^^^^^\n\n.. 
code-block:: python\n\n   # Discover trending apps\n   keywords = ['ai', 'chatbot', 'productivity']\n   \n   for keyword in keywords:\n       results = scraper.search_analyze(keyword, count=5)\n       print(f\"\\nTop {keyword} apps:\")\n       for app in results:\n           print(f\"  {app['title']} - {app['score']}/5\")\n\nMulti-Language Search\n^^^^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   # Search in Spanish\n   results_es = scraper.search_analyze('juegos', lang='es', count=10)\n   \n   # Search in French\n   results_fr = scraper.search_analyze('jeux', lang='fr', count=10)\n   \n   # Regional search (UK)\n   results_uk = scraper.search_analyze('games', country='gb', count=10)\n\nSee Also\n--------\n\n* :doc:`app` - Get detailed app information\n* :doc:`../examples` - More practical examples\n* :doc:`../configuration` - Configuration options\n"
  },
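The market-research snippet in search.rst above divides by ``len(free_apps)`` and ``len(paid_apps)``, which raises ``ZeroDivisionError`` when a query returns only free or only paid apps. A guarded sketch over mocked ``search_analyze()`` results (``free_vs_paid`` and ``avg_score`` are hypothetical helpers):

```python
# Guarded free-vs-paid comparison over search_analyze() output.
# free_vs_paid / avg_score are hypothetical helpers, not library code.

def avg_score(apps: list):
    """Average 'score' over a result list, or None if the list is empty."""
    return round(sum(a["score"] for a in apps) / len(apps), 2) if apps else None

def free_vs_paid(results: list) -> dict:
    free = [a for a in results if a["free"]]
    paid = [a for a in results if not a["free"]]
    return {"free_avg": avg_score(free), "paid_avg": avg_score(paid)}

# Mocked subset of search_analyze() output
mock_results = [
    {"title": "Editor A", "score": 4.0, "free": True},
    {"title": "Editor B", "score": 4.6, "free": False},
]
print(free_vs_paid(mock_results))  # {'free_avg': 4.0, 'paid_avg': 4.6}
print(free_vs_paid([]))            # {'free_avg': None, 'paid_avg': None}
```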
  {
    "path": "docs/api/similar.rst",
    "content": "Similar Methods\n===============\n\nFind apps similar to a given app (competitors or alternatives).\n\nOverview\n--------\n\nThe Similar methods help you discover competitor apps or alternatives, returning 11 fields per app.\n\nsimilar_analyze()\n-----------------\n\n**Signature:**\n\n.. code-block:: python\n\n   similar_analyze(app_id, count=100, lang='en', country='')\n\n**Example:**\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   similar = scraper.similar_analyze('com.whatsapp', count=20)\n   \n   for app in similar:\n       print(f\"{app['title']} - {app['score']}/5\")\n\nAvailable Fields\n----------------\n\nSame 11 fields as Developer Methods.\n\nCommon Use Cases\n----------------\n\nCompetitor Analysis\n^^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   # Find competitors\n   my_app = scraper.app_analyze('com.myapp')\n   competitors = scraper.similar_analyze('com.myapp', count=10)\n   \n   print(f\"My App: {my_app['score']}/5\")\n   print(\"\\nCompetitors:\")\n   for comp in competitors:\n       print(f\"  {comp['title']}: {comp['score']}/5\")\n"
  },
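The competitor-analysis example in similar.rst above prints raw scores; ranking competitors by their rating gap against your own app makes the comparison more direct. A sketch over mocked ``similar_analyze()`` output (``rating_gaps`` is a hypothetical helper; only the documented ``title`` and ``score`` fields are used):

```python
# Hypothetical ranking of similar_analyze() results by rating gap.

def rating_gaps(my_score: float, competitors: list) -> list:
    """(title, score delta) pairs, largest advantage over us first."""
    gaps = [(c["title"], round(c["score"] - my_score, 2)) for c in competitors]
    return sorted(gaps, key=lambda pair: pair[1], reverse=True)

# Mocked subset of similar_analyze() output
mock_comps = [
    {"title": "Telegram", "score": 4.3},
    {"title": "Signal", "score": 4.5},
]
print(rating_gaps(4.2, mock_comps))  # [('Signal', 0.3), ('Telegram', 0.1)]
```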
  {
    "path": "docs/api/suggest.rst",
    "content": "Suggest Methods\n===============\n\nGet search suggestions and autocomplete.\n\nOverview\n--------\n\nThe Suggest methods provide search suggestions, returning lists of strings.\n\nsuggest_analyze()\n-----------------\n\nGet search suggestions for a term.\n\n**Signature:**\n\n.. code-block:: python\n\n   suggest_analyze(term, count=5, lang='en', country='')\n\n**Example:**\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   suggestions = scraper.suggest_analyze('mine', count=10)\n   \n   print(suggestions)\n   # ['minecraft', 'minesweeper', 'mineplex', ...]\n\nsuggest_nested()\n----------------\n\nGet nested suggestions (suggestions for suggestions).\n\n**Signature:**\n\n.. code-block:: python\n\n   suggest_nested(term, count=5, lang='en', country='')\n\n**Returns:**\n\nDictionary mapping terms to their suggestion lists\n\n**Example:**\n\n.. code-block:: python\n\n   nested = scraper.suggest_nested('game', count=5)\n   \n   for term, suggestions in nested.items():\n       print(f\"\\n{term}:\")\n       for suggestion in suggestions:\n           print(f\"  - {suggestion}\")\n\nCommon Use Cases\n----------------\n\nKeyword Research\n^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   # Find popular search terms\n   keywords = ['fitness', 'photo', 'music']\n   \n   for keyword in keywords:\n       suggestions = scraper.suggest_analyze(keyword, count=10)\n       print(f\"\\n{keyword} suggestions:\")\n       for s in suggestions:\n           print(f\"  {s}\")\n\nTrend Discovery\n^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   # Discover trending topics\n   term = \"ai\"\n   suggestions = scraper.suggest_analyze(term, count=20)\n   \n   print(f\"Popular '{term}' searches:\")\n   for suggestion in suggestions:\n       print(f\"  {suggestion}\")\n"
  },
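For the keyword-research use case in suggest.rst above, the ``{term: [suggestions]}`` dictionary returned by ``suggest_nested()`` is often easier to work with as one deduplicated, order-preserving list. A sketch over mocked output (``flatten_suggestions`` is a hypothetical helper):

```python
# Hypothetical flattener for suggest_nested() output.

def flatten_suggestions(nested: dict) -> list:
    """Flatten {term: [suggestions]} into a deduplicated ordered list."""
    seen, flat = set(), []
    for term, suggestions in nested.items():
        for s in [term, *suggestions]:
            if s not in seen:
                seen.add(s)
                flat.append(s)
    return flat

# Mocked suggest_nested() output
mock_nested = {"game": ["games", "game of thrones"], "games": ["games for kids"]}
print(flatten_suggestions(mock_nested))
# ['game', 'games', 'game of thrones', 'games for kids']
```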
  {
    "path": "docs/conf.py",
    "content": "project = 'GPlay Scraper'\ncopyright = '2025, GPlay Scraper'\nauthor = 'GPlay Scraper'\nrelease = '1.0.5'\n\nextensions = [\n    'sphinx.ext.autodoc',\n    'sphinx.ext.napoleon',\n    'sphinx.ext.viewcode',\n]\n\ntry:\n    import sphinx_copybutton\n    extensions.append('sphinx_copybutton')\nexcept ImportError:\n    pass\n\ntemplates_path = ['_templates']\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\nlanguage = 'en'\n\nhtml_theme = 'sphinx_book_theme'\n\nhtml_theme_options = {\n    \"repository_url\": \"https://github.com/mohammedcha/gplay-scraper\",\n    \"repository_branch\": \"main\",\n    \"use_repository_button\": True,\n    \"use_issues_button\": True,\n    \"use_edit_page_button\": False,\n    \"use_download_button\": True,\n    \"home_page_in_toc\": True,\n    \"show_navbar_depth\": 2,\n    \"show_toc_level\": 2,\n    \"navigation_with_keys\": True,\n    \"collapse_navbar\": False,\n    \"logo\": {\n        \"text\": \"GPlay Scraper\",\n    },\n    \"extra_footer\": \"<p>Built with ❤️ using Sphinx Book Theme</p>\",\n    \"search_bar_text\": \"Search documentation...\",\n    \"icon_links\": [\n        {\n            \"name\": \"GitHub\",\n            \"url\": \"https://github.com/mohammedcha/gplay-scraper\",\n            \"icon\": \"fa-brands fa-github\",\n            \"type\": \"fontawesome\",\n        },\n        {\n            \"name\": \"PyPI\",\n            \"url\": \"https://pypi.org/project/gplay-scraper/\",\n            \"icon\": \"fa-brands fa-python\",\n            \"type\": \"fontawesome\",\n        },\n    ],\n}\n\npygments_style = 'monokai'\npygments_dark_style = 'monokai'\n\nhtml_title = \"GPlay Scraper\"\nhtml_static_path = ['_static']\n\nhtml_logo = \"_static/logo.png\"\nhtml_favicon = \"_static/favicon.png\"\n\nif 'sphinx_copybutton' in extensions:\n    copybutton_prompt_text = r\">>> |\\.\\.\\. |\\$ |In \\[\\d*\\]: | {2,5}\\.\\.\\.: | {5,8}: \"\n    copybutton_prompt_is_regexp = True\n"
  },
  {
    "path": "docs/configuration.rst",
    "content": "Configuration\n=============\n\nAdvanced configuration options for GPlay Scraper.\n\nHTTP Client Selection\n---------------------\n\nChoose from 7 HTTP clients with automatic fallback.\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   # Default (requests)\n   scraper = GPlayScraper()\n   \n   # Use curl_cffi (best for bypassing blocks)\n   scraper = GPlayScraper(http_client='curl_cffi')\n   \n   # Use tls_client (advanced TLS fingerprinting)\n   scraper = GPlayScraper(http_client='tls_client')\n   \n   # Use httpx (modern HTTP/2)\n   scraper = GPlayScraper(http_client='httpx')\n\nAvailable HTTP Clients\n^^^^^^^^^^^^^^^^^^^^^^\n\n1. **requests** - Default, most compatible\n2. **curl_cffi** - Best for anti-bot bypass (Chrome 110 impersonation)\n3. **tls_client** - Advanced TLS fingerprinting (Chrome 112)\n4. **urllib3** - Low-level HTTP with connection pooling\n5. **cloudscraper** - Cloudflare bypass\n6. **aiohttp** - Async HTTP support\n7. **httpx** - Modern HTTP/2 client\n\nThe library automatically falls back to the next available client if one fails.\n\nRate Limiting\n-------------\n\nConfigure delay between requests to avoid rate limits.\n\n.. code-block:: python\n\n   from gplay_scraper import Config\n   \n   # Set rate limit delay (seconds)\n   Config.RATE_LIMIT_DELAY = 2.0  # 2 seconds between requests\n   \n   # Or use default (1.0 second)\n   Config.RATE_LIMIT_DELAY = 1.0\n\nLanguage & Region\n-----------------\n\nSet default language and country for all requests.\n\n.. 
code-block:: python\n\n   from gplay_scraper import Config\n   \n   # Set default language\n   Config.DEFAULT_LANGUAGE = 'es'  # Spanish\n   \n   # Set default country\n   Config.DEFAULT_COUNTRY = 'mx'  # Mexico\n\nCommon Language Codes\n^^^^^^^^^^^^^^^^^^^^^^\n\n* ``en`` - English\n* ``es`` - Spanish  \n* ``fr`` - French\n* ``de`` - German\n* ``it`` - Italian\n* ``pt`` - Portuguese\n* ``ja`` - Japanese\n* ``ko`` - Korean\n* ``zh`` - Chinese\n* ``ru`` - Russian\n* ``ar`` - Arabic\n* ``hi`` - Hindi\n\nCommon Country Codes\n^^^^^^^^^^^^^^^^^^^^^\n\n* ``us`` - United States\n* ``gb`` - United Kingdom\n* ``ca`` - Canada\n* ``de`` - Germany\n* ``fr`` - France\n* ``es`` - Spain\n* ``mx`` - Mexico\n* ``jp`` - Japan\n* ``kr`` - South Korea\n* ``cn`` - China\n* ``in`` - India\n* ``br`` - Brazil\n\nRequest Timeout\n---------------\n\nConfigure HTTP request timeout.\n\n.. code-block:: python\n\n   from gplay_scraper import Config\n   \n   # Set timeout (seconds)\n   Config.DEFAULT_TIMEOUT = 30  # 30 seconds\n   \n   # Or use default (10 seconds)\n   Config.DEFAULT_TIMEOUT = 10\n\nRetry Configuration\n-------------------\n\nConfigure automatic retry behavior.\n\n.. code-block:: python\n\n   from gplay_scraper import Config\n   \n   # Set number of retries\n   Config.DEFAULT_RETRY_COUNT = 5  # Try 5 times\n   \n   # Or use default (3 retries)\n   Config.DEFAULT_RETRY_COUNT = 3\n\nImage Asset Sizes\n-----------------\n\nConfigure default image size for all requests.\n\n.. 
code-block:: python\n\n   scraper = GPlayScraper()\n   \n   # Small images (512px)\n   app = scraper.app_analyze('com.whatsapp', assets='SMALL')\n   \n   # Medium images (1024px) - default\n   app = scraper.app_analyze('com.whatsapp', assets='MEDIUM')\n   \n   # Large images (2048px)\n   app = scraper.app_analyze('com.whatsapp', assets='LARGE')\n   \n   # Original size (maximum)\n   app = scraper.app_analyze('com.whatsapp', assets='ORIGINAL')\n\nPer-Request Configuration\n--------------------------\n\nOverride defaults for specific requests.\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   # Per-request language\n   app_es = scraper.app_analyze('com.whatsapp', lang='es')\n   app_fr = scraper.app_analyze('com.whatsapp', lang='fr')\n   \n   # Per-request country\n   app_uk = scraper.app_analyze('com.whatsapp', country='gb')\n   app_de = scraper.app_analyze('com.whatsapp', country='de')\n   \n   # Per-request images\n   app_large = scraper.app_analyze('com.whatsapp', assets='LARGE')\n\nLogging\n-------\n\nConfigure logging level for debugging.\n\n.. code-block:: python\n\n   import logging\n   \n   # Enable debug logging\n   logging.basicConfig(level=logging.DEBUG)\n   \n   # Enable info logging\n   logging.basicConfig(level=logging.INFO)\n   \n   # Disable logging\n   logging.basicConfig(level=logging.ERROR)\n\nComplete Configuration Example\n-------------------------------\n\n.. 
code-block:: python\n\n   from gplay_scraper import GPlayScraper, Config\n   import logging\n   \n   # Configure library\n   Config.RATE_LIMIT_DELAY = 2.0\n   Config.DEFAULT_LANGUAGE = 'en'\n   Config.DEFAULT_COUNTRY = 'us'\n   Config.DEFAULT_TIMEOUT = 30\n   Config.DEFAULT_RETRY_COUNT = 5\n   \n   # Configure logging\n   logging.basicConfig(\n       level=logging.INFO,\n       format='%(asctime)s - %(levelname)s - %(message)s'\n   )\n   \n   # Initialize with preferred HTTP client\n   scraper = GPlayScraper(http_client='curl_cffi')\n   \n   # Use the scraper\n   app = scraper.app_analyze('com.whatsapp')\n\nEnvironment Variables\n---------------------\n\nYou can also use environment variables for configuration.\n\n.. code-block:: bash\n\n   # Set in your shell or .env file\n   export GPLAY_HTTP_CLIENT=curl_cffi\n   export GPLAY_RATE_LIMIT=2.0\n   export GPLAY_LANGUAGE=en\n   export GPLAY_COUNTRY=us\n\nBest Practices\n--------------\n\n1. **Use curl_cffi or tls_client** for better success rates\n2. **Set rate limiting** to 2+ seconds for large batch operations\n3. **Use field filtering** to reduce data transfer and parsing time\n4. **Enable logging** during development, disable in production\n5. **Handle exceptions** gracefully for production use\n6. **Reuse scraper instance** instead of creating new ones\n\nSee Also\n--------\n\n* :doc:`error_handling` - Error handling guide\n* :doc:`examples` - Practical examples\n"
  },
  {
    "path": "docs/error_handling.rst",
    "content": "Error Handling\n==============\n\nGuide to handling errors and exceptions in GPlay Scraper.\n\nException Types\n---------------\n\nGPlay Scraper provides 6 custom exception types:\n\nAppNotFoundError\n^^^^^^^^^^^^^^^^\n\nRaised when an app, developer, or resource is not found.\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n   from gplay_scraper.exceptions import AppNotFoundError\n\n   scraper = GPlayScraper()\n   \n   try:\n       app = scraper.app_analyze('invalid.app.id')\n   except AppNotFoundError as e:\n       print(f\"App not found: {e}\")\n\nNetworkError\n^^^^^^^^^^^^\n\nRaised when network or HTTP errors occur.\n\n.. code-block:: python\n\n   from gplay_scraper.exceptions import NetworkError\n\n   try:\n       app = scraper.app_analyze('com.whatsapp')\n   except NetworkError as e:\n       print(f\"Network error: {e}\")\n\nDataParsingError\n^^^^^^^^^^^^^^^^\n\nRaised when JSON parsing or data extraction fails.\n\n.. code-block:: python\n\n   from gplay_scraper.exceptions import DataParsingError\n\n   try:\n       app = scraper.app_analyze('com.whatsapp')\n   except DataParsingError as e:\n       print(f\"Parsing error: {e}\")\n\nRateLimitError\n^^^^^^^^^^^^^^\n\nRaised when rate limits are exceeded.\n\n.. code-block:: python\n\n   from gplay_scraper.exceptions import RateLimitError\n\n   try:\n       # Making too many requests too quickly\n       for i in range(1000):\n           app = scraper.app_analyze(f'com.app{i}')\n   except RateLimitError as e:\n       print(f\"Rate limited: {e}\")\n\nInvalidAppIdError\n^^^^^^^^^^^^^^^^^\n\nRaised when input validation fails.\n\n.. code-block:: python\n\n   from gplay_scraper.exceptions import InvalidAppIdError\n\n   try:\n       app = scraper.app_analyze('')  # Empty app ID\n   except InvalidAppIdError as e:\n       print(f\"Invalid input: {e}\")\n\nGPlayScraperError\n^^^^^^^^^^^^^^^^^\n\nBase exception for all library errors.\n\n.. 
code-block:: python\n\n   from gplay_scraper.exceptions import GPlayScraperError\n\n   try:\n       app = scraper.app_analyze('com.whatsapp')\n   except GPlayScraperError as e:\n       print(f\"Library error: {e}\")\n\nComprehensive Error Handling\n-----------------------------\n\nHandle all common exceptions.\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n   from gplay_scraper.exceptions import (\n       AppNotFoundError,\n       NetworkError,\n       DataParsingError,\n       RateLimitError,\n       InvalidAppIdError,\n       GPlayScraperError\n   )\n\n   scraper = GPlayScraper()\n\n   try:\n       app = scraper.app_analyze('com.whatsapp')\n   except InvalidAppIdError as e:\n       print(f\"Invalid app ID: {e}\")\n   except AppNotFoundError as e:\n       print(f\"App not found: {e}\")\n   except NetworkError as e:\n       print(f\"Network error: {e}\")\n   except DataParsingError as e:\n       print(f\"Parsing error: {e}\")\n   except RateLimitError as e:\n       print(f\"Rate limited: {e}\")\n   except GPlayScraperError as e:\n       print(f\"Unknown library error: {e}\")\n\nAutomatic Retries\n-----------------\n\nThe library automatically retries failed requests with HTTP client fallback.\n\n.. code-block:: python\n\n   from gplay_scraper import Config\n   \n   # Configure retries\n   Config.DEFAULT_RETRY_COUNT = 5  # Try 5 times\n   \n   scraper = GPlayScraper()\n   app = scraper.app_analyze('com.whatsapp')\n   # Automatically retries up to 5 times if it fails\n   # Switches HTTP clients between retries\n\nGraceful Degradation\n--------------------\n\nMethods return None or empty lists on failure instead of crashing.\n\n.. 
code-block:: python\n\n   scraper = GPlayScraper()\n   \n   # Returns None if app not found (after retries)\n   app = scraper.app_analyze('invalid.app')\n   if app is None:\n       print(\"App not found\")\n   \n   # Returns empty list if search fails\n   results = scraper.search_analyze('invalid query')\n   if not results:\n       print(\"No results found\")\n\nProduction Error Handling\n--------------------------\n\nExample for production use.\n\n.. code-block:: python\n\n   import logging\n   from gplay_scraper import GPlayScraper, Config\n   from gplay_scraper.exceptions import GPlayScraperError\n\n   # Configure logging\n   logging.basicConfig(\n       level=logging.ERROR,\n       format='%(asctime)s - %(levelname)s - %(message)s',\n       filename='gplay_scraper.log'\n   )\n\n   logger = logging.getLogger(__name__)\n\n   # Configure retries\n   Config.DEFAULT_RETRY_COUNT = 5\n\n   scraper = GPlayScraper(http_client='curl_cffi')\n\n   def safe_analyze_app(app_id):\n       \"\"\"Safely analyze an app with error handling.\"\"\"\n       try:\n           return scraper.app_analyze(app_id)\n       except GPlayScraperError as e:\n           logger.error(f\"Failed to analyze {app_id}: {e}\")\n           return None\n\n   # Use in production\n   app_ids = ['com.app1', 'com.app2', 'com.app3']\n   results = []\n   \n   for app_id in app_ids:\n       app = safe_analyze_app(app_id)\n       if app:\n           results.append(app)\n   \n   print(f\"Successfully analyzed {len(results)}/{len(app_ids)} apps\")\n\nBatch Processing with Error Handling\n-------------------------------------\n\n.. 
code-block:: python\n\n   from gplay_scraper import GPlayScraper\n   from gplay_scraper.exceptions import GPlayScraperError\n\n   scraper = GPlayScraper()\n   \n   app_ids = ['com.app1', 'com.app2', 'invalid.app', 'com.app3']\n   \n   successful = []\n   failed = []\n   \n   for app_id in app_ids:\n       try:\n           app = scraper.app_analyze(app_id)\n           if app:\n               successful.append(app)\n       except GPlayScraperError as e:\n           failed.append((app_id, str(e)))\n   \n   print(f\"Successful: {len(successful)}\")\n   print(f\"Failed: {len(failed)}\")\n   \n   if failed:\n       print(\"\\nFailed apps:\")\n       for app_id, error in failed:\n           print(f\"  {app_id}: {error}\")\n\nBest Practices\n--------------\n\n1. **Always handle exceptions** in production code\n2. **Use specific exceptions** when possible instead of catching all\n3. **Log errors** for debugging and monitoring\n4. **Implement retries** for transient failures\n5. **Use graceful degradation** - continue processing even if some items fail\n6. **Monitor error rates** to detect issues early\n\nSee Also\n--------\n\n* :doc:`configuration` - Configuration options\n* :doc:`examples` - More practical examples\n"
  },
  {
    "path": "docs/examples.rst",
    "content": "Examples\n========\n\nPractical examples of using GPlay Scraper for common tasks.\n\nApp Analytics Dashboard\n-----------------------\n\nTrack key metrics for your app.\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   app = scraper.app_analyze('com.myapp')\n   \n   print(\"=== App Analytics Dashboard ===\")\n   print(f\"App: {app['title']}\")\n   print(f\"Developer: {app['developer']}\")\n   print(f\"Rating: {app['score']}/5 ({app['ratings']:,} ratings)\")\n   print(f\"\\nInstall Metrics:\")\n   print(f\"  Total Installs: {app['realInstalls']:,}\")\n   print(f\"  Daily Installs: {app['dailyInstalls']:,}\")\n   print(f\"  Monthly Installs: {app['monthlyInstalls']:,}\")\n   print(f\"  App Age: {app['appAgeDays']} days\")\n   print(f\"\\nRating Distribution:\")\n   hist = app['histogram']\n   for i, count in enumerate(hist, 1):\n       print(f\"  {i}★: {count:,}\")\n\nMarket Research\n---------------\n\nAnalyze a market segment.\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   # Search for fitness apps\n   results = scraper.search_analyze('fitness tracker', count=100)\n   \n   # Filter by rating\n   high_rated = [app for app in results if app['score'] >= 4.5]\n   free_apps = [app for app in high_rated if app['free']]\n   \n   print(f\"Total fitness tracker apps: {len(results)}\")\n   print(f\"High-rated (4.5+): {len(high_rated)}\")\n   print(f\"High-rated & Free: {len(free_apps)}\")\n   \n   print(\"\\nTop 5 Free High-Rated Apps:\")\n   for app in free_apps[:5]:\n       print(f\"  {app['title']}: {app['score']}/5\")\n\nCompetitor Monitoring\n---------------------\n\nTrack your competitors.\n\n.. 
code-block:: python\n\n   scraper = GPlayScraper()\n   \n   competitors = ['com.competitor1', 'com.competitor2', 'com.competitor3']\n   \n   print(\"Competitor Analysis\")\n   print(\"-\" * 60)\n   \n   for app_id in competitors:\n       app = scraper.app_analyze(app_id)\n       reviews = scraper.reviews_analyze(app_id, count=100, sort='NEWEST')\n       \n       avg_recent_rating = sum(r['score'] for r in reviews) / len(reviews)\n       \n       print(f\"\\n{app['title']}\")\n       print(f\"  Overall Rating: {app['score']}/5\")\n       print(f\"  Recent Rating: {avg_recent_rating:.2f}/5\")\n       print(f\"  Daily Installs: {app['dailyInstalls']:,}\")\n       print(f\"  Total Installs: {app['realInstalls']:,}\")\n\nReview Sentiment Analysis\n--------------------------\n\nAnalyze user feedback.\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   reviews = scraper.reviews_analyze('com.myapp', count=500)\n   \n   # Categorize by rating\n   positive = [r for r in reviews if r['score'] >= 4]\n   neutral = [r for r in reviews if r['score'] == 3]\n   negative = [r for r in reviews if r['score'] <= 2]\n   \n   print(\"Review Sentiment Analysis\")\n   print(f\"Total Reviews: {len(reviews)}\")\n   print(f\"Positive (4-5★): {len(positive)} ({len(positive)/len(reviews)*100:.1f}%)\")\n   print(f\"Neutral (3★): {len(neutral)} ({len(neutral)/len(reviews)*100:.1f}%)\")\n   print(f\"Negative (1-2★): {len(negative)} ({len(negative)/len(reviews)*100:.1f}%)\")\n   \n   # Show recent negative reviews\n   print(\"\\nRecent Negative Reviews:\")\n   for review in negative[:5]:\n       print(f\"  {review['userName']}: {review['score']}/5\")\n       print(f\"    {review['content'][:100]}...\")\n\nTop Charts Tracking\n-------------------\n\nMonitor top charts positions.\n\n.. 
code-block:: python\n\n   scraper = GPlayScraper()\n   \n   # Track top free games\n   top_games = scraper.list_analyze('TOP_FREE', category='GAME', count=50)\n   \n   # Find your app's position\n   my_app_id = 'com.mygame'\n   position = next((i for i, app in enumerate(top_games, 1) \n                    if app['appId'] == my_app_id), None)\n   \n   if position:\n       print(f\"Your game is ranked #{position} in top free games!\")\n   else:\n       print(\"Your game is not in top 50\")\n   \n   # Show top 10\n   print(\"\\nTop 10 Free Games:\")\n   for i, app in enumerate(top_games[:10], 1):\n       print(f\"{i}. {app['title']} - {app['score']}/5\")\n\nDeveloper Portfolio Overview\n-----------------------------\n\nAnalyze a developer's entire portfolio.\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   apps = scraper.developer_analyze('Google LLC')\n   \n   # Calculate metrics\n   avg_rating = sum(app['score'] for app in apps) / len(apps)\n   free_count = sum(1 for app in apps if app['free'])\n   high_rated = [app for app in apps if app['score'] >= 4.5]\n   \n   print(f\"Developer: Google LLC\")\n   print(f\"Total Apps: {len(apps)}\")\n   print(f\"Average Rating: {avg_rating:.2f}/5\")\n   print(f\"Free Apps: {free_count}/{len(apps)}\")\n   print(f\"High-Rated Apps (4.5+): {len(high_rated)}\")\n   \n   # Best rated apps\n   sorted_apps = sorted(apps, key=lambda x: x['score'], reverse=True)\n   print(\"\\nTop 5 Highest Rated:\")\n   for app in sorted_apps[:5]:\n       print(f\"  {app['title']}: {app['score']}/5\")\n\nBatch Data Collection\n----------------------\n\nCollect data for multiple apps efficiently.\n\n.. 
code-block:: python\n\n   import json\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   \n   app_ids = [\n       'com.whatsapp',\n       'org.telegram.messenger',\n       'org.thoughtcrime.securesms',\n       'com.discord'\n   ]\n   \n   results = []\n   for app_id in app_ids:\n       # Get only the fields you need\n       fields = ['title', 'developer', 'score', 'realInstalls', 'dailyInstalls']\n       data = scraper.app_get_fields(app_id, fields)\n       results.append(data)\n   \n   # Save to JSON\n   with open('messaging_apps.json', 'w') as f:\n       json.dump(results, f, indent=2)\n   \n   print(f\"Collected data for {len(results)} apps\")\n\nMulti-Language Content\n----------------------\n\nGet localized app information.\n\n.. code-block:: python\n\n   scraper = GPlayScraper()\n   \n   languages = {\n       'en': 'English',\n       'es': 'Spanish',\n       'fr': 'French',\n       'de': 'German',\n       'ja': 'Japanese'\n   }\n   \n   for lang_code, lang_name in languages.items():\n       app = scraper.app_analyze('com.whatsapp', lang=lang_code)\n       print(f\"\\n{lang_name} ({lang_code}):\")\n       print(f\"  Title: {app['title']}\")\n       print(f\"  Summary: {app['summary']}\")\n\nTrend Discovery\n---------------\n\nDiscover trending apps in a category.\n\n.. 
code-block:: python\n\n   scraper = GPlayScraper()\n   \n   # Get top free apps\n   top_free = scraper.list_analyze('TOP_FREE', category='PRODUCTIVITY', count=100)\n   \n   # Filter for new apps (less than 180 days old)\n   new_apps = [app for app in top_free if 'New' in app.get('description', '')]\n   \n   # Get apps with high install velocity\n   trending = []\n   for app in top_free[:20]:\n       full_data = scraper.app_analyze(app['appId'])\n       if full_data['dailyInstalls'] > 10000:\n           trending.append(full_data)\n   \n   print(\"Trending Productivity Apps:\")\n   for app in trending:\n       print(f\"  {app['title']}\")\n       print(f\"    Daily Installs: {app['dailyInstalls']:,}\")\n       print(f\"    Rating: {app['score']}/5\")\n\nSee Also\n--------\n\n* :doc:`quickstart` - Basic usage guide\n* :doc:`api/app` - Complete API reference\n* :doc:`configuration` - Configuration options\n"
  },
  {
    "path": "docs/fields.rst",
    "content": "Field Reference\n===============\n\nComplete reference of all 112 fields returned by GPlay Scraper.\n\nApp Fields (57 Fields)\n-----------------------\n\nBasic Information (5 fields)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* ``appId`` (string) - Package identifier (e.g., \"com.whatsapp\")\n* ``title`` (string) - App name\n* ``summary`` (string) - Short description\n* ``description`` (string) - Full description\n* ``appUrl`` (string) - Play Store URL\n\nCategory (4 fields)\n^^^^^^^^^^^^^^^^^^^\n\n* ``genre`` (string) - Primary category\n* ``genreId`` (string) - Category ID\n* ``categories`` (array) - All categories\n* ``available`` (boolean) - Availability status\n\nRelease & Updates (3 fields)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* ``released`` (string) - Release date (e.g., \"Oct 18, 2010\")\n* ``appAgeDays`` (integer) - Days since release (computed)\n* ``lastUpdated`` (string) - Last update date\n\nMedia (5 fields)\n^^^^^^^^^^^^^^^^\n\n* ``icon`` (string) - App icon URL\n* ``headerImage`` (string) - Header image URL\n* ``screenshots`` (array) - Screenshot URLs\n* ``video`` (string or null) - Promotional video URL\n* ``videoImage`` (string or null) - Video thumbnail URL\n\nInstall Statistics (10 fields)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* ``installs`` (string) - Install range (e.g., \"10,000,000,000+\")\n* ``minInstalls`` (integer) - Minimum installs\n* ``realInstalls`` (integer) - Exact install count\n* ``dailyInstalls`` (integer) - Average daily installs (computed)\n* ``minDailyInstalls`` (integer) - Min daily installs (computed)\n* ``realDailyInstalls`` (integer) - Real daily installs (computed)\n* ``monthlyInstalls`` (integer) - Average monthly installs (computed)\n* ``minMonthlyInstalls`` (integer) - Min monthly installs (computed)\n* ``realMonthlyInstalls`` (integer) - Real monthly installs (computed)\n\nRatings (4 fields)\n^^^^^^^^^^^^^^^^^^\n\n* ``score`` (float) - Average rating (0-5)\n* ``ratings`` (integer) - Total ratings count\n* ``reviews`` 
(integer) - Total reviews count\n* ``histogram`` (array) - Rating distribution [1★, 2★, 3★, 4★, 5★]\n\nAds (2 fields)\n^^^^^^^^^^^^^^\n\n* ``adSupported`` (boolean) - Supports ads\n* ``containsAds`` (boolean) - Contains ads\n\nTechnical (7 fields)\n^^^^^^^^^^^^^^^^^^^^\n\n* ``version`` (string) - Current version\n* ``androidVersion`` (string) - Minimum Android version\n* ``maxAndroidApi`` (integer) - Maximum Android API\n* ``minAndroidApi`` (string or integer) - Minimum Android API\n* ``appBundle`` (string) - Bundle identifier\n* ``contentRating`` (string) - Age rating\n* ``contentRatingDescription`` (string) - Rating description\n\nUpdates (1 field)\n^^^^^^^^^^^^^^^^^\n\n* ``whatsNew`` (array) - Changelog entries\n\nPrivacy (2 fields)\n^^^^^^^^^^^^^^^^^^\n\n* ``permissions`` (object) - Required permissions\n* ``dataSafety`` (array) - Data safety information\n\nPricing (7 fields)\n^^^^^^^^^^^^^^^^^^\n\n* ``price`` (number) - App price\n* ``currency`` (string) - Currency code\n* ``free`` (boolean) - Is free\n* ``offersIAP`` (boolean) - Has in-app purchases\n* ``inAppProductPrice`` (string or null) - IAP price range\n* ``sale`` (boolean) - On sale\n* ``originalPrice`` (number or null) - Original price if on sale\n\nDeveloper (8 fields)\n^^^^^^^^^^^^^^^^^^^^\n\n* ``developer`` (string) - Developer name\n* ``developerId`` (string) - Developer ID\n* ``developerEmail`` (string) - Contact email\n* ``developerWebsite`` (string) - Website URL\n* ``developerAddress`` (string) - Physical address\n* ``developerPhone`` (string or null) - Contact phone\n* ``publisherCountry`` (string) - Publisher country (computed)\n* ``privacyPolicy`` (string) - Privacy policy URL\n\nSearch Fields (11 Fields)\n--------------------------\n\n* ``title`` - App name\n* ``appId`` - Package identifier\n* ``url`` - Play Store URL\n* ``icon`` - App icon URL\n* ``developer`` - Developer name\n* ``summary`` - Short description\n* ``score`` - Average rating (0-5)\n* ``scoreText`` - Rating as text\n* 
``price`` - App price\n* ``free`` - Is free (boolean)\n* ``currency`` - Currency code\n\nReview Fields (8 Fields)\n-------------------------\n\n* ``reviewId`` - Unique review ID\n* ``userName`` - Reviewer name\n* ``userImage`` - Reviewer image URL\n* ``content`` - Review text\n* ``score`` - Rating (1-5)\n* ``thumbsUpCount`` - Helpful votes\n* ``at`` - Review date (ISO format)\n* ``appVersion`` - App version reviewed\n\nDeveloper App Fields (11 Fields)\n---------------------------------\n\nSame as Search Fields.\n\nSimilar App Fields (11 Fields)\n-------------------------------\n\nSame as Search Fields.\n\nList (Top Charts) Fields (14 Fields)\n-------------------------------------\n\n* ``title`` - App name\n* ``appId`` - Package identifier\n* ``url`` - Play Store URL\n* ``icon`` - App icon URL\n* ``screenshots`` - Screenshot URLs (array)\n* ``developer`` - Developer name\n* ``genre`` - Category/genre\n* ``installs`` - Install count\n* ``description`` - App description\n* ``score`` - Average rating\n* ``scoreText`` - Rating as text\n* ``price`` - App price\n* ``free`` - Is free (boolean)\n* ``currency`` - Currency code\n\nComputed Fields\n---------------\n\nThe following 8 fields are computed at runtime:\n\n**appAgeDays**\n   Calculated as: ``(current_date - release_date).days``\n\n**dailyInstalls**\n   Calculated as: ``total_installs / days_since_release``\n\n**minDailyInstalls**\n   Calculated as: ``min_installs / days_since_release``\n\n**realDailyInstalls**\n   Calculated as: ``real_installs / days_since_release``\n\n**monthlyInstalls**\n   Calculated as: ``total_installs / (days_since_release / 30.44)``\n\n**minMonthlyInstalls**\n   Calculated as: ``min_installs / months_since_release``\n\n**realMonthlyInstalls**\n   Calculated as: ``real_installs / months_since_release``\n\n**publisherCountry**\n   Extracted from developer phone prefix or address\n\nField Count Summary\n-------------------\n\n* App: 57 fields\n* Search: 11 fields\n* Reviews: 8 fields\n* 
Developer: 11 fields\n* Similar: 11 fields\n* List: 14 fields\n* **Total: 112 unique fields**\n"
  },
  {
    "path": "docs/index.rst",
    "content": "GPlay Scraper Documentation\n============================\n\nA comprehensive Python library for scraping Google Play Store data with 40 methods across 7 categories.\n\nFeatures\n--------\n\n* **57 app fields** including install analytics\n* **40 methods** for different data types\n* **7 HTTP clients** with automatic fallback\n* **Multi-language** and **multi-region** support\n* **Automatic retries** and error handling\n* **Rate limiting** built-in\n\nQuick Example\n-------------\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   \n   # Get complete app data (57 fields)\n   app = scraper.app_analyze('com.whatsapp')\n   print(app['title'])              # WhatsApp Messenger\n   print(app['realInstalls'])       # 10931553905\n   print(app['dailyInstalls'])      # 1815870\n   print(app['publisherCountry'])   # United States\n\nTable of Contents\n-----------------\n\n.. toctree::\n   :maxdepth: 2\n   :caption: Getting Started\n\n   installation\n   quickstart\n   examples\n\n.. toctree::\n   :maxdepth: 2\n   :caption: API Reference\n\n   api/app\n   api/search\n   api/reviews\n   api/developer\n   api/similar\n   api/list\n   api/suggest\n\n.. toctree::\n   :maxdepth: 2\n   :caption: Advanced\n\n   configuration\n   error_handling\n   fields\n\nIndices and tables\n==================\n\n* :ref:`genindex`\n* :ref:`modindex`\n* :ref:`search`\n"
  },
  {
    "path": "docs/installation.rst",
    "content": "Installation\n============\n\nRequirements\n------------\n\n* Python 3.7 or higher\n* pip package manager\n\nBasic Installation\n------------------\n\nInstall using pip:\n\n.. code-block:: bash\n\n   pip install gplay-scraper\n\nThis installs the library with the default HTTP client (requests).\n\nOptional Dependencies\n---------------------\n\nFor better performance and anti-bot protection, install additional HTTP clients:\n\n.. code-block:: bash\n\n   # Install all optional HTTP clients\n   pip install httpx curl-cffi tls-client aiohttp cloudscraper\n\n   # Or install individually as needed\n   pip install httpx        # Modern HTTP/2 client\n   pip install curl-cffi    # Best for bypassing blocks\n   pip install tls-client   # Advanced TLS fingerprinting\n   pip install aiohttp      # Async support\n   pip install cloudscraper # Cloudflare bypass\n\nVerify Installation\n-------------------\n\nTest your installation:\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   scraper = GPlayScraper()\n   app = scraper.app_analyze('com.whatsapp')\n   print(f\"Successfully installed! Got: {app['title']}\")\n\nDevelopment Installation\n------------------------\n\nTo install from source:\n\n.. code-block:: bash\n\n   git clone https://github.com/yourusername/gplay-scraper.git\n   cd gplay-scraper\n   pip install -e .\n\nUpgrading\n---------\n\nTo upgrade to the latest version:\n\n.. code-block:: bash\n\n   pip install --upgrade gplay-scraper\n\nTroubleshooting\n---------------\n\nImportError\n^^^^^^^^^^^\n\nIf you get an ImportError, ensure the package is installed:\n\n.. code-block:: bash\n\n   pip show gplay-scraper\n\nHTTP Client Issues\n^^^^^^^^^^^^^^^^^^\n\nIf you encounter HTTP errors, try installing alternative clients:\n\n.. code-block:: bash\n\n   pip install curl-cffi\n\nThen specify the client:\n\n.. 
code-block:: python\n\n   from gplay_scraper import GPlayScraper\n   \n   scraper = GPlayScraper(http_client='curl_cffi')\n\nNext Steps\n----------\n\n* :doc:`quickstart` - Get started with basic usage\n* :doc:`examples` - See practical examples\n* :doc:`api/app` - Explore the API reference\n"
  },
  {
    "path": "docs/quickstart.rst",
    "content": "Quick Start Guide\n=================\n\nThis guide will get you started with GPlay Scraper in 5 minutes.\n\nBasic Usage\n-----------\n\nInitialize the Scraper\n^^^^^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n\n   # Initialize once\n   scraper = GPlayScraper()\n\nGet App Data\n^^^^^^^^^^^^\n\nExtract complete app information with 57 fields:\n\n.. code-block:: python\n\n   # Get all app data\n   app = scraper.app_analyze('com.whatsapp')\n   \n   # Access the data\n   print(app['title'])              # App name\n   print(app['developer'])          # Developer name\n   print(app['score'])              # Rating (0-5)\n   print(app['realInstalls'])       # Exact install count\n   print(app['dailyInstalls'])      # Average daily installs\n   print(app['publisherCountry'])   # Publisher country\n\nGet Specific Fields\n^^^^^^^^^^^^^^^^^^^\n\nIf you only need certain fields:\n\n.. code-block:: python\n\n   # Get single field\n   title = scraper.app_get_field('com.whatsapp', 'title')\n   \n   # Get multiple fields\n   fields = scraper.app_get_fields('com.whatsapp', \n       ['title', 'score', 'dailyInstalls'])\n\nSearch for Apps\n^^^^^^^^^^^^^^^\n\nSearch the Play Store by keyword:\n\n.. code-block:: python\n\n   # Search for apps\n   results = scraper.search_analyze('messaging', count=10)\n   \n   # Iterate through results\n   for app in results:\n       print(f\"{app['title']} by {app['developer']}\")\n       print(f\"  Rating: {app['score']}/5\")\n       print(f\"  Free: {app['free']}\")\n\nGet Reviews\n^^^^^^^^^^^\n\nExtract user reviews with ratings:\n\n.. 
code-block:: python\n\n   # Get newest reviews\n   reviews = scraper.reviews_analyze('com.whatsapp', \n       count=50, \n       sort='NEWEST')\n   \n   # Process reviews\n   for review in reviews:\n       print(f\"{review['userName']}: {review['score']}/5\")\n       print(f\"  {review['content'][:100]}...\")\n\nGet Developer Apps\n^^^^^^^^^^^^^^^^^^\n\nFind all apps from a developer:\n\n.. code-block:: python\n\n   # Get all apps from Google\n   apps = scraper.developer_analyze('Google LLC')\n   \n   for app in apps:\n       print(f\"{app['title']} - {app['score']}/5\")\n\nFind Similar Apps\n^^^^^^^^^^^^^^^^^\n\nDiscover competitor or similar apps:\n\n.. code-block:: python\n\n   # Find apps similar to WhatsApp\n   similar = scraper.similar_analyze('com.whatsapp', count=20)\n   \n   for app in similar:\n       print(f\"{app['title']} - {app['score']}/5\")\n\nGet Top Charts\n^^^^^^^^^^^^^^\n\nAccess top free, paid, or grossing apps:\n\n.. code-block:: python\n\n   # Top free games\n   top_free = scraper.list_analyze('TOP_FREE', \n       category='GAME', \n       count=100)\n   \n   # Top paid apps\n   top_paid = scraper.list_analyze('TOP_PAID', \n       category='APPLICATION', \n       count=50)\n   \n   # Top grossing\n   top_grossing = scraper.list_analyze('TOP_GROSSING', count=100)\n\nGet Search Suggestions\n^^^^^^^^^^^^^^^^^^^^^^\n\nGet autocomplete suggestions:\n\n.. code-block:: python\n\n   # Get suggestions\n   suggestions = scraper.suggest_analyze('mine', count=10)\n   print(suggestions)\n   # ['minecraft', 'minesweeper', 'mineplex', ...]\n\nMulti-Language Support\n----------------------\n\nGet data in different languages:\n\n.. code-block:: python\n\n   # Spanish\n   app = scraper.app_analyze('com.whatsapp', lang='es')\n   \n   # French\n   app = scraper.app_analyze('com.whatsapp', lang='fr')\n   \n   # Japanese\n   app = scraper.app_analyze('com.whatsapp', lang='ja')\n\nRegional Data\n-------------\n\nGet region-specific data:\n\n.. 
code-block:: python\n\n   # UK data\n   app = scraper.app_analyze('com.whatsapp', country='gb')\n   \n   # Germany\n   app = scraper.app_analyze('com.whatsapp', country='de')\n   \n   # Japan\n   app = scraper.app_analyze('com.whatsapp', country='jp')\n\nImage Sizes\n-----------\n\nControl image quality:\n\n.. code-block:: python\n\n   # Small images (512px)\n   app = scraper.app_analyze('com.whatsapp', assets='SMALL')\n   \n   # Medium images (1024px) - default\n   app = scraper.app_analyze('com.whatsapp', assets='MEDIUM')\n   \n   # Large images (2048px)\n   app = scraper.app_analyze('com.whatsapp', assets='LARGE')\n   \n   # Original size\n   app = scraper.app_analyze('com.whatsapp', assets='ORIGINAL')\n\nError Handling\n--------------\n\nHandle errors gracefully:\n\n.. code-block:: python\n\n   from gplay_scraper import GPlayScraper\n   from gplay_scraper.exceptions import (\n       AppNotFoundError,\n       InvalidAppIdError,\n       NetworkError\n   )\n\n   scraper = GPlayScraper()\n\n   try:\n       app = scraper.app_analyze('invalid.app.id')\n   except InvalidAppIdError:\n       print(\"Invalid app ID format\")\n   except AppNotFoundError:\n       print(\"App not found on Play Store\")\n   except NetworkError:\n       print(\"Network error occurred\")\n\nCommon Patterns\n---------------\n\nBatch Processing\n^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   app_ids = ['com.whatsapp', 'org.telegram.messenger', 'org.thoughtcrime.securesms']\n   \n   for app_id in app_ids:\n       app = scraper.app_analyze(app_id)\n       print(f\"{app['title']}: {app['realInstalls']:,} installs\")\n\nMarket Research\n^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   # Find highly-rated messaging apps\n   results = scraper.search_analyze('messaging', count=100)\n   high_rated = [app for app in results if app['score'] >= 4.5]\n   \n   for app in high_rated:\n       print(f\"{app['title']}: {app['score']}/5\")\n\nCompetitor Analysis\n^^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   # Analyze your app vs competitors\n   my_app = scraper.app_analyze('com.myapp')\n   competitors = scraper.similar_analyze('com.myapp', count=10)\n   \n   print(f\"My App: {my_app['score']}/5\")\n   print(\"\\nCompetitors:\")\n   for comp in competitors:\n       print(f\"  {comp['title']}: {comp['score']}/5\")\n\nReview Monitoring\n^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n   # Monitor negative reviews\n   reviews = scraper.reviews_analyze('com.myapp', \n       count=100, \n       sort='NEWEST')\n   \n   negative = [r for r in reviews if r['score'] <= 2]\n   \n   for review in negative:\n       print(f\"{review['userName']}: {review['score']}/5\")\n       print(f\"  {review['content']}\")\n\nNext Steps\n----------\n\n* :doc:`examples` - See more detailed examples\n* :doc:`api/app` - Complete API reference\n* :doc:`configuration` - Advanced configuration options\n* :doc:`fields` - All available fields reference\n"
  },
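The "Review Monitoring" pattern in the usage guide above reduces to a score filter over review dicts. A minimal offline sketch, assuming the `userName`/`score`/`content` keys the docs show; the sample data and the `filter_negative` helper are invented for illustration, not part of the library:

```python
# Offline sketch of the review-monitoring pattern from the usage guide.
# Real review dicts come from reviews_analyze(); these are stand-ins.

def filter_negative(reviews, threshold=2):
    """Return reviews whose score is at or below the threshold."""
    return [r for r in reviews if r["score"] <= threshold]

sample = [
    {"userName": "A", "score": 5, "content": "Great app"},
    {"userName": "B", "score": 1, "content": "Crashes on launch"},
    {"userName": "C", "score": 2, "content": "Too many ads"},
]

negative = filter_negative(sample)
for review in negative:
    print(f"{review['userName']}: {review['score']}/5")
```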
  {
    "path": "docs/requirements.txt",
    "content": "sphinx>=7.0.0\nsphinx-book-theme>=1.0.0\nsphinx-copybutton>=0.5.0\nsphinx-autobuild>=2021.3.14\n"
  },
  {
    "path": "examples/README.md",
    "content": "# Examples\n\nThis folder contains example scripts demonstrating all methods for each of the 7 method types.\n\n## Files\n\n### 1. app_methods_example.py\nDemonstrates all 6 app methods:\n- `app_analyze()` - Get all data as dictionary\n- `app_get_field()` - Get single field value\n- `app_get_fields()` - Get multiple fields\n- `app_print_field()` - Print single field to console\n- `app_print_fields()` - Print multiple fields to console\n- `app_print_all()` - Print all data as JSON\n\n### 2. search_methods_example.py\nDemonstrates all 6 search methods:\n- `search_analyze()` - Get all search results\n- `search_get_field()` - Get single field from results\n- `search_get_fields()` - Get multiple fields from results\n- `search_print_field()` - Print single field from results\n- `search_print_fields()` - Print multiple fields from results\n- `search_print_all()` - Print all results as JSON\n\n### 3. reviews_methods_example.py\nDemonstrates all 6 reviews methods:\n- `reviews_analyze()` - Get all reviews\n- `reviews_get_field()` - Get single field from reviews\n- `reviews_get_fields()` - Get multiple fields from reviews\n- `reviews_print_field()` - Print single field from reviews\n- `reviews_print_fields()` - Print multiple fields from reviews\n- `reviews_print_all()` - Print all reviews as JSON\n\n### 4. developer_methods_example.py\nDemonstrates all 6 developer methods:\n- `developer_analyze()` - Get all developer apps\n- `developer_get_field()` - Get single field from apps\n- `developer_get_fields()` - Get multiple fields from apps\n- `developer_print_field()` - Print single field from apps\n- `developer_print_fields()` - Print multiple fields from apps\n- `developer_print_all()` - Print all apps as JSON\n\n### 5. 
list_methods_example.py\nDemonstrates all 6 list methods:\n- `list_analyze()` - Get all top chart apps\n- `list_get_field()` - Get single field from apps\n- `list_get_fields()` - Get multiple fields from apps\n- `list_print_field()` - Print single field from apps\n- `list_print_fields()` - Print multiple fields from apps\n- `list_print_all()` - Print all apps as JSON\n\n### 6. similar_methods_example.py\nDemonstrates all 6 similar methods:\n- `similar_analyze()` - Get all similar apps\n- `similar_get_field()` - Get single field from apps\n- `similar_get_fields()` - Get multiple fields from apps\n- `similar_print_field()` - Print single field from apps\n- `similar_print_fields()` - Print multiple fields from apps\n- `similar_print_all()` - Print all apps as JSON\n\n### 7. suggest_methods_example.py\nDemonstrates all 4 suggest methods:\n- `suggest_analyze()` - Get search suggestions\n- `suggest_nested()` - Get nested suggestions\n- `suggest_print_all()` - Print suggestions as JSON\n- `suggest_print_nested()` - Print nested suggestions as JSON\n\n## Running Examples\n\n```bash\n# Run any example\npython examples/app_methods_example.py\npython examples/search_methods_example.py\npython examples/reviews_methods_example.py\npython examples/developer_methods_example.py\npython examples/list_methods_example.py\npython examples/similar_methods_example.py\npython examples/suggest_methods_example.py\n```\n\n## Note\n\nThese examples are simple demonstrations. For more advanced use cases, check the documentation in the `README/` folder.\n"
  },
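Every method family in the README above follows the same `*_get_field` / `*_get_fields` convention: pull one field, or a subset of fields, from each result. A library-free sketch of those semantics over plain dicts; the helper names and sample data are illustrative, not the library API:

```python
# Sketch of the shared get_field / get_fields semantics the README describes.

def get_field(results, field):
    """Collect one field from every result (mirrors *_get_field)."""
    return [r.get(field) for r in results]

def get_fields(results, fields):
    """Collect a subset of fields from every result (mirrors *_get_fields)."""
    return [{f: r.get(f) for f in fields} for r in results]

results = [
    {"title": "App A", "score": 4.6, "installs": "1M+"},
    {"title": "App B", "score": 3.9, "installs": "500K+"},
]

print(get_field(results, "title"))
print(get_fields(results, ["title", "score"]))
```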
  {
    "path": "examples/app_methods_example.py",
    "content": "\"\"\"\nApp Methods Example\nDemonstrates all 6 app methods for extracting app details\n\nParameters:\n- app_id: App package name\n- lang: Language code (default: 'en')\n- country: Country code (default: 'us')\n\"\"\"\n\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\napp_id = \"com.whatsapp\"\nlang = \"en\"\ncountry = \"us\"\n\nprint(\"=== App Methods Example ===\\n\")\n\n# 1. app_analyze() - Get all data as dictionary\nprint(\"1. app_analyze(app_id, lang='en', country='us')\")\ndata = scraper.app_analyze(app_id, lang=lang, country=country)\nprint(f\"   Retrieved {len(data)} fields\")\nprint(f\"   Title: {data['title']}\")\nprint(f\"   Score: {data['score']}\")\n\n# 2. app_get_field() - Get single field\nprint(\"\\n2. app_get_field(app_id, field, lang='en', country='us')\")\ntitle = scraper.app_get_field(app_id, \"title\", lang=lang, country=country)\nprint(f\"   Title: {title}\")\n\n# 3. app_get_fields() - Get multiple fields\nprint(\"\\n3. app_get_fields(app_id, fields, lang='en', country='us')\")\nfields = scraper.app_get_fields(app_id, [\"title\", \"score\", \"installs\"], lang=lang, country=country)\nprint(f\"   {fields}\")\n\n# 4. app_print_field() - Print single field\nprint(\"\\n4. app_print_field(app_id, field, lang='en', country='us')\")\nscraper.app_print_field(app_id, \"developer\", lang=lang, country=country)\n\n# 5. app_print_fields() - Print multiple fields\nprint(\"\\n5. app_print_fields(app_id, fields, lang='en', country='us')\")\nscraper.app_print_fields(app_id, [\"title\", \"score\", \"free\"], lang=lang, country=country)\n\n# 6. app_print_all() - Print all data as JSON\nprint(\"\\n6. app_print_all(app_id, lang='en', country='us')\")\nscraper.app_print_all(app_id, lang=lang, country=country)\n"
  },
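The app methods accept an `assets` parameter whose named sizes map to pixel widths per the method docstrings (SMALL=512, MEDIUM=1024, LARGE=2048, ORIGINAL=unscaled). A sketch of that mapping as a lookup helper; the helper itself is illustrative, not a library function:

```python
# Named asset sizes and their pixel widths, per the docstrings.
ASSET_SIZES = {"SMALL": 512, "MEDIUM": 1024, "LARGE": 2048}

def asset_width(name):
    """Resolve a named asset size to a pixel width (None = original size)."""
    name = name.upper()
    if name == "ORIGINAL":
        return None  # keep the source dimensions
    try:
        return ASSET_SIZES[name]
    except KeyError:
        raise ValueError(f"unknown asset size: {name!r}")
```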
  {
    "path": "examples/developer_methods_example.py",
    "content": "\"\"\"\nDeveloper Methods Example\nDemonstrates all 6 developer methods for getting developer's apps\n\nParameters:\n- dev_id: Developer ID (numeric or string)\n- count: Number of apps (default: 100)\n- lang: Language code (default: 'en')\n- country: Country code (default: 'us')\n\"\"\"\n\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\ndev_id = \"5700313618786177705\"  # Google LLC\ncount = 20\nlang = \"en\"\ncountry = \"us\"\n\nprint(\"=== Developer Methods Example ===\\n\")\n\n# 1. developer_analyze() - Get all developer apps\nprint(\"1. developer_analyze(dev_id, count=100, lang='en', country='us')\")\napps = scraper.developer_analyze(dev_id, count=count, lang=lang, country=country)\nprint(f\"   Found {len(apps)} apps\")\nprint(f\"   First app: {apps[0]['title']}\")\n\n# 2. developer_get_field() - Get single field from all apps\nprint(\"\\n2. developer_get_field(dev_id, field, count=100, lang='en', country='us')\")\ntitles = scraper.developer_get_field(dev_id, \"title\", count=count, lang=lang, country=country)\nprint(f\"   Titles: {titles[:3]}\")\n\n# 3. developer_get_fields() - Get multiple fields from all apps\nprint(\"\\n3. developer_get_fields(dev_id, fields, count=100, lang='en', country='us')\")\napps_data = scraper.developer_get_fields(dev_id, [\"title\", \"score\"], count=10, lang=lang, country=country)\nprint(f\"   First 2 apps: {apps_data[:2]}\")\n\n# 4. developer_print_field() - Print single field from all apps\nprint(\"\\n4. developer_print_field(dev_id, field, count=100, lang='en', country='us')\")\nscraper.developer_print_field(dev_id, \"title\", count=5, lang=lang, country=country)\n\n# 5. developer_print_fields() - Print multiple fields from all apps\nprint(\"\\n5. developer_print_fields(dev_id, fields, count=100, lang='en', country='us')\")\nscraper.developer_print_fields(dev_id, [\"title\", \"score\"], count=5, lang=lang, country=country)\n\n# 6. 
developer_print_all() - Print all developer apps as JSON\nprint(\"\\n6. developer_print_all(dev_id, count=100, lang='en', country='us')\")\nscraper.developer_print_all(dev_id, count=5, lang=lang, country=country)\n"
  },
  {
    "path": "examples/list_methods_example.py",
    "content": "\"\"\"\nList Methods Example\nDemonstrates all 6 list methods for getting top charts\n\nParameters:\n- collection: Chart type - 'TOP_FREE', 'TOP_PAID', 'TOP_GROSSING' (default: 'TOP_FREE')\n- category: Category filter (default: 'APPLICATION')\n- count: Number of apps (default: 100)\n- lang: Language code (default: 'en')\n- country: Country code (default: 'us')\n\"\"\"\n\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\ncollection = \"TOP_FREE\"\ncategory = \"GAME\"\ncount = 20\nlang = \"en\"\ncountry = \"us\"\n\nprint(\"=== List Methods Example ===\\n\")\n\n# 1. list_analyze() - Get all top chart apps\nprint(\"1. list_analyze(collection='TOP_FREE', category='APPLICATION', count=100, lang='en', country='us')\")\napps = scraper.list_analyze(collection, category, count=count, lang=lang, country=country)\nprint(f\"   Found {len(apps)} apps\")\nprint(f\"   First app: {apps[0]['title']}\")\n\n# 2. list_get_field() - Get single field from all apps\nprint(\"\\n2. list_get_field(collection, field, category='APPLICATION', count=100, lang='en', country='us')\")\ntitles = scraper.list_get_field(collection, \"title\", category, count=count, lang=lang, country=country)\nprint(f\"   Titles: {titles[:3]}\")\n\n# 3. list_get_fields() - Get multiple fields from all apps\nprint(\"\\n3. list_get_fields(collection, fields, category='APPLICATION', count=100, lang='en', country='us')\")\napps_data = scraper.list_get_fields(collection, [\"title\", \"score\"], category, count=10, lang=lang, country=country)\nprint(f\"   First 2 apps: {apps_data[:2]}\")\n\n# 4. list_print_field() - Print single field from all apps\nprint(\"\\n4. list_print_field(collection, field, category='APPLICATION', count=100, lang='en', country='us')\")\nscraper.list_print_field(collection, \"title\", category, count=5, lang=lang, country=country)\n\n# 5. list_print_fields() - Print multiple fields from all apps\nprint(\"\\n5. 
list_print_fields(collection, fields, category='APPLICATION', count=100, lang='en', country='us')\")\nscraper.list_print_fields(collection, [\"title\", \"score\"], category, count=5, lang=lang, country=country)\n\n# 6. list_print_all() - Print all top chart apps as JSON\nprint(\"\\n6. list_print_all(collection='TOP_FREE', category='APPLICATION', count=100, lang='en', country='us')\")\nscraper.list_print_all(collection, category, count=5, lang=lang, country=country)\n"
  },
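The list example's docstring names three valid collections (TOP_FREE, TOP_PAID, TOP_GROSSING). A small sketch of validating that input up front, so a typo fails fast instead of producing a failed request; the helper is illustrative, not part of the library:

```python
# Valid chart names, taken from the example's docstring.
VALID_COLLECTIONS = {"TOP_FREE", "TOP_PAID", "TOP_GROSSING"}

def check_collection(name):
    """Raise early on an unknown chart name instead of a failed request."""
    if name not in VALID_COLLECTIONS:
        raise ValueError(f"unknown collection: {name!r}")
    return name
```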
  {
    "path": "examples/reviews_methods_example.py",
    "content": "\"\"\"\nReviews Methods Example\nDemonstrates all 6 reviews methods for extracting user reviews\n\nParameters:\n- app_id: App package name\n- count: Number of reviews (default: 100)\n- lang: Language code (default: 'en')\n- country: Country code (default: 'us')\n- sort: Sort order - 'NEWEST', 'RELEVANT', 'RATING' (default: 'NEWEST')\n\"\"\"\n\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\napp_id = \"com.whatsapp\"\ncount = 20\nlang = \"en\"\ncountry = \"us\"\nsort = \"NEWEST\"\n\nprint(\"=== Reviews Methods Example ===\\n\")\n\n# 1. reviews_analyze() - Get all reviews\nprint(\"1. reviews_analyze(app_id, count=100, lang='en', country='us', sort='NEWEST')\")\nreviews = scraper.reviews_analyze(app_id, count=count, lang=lang, country=country, sort=sort)\nprint(f\"   Retrieved {len(reviews)} reviews\")\nprint(f\"   First review score: {reviews[0]['score']}\")\n\n# 2. reviews_get_field() - Get single field from all reviews\nprint(\"\\n2. reviews_get_field(app_id, field, count=100, lang='en', country='us', sort='NEWEST')\")\nscores = scraper.reviews_get_field(app_id, \"score\", count=count, lang=lang, country=country, sort=sort)\nprint(f\"   Scores: {scores[:5]}\")\n\n# 3. reviews_get_fields() - Get multiple fields from all reviews\nprint(\"\\n3. reviews_get_fields(app_id, fields, count=100, lang='en', country='us', sort='NEWEST')\")\nreview_data = scraper.reviews_get_fields(app_id, [\"userName\", \"score\"], count=10, lang=lang, country=country, sort=sort)\nprint(f\"   First 2 reviews: {review_data[:2]}\")\n\n# 4. reviews_print_field() - Print single field from all reviews\nprint(\"\\n4. reviews_print_field(app_id, field, count=100, lang='en', country='us', sort='NEWEST')\")\nscraper.reviews_print_field(app_id, \"score\", count=5, lang=lang, country=country, sort=sort)\n\n# 5. reviews_print_fields() - Print multiple fields from all reviews\nprint(\"\\n5. 
reviews_print_fields(app_id, fields, count=100, lang='en', country='us', sort='NEWEST')\")\nscraper.reviews_print_fields(app_id, [\"userName\", \"score\"], count=5, lang=lang, country=country, sort=sort)\n\n# 6. reviews_print_all() - Print all reviews as JSON\nprint(\"\\n6. reviews_print_all(app_id, count=100, lang='en', country='us', sort='NEWEST')\")\nscraper.reviews_print_all(app_id, count=5, lang=lang, country=country, sort=sort)\n"
  },
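A common follow-up to `reviews_analyze()` is summarizing the ratings. A sketch that counts reviews per star rating from plain dicts mirroring the `score` key the example uses; the sample data is invented:

```python
from collections import Counter

def score_distribution(reviews):
    """Return a Counter mapping star rating -> number of reviews."""
    return Counter(r["score"] for r in reviews)

# Stand-in data; real dicts come from reviews_analyze().
sample = [{"score": 5}, {"score": 5}, {"score": 1}, {"score": 3}]
dist = score_distribution(sample)
```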
  {
    "path": "examples/search_methods_example.py",
    "content": "\"\"\"\nSearch Methods Example\nDemonstrates all 6 search methods for finding apps\n\nParameters:\n- query: Search keyword\n- count: Number of results (default: 100)\n- lang: Language code (default: 'en')\n- country: Country code (default: 'us')\n\"\"\"\n\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\nquery = \"social media\"\ncount = 10\nlang = \"en\"\ncountry = \"us\"\n\nprint(\"=== Search Methods Example ===\\n\")\n\n# 1. search_analyze() - Get all search results\nprint(\"1. search_analyze(query, count=100, lang='en', country='us')\")\nresults = scraper.search_analyze(query, count=count, lang=lang, country=country)\nprint(f\"   Found {len(results)} apps\")\nprint(f\"   First app: {results[0]['title']}\")\n\n# 2. search_get_field() - Get single field from all results\nprint(\"\\n2. search_get_field(query, field, count=100, lang='en', country='us')\")\ntitles = scraper.search_get_field(query, \"title\", count=count, lang=lang, country=country)\nprint(f\"   Titles: {titles[:3]}\")\n\n# 3. search_get_fields() - Get multiple fields from all results\nprint(\"\\n3. search_get_fields(query, fields, count=100, lang='en', country='us')\")\napps = scraper.search_get_fields(query, [\"title\", \"score\"], count=count, lang=lang, country=country)\nprint(f\"   First 2 apps: {apps[:2]}\")\n\n# 4. search_print_field() - Print single field from all results\nprint(\"\\n4. search_print_field(query, field, count=100, lang='en', country='us')\")\nscraper.search_print_field(query, \"title\", count=5, lang=lang, country=country)\n\n# 5. search_print_fields() - Print multiple fields from all results\nprint(\"\\n5. search_print_fields(query, fields, count=100, lang='en', country='us')\")\nscraper.search_print_fields(query, [\"title\", \"developer\"], count=5, lang=lang, country=country)\n\n# 6. search_print_all() - Print all search results as JSON\nprint(\"\\n6. 
search_print_all(query, count=100, lang='en', country='us')\")\nscraper.search_print_all(query, count=5, lang=lang, country=country)\n"
  },
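The "Market Research" pattern from the usage docs filters search results by rating. An offline sketch of that filter, guarding against unrated apps; the result dicts and helper name are illustrative:

```python
def high_rated(results, minimum=4.5):
    """Keep results whose score is present and at or above the minimum."""
    return [r for r in results if r.get("score") is not None and r["score"] >= minimum]

# Stand-in data; real dicts come from search_analyze().
results = [
    {"title": "Chat A", "score": 4.7},
    {"title": "Chat B", "score": 4.2},
    {"title": "Chat C", "score": None},  # unrated apps are skipped
]

top = high_rated(results)
```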
  {
    "path": "examples/similar_methods_example.py",
    "content": "\"\"\"\nSimilar Methods Example\nDemonstrates all 6 similar methods for finding related apps\n\nParameters:\n- app_id: App package name\n- count: Number of similar apps (default: 100)\n- lang: Language code (default: 'en')\n- country: Country code (default: 'us')\n\"\"\"\n\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\napp_id = \"com.whatsapp\"\ncount = 20\nlang = \"en\"\ncountry = \"us\"\n\nprint(\"=== Similar Methods Example ===\\n\")\n\n# 1. similar_analyze() - Get all similar apps\nprint(\"1. similar_analyze(app_id, count=100, lang='en', country='us')\")\napps = scraper.similar_analyze(app_id, count=count, lang=lang, country=country)\nprint(f\"   Found {len(apps)} similar apps\")\nprint(f\"   First app: {apps[0]['title']}\")\n\n# 2. similar_get_field() - Get single field from all similar apps\nprint(\"\\n2. similar_get_field(app_id, field, count=100, lang='en', country='us')\")\ntitles = scraper.similar_get_field(app_id, \"title\", count=count, lang=lang, country=country)\nprint(f\"   Titles: {titles[:3]}\")\n\n# 3. similar_get_fields() - Get multiple fields from all similar apps\nprint(\"\\n3. similar_get_fields(app_id, fields, count=100, lang='en', country='us')\")\napps_data = scraper.similar_get_fields(app_id, [\"title\", \"score\"], count=10, lang=lang, country=country)\nprint(f\"   First 2 apps: {apps_data[:2]}\")\n\n# 4. similar_print_field() - Print single field from all similar apps\nprint(\"\\n4. similar_print_field(app_id, field, count=100, lang='en', country='us')\")\nscraper.similar_print_field(app_id, \"title\", count=5, lang=lang, country=country)\n\n# 5. similar_print_fields() - Print multiple fields from all similar apps\nprint(\"\\n5. similar_print_fields(app_id, fields, count=100, lang='en', country='us')\")\nscraper.similar_print_fields(app_id, [\"title\", \"score\"], count=5, lang=lang, country=country)\n\n# 6. similar_print_all() - Print all similar apps as JSON\nprint(\"\\n6. 
similar_print_all(app_id, count=100, lang='en', country='us')\")\nscraper.similar_print_all(app_id, count=5, lang=lang, country=country)\n"
  },
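The "Competitor Analysis" pattern in the usage docs prints an app's score next to its similar apps. The implied arithmetic, comparing one score against the competitor mean, can be sketched as below; the `score_gap` name is illustrative:

```python
def score_gap(my_score, competitor_scores):
    """Positive gap means the app outrates the competitor average."""
    avg = sum(competitor_scores) / len(competitor_scores)
    return round(my_score - avg, 2)
```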
  {
    "path": "examples/suggest_methods_example.py",
    "content": "\"\"\"\nSuggest Methods Example\nDemonstrates all 4 suggest methods for getting search suggestions\n\nParameters:\n- term: Search term\n- count: Number of suggestions (default: 5)\n- lang: Language code (default: 'en')\n- country: Country code (default: 'us')\n\"\"\"\n\nfrom gplay_scraper import GPlayScraper\n\nscraper = GPlayScraper()\nterm = \"fitness\"\ncount = 5\nlang = \"en\"\ncountry = \"us\"\n\nprint(\"=== Suggest Methods Example ===\\n\")\n\n# 1. suggest_analyze() - Get search suggestions\nprint(\"1. suggest_analyze(term, count=5, lang='en', country='us')\")\nsuggestions = scraper.suggest_analyze(term, count=count, lang=lang, country=country)\nprint(f\"   Suggestions: {suggestions}\")\n\n# 2. suggest_nested() - Get nested suggestions\nprint(\"\\n2. suggest_nested(term, count=5, lang='en', country='us')\")\nnested = scraper.suggest_nested(term, count=count, lang=lang, country=country)\nprint(f\"   Nested suggestions (first 2):\")\nfor i, (key, values) in enumerate(list(nested.items())[:2]):\n    print(f\"   {key}: {values}\")\n\n# 3. suggest_print_all() - Print suggestions as JSON\nprint(\"\\n3. suggest_print_all(term, count=5, lang='en', country='us')\")\nscraper.suggest_print_all(term, count=count, lang=lang, country=country)\n\n# 4. suggest_print_nested() - Print nested suggestions as JSON\nprint(\"\\n4. suggest_print_nested(term, count=5, lang='en', country='us')\")\nscraper.suggest_print_nested(term, count=count, lang=lang, country=country)\n"
  },
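The `suggest_nested()` example iterates a mapping of seed term to suggestion list. Assuming that `{seed: [suggestions]}` shape (an inference from the example's loop, not a documented contract), a sketch that flattens it into a single ordered list; the sample data is invented:

```python
def flatten_nested(nested):
    """Flatten {seed: [suggestions]} into one ordered list."""
    flat = []
    for values in nested.values():
        flat.extend(values)
    return flat

nested = {
    "fitness": ["fitness app", "fitness tracker"],
    "fitness c": ["fitness coach"],
}
```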
  {
    "path": "gplay_scraper/__init__.py",
    "content": "\"\"\"GPlay Scraper - Google Play Store scraping library.\n\nThis package provides comprehensive tools for scraping Google Play Store data including:\n- App details (65+ fields)\n- Search results\n- User reviews\n- Developer portfolios\n- Similar apps\n- Top charts\n- Search suggestions\n\"\"\"\n\nimport logging\n\n# Import main scraper class\nfrom .app import GPlayScraper\n\n# Import all method classes\nfrom .core.gplay_methods import AppMethods, SearchMethods, ReviewsMethods, DeveloperMethods, SimilarMethods, ListMethods, SuggestMethods\n\n# Import configuration\nfrom .config import Config\n\n# Import custom exceptions\nfrom .exceptions import (\n    GPlayScraperError,\n    InvalidAppIdError,\n    AppNotFoundError,\n    RateLimitError,\n    NetworkError,\n    DataParsingError,\n)\n\n# Configure logging to use NullHandler by default\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n\n# Package metadata\n__version__ = \"1.0.6\"\n\n# Public API exports\n__all__ = [\n    \"GPlayScraper\",\n    \"AppMethods\",\n    \"SearchMethods\",\n    \"ReviewsMethods\",\n    \"DeveloperMethods\",\n    \"SimilarMethods\",\n    \"ListMethods\",\n    \"SuggestMethods\",\n    \"Config\",\n    \"GPlayScraperError\",\n    \"InvalidAppIdError\",\n    \"AppNotFoundError\",\n    \"RateLimitError\",\n    \"NetworkError\",\n    \"DataParsingError\",\n]"
  },
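`__init__.py` attaches a `logging.NullHandler` to the package logger. That is the standard library-logging convention: importing the library never emits "no handler" warnings or unsolicited output, and the application opts in explicitly. A minimal sketch of both sides:

```python
import logging

# Library side (what __init__.py does): swallow records by default.
lib_logger = logging.getLogger("gplay_scraper")
lib_logger.addHandler(logging.NullHandler())

# Application side: enable output explicitly when logs are wanted.
logging.basicConfig(level=logging.INFO)
```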
  {
    "path": "gplay_scraper/app.py",
    "content": "\"\"\"Main GPlayScraper class that provides unified access to all scraping methods.\n\nThis module contains the main GPlayScraper class which aggregates all 7 method types\nand provides 42 functions for interacting with Google Play Store data.\n\"\"\"\n\nfrom .core.gplay_methods import AppMethods, SearchMethods, ReviewsMethods, DeveloperMethods, SimilarMethods, ListMethods, SuggestMethods\nfrom .config import Config\nfrom typing import Any, List, Dict\n\n\nclass GPlayScraper:\n    \"\"\"Main scraper class providing access to all Google Play Store scraping methods.\n    \n    This class aggregates 7 method types:\n    - App Methods: Extract 65+ fields from any app\n    - Search Methods: Search for apps by keyword\n    - Reviews Methods: Extract user reviews and ratings\n    - Developer Methods: Get all apps from a developer\n    - List Methods: Get top charts (free, paid, grossing)\n    - Similar Methods: Find similar/competitor apps\n    - Suggest Methods: Get search suggestions\n    \n    Args:\n        http_client: HTTP client to use (requests, curl_cffi, tls_client, httpx, urllib3, cloudscraper, aiohttp)\n    \"\"\"\n    \n    def __init__(self, http_client: str = None):\n        \"\"\"Initialize GPlayScraper with all method types.\n        \n        Args:\n            http_client: Optional HTTP client name. 
Defaults to 'requests' with automatic fallback.\n        \"\"\"\n        # Initialize all 7 method types\n        self.app_methods = AppMethods(http_client)\n        self.search_methods = SearchMethods(http_client)\n        self.reviews_methods = ReviewsMethods(http_client)\n        self.developer_methods = DeveloperMethods(http_client)\n        self.similar_methods = SimilarMethods(http_client)\n        self.list_methods = ListMethods(http_client)\n        self.suggest_methods = SuggestMethods(http_client)\n\n    # ==================== App Methods ====================\n    \n    def app_analyze(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> Dict:\n        \"\"\"Get complete app data with 65+ fields.\n        \n        Args:\n            app_id: Google Play app ID (e.g., 'com.whatsapp')\n            lang: Language code (default: 'en')\n            country: Country code (default: 'us')\n            assets: Asset size (SMALL=512px, MEDIUM=1024px, LARGE=2048px, ORIGINAL=max)\n            \n        Returns:\n            Dictionary containing all app data\n        \"\"\"\n        return self.app_methods.app_analyze(app_id, lang, country, assets)\n\n    def app_get_field(self, app_id: str, field: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> Any:\n        \"\"\"Get single field value from app data.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to retrieve\n            lang: Language code\n            country: Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n            \n        Returns:\n            Value of the requested field\n        \"\"\"\n        return self.app_methods.app_get_field(app_id, field, lang, country, assets)\n\n    def app_get_fields(self, app_id: str, fields: List[str], lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, 
assets: str = None) -> Dict[str, Any]:\n        \"\"\"Get multiple field values from app data.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names to retrieve\n            lang: Language code\n            country: Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n            \n        Returns:\n            Dictionary with requested fields and values\n        \"\"\"\n        return self.app_methods.app_get_fields(app_id, fields, lang, country, assets)\n\n    def app_print_field(self, app_id: str, field: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> None:\n        \"\"\"Print single field value to console.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to print\n            lang: Language code\n            country: Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n        \"\"\"\n        return self.app_methods.app_print_field(app_id, field, lang, country, assets)\n\n    def app_print_fields(self, app_id: str, fields: List[str], lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> None:\n        \"\"\"Print multiple field values to console.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names to print\n            lang: Language code\n            country: Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n        \"\"\"\n        return self.app_methods.app_print_fields(app_id, fields, lang, country, assets)\n\n    def app_print_all(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> None:\n        \"\"\"Print all app data as JSON to console.\n        \n        Args:\n            app_id: Google Play app ID\n            lang: Language code\n            country: 
Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n        \"\"\"\n        return self.app_methods.app_print_all(app_id, lang, country, assets)\n\n    # ==================== Search Methods ====================\n    \n    def search_analyze(self, query: str, count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict]:\n        \"\"\"Search for apps and get complete results.\n        \n        Args:\n            query: Search query string\n            count: Number of results to return\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of dictionaries containing app data\n        \"\"\"\n        return self.search_methods.search_analyze(query, count, lang, country)\n\n    def search_get_field(self, query: str, field: str, count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Any]:\n        \"\"\"Get single field from search results.\n        \n        Args:\n            query: Search query string\n            field: Field name to retrieve\n            count: Number of results\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of field values\n        \"\"\"\n        return self.search_methods.search_get_field(query, field, count, lang, country)\n\n    def search_get_fields(self, query: str, fields: List[str], count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict[str, Any]]:\n        \"\"\"Get multiple fields from search results.\n        \n        Args:\n            query: Search query string\n            fields: List of field names\n            count: Number of results\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of 
dictionaries with requested fields\n        \"\"\"\n        return self.search_methods.search_get_fields(query, fields, count, lang, country)\n\n    def search_print_field(self, query: str, field: str, count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print single field from search results.\n        \n        Args:\n            query: Search query string\n            field: Field name to print\n            count: Number of results\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.search_methods.search_print_field(query, field, count, lang, country)\n\n    def search_print_fields(self, query: str, fields: List[str], count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print multiple fields from search results.\n        \n        Args:\n            query: Search query string\n            fields: List of field names\n            count: Number of results\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.search_methods.search_print_fields(query, fields, count, lang, country)\n\n    def search_print_all(self, query: str, count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print all search results as JSON.\n        \n        Args:\n            query: Search query string\n            count: Number of results\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.search_methods.search_print_all(query, count, lang, country)\n\n    # ==================== Reviews Methods ====================\n    \n    def reviews_analyze(self, app_id: str, count: int = Config.DEFAULT_REVIEWS_COUNT, lang: str = Config.DEFAULT_LANGUAGE, \n                       country: 
str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> List[Dict]:\n        \"\"\"Get user reviews for an app.\n        \n        Args:\n            app_id: Google Play app ID\n            count: Number of reviews to fetch\n            lang: Language code\n            country: Country code\n            sort: Sort order (NEWEST, RELEVANT, RATING)\n            \n        Returns:\n            List of review dictionaries\n        \"\"\"\n        return self.reviews_methods.reviews_analyze(app_id, count, lang, country, sort)\n\n    def reviews_get_field(self, app_id: str, field: str, count: int = Config.DEFAULT_REVIEWS_COUNT, \n                         lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> List[Any]:\n        \"\"\"Get single field from reviews.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to retrieve\n            count: Number of reviews\n            lang: Language code\n            country: Country code\n            sort: Sort order\n            \n        Returns:\n            List of field values\n        \"\"\"\n        return self.reviews_methods.reviews_get_field(app_id, field, count, lang, country, sort)\n\n    def reviews_get_fields(self, app_id: str, fields: List[str], count: int = Config.DEFAULT_REVIEWS_COUNT,\n                          lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> List[Dict[str, Any]]:\n        \"\"\"Get multiple fields from reviews.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names\n            count: Number of reviews\n            lang: Language code\n            country: Country code\n            sort: Sort order\n            \n        Returns:\n            List of dictionaries with requested fields\n        \"\"\"\n        return 
self.reviews_methods.reviews_get_fields(app_id, fields, count, lang, country, sort)\n\n    def reviews_print_field(self, app_id: str, field: str, count: int = Config.DEFAULT_REVIEWS_COUNT,\n                           lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> None:\n        \"\"\"Print single field from reviews.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to print\n            count: Number of reviews\n            lang: Language code\n            country: Country code\n            sort: Sort order\n        \"\"\"\n        return self.reviews_methods.reviews_print_field(app_id, field, count, lang, country, sort)\n\n    def reviews_print_fields(self, app_id: str, fields: List[str], count: int = Config.DEFAULT_REVIEWS_COUNT,\n                            lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> None:\n        \"\"\"Print multiple fields from reviews.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names\n            count: Number of reviews\n            lang: Language code\n            country: Country code\n            sort: Sort order\n        \"\"\"\n        return self.reviews_methods.reviews_print_fields(app_id, fields, count, lang, country, sort)\n\n    def reviews_print_all(self, app_id: str, count: int = Config.DEFAULT_REVIEWS_COUNT, lang: str = Config.DEFAULT_LANGUAGE,\n                         country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> None:\n        \"\"\"Print all reviews as JSON.\n        \n        Args:\n            app_id: Google Play app ID\n            count: Number of reviews\n            lang: Language code\n            country: Country code\n            sort: Sort order\n        \"\"\"\n        return self.reviews_methods.reviews_print_all(app_id, count, 
lang, country, sort)\n\n    # ==================== Developer Methods ====================\n    \n    def developer_analyze(self, dev_id: str, count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict]:\n        \"\"\"Get all apps from a developer.\n        \n        Args:\n            dev_id: Developer ID (numeric or string)\n            count: Number of apps to return\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of app dictionaries\n        \"\"\"\n        return self.developer_methods.developer_analyze(dev_id, count, lang, country)\n\n    def developer_get_field(self, dev_id: str, field: str, count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Any]:\n        \"\"\"Get single field from developer apps.\n        \n        Args:\n            dev_id: Developer ID\n            field: Field name to retrieve\n            count: Number of apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of field values\n        \"\"\"\n        return self.developer_methods.developer_get_field(dev_id, field, count, lang, country)\n\n    def developer_get_fields(self, dev_id: str, fields: List[str], count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict[str, Any]]:\n        \"\"\"Get multiple fields from developer apps.\n        \n        Args:\n            dev_id: Developer ID\n            fields: List of field names\n            count: Number of apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of dictionaries with requested fields\n        \"\"\"\n        return self.developer_methods.developer_get_fields(dev_id, fields, count, lang, 
country)\n\n    def developer_print_field(self, dev_id: str, field: str, count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print single field from developer apps.\n        \n        Args:\n            dev_id: Developer ID\n            field: Field name to print\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.developer_methods.developer_print_field(dev_id, field, count, lang, country)\n\n    def developer_print_fields(self, dev_id: str, fields: List[str], count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print multiple fields from developer apps.\n        \n        Args:\n            dev_id: Developer ID\n            fields: List of field names\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.developer_methods.developer_print_fields(dev_id, fields, count, lang, country)\n\n    def developer_print_all(self, dev_id: str, count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print all developer apps as JSON.\n        \n        Args:\n            dev_id: Developer ID\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.developer_methods.developer_print_all(dev_id, count, lang, country)\n\n    # ==================== Similar Methods ====================\n    \n    def similar_analyze(self, app_id: str, count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict]:\n        \"\"\"Get similar/competitor apps.\n        \n        Args:\n            app_id: Google 
Play app ID\n            count: Number of similar apps to return\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of similar app dictionaries\n        \"\"\"\n        return self.similar_methods.similar_analyze(app_id, count, lang, country)\n\n    def similar_get_field(self, app_id: str, field: str, count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Any]:\n        \"\"\"Get single field from similar apps.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to retrieve\n            count: Number of similar apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of field values\n        \"\"\"\n        return self.similar_methods.similar_get_field(app_id, field, count, lang, country)\n\n    def similar_get_fields(self, app_id: str, fields: List[str], count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict[str, Any]]:\n        \"\"\"Get multiple fields from similar apps.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names\n            count: Number of similar apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of dictionaries with requested fields\n        \"\"\"\n        return self.similar_methods.similar_get_fields(app_id, fields, count, lang, country)\n\n    def similar_print_field(self, app_id: str, field: str, count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print single field from similar apps.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to print\n            count: 
Number of similar apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.similar_methods.similar_print_field(app_id, field, count, lang, country)\n\n    def similar_print_fields(self, app_id: str, fields: List[str], count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print multiple fields from similar apps.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names\n            count: Number of similar apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.similar_methods.similar_print_fields(app_id, fields, count, lang, country)\n\n    def similar_print_all(self, app_id: str, count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print all similar apps as JSON.\n        \n        Args:\n            app_id: Google Play app ID\n            count: Number of similar apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.similar_methods.similar_print_all(app_id, count, lang, country)\n\n    # ==================== List Methods ====================\n    \n    def list_analyze(self, collection: str = Config.DEFAULT_LIST_COLLECTION, category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict]:\n        \"\"\"Get top charts (top free, top paid, top grossing).\n        \n        Args:\n            collection: Collection type (TOP_FREE, TOP_PAID, TOP_GROSSING)\n            category: App category\n            count: Number of apps to return\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of app dictionaries from 
top charts\n        \"\"\"\n        return self.list_methods.list_analyze(collection, category, count, lang, country)\n\n    def list_get_field(self, collection: str, field: str, category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Any]:\n        \"\"\"Get single field from top charts.\n        \n        Args:\n            collection: Collection type\n            field: Field name to retrieve\n            category: App category\n            count: Number of apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of field values\n        \"\"\"\n        return self.list_methods.list_get_field(collection, field, category, count, lang, country)\n\n    def list_get_fields(self, collection: str, fields: List[str], category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict[str, Any]]:\n        \"\"\"Get multiple fields from top charts.\n        \n        Args:\n            collection: Collection type\n            fields: List of field names\n            category: App category\n            count: Number of apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of dictionaries with requested fields\n        \"\"\"\n        return self.list_methods.list_get_fields(collection, fields, category, count, lang, country)\n\n    def list_print_field(self, collection: str, field: str, category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print single field from top charts.\n        \n        Args:\n            collection: Collection type\n            field: Field name to print\n            
category: App category\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.list_methods.list_print_field(collection, field, category, count, lang, country)\n\n    def list_print_fields(self, collection: str, fields: List[str], category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print multiple fields from top charts.\n        \n        Args:\n            collection: Collection type\n            fields: List of field names\n            category: App category\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.list_methods.list_print_fields(collection, fields, category, count, lang, country)\n\n    def list_print_all(self, collection: str = Config.DEFAULT_LIST_COLLECTION, category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print all top charts as JSON.\n        \n        Args:\n            collection: Collection type\n            category: App category\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.list_methods.list_print_all(collection, category, count, lang, country)\n\n    # ==================== Suggest Methods ====================\n    \n    def suggest_analyze(self, term: str, count: int = Config.DEFAULT_SUGGEST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[str]:\n        \"\"\"Get search suggestions for a term.\n        \n        Args:\n            term: Search term\n            count: Number of suggestions to return\n            lang: Language code\n            country: Country code\n       
     \n        Returns:\n            List of suggestion strings\n        \"\"\"\n        return self.suggest_methods.suggest_analyze(term, count, lang, country)\n\n    def suggest_nested(self, term: str, count: int = Config.DEFAULT_SUGGEST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> Dict[str, List[str]]:\n        \"\"\"Get nested suggestions (suggestions for suggestions).\n        \n        Args:\n            term: Search term\n            count: Number of suggestions\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            Dictionary mapping terms to their suggestions\n        \"\"\"\n        return self.suggest_methods.suggest_nested(term, count, lang, country)\n\n    def suggest_print_all(self, term: str, count: int = Config.DEFAULT_SUGGEST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print all suggestions as JSON.\n        \n        Args:\n            term: Search term\n            count: Number of suggestions\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.suggest_methods.suggest_print_all(term, count, lang, country)\n\n    def suggest_print_nested(self, term: str, count: int = Config.DEFAULT_SUGGEST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print nested suggestions as JSON.\n        \n        Args:\n            term: Search term\n            count: Number of suggestions\n            lang: Language code\n            country: Country code\n        \"\"\"\n        return self.suggest_methods.suggest_print_nested(term, count, lang, country)"
  },
  {
    "path": "gplay_scraper/config.py",
    "content": "\"\"\"Configuration module for GPlay Scraper.\n\nContains all constants, default values, URLs, and error messages.\n\"\"\"\n\nimport random\nfrom typing import Dict, Any\n\n\nclass Config:\n    \"\"\"Configuration class containing all settings and constants.\"\"\"\n    # HTTP request settings\n    DEFAULT_TIMEOUT = 30  # Request timeout in seconds\n    RATE_LIMIT_DELAY = 1.0  # Delay between requests in seconds\n    DEFAULT_RETRY_COUNT = 3  # Number of retries for failed requests\n    \n    # User agent strings for HTTP requests\n    USER_AGENTS = [\n        \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36\",\n        \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36\",\n        \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36\",\n        \"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:133.0) Gecko/20100101 Firefox/133.0\",\n        \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.2 Safari/605.1.15\",\n    ]\n      \n    # Google Play Store URLs\n    PLAY_STORE_BASE_URL = \"https://play.google.com\"\n    APP_DETAILS_ENDPOINT = \"/store/apps/details\"  # App details page\n    BATCHEXECUTE_ENDPOINT = \"/_/PlayStoreUi/data/batchexecute\"  # Batch API endpoint\n    DEVELOPER_NUMERIC_ENDPOINT = \"/store/apps/dev\"  # Developer page (numeric ID)\n    DEVELOPER_STRING_ENDPOINT = \"/store/apps/developer\"  # Developer page (string ID)\n    \n    # Default parameters\n    DEFAULT_LANGUAGE = \"en\"  # Default language code\n    DEFAULT_COUNTRY = \"\"  # Default country code\n    DEFAULT_REVIEWS_SORT = \"NEWEST\"  # Options: NEWEST, RELEVANT, RATING\n    DEFAULT_HTTP_CLIENT = \"requests\"  # Options: requests, httpx, curl-cffi, tls-client, aiohttp, urllib3, cloudscraper\n    \n    # Default collection and category for list 
methods\n    DEFAULT_LIST_COLLECTION = \"TOP_FREE\"  # Options: TOP_FREE, TOP_PAID, TOP_GROSSING\n    DEFAULT_LIST_CATEGORY = \"APPLICATION\"  # Default category\n    \n    # Default count values for different methods\n    DEFAULT_LIST_COUNT = 100  # Number of apps to fetch from lists\n    DEFAULT_REVIEWS_COUNT = 100  # Number of reviews to fetch\n    DEFAULT_REVIEWS_BATCH_SIZE = 50  # Reviews per batch request\n    DEFAULT_SUGGEST_COUNT = 5  # Number of suggestions to fetch\n    DEFAULT_SIMILAR_COUNT = 100  # Number of similar apps to fetch\n    DEFAULT_DEVELOPER_COUNT = 100  # Number of developer apps to fetch\n    DEFAULT_SEARCH_COUNT = 100  # Number of search results to fetch\n    \n    # Image size configurations\n    IMAGE_SIZES = {\n        \"SMALL\": \"w512\",    # 512px width\n        \"MEDIUM\": \"w1024\",  # 1024px width  \n        \"LARGE\": \"w2048\",   # 2048px width\n        \"ORIGINAL\": \"w9999\" # Original/max size\n    }\n    DEFAULT_IMAGE_SIZE = \"MEDIUM\"  # Default image size\n    \n    # Error message templates\n    ERROR_MESSAGES = {\n        \"INVALID_APP_ID\": \"app_id must be a non-empty string\",\n        \"INVALID_DEV_ID\": \"dev_id must be a non-empty string\",\n        \"INVALID_QUERY\": \"query must be a non-empty string\",\n        \"NO_DS5_DATA\": \"No data found in dataset\",\n        \"DS5_NOT_FOUND\": \"Could not find data\",\n        \"JSON_PARSE_FAILED\": \"Failed to parse JSON: {error}\",\n        \"APP_FETCH_FAILED\": \"Failed to fetch app page for {app_id}: {error}\",\n        \"SEARCH_FETCH_FAILED\": \"Failed to fetch search results for '{query}': {error}\",\n        \"REVIEWS_FETCH_FAILED\": \"Failed to fetch reviews batch for {app_id}: {error}\",\n        \"REVIEWS_SCRAPE_FAILED\": \"Failed to scrape reviews for {app_id}: {error}\",\n        \"DEVELOPER_FETCH_FAILED\": \"Failed to fetch developer page for {dev_id}: {error}\",\n        \"CLUSTER_FETCH_FAILED\": \"Failed to fetch cluster page: {error}\",\n        
\"LIST_FETCH_FAILED\": \"Failed to fetch list page: {error}\",\n        \"SUGGEST_FETCH_FAILED\": \"Failed to fetch suggestions for '{term}': {error}\",\n        \"RATE_LIMIT_SLEEP\": \"Rate limiting: sleeping for {sleep_time:.2f} seconds\",\n        \"HTTP_CLIENT_NOT_AVAILABLE\": \"{client} not available\",\n        \"HTTP_ERROR\": \"HTTP {status_code} Error\",\n        \"NO_HTTP_CLIENT\": \"No HTTP client libraries found\",\n        \"CLIENT_FAILED_TRYING_NEXT\": \"{client_type} failed, trying next client: {error}\",\n        \"UNKNOWN_CLIENT_TYPE\": \"Unknown client type: {client_type}\",\n        \"APP_NOT_FOUND\": \"App not found: {app_id}\",\n        \"SEARCH_NOT_FOUND\": \"Search not found: {query}\",\n        \"REVIEWS_NOT_FOUND\": \"Reviews not found for app: {app_id}\",\n        \"DEVELOPER_NOT_FOUND\": \"Developer not found: {dev_id}\",\n        \"CLUSTER_NOT_FOUND\": \"Cluster not found: {cluster_url}\",\n        \"LIST_NOT_FOUND\": \"List not found: {collection}/{category}\",\n        \"SUGGEST_NOT_FOUND\": \"Suggestions not found for: {term}\",\n        \"NO_DS3_DATA\": \"No data found in dataset\",\n        \"DS3_NOT_FOUND\": \"Could not find data\",\n        \"DS3_JSON_PARSE_FAILED\": \"Failed to parse JSON: {error}\",\n        \"SEARCH_PAGINATION_FAILED\": \"Failed to fetch paginated search results: {error}\"\n    }\n    \n    @classmethod\n    def get_headers(cls, user_agent: str = None) -> Dict[str, str]:\n        \"\"\"Generate HTTP headers with random or specified user agent.\n        \n        Args:\n            user_agent: Optional custom user agent string\n            \n        Returns:\n            Dictionary containing HTTP headers\n        \"\"\"\n        return {\n            \"User-Agent\": user_agent or random.choice(cls.USER_AGENTS)\n        }\n    \n    @classmethod\n    def get_image_size(cls, size: str = None) -> str:\n        \"\"\"Get image size parameter.\n        \n        Args:\n            size: Size name (SMALL, MEDIUM, 
LARGE, ORIGINAL) or None for default\n            \n        Returns:\n            Image size parameter string\n        \"\"\"\n        size = size or cls.DEFAULT_IMAGE_SIZE\n        return cls.IMAGE_SIZES.get(size.upper(), cls.IMAGE_SIZES[cls.DEFAULT_IMAGE_SIZE])"
  },
  {
    "path": "gplay_scraper/core/__init__.py",
    "content": "\"\"\"Core module containing all 7 method classes for Google Play Store scraping.\"\"\"\n\nfrom .gplay_methods import AppMethods, SearchMethods, ReviewsMethods, DeveloperMethods, SimilarMethods, ListMethods, SuggestMethods\n\n__all__ = ['AppMethods', 'SearchMethods', 'ReviewsMethods', 'DeveloperMethods', 'SimilarMethods', 'ListMethods', 'SuggestMethods']"
  },
  {
    "path": "gplay_scraper/core/gplay_methods.py",
    "content": "\"\"\"Method classes for all 7 scraping types.\n\nThis module contains 7 method classes, each providing 6 functions (except Suggest with 4):\n- analyze(): Get all data\n- get_field(): Get single field\n- get_fields(): Get multiple fields\n- print_field(): Print single field\n- print_fields(): Print multiple fields\n- print_all(): Print all data as JSON\n\"\"\"\n\nimport json\nfrom typing import Any, List, Dict\nimport logging\nfrom .gplay_scraper import AppScraper, SearchScraper, ReviewsScraper, DeveloperScraper, SimilarScraper, ListScraper, SuggestScraper\nfrom .gplay_parser import AppParser, SearchParser, ReviewsParser, DeveloperParser, SimilarParser, ListParser, SuggestParser\nfrom ..config import Config\nfrom ..exceptions import InvalidAppIdError, AppNotFoundError\nfrom ..utils.error_handling import comprehensive_error_handler, safe_print \n\n# Configure logging\nif not logging.getLogger().handlers:\n    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')\nlogger = logging.getLogger(__name__)\n\n\nclass AppMethods:\n    \"\"\"Methods for extracting app details with 65+ fields.\"\"\"\n    def __init__(self, http_client: str = None):\n        \"\"\"Initialize AppMethods with scraper and parser.\n        \n        Args:\n            http_client: Optional HTTP client name\n        \"\"\"\n        self.scraper = AppScraper(http_client=http_client)\n        self.parser = AppParser()\n\n    @comprehensive_error_handler()\n    def app_analyze(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> Dict:\n        \"\"\"Get complete app data with all 65+ fields.\n        \n        Args:\n            app_id: Google Play app ID\n            lang: Language code\n            country: Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n            \n        Returns:\n            Dictionary with all app data or None if app not found 
after retries\n            \n        Raises:\n            InvalidAppIdError: If app_id is invalid\n        \"\"\"\n        if not app_id or not isinstance(app_id, str):\n            raise InvalidAppIdError(Config.ERROR_MESSAGES[\"INVALID_APP_ID\"])\n        \n        dataset = self.scraper.scrape_play_store_data(app_id, lang, country)\n        app_details = self.parser.parse_app_data(dataset, app_id, self.scraper, assets)\n        return self.parser.format_app_data(app_details)\n\n    @comprehensive_error_handler()\n    def app_get_field(self, app_id: str, field: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> Any:\n        \"\"\"Get single field value from app data.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to retrieve\n            lang: Language code\n            country: Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n            \n        Returns:\n            Value of the requested field\n        \"\"\"\n        return self.app_analyze(app_id, lang, country, assets).get(field)\n\n    @comprehensive_error_handler()\n    def app_get_fields(self, app_id: str, fields: List[str], lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> Dict[str, Any]:\n        \"\"\"Get multiple field values from app data.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names to retrieve\n            lang: Language code\n            country: Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n            \n        Returns:\n            Dictionary with requested fields and values\n        \"\"\"\n        data = self.app_analyze(app_id, lang, country, assets)\n        return {field: data.get(field) for field in fields}\n\n    @safe_print()\n    def app_print_field(self, app_id: str, field: str, lang: str = 
Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> None:\n        \"\"\"Print single field value to console.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to print\n            lang: Language code\n            country: Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n        \"\"\"\n        value = self.app_get_field(app_id, field, lang, country, assets)\n        try:\n            print(f\"{field}: {value}\")\n        except UnicodeEncodeError:\n            print(f\"{field}: {repr(value)}\")\n\n    @safe_print()\n    def app_print_fields(self, app_id: str, fields: List[str], lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> None:\n        \"\"\"Print multiple field values to console.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names to print\n            lang: Language code\n            country: Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n        \"\"\"\n        data = self.app_get_fields(app_id, fields, lang, country, assets)\n        for field, value in data.items():\n            try:\n                print(f\"{field}: {value}\")\n            except UnicodeEncodeError:\n                print(f\"{field}: {repr(value)}\")\n\n    @safe_print()\n    def app_print_all(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, assets: str = None) -> None:\n        \"\"\"Print all app data as JSON to console.\n        \n        Args:\n            app_id: Google Play app ID\n            lang: Language code\n            country: Country code\n            assets: Asset size (SMALL, MEDIUM, LARGE, ORIGINAL)\n        \"\"\"\n        data = self.app_analyze(app_id, lang, country, assets)\n        try:\n            print(json.dumps(data, indent=2, ensure_ascii=False))\n        except 
UnicodeEncodeError:\n            print(json.dumps(data, indent=2, ensure_ascii=True))\n\n\nclass SearchMethods:\n    \"\"\"Methods for searching apps by keyword.\"\"\"\n    def __init__(self, http_client: str = None):\n        \"\"\"Initialize SearchMethods with scraper and parser.\n        \n        Args:\n            http_client: Optional HTTP client name\n        \"\"\"\n        self.scraper = SearchScraper(http_client=http_client)\n        self.parser = SearchParser()\n\n    @comprehensive_error_handler(return_empty=True)\n    def search_analyze(self, query: str, count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict]:\n        \"\"\"Search for apps and get complete results with pagination support.\n        \n        Args:\n            query: Search query string\n            count: Number of results to return\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of dictionaries containing app data\n            \n        Raises:\n            InvalidAppIdError: If query is invalid\n        \"\"\"\n        if not query or not isinstance(query, str):\n            raise InvalidAppIdError(Config.ERROR_MESSAGES[\"INVALID_QUERY\"])\n        \n        dataset = self.scraper.scrape_play_store_data(query, count, lang, country)\n        raw_results = self.parser.parse_search_results(dataset, count)\n        return [self.parser.format_search_result(result) for result in raw_results]\n\n    @comprehensive_error_handler(return_empty=True)\n    def search_get_field(self, query: str, field: str, count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Any]:\n        \"\"\"Get single field from all search results.\n        \n        Args:\n            query: Search query string\n            field: Field name to retrieve\n            count: Number of results\n            
lang: Language code\n            country: Country code\n            \n        Returns:\n            List of field values from all results\n        \"\"\"\n        results = self.search_analyze(query, count, lang, country)\n        return [app.get(field) for app in results]\n\n    @comprehensive_error_handler(return_empty=True)\n    def search_get_fields(self, query: str, fields: List[str], count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict[str, Any]]:\n        \"\"\"Get multiple fields from all search results.\n        \n        Args:\n            query: Search query string\n            fields: List of field names to retrieve\n            count: Number of results\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of dictionaries with requested fields\n        \"\"\"\n        results = self.search_analyze(query, count, lang, country)\n        return [{field: app.get(field) for field in fields} for app in results]\n\n    @safe_print()\n    def search_print_field(self, query: str, field: str, count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print single field from all search results.\n        \n        Args:\n            query: Search query string\n            field: Field name to print\n            count: Number of results\n            lang: Language code\n            country: Country code\n        \"\"\"\n        values = self.search_get_field(query, field, count, lang, country)\n        for i, value in enumerate(values):\n            try:\n                print(f\"{i+1}. {field}: {value}\")\n            except UnicodeEncodeError:\n                print(f\"{i+1}. 
{field}: {repr(value)}\")\n\n    @safe_print()\n    def search_print_fields(self, query: str, fields: List[str], count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print multiple fields from all search results.\n        \n        Args:\n            query: Search query string\n            fields: List of field names to print\n            count: Number of results\n            lang: Language code\n            country: Country code\n        \"\"\"\n        data = self.search_get_fields(query, fields, count, lang, country)\n        for i, app_data in enumerate(data):\n            try:\n                field_str = ', '.join(f'{field}: {value}' for field, value in app_data.items())\n                print(f\"{i+1}. {field_str}\")\n            except UnicodeEncodeError:\n                field_str = ', '.join(f'{field}: {repr(value)}' for field, value in app_data.items())\n                print(f\"{i+1}. {field_str}\")\n\n    @safe_print()\n    def search_print_all(self, query: str, count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print all search results as JSON.\n        \n        Args:\n            query: Search query string\n            count: Number of results\n            lang: Language code\n            country: Country code\n        \"\"\"\n        results = self.search_analyze(query, count, lang, country)\n        for result in results:\n            try:\n                print(json.dumps(result, indent=2, ensure_ascii=False))\n            except UnicodeEncodeError:\n                print(json.dumps(result, indent=2, ensure_ascii=True))\n\n\nclass ReviewsMethods:\n    \"\"\"Methods for extracting user reviews and ratings.\"\"\"\n    def __init__(self, http_client: str = None):\n        \"\"\"Initialize ReviewsMethods with scraper and parser.\n        \n        Args:\n          
  http_client: Optional HTTP client name\n        \"\"\"\n        self.scraper = ReviewsScraper(http_client=http_client)\n        self.parser = ReviewsParser()\n\n    @comprehensive_error_handler(return_empty=True)\n    def reviews_analyze(self, app_id: str, count: int = Config.DEFAULT_REVIEWS_COUNT, lang: str = Config.DEFAULT_LANGUAGE, \n                       country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> List[Dict]:\n        \"\"\"Get user reviews for an app.\n        \n        Args:\n            app_id: Google Play app ID\n            count: Number of reviews to fetch\n            lang: Language code\n            country: Country code\n            sort: Sort order (NEWEST, RELEVANT, RATING)\n            \n        Returns:\n            List of review dictionaries\n            \n        Raises:\n            InvalidAppIdError: If app_id is invalid\n        \"\"\"\n        if not app_id or not isinstance(app_id, str):\n            raise InvalidAppIdError(Config.ERROR_MESSAGES[\"INVALID_APP_ID\"])\n            \n        if count <= 0:\n            return []\n            \n        try:\n            dataset = self.scraper.scrape_reviews_data(app_id, count, lang, country, sort)\n            reviews_data = self.parser.parse_multiple_responses(dataset)\n        except Exception as e:\n            logger.error(Config.ERROR_MESSAGES[\"REVIEWS_SCRAPE_FAILED\"].format(app_id=app_id, error=e))\n            raise\n\n        return self.parser.format_reviews_data(reviews_data)\n\n    @comprehensive_error_handler(return_empty=True)\n    def reviews_get_field(self, app_id: str, field: str, count: int = Config.DEFAULT_REVIEWS_COUNT, \n                         lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> List[Any]:\n        \"\"\"Get single field from all reviews.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to retrieve\n  
          count: Number of reviews\n            lang: Language code\n            country: Country code\n            sort: Sort order\n            \n        Returns:\n            List of field values from all reviews\n        \"\"\"\n        reviews_data = self.reviews_analyze(app_id, count, lang, country, sort)\n        return [review.get(field) for review in reviews_data]\n\n    @comprehensive_error_handler(return_empty=True)\n    def reviews_get_fields(self, app_id: str, fields: List[str], count: int = Config.DEFAULT_REVIEWS_COUNT,\n                          lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> List[Dict[str, Any]]:\n        \"\"\"Get multiple fields from all reviews.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names to retrieve\n            count: Number of reviews\n            lang: Language code\n            country: Country code\n            sort: Sort order\n            \n        Returns:\n            List of dictionaries with requested fields\n        \"\"\"\n        reviews_data = self.reviews_analyze(app_id, count, lang, country, sort)\n        return [{field: review.get(field) for field in fields} for review in reviews_data]\n\n    @safe_print()\n    def reviews_print_field(self, app_id: str, field: str, count: int = Config.DEFAULT_REVIEWS_COUNT,\n                           lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> None:\n        \"\"\"Print single field from all reviews.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to print\n            count: Number of reviews\n            lang: Language code\n            country: Country code\n            sort: Sort order\n        \"\"\"\n        field_values = self.reviews_get_field(app_id, field, count, lang, country, sort)\n        for i, value in 
enumerate(field_values):\n            try:\n                print(f\"{i+1}. {field}: {value}\")\n            except UnicodeEncodeError:\n                print(f\"{i+1}. {field}: {repr(value)}\")\n\n    @safe_print()\n    def reviews_print_fields(self, app_id: str, fields: List[str], count: int = Config.DEFAULT_REVIEWS_COUNT,\n                            lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> None:\n        \"\"\"Print multiple fields from all reviews.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names to print\n            count: Number of reviews\n            lang: Language code\n            country: Country code\n            sort: Sort order\n        \"\"\"\n        reviews_data = self.reviews_get_fields(app_id, fields, count, lang, country, sort)\n        for i, review in enumerate(reviews_data):\n            for field, value in review.items():\n                try:\n                    print(f\"{field}: {value}\")\n                except UnicodeEncodeError:\n                    print(f\"{field}: {repr(value)}\")\n\n    @safe_print()\n    def reviews_print_all(self, app_id: str, count: int = Config.DEFAULT_REVIEWS_COUNT, lang: str = Config.DEFAULT_LANGUAGE,\n                         country: str = Config.DEFAULT_COUNTRY, sort: str = Config.DEFAULT_REVIEWS_SORT) -> None:\n        \"\"\"Print all reviews as JSON.\n        \n        Args:\n            app_id: Google Play app ID\n            count: Number of reviews\n            lang: Language code\n            country: Country code\n            sort: Sort order\n        \"\"\"\n        reviews_data = self.reviews_analyze(app_id, count, lang, country, sort)\n        try:\n            print(json.dumps(reviews_data, indent=2, ensure_ascii=False))\n        except UnicodeEncodeError:\n            print(json.dumps(reviews_data, indent=2, ensure_ascii=True))\n\n\nclass DeveloperMethods:\n 
   \"\"\"Methods for getting all apps from a developer.\"\"\"\n    def __init__(self, http_client: str = None):\n        \"\"\"Initialize DeveloperMethods with scraper and parser.\n        \n        Args:\n            http_client: Optional HTTP client name\n        \"\"\"\n        self.scraper = DeveloperScraper(http_client=http_client)\n        self.parser = DeveloperParser()\n\n    @comprehensive_error_handler(return_empty=True)\n    def developer_analyze(self, dev_id: str, count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict]:\n        \"\"\"Get all apps from a developer.\n        \n        Args:\n            dev_id: Developer ID (numeric or string)\n            count: Number of apps to return\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of app dictionaries\n            \n        Raises:\n            InvalidAppIdError: If dev_id is invalid\n        \"\"\"\n        if not dev_id or not isinstance(dev_id, str):\n            raise InvalidAppIdError(Config.ERROR_MESSAGES[\"INVALID_DEV_ID\"])\n            \n        dataset = self.scraper.scrape_play_store_data(dev_id, lang, country)\n        apps_data = self.parser.parse_developer_data(dataset, dev_id)\n        return self.parser.format_developer_data(apps_data)[:count]\n\n    @comprehensive_error_handler(return_empty=True)\n    def developer_get_field(self, dev_id: str, field: str, count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Any]:\n        \"\"\"Get single field from all developer apps.\n        \n        Args:\n            dev_id: Developer ID\n            field: Field name to retrieve\n            count: Number of apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of field values from all apps\n        
\"\"\"\n        results = self.developer_analyze(dev_id, count, lang, country)\n        return [app.get(field) for app in results]\n\n    @comprehensive_error_handler(return_empty=True)\n    def developer_get_fields(self, dev_id: str, fields: List[str], count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict[str, Any]]:\n        \"\"\"Get multiple fields from all developer apps.\n        \n        Args:\n            dev_id: Developer ID\n            fields: List of field names to retrieve\n            count: Number of apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of dictionaries with requested fields\n        \"\"\"\n        results = self.developer_analyze(dev_id, count, lang, country)\n        return [{field: app.get(field) for field in fields} for app in results]\n\n    @safe_print()\n    def developer_print_field(self, dev_id: str, field: str, count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print single field from all developer apps.\n        \n        Args:\n            dev_id: Developer ID\n            field: Field name to print\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        values = self.developer_get_field(dev_id, field, count, lang, country)\n        for i, value in enumerate(values):\n            try:\n                print(f\"{i+1}. {field}: {value}\")\n            except UnicodeEncodeError:\n                print(f\"{i+1}. 
{field}: {repr(value)}\")\n\n    @safe_print()\n    def developer_print_fields(self, dev_id: str, fields: List[str], count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print multiple fields from all developer apps.\n        \n        Args:\n            dev_id: Developer ID\n            fields: List of field names to print\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        data = self.developer_get_fields(dev_id, fields, count, lang, country)\n        for i, app_data in enumerate(data):\n            try:\n                field_str = ', '.join(f'{field}: {value}' for field, value in app_data.items())\n                print(f\"{i+1}. {field_str}\")\n            except UnicodeEncodeError:\n                field_str = ', '.join(f'{field}: {repr(value)}' for field, value in app_data.items())\n                print(f\"{i+1}. {field_str}\")\n\n    @safe_print()\n    def developer_print_all(self, dev_id: str, count: int = Config.DEFAULT_DEVELOPER_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print all developer apps as JSON.\n        \n        Args:\n            dev_id: Developer ID\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        results = self.developer_analyze(dev_id, count, lang, country)\n        try:\n            print(json.dumps(results, indent=2, ensure_ascii=False))\n        except UnicodeEncodeError:\n            print(json.dumps(results, indent=2, ensure_ascii=True))\n\n\nclass SimilarMethods:\n    \"\"\"Methods for finding similar/competitor apps.\"\"\"\n    def __init__(self, http_client: str = None):\n        \"\"\"Initialize SimilarMethods with scraper and parser.\n        \n        Args:\n            http_client: Optional HTTP client name\n        
\"\"\"\n        self.scraper = SimilarScraper(http_client=http_client)\n        self.parser = SimilarParser()\n\n    @comprehensive_error_handler(return_empty=True)\n    def similar_analyze(self, app_id: str, count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict]:\n        \"\"\"Get similar/competitor apps.\n        \n        Args:\n            app_id: Google Play app ID\n            count: Number of similar apps to return\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of similar app dictionaries\n            \n        Raises:\n            InvalidAppIdError: If app_id is invalid\n        \"\"\"\n        if not app_id or not isinstance(app_id, str):\n            raise InvalidAppIdError(Config.ERROR_MESSAGES[\"INVALID_APP_ID\"])\n            \n        dataset = self.scraper.scrape_play_store_data(app_id, lang, country)\n        apps_data = self.parser.parse_similar_data(dataset)\n        return self.parser.format_similar_data(apps_data)[:count]\n\n    @comprehensive_error_handler(return_empty=True)\n    def similar_get_field(self, app_id: str, field: str, count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Any]:\n        \"\"\"Get single field from all similar apps.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to retrieve\n            count: Number of similar apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of field values from all similar apps\n        \"\"\"\n        results = self.similar_analyze(app_id, count, lang, country)\n        return [app.get(field) for app in results]\n\n    @comprehensive_error_handler(return_empty=True)\n    def similar_get_fields(self, app_id: str, fields: List[str], count: int = 
Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict[str, Any]]:\n        \"\"\"Get multiple fields from all similar apps.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names to retrieve\n            count: Number of similar apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of dictionaries with requested fields\n        \"\"\"\n        results = self.similar_analyze(app_id, count, lang, country)\n        return [{field: app.get(field) for field in fields} for app in results]\n\n    @safe_print()\n    def similar_print_field(self, app_id: str, field: str, count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print single field from all similar apps.\n        \n        Args:\n            app_id: Google Play app ID\n            field: Field name to print\n            count: Number of similar apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        values = self.similar_get_field(app_id, field, count, lang, country)\n        for i, value in enumerate(values):\n            try:\n                print(f\"{i+1}. {field}: {value}\")\n            except UnicodeEncodeError:\n                print(f\"{i+1}. 
{field}: {repr(value)}\")\n\n    @safe_print()\n    def similar_print_fields(self, app_id: str, fields: List[str], count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print multiple fields from all similar apps.\n        \n        Args:\n            app_id: Google Play app ID\n            fields: List of field names to print\n            count: Number of similar apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        data = self.similar_get_fields(app_id, fields, count, lang, country)\n        for i, app_data in enumerate(data):\n            try:\n                field_str = ', '.join(f'{field}: {value}' for field, value in app_data.items())\n                print(f\"{i+1}. {field_str}\")\n            except UnicodeEncodeError:\n                field_str = ', '.join(f'{field}: {repr(value)}' for field, value in app_data.items())\n                print(f\"{i+1}. 
{field_str}\")\n\n    @safe_print()\n    def similar_print_all(self, app_id: str, count: int = Config.DEFAULT_SIMILAR_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print all similar apps as JSON.\n        \n        Args:\n            app_id: Google Play app ID\n            count: Number of similar apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        results = self.similar_analyze(app_id, count, lang, country)\n        try:\n            print(json.dumps(results, indent=2, ensure_ascii=False))\n        except UnicodeEncodeError:\n            print(json.dumps(results, indent=2, ensure_ascii=True))\n\n\nclass ListMethods:\n    \"\"\"Methods for getting top charts (free, paid, grossing).\"\"\"\n    def __init__(self, http_client: str = None):\n        \"\"\"Initialize ListMethods with scraper and parser.\n        \n        Args:\n            http_client: Optional HTTP client name\n        \"\"\"\n        self.scraper = ListScraper(http_client=http_client)\n        self.parser = ListParser()\n\n    @comprehensive_error_handler(return_empty=True)\n    def list_analyze(self, collection: str = Config.DEFAULT_LIST_COLLECTION, category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict]:\n        \"\"\"Get top charts (top free, top paid, top grossing).\n        \n        Args:\n            collection: Collection type (TOP_FREE, TOP_PAID, TOP_GROSSING)\n            category: App category\n            count: Number of apps to return\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of app dictionaries from top charts\n        \"\"\"\n        dataset = self.scraper.scrape_play_store_data(collection, category, count, lang, country)\n        apps_data = self.parser.parse_list_data(dataset, 
count)\n        return self.parser.format_list_data(apps_data)\n\n    @comprehensive_error_handler(return_empty=True)\n    def list_get_field(self, collection: str, field: str, category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Any]:\n        \"\"\"Get single field from all list apps.\n        \n        Args:\n            collection: Collection type\n            field: Field name to retrieve\n            category: App category\n            count: Number of apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of field values from all apps\n        \"\"\"\n        results = self.list_analyze(collection, category, count, lang, country)\n        return [app.get(field) for app in results]\n\n    @comprehensive_error_handler(return_empty=True)\n    def list_get_fields(self, collection: str, fields: List[str], category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[Dict[str, Any]]:\n        \"\"\"Get multiple fields from all list apps.\n        \n        Args:\n            collection: Collection type\n            fields: List of field names to retrieve\n            category: App category\n            count: Number of apps\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of dictionaries with requested fields\n        \"\"\"\n        results = self.list_analyze(collection, category, count, lang, country)\n        return [{field: app.get(field) for field in fields} for app in results]\n\n    @safe_print()\n    def list_print_field(self, collection: str, field: str, category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = 
Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print single field from all list apps.\n        \n        Args:\n            collection: Collection type\n            field: Field name to print\n            category: App category\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        values = self.list_get_field(collection, field, category, count, lang, country)\n        for i, value in enumerate(values):\n            try:\n                print(f\"{i+1}. {field}: {value}\")\n            except UnicodeEncodeError:\n                print(f\"{i+1}. {field}: {repr(value)}\")\n\n    @safe_print()\n    def list_print_fields(self, collection: str, fields: List[str], category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print multiple fields from all list apps.\n        \n        Args:\n            collection: Collection type\n            fields: List of field names to print\n            category: App category\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        data = self.list_get_fields(collection, fields, category, count, lang, country)\n        for i, app_data in enumerate(data):\n            try:\n                field_str = ', '.join(f'{field}: {value}' for field, value in app_data.items())\n                print(f\"{i+1}. {field_str}\")\n            except UnicodeEncodeError:\n                field_str = ', '.join(f'{field}: {repr(value)}' for field, value in app_data.items())\n                print(f\"{i+1}. 
{field_str}\")\n\n    @safe_print()\n    def list_print_all(self, collection: str = Config.DEFAULT_LIST_COLLECTION, category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print all list apps as JSON.\n        \n        Args:\n            collection: Collection type\n            category: App category\n            count: Number of apps\n            lang: Language code\n            country: Country code\n        \"\"\"\n        results = self.list_analyze(collection, category, count, lang, country)\n        try:\n            print(json.dumps(results, indent=2, ensure_ascii=False))\n        except UnicodeEncodeError:\n            print(json.dumps(results, indent=2, ensure_ascii=True))\n\n\nclass SuggestMethods:\n    \"\"\"Methods for getting search suggestions and autocomplete.\"\"\"\n    def __init__(self, http_client: str = None):\n        \"\"\"Initialize SuggestMethods with scraper and parser.\n        \n        Args:\n            http_client: Optional HTTP client name\n        \"\"\"\n        self.scraper = SuggestScraper(http_client=http_client)\n        self.parser = SuggestParser()\n\n    @comprehensive_error_handler(return_empty=True)\n    def suggest_analyze(self, term: str, count: int = Config.DEFAULT_SUGGEST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> List[str]:\n        \"\"\"Get search suggestions for a term.\n        \n        Args:\n            term: Search term\n            count: Number of suggestions to return\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            List of suggestion strings\n            \n        Raises:\n            InvalidAppIdError: If term is invalid\n        \"\"\"\n        if not term or not isinstance(term, str):\n            raise InvalidAppIdError(Config.ERROR_MESSAGES[\"INVALID_QUERY\"])\n        
\n        dataset = self.scraper.scrape_suggestions(term, lang, country)\n        suggestions = self.parser.parse_suggestions(dataset)\n        return self.parser.format_suggestions(suggestions[:count])\n\n    @comprehensive_error_handler()\n    def suggest_nested(self, term: str, count: int = Config.DEFAULT_SUGGEST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> Dict[str, List[str]]:\n        \"\"\"Get nested suggestions (suggestions for suggestions).\n        \n        Args:\n            term: Search term\n            count: Number of suggestions per level\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            Dictionary mapping suggestions to their nested suggestions\n            \n        Raises:\n            InvalidAppIdError: If term is invalid\n        \"\"\"\n        if not term or not isinstance(term, str):\n            raise InvalidAppIdError(Config.ERROR_MESSAGES[\"INVALID_QUERY\"])\n        \n        first_level = self.suggest_analyze(term, count, lang, country)\n        results = {}\n        for suggestion in first_level:\n            second_level = self.suggest_analyze(suggestion, count, lang, country)\n            results[suggestion] = second_level\n        return results\n\n    @safe_print()\n    def suggest_print_all(self, term: str, count: int = Config.DEFAULT_SUGGEST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print all suggestions as JSON.\n        \n        Args:\n            term: Search term\n            count: Number of suggestions\n            lang: Language code\n            country: Country code\n        \"\"\"\n        suggestions = self.suggest_analyze(term, count, lang, country)\n        try:\n            print(json.dumps(suggestions, indent=2, ensure_ascii=False))\n        except UnicodeEncodeError:\n            print(json.dumps(suggestions, indent=2, ensure_ascii=True))\n\n    
@safe_print()\n    def suggest_print_nested(self, term: str, count: int = Config.DEFAULT_SUGGEST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> None:\n        \"\"\"Print nested suggestions as JSON.\n        \n        Args:\n            term: Search term\n            count: Number of suggestions per level\n            lang: Language code\n            country: Country code\n        \"\"\"\n        nested = self.suggest_nested(term, count, lang, country)\n        try:\n            print(json.dumps(nested, indent=2, ensure_ascii=False))\n        except UnicodeEncodeError:\n            print(json.dumps(nested, indent=2, ensure_ascii=True))"
  },
  {
    "path": "gplay_scraper/core/gplay_parser.py",
    "content": "\"\"\"Parser classes for extracting and formatting data from raw responses.\n\nThis module contains 7 parser classes that handle JSON/HTML parsing and\ndata formatting for all scraping methods.\n\"\"\"\n\nimport json\nimport re\nfrom datetime import datetime, timezone\nfrom typing import Dict, Any, List, Optional, Tuple\nfrom ..models.element_specs import ElementSpecs, nested_lookup, format_image_url\nfrom ..utils.helpers import clean_json_string, alternative_json_clean, calculate_app_age, calculate_daily_installs, calculate_monthly_installs, tamp_to_date, get_publisher_country, pho_count, add_count\nfrom ..config import Config\nfrom ..exceptions import DataParsingError\nfrom ..utils.error_handling import handle_parsing_errors\n\nclass AppParser:\n    \"\"\"Parser for extracting and formatting app data.\"\"\"\n    @handle_parsing_errors()\n    def parse_app_data(self, dataset: Dict, app_id: str, scraper=None, assets: str = None) -> Dict[str, Any]:\n        \"\"\"Parse raw app data from dataset, with a fallback request for missing rating and release-date fields.\n        \n        Args:\n            dataset: Raw dataset from scraper\n            app_id: Google Play app ID\n            scraper: AppScraper instance for fallback requests\n            assets: Optional value forwarded to format_image_url for icon and screenshot URLs\n            \n        Returns:\n            Dictionary with parsed app details\n            \n        Raises:\n            DataParsingError: If parsing fails\n        \"\"\"\n        ds5_data = dataset.get(\"ds:5\", \"\")\n        if not ds5_data:\n            raise DataParsingError(Config.ERROR_MESSAGES[\"NO_DS5_DATA\"])\n        \n        json_str_cleaned = clean_json_string(ds5_data)\n        try:\n            data = json.loads(json_str_cleaned)\n        except json.JSONDecodeError as e:\n            try:\n                alternative_cleaned = alternative_json_clean(ds5_data)\n                data = json.loads(alternative_cleaned)\n            except Exception:\n                raise 
DataParsingError(Config.ERROR_MESSAGES[\"JSON_PARSE_FAILED\"].format(error=str(e))) from e\n\n        app_details = {}\n        for key, spec in ElementSpecs.App.items():\n            value = spec.extract_content(data.get(\"data\", data))\n            if key in [\"icon\", \"headerImage\", \"videoImage\"] and value:\n                app_details[key] = format_image_url(value, assets)\n            elif key == \"screenshots\" and value:\n                app_details[key] = [format_image_url(url, assets) for url in value if url]\n            else:\n                app_details[key] = value\n\n        app_details['appId'] = app_id\n        app_details['url'] = f\"{Config.PLAY_STORE_BASE_URL}{Config.APP_DETAILS_ENDPOINT}?id={app_id}\"\n        app_details['publisherCountry'] = get_publisher_country(app_details.get('developerPhone'), app_details.get('developerAddress'))\n\n        rating_fields = [\"released\", \"score\", \"ratings\", \"reviews\", \"histogram\"]\n        missing_rating_fields = []\n        for key in rating_fields:\n            value = app_details.get(key)\n            if key == \"histogram\":\n                if not value or (isinstance(value, list) and all(x == 0 for x in value)):\n                    missing_rating_fields.append(key)\n            elif not value:\n                missing_rating_fields.append(key)\n        \n        if missing_rating_fields and scraper:\n            try:\n                country_code = None\n                phone = app_details.get(\"developerPhone\")\n                if phone:\n                    country_code = pho_count(phone)\n                \n                if not country_code:\n                    address = app_details.get(\"developerAddress\")\n                    if address:\n                        country_code = add_count(address)\n                \n                if country_code:\n                    fallback_dataset = scraper.fetch_fallback_data(app_id, gl=country_code)\n                    suffix = 
f\"fallback_{country_code}\"\n                else:\n                    fallback_dataset = scraper.fetch_fallback_data(app_id, no_locale=True)\n                    suffix = \"fallback_no_locale\"\n                \n                if fallback_dataset and fallback_dataset.get(\"ds:5\"):\n                    fallback_cleaned = clean_json_string(fallback_dataset[\"ds:5\"])\n                    try:\n                        fallback_data = json.loads(fallback_cleaned)\n                        \n                        for field in missing_rating_fields:\n                            if field in ElementSpecs.App:\n                                spec = ElementSpecs.App[field]\n                                fallback_value = spec.extract_content(fallback_data.get(\"data\", fallback_data))\n                                if fallback_value:\n                                    app_details[field] = fallback_value\n                    except:\n                        pass\n            except:\n                pass\n\n        if not app_details.get(\"score\"):\n            app_details[\"score\"] = 0\n        if not app_details.get(\"ratings\"):\n            app_details[\"ratings\"] = 0\n        if not app_details.get(\"reviews\"):\n            app_details[\"reviews\"] = 0\n        if not app_details.get(\"installs\"):\n            app_details[\"installs\"] = 0\n        if not app_details.get(\"minInstalls\"):\n            app_details[\"minInstalls\"] = 0\n\n        current_date = datetime.now(timezone.utc)\n        release_date_str = app_details.get(\"released\")\n        if release_date_str:\n            app_details[\"appAge\"] = calculate_app_age(release_date_str, current_date)\n            app_details[\"dailyInstalls\"] = calculate_daily_installs(app_details.get(\"installs\"), release_date_str, current_date)\n            app_details[\"minDailyInstalls\"] = calculate_daily_installs(app_details.get(\"minInstalls\"), release_date_str, current_date)\n            
app_details[\"realDailyInstalls\"] = calculate_daily_installs(app_details.get(\"realInstalls\"), release_date_str, current_date)\n            app_details[\"monthlyInstalls\"] = calculate_monthly_installs(app_details.get(\"installs\"), release_date_str, current_date)\n            app_details[\"minMonthlyInstalls\"] = calculate_monthly_installs(app_details.get(\"minInstalls\"), release_date_str, current_date)\n            app_details[\"realMonthlyInstalls\"] = calculate_monthly_installs(app_details.get(\"realInstalls\"), release_date_str, current_date)\n        else:\n            metric_keys = [\n                \"appAge\", \"dailyInstalls\", \"minDailyInstalls\", \"realDailyInstalls\",\n                \"monthlyInstalls\", \"minMonthlyInstalls\", \"realMonthlyInstalls\"\n            ]\n            for key in metric_keys:\n                app_details[key] = 0\n\n        return app_details\n\n    @handle_parsing_errors()\n    def format_app_data(self, details: dict) -> dict:\n        \"\"\"Format parsed app data into final structure.\n        \n        Args:\n            details: Parsed app details\n            \n        Returns:\n            Formatted dictionary with all app fields\n        \"\"\"\n        return {\n            \"appId\": details.get(\"appId\"),\n            \"title\": details.get(\"title\"),\n            \"summary\": details.get(\"summary\"),\n            \"description\": details.get(\"description\"),\n            \"genre\": details.get(\"genre\"),\n            \"genreId\": details.get(\"genreId\"),\n            \"categories\": details.get(\"categories\"),\n            \"available\": details.get(\"available\"),\n            \"released\": details.get(\"released\"),\n            \"appAgeDays\": details.get(\"appAge\"),\n            \"lastUpdated\": tamp_to_date(details.get(\"updated\")),\n            \"icon\": details.get(\"icon\"),\n            \"headerImage\": details.get(\"headerImage\"),\n            \"screenshots\": 
details.get(\"screenshots\"),\n            \"video\": details.get(\"video\"),\n            \"videoImage\": details.get(\"videoImage\"),\n            \"installs\": details.get(\"installs\"),\n            \"minInstalls\": details.get(\"minInstalls\"),\n            \"realInstalls\": details.get(\"realInstalls\"),\n            \"dailyInstalls\": details.get(\"dailyInstalls\"),\n            \"minDailyInstalls\": details.get(\"minDailyInstalls\"),\n            \"realDailyInstalls\": details.get(\"realDailyInstalls\"),\n            \"monthlyInstalls\": details.get(\"monthlyInstalls\"),\n            \"minMonthlyInstalls\": details.get(\"minMonthlyInstalls\"),\n            \"realMonthlyInstalls\": details.get(\"realMonthlyInstalls\"),\n            \"score\": details.get(\"score\"),\n            \"ratings\": details.get(\"ratings\"),\n            \"reviews\": details.get(\"reviews\"),\n            \"histogram\": details.get(\"histogram\"),\n            \"adSupported\": details.get(\"adSupported\"),\n            \"containsAds\": details.get(\"containsAds\"),\n            \"version\": details.get(\"version\"),\n            \"androidVersion\": details.get(\"androidVersion\"),\n            \"maxAndroidApi\": details.get(\"maxandroidapi\"),\n            \"minAndroidApi\": details.get(\"minandroidapi\"),\n            \"appBundle\": details.get(\"appBundle\"),\n            \"contentRating\": details.get(\"contentRating\"),\n            \"contentRatingDescription\": details.get(\"contentRatingDescription\"),\n            \"whatsNew\": details.get(\"whatsNew\"),\n            \"permissions\": details.get(\"permissions\"),\n            \"dataSafety\": details.get(\"dataSafety\"),\n            \"price\": details.get(\"price\"),\n            \"currency\": details.get(\"currency\"),\n            \"free\": details.get(\"free\"),\n            \"offersIAP\": details.get(\"offersIAP\"),\n            \"inAppProductPrice\": details.get(\"inAppProductPrice\"),\n            \"sale\": 
details.get(\"sale\"),\n            \"originalPrice\": details.get(\"originalPrice\"),\n            \"developer\": details.get(\"developer\"),\n            \"developerId\": details.get(\"developerId\"),\n            \"developerEmail\": details.get(\"developerEmail\"),\n            \"developerWebsite\": details.get(\"developerWebsite\"),\n            \"developerAddress\": details.get(\"developerAddress\"),\n            \"developerPhone\": details.get(\"developerPhone\"),\n            \"publisherCountry\": details.get(\"publisherCountry\"),\n            \"privacyPolicy\": details.get(\"privacyPolicy\"),\n            \"appUrl\": details.get(\"url\"),\n        }\n\n\nclass SearchParser:\n    \"\"\"Parser for extracting and formatting search results.\"\"\"\n    \n    @handle_parsing_errors(return_empty=True)\n    def parse_search_results(self, dataset: Dict, count: int) -> List[Dict]:\n        \"\"\"Parse search results from dataset.\n        \n        Args:\n            dataset: Raw dataset from scraper\n            count: Maximum number of results to parse\n            \n        Returns:\n            List of parsed search result dictionaries\n        \"\"\"\n        if \"ds:1\" not in dataset:\n            return []\n        \n        search_data = nested_lookup(dataset.get(\"ds:1\", {}), [0, 1, 0, 0, 0])\n        \n        if not search_data:\n            return []\n        \n        results = []\n        n_apps = min(len(search_data), count)\n        for i in range(n_apps):\n            app = self.extract_search_result(search_data[i])\n            if app:\n                results.append(app)\n        \n        return results[:count]\n\n    @handle_parsing_errors()\n    def extract_search_result(self, data) -> Dict:\n        \"\"\"Extract single search result from raw data.\n        \n        Args:\n            data: Raw search result data\n            \n        Returns:\n            Dictionary with extracted search result or None if extraction fails\n        
\"\"\"\n        try:\n            result = {}\n            for key, spec in ElementSpecs.Search.items():\n                result[key] = spec.extract_content(data)\n            return result\n        except Exception:\n            return None\n\n    @handle_parsing_errors()\n    def format_search_result(self, result: dict) -> dict:\n        \"\"\"Format parsed search result into final structure.\n        \n        Args:\n            result: Parsed search result\n            \n        Returns:\n            Formatted dictionary with search result fields\n        \"\"\"\n        return {\n            \"appId\": result.get(\"appId\"),\n            \"title\": result.get(\"title\"),\n            \"description\": result.get(\"summary\"),\n            \"icon\": result.get(\"icon\"),\n            \"developer\": result.get(\"developer\"),\n            \"score\": result.get(\"score\"),\n            \"scoreText\": result.get(\"scoreText\"),\n            \"currency\": result.get(\"currency\"),\n            \"price\": result.get(\"price\"),\n            \"free\": result.get(\"free\"),\n            \"url\": result.get(\"url\"),\n        }\n    \n    @handle_parsing_errors()\n    def extract_pagination_token(self, dataset: Dict) -> str:\n        \"\"\"Extract pagination token from search dataset.\n        \n        Args:\n            dataset: Search dataset\n            \n        Returns:\n            Pagination token or None\n        \"\"\"\n        sections = nested_lookup(dataset.get(\"ds:1\", {}), [0, 1, 0, 0])\n        \n        if not sections:\n            return None\n            \n        for section in sections:\n            if isinstance(section, list) and len(section) > 1:\n                potential_token = nested_lookup(section, [1])\n                if isinstance(potential_token, str):\n                    return potential_token\n        return None\n    \n    @handle_parsing_errors()\n    def parse_html_content(self, html_content: str) -> Dict:\n        \"\"\"Extract 
datasets from search page HTML.\n        \n        Args:\n            html_content: HTML content of search page\n            \n        Returns:\n            Dictionary containing all datasets\n            \n        Raises:\n            DataParsingError: If no datasets found\n        \"\"\"\n        script_regex = re.compile(r\"AF_initDataCallback[\\s\\S]*?</script\")\n        key_regex = re.compile(r\"(ds:.*?)'\")\n        value_regex = re.compile(r\"data:([\\s\\S]*?), sideChannel: \\{\\}\\}\\);</\")\n        \n        matches = script_regex.findall(html_content)\n        dataset = {}\n        \n        for match in matches:\n            key_match = key_regex.findall(match)\n            value_match = value_regex.findall(match)\n            \n            if key_match and value_match:\n                key = key_match[0]\n                try:\n                    value = json.loads(value_match[0])\n                    dataset[key] = value\n                except json.JSONDecodeError:\n                    continue\n        \n        if not dataset:\n            raise DataParsingError(\"No search data found in HTML\")\n        \n        return dataset\n\n\nclass ReviewsParser:\n    \"\"\"Parser for extracting and formatting user reviews.\"\"\"\n    \n    @handle_parsing_errors(return_empty=True)\n    def parse_reviews_response(self, content: str) -> Tuple[List[Dict], Optional[str]]:\n        \"\"\"Parse reviews from API response content.\n        \n        Args:\n            content: Raw API response content\n            \n        Returns:\n            Tuple of (list of review dictionaries, next page token)\n        \"\"\"\n        if not content or not isinstance(content, str):\n            return [], None\n            \n        regex = re.compile(r\"\\)]}'\\n\\n([\\s\\S]+)\")\n        matches = regex.findall(content)\n        \n        if not matches:\n            return [], None\n        \n        try:\n            data = json.loads(matches[0])\n            if not 
data or len(data) == 0 or len(data[0]) < 3:\n                return [], None\n                \n            reviews_data = json.loads(data[0][2])\n            \n            # Handle case where reviews_data is None or empty\n            if not reviews_data:\n                return [], None\n                \n            next_token = None\n            try:\n                if (isinstance(reviews_data, list) and len(reviews_data) >= 2 and \n                    reviews_data[-2] and isinstance(reviews_data[-2], list) and len(reviews_data[-2]) > 0):\n                    potential_token = reviews_data[-2][-1]\n                    if isinstance(potential_token, str):\n                        next_token = potential_token\n            except (IndexError, TypeError, AttributeError):\n                pass\n            \n            # Check if we have actual reviews data\n            if (not isinstance(reviews_data, list) or len(reviews_data) == 0 or \n                not isinstance(reviews_data[0], list) or len(reviews_data[0]) == 0):\n                return [], None\n            \n            reviews = []\n            for review_raw in reviews_data[0]:\n                if review_raw:  # Make sure review_raw is not None\n                    review = self.extract_review_data(review_raw)\n                    if review:\n                        reviews.append(review)\n            \n            return reviews, next_token\n            \n        except (json.JSONDecodeError, IndexError, KeyError, TypeError, AttributeError):\n            return [], None\n\n    @handle_parsing_errors()\n    def extract_review_data(self, review_raw) -> Optional[Dict]:\n        \"\"\"Extract single review from raw data.\n        \n        Args:\n            review_raw: Raw review data array\n            \n        Returns:\n            Dictionary with extracted review data or None if extraction fails\n        \"\"\"\n        try:\n            review = {\n                \"reviewId\": review_raw[0] if 
len(review_raw) > 0 else None,\n                \"userName\": review_raw[1][0] if len(review_raw) > 1 and review_raw[1] else None,\n                \"userImage\": None,\n                \"content\": review_raw[4] if len(review_raw) > 4 else None,\n                \"score\": review_raw[2] if len(review_raw) > 2 else None,\n                \"thumbsUpCount\": review_raw[6] if len(review_raw) > 6 else None,\n                \"at\": datetime.fromtimestamp(review_raw[5][0]).isoformat() if len(review_raw) > 5 and review_raw[5] else None,\n                \"appVersion\": review_raw[10] if len(review_raw) > 10 else None,\n            }\n            try:\n                if len(review_raw) > 1 and review_raw[1] and len(review_raw[1]) > 1 and review_raw[1][1]:\n                    review[\"userImage\"] = review_raw[1][1][3][2]\n            except Exception:\n                pass  # userImage layout varies; leave as None on failure\n            return review\n        except Exception:\n            return None\n\n    @handle_parsing_errors(return_empty=True)\n    def parse_multiple_responses(self, dataset: Dict) -> List[Dict]:\n        \"\"\"Parse multiple review responses.\n        \n        Args:\n            dataset: Dataset containing multiple review responses\n            \n        Returns:\n            List of all parsed reviews\n        \"\"\"\n        if not dataset or not isinstance(dataset, dict):\n            return []\n            \n        responses = dataset.get(\"reviews\", [])\n        if not responses or not isinstance(responses, list):\n            return []\n            \n        all_reviews = []\n        \n        for response in responses:\n            if response and isinstance(response, str):\n                try:\n                    reviews, _ = self.parse_reviews_response(response)\n                    if reviews:  # Only extend if we got actual reviews\n                        all_reviews.extend(reviews)\n                except Exception:\n                    continue  # Skip this response if it fails\n        \n 
       return all_reviews\n\n    @handle_parsing_errors(return_empty=True)\n    def format_reviews_data(self, reviews_data: List[Dict]) -> List[Dict]:\n        \"\"\"Format parsed reviews into final structure.\n        \n        Args:\n            reviews_data: List of parsed reviews\n            \n        Returns:\n            List of formatted review dictionaries\n        \"\"\"\n        formatted_reviews = []\n        \n        for review in reviews_data:\n            formatted_review = {\n                \"reviewId\": review.get(\"reviewId\"),\n                \"userName\": review.get(\"userName\"),\n                \"userImage\": review.get(\"userImage\"),\n                \"score\": review.get(\"score\"),\n                \"content\": review.get(\"content\"),\n                \"thumbsUpCount\": review.get(\"thumbsUpCount\"),\n                \"appVersion\": review.get(\"appVersion\"),\n                \"at\": review.get(\"at\"),\n            }\n            formatted_reviews.append(formatted_review)\n        \n        return formatted_reviews\n\n\nclass DeveloperParser:\n    \"\"\"Parser for extracting and formatting developer apps.\"\"\"\n    \n    @handle_parsing_errors(return_empty=True)\n    def parse_developer_data(self, dataset: Dict, dev_id: str) -> List[Dict]:\n        \"\"\"Parse developer apps from dataset.\n        \n        Args:\n            dataset: Raw dataset from scraper\n            dev_id: Developer ID (numeric or string)\n            \n        Returns:\n            List of parsed app dictionaries\n            \n        Raises:\n            DataParsingError: If parsing fails\n        \"\"\"\n        ds3_data = dataset.get(\"ds:3\", \"\")\n        if not ds3_data:\n            raise DataParsingError(Config.ERROR_MESSAGES[\"NO_DS3_DATA\"])\n        \n        json_str_cleaned = clean_json_string(ds3_data)\n        try:\n            data = json.loads(json_str_cleaned)\n        except json.JSONDecodeError as e:\n            try:\n                
alternative_cleaned = alternative_json_clean(ds3_data)\n                data = json.loads(alternative_cleaned)\n            except Exception:\n                raise DataParsingError(Config.ERROR_MESSAGES[\"DS3_JSON_PARSE_FAILED\"].format(error=str(e)))\n\n        # Navigate to apps array based on dev_id type\n        is_numeric = dev_id.isdigit()\n        if is_numeric:\n            apps_path = [0, 1, 0, 21, 0]\n        else:\n            apps_path = [0, 1, 0, 22, 0]\n        \n        apps_data = nested_lookup(data.get(\"data\", data), apps_path)\n        if not apps_data:\n            return []\n        \n        apps = []\n        for app_data in apps_data:\n            app_details = {}\n            for key, spec in ElementSpecs.Developer.items():\n                app_details[key] = spec.extract_content(app_data)\n            \n            if app_details.get(\"title\"):\n                apps.append(app_details)\n        \n        return apps\n\n    @handle_parsing_errors(return_empty=True)\n    def format_developer_data(self, apps_data: List[Dict]) -> List[Dict]:\n        \"\"\"Format parsed developer apps into final structure.\n        \n        Args:\n            apps_data: List of parsed apps\n            \n        Returns:\n            List of formatted app dictionaries\n        \"\"\"\n        formatted_apps = []\n        \n        for app in apps_data:\n            formatted_app = {\n                \"appId\": app.get(\"appId\"),\n                \"title\": app.get(\"title\"),\n                \"description\": app.get(\"description\"),\n                \"icon\": app.get(\"icon\"),\n                \"developer\": app.get(\"developer\"),\n                \"score\": app.get(\"score\"),\n                \"scoreText\": app.get(\"scoreText\"),\n                \"currency\": app.get(\"currency\"),\n                \"price\": app.get(\"price\"),\n                \"free\": app.get(\"free\"),\n                \"url\": app.get(\"url\"),\n            }\n            
formatted_apps.append(formatted_app)\n        \n        return formatted_apps\n\n\nclass SimilarParser:\n    \"\"\"Parser for extracting and formatting similar apps.\"\"\"\n    \n    @handle_parsing_errors(return_empty=True)\n    def parse_similar_data(self, dataset: Dict) -> List[Dict]:\n        \"\"\"Parse similar apps from dataset.\n        \n        Args:\n            dataset: Raw dataset from scraper\n            \n        Returns:\n            List of parsed similar app dictionaries\n        \"\"\"\n        ds3_data = dataset.get(\"ds:3\", \"\")\n        if not ds3_data:\n            return []\n        \n        json_str_cleaned = clean_json_string(ds3_data)\n        try:\n            data = json.loads(json_str_cleaned)\n        except json.JSONDecodeError:\n            try:\n                alternative_cleaned = alternative_json_clean(ds3_data)\n                data = json.loads(alternative_cleaned)\n            except Exception:\n                return []\n\n        apps_data = nested_lookup(data.get(\"data\", data), [0, 1, 0, 21, 0])\n        if not apps_data:\n            return []\n        \n        apps = []\n        for app_data in apps_data:\n            app_details = {}\n            for key, spec in ElementSpecs.Similar.items():\n                app_details[key] = spec.extract_content(app_data)\n            \n            if app_details.get(\"title\"):\n                apps.append(app_details)\n        \n        return apps\n\n    @handle_parsing_errors(return_empty=True)\n    def format_similar_data(self, apps_data: List[Dict]) -> List[Dict]:\n        \"\"\"Format parsed similar apps into final structure.\n        \n        Args:\n            apps_data: List of parsed apps\n            \n        Returns:\n            List of formatted app dictionaries\n        \"\"\"\n        formatted_apps = []\n        \n        for app in apps_data:\n            formatted_app = {\n                \"appId\": app.get(\"appId\"),\n                \"title\": 
app.get(\"title\"),\n                \"description\": app.get(\"description\"),\n                \"icon\": app.get(\"icon\"),\n                \"developer\": app.get(\"developer\"),\n                \"score\": app.get(\"score\"),\n                \"scoreText\": app.get(\"scoreText\"),\n                \"currency\": app.get(\"currency\"),\n                \"price\": app.get(\"price\"),\n                \"free\": app.get(\"free\"),\n                \"url\": app.get(\"url\"),\n            }\n            formatted_apps.append(formatted_app)\n        \n        return formatted_apps\n\n\nclass ListParser:\n    \"\"\"Parser for extracting and formatting top chart apps.\"\"\"\n    \n    @handle_parsing_errors(return_empty=True)\n    def parse_list_data(self, dataset: Dict, count: int) -> List[Dict]:\n        \"\"\"Parse top chart apps from dataset.\n        \n        Args:\n            dataset: Raw dataset from scraper\n            count: Maximum number of apps to parse\n            \n        Returns:\n            List of parsed app dictionaries\n        \"\"\"\n        collection_data = dataset.get(\"collection_data\")\n        if not collection_data:\n            return []\n        \n        apps_data = nested_lookup(collection_data, [0, 1, 0, 28, 0])\n        if not apps_data:\n            return []\n        \n        apps = []\n        for app_data in apps_data[:count]:\n            app_details = {}\n            for key, spec in ElementSpecs.List.items():\n                app_details[key] = spec.extract_content(app_data)\n            \n            if app_details.get(\"title\"):\n                apps.append(app_details)\n        \n        return apps\n\n    @handle_parsing_errors(return_empty=True)\n    def format_list_data(self, apps_data: List[Dict]) -> List[Dict]:\n        \"\"\"Format parsed list apps into final structure.\n        \n        Args:\n            apps_data: List of parsed apps\n            \n        Returns:\n            List of formatted app 
dictionaries\n        \"\"\"\n        formatted_apps = []\n        \n        for app in apps_data:\n            formatted_app = {\n                \"appId\": app.get(\"appId\"),\n                \"title\": app.get(\"title\"),\n                \"description\": app.get(\"description\"),\n                \"icon\": app.get(\"icon\"),\n                \"screenshots\": app.get(\"screenshots\"),\n                \"developer\": app.get(\"developer\"),\n                \"genre\": app.get(\"genre\"),\n                \"score\": app.get(\"score\"),\n                \"scoreText\": app.get(\"scoreText\"),\n                \"installs\": app.get(\"installs\"),\n                \"currency\": app.get(\"currency\"),\n                \"price\": app.get(\"price\"),\n                \"free\": app.get(\"free\"),\n                \"url\": app.get(\"url\"),\n            }\n            formatted_apps.append(formatted_app)\n        \n        return formatted_apps\n\n\nclass SuggestParser:\n    \"\"\"Parser for extracting and formatting search suggestions.\"\"\"\n    \n    @handle_parsing_errors(return_empty=True)\n    def parse_suggestions(self, dataset: Dict) -> List[str]:\n        \"\"\"Parse suggestions from dataset.\n        \n        Args:\n            dataset: Raw dataset from scraper\n            \n        Returns:\n            List of suggestion strings\n        \"\"\"\n        return dataset.get(\"suggestions\", [])\n\n    @handle_parsing_errors(return_empty=True)\n    def format_suggestions(self, suggestions: List[str]) -> List[str]:\n        \"\"\"Format suggestions (pass-through for strings).\n        \n        Args:\n            suggestions: List of suggestion strings\n            \n        Returns:\n            Same list of suggestion strings\n        \"\"\"\n        return suggestions"
  },
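  {
    "path": "examples/app_details_example.py",
    "content": "\"\"\"Hypothetical usage sketch, not part of the original library.\n\nShows how the pieces in this repo compose: AppScraper fetches the raw\n'ds:5' dataset from an app page, then AppParser extracts and formats it,\nincluding the rating-field fallback that reuses the scraper instance.\nThe file path, app ID, and printed fields are illustrative assumptions;\nthis sketch requires network access and is untested.\n\"\"\"\n\nfrom gplay_scraper.core.gplay_parser import AppParser\nfrom gplay_scraper.core.gplay_scraper import AppScraper\n\n\ndef fetch_app_details(app_id: str) -> dict:\n    \"\"\"Fetch, parse, and format details for a single app.\"\"\"\n    scraper = AppScraper(rate_limit_delay=1.0)\n    dataset = scraper.scrape_play_store_data(app_id)\n    parser = AppParser()\n    # Passing the scraper enables the fallback request for missing rating fields.\n    details = parser.parse_app_data(dataset, app_id, scraper=scraper)\n    return parser.format_app_data(details)\n\n\nif __name__ == \"__main__\":\n    details = fetch_app_details(\"com.example.app\")\n    print(details[\"title\"], details[\"score\"])"
  },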
  {
    "path": "gplay_scraper/core/gplay_scraper.py",
    "content": "import json\nimport re\nimport logging\nfrom typing import Dict\nfrom ..utils.http_client import HttpClient\nfrom ..config import Config\nfrom ..exceptions import DataParsingError, InvalidAppIdError, AppNotFoundError\nfrom ..utils.error_handling import handle_network_errors, handle_parsing_errors, validate_inputs\nfrom urllib.parse import quote\nfrom .gplay_parser import SearchParser\nfrom ..utils.constants import SORT_NAMES, CLUSTER_NAMES\n\nlogger = logging.getLogger(__name__)\n\n\nclass AppScraper:\n    \"\"\"Scraper for fetching app details from Google Play Store.\n    \n    Handles the extraction of comprehensive app information including ratings,\n    reviews, install counts, pricing, and metadata. Supports fallback data\n    fetching when primary requests fail to retrieve certain fields.\n    \n    Features:\n        - Primary app data extraction from HTML pages\n        - Fallback data fetching for missing fields (release dates, ratings)\n        - Multiple locale support for regional data\n        - Automatic retry with different parameters\n    \"\"\"\n    \n    def __init__(self, rate_limit_delay: float = None, http_client: str = None):\n        \"\"\"Initialize AppScraper with HTTP client.\n        \n        Args:\n            rate_limit_delay: Delay between requests in seconds (default: 1.0)\n            http_client: HTTP client to use (requests, curl_cffi, etc.)\n        \"\"\"\n        self.http_client = HttpClient(rate_limit_delay, http_client)\n\n    def fetch_playstore_page(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> str:\n        \"\"\"Fetch app page HTML from Google Play Store.\n        \n        Args:\n            app_id: Google Play app ID\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            HTML content of app page\n        \"\"\"\n        return self.http_client.fetch_app_page(app_id, lang, country)\n    \n    def 
fetch_fallback_data(self, app_id: str, gl: str = None, no_locale: bool = False) -> Dict:\n        \"\"\"Fetch app data with specific country or without locale parameters.\n        \n        Args:\n            app_id: Google Play app ID\n            gl: Country code for fallback request\n            no_locale: If True, fetch without hl and gl parameters\n            \n        Returns:\n            Dictionary containing ds:5 dataset from fallback request\n        \"\"\"\n        if no_locale:\n            html_content = self.http_client.fetch_app_page_no_locale(app_id)\n        elif gl:\n            html_content = self.http_client.fetch_app_page(app_id, lang=Config.DEFAULT_LANGUAGE, country=gl)\n        else:\n            html_content = self.http_client.fetch_app_page_no_locale(app_id)\n        \n        ds_match = re.search(r'AF_initDataCallback\\s*\\(\\s*({\\s*key:\\s*[\"\\']ds:5[\"\\'][\\s\\S]*?})\\s*\\)\\s*;', html_content, re.DOTALL)\n        if ds_match:\n            ds5_data = ds_match.group(1)\n        else:\n            all_callbacks = re.findall(r'AF_initDataCallback\\s*\\(\\s*({[\\s\\S]*?})\\s*\\)\\s*;', html_content, re.DOTALL)\n            ds5_data = \"\"\n            for callback in all_callbacks:\n                if \"'ds:5'\" in callback or '\"ds:5\"' in callback:\n                    ds5_data = callback\n                    break\n        \n        return {\"ds:5\": ds5_data} if ds5_data else None\n\n    @validate_inputs()\n    @handle_network_errors()\n    @handle_parsing_errors()\n    def scrape_play_store_data(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> Dict:\n        \"\"\"Extract dataset from app page HTML.\n        \n        Args:\n            app_id: Google Play app ID\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            Dictionary containing ds:5 dataset\n            \n        Raises:\n            DataParsingError: If dataset not 
found\n            AppNotFoundError: If app not found\n        \"\"\"\n        html_content = self.fetch_playstore_page(app_id, lang, country)\n        \n        ds_match = re.search(r'AF_initDataCallback\\s*\\(\\s*({\\s*key:\\s*[\"\\']ds:5[\"\\'][\\s\\S]*?})\\s*\\)\\s*;', html_content, re.DOTALL)\n        if ds_match:\n            ds5_data = ds_match.group(1)\n        else:\n            all_callbacks = re.findall(r'AF_initDataCallback\\s*\\(\\s*({[\\s\\S]*?})\\s*\\)\\s*;', html_content, re.DOTALL)\n            ds5_data = \"\"\n            for callback in all_callbacks:\n                if \"'ds:5'\" in callback or '\"ds:5\"' in callback:\n                    ds5_data = callback\n                    break\n        \n        if not ds5_data:\n            raise DataParsingError(Config.ERROR_MESSAGES[\"DS5_NOT_FOUND\"])\n            \n        return {\"ds:5\": ds5_data, \"fallback_needed\": False}\n\n\nclass SearchScraper:\n    \"\"\"Scraper for fetching search results from Google Play Store.\n    \n    Handles app search functionality with support for pagination to retrieve\n    large numbers of search results. 
Integrates with SearchParser for data extraction.\n    \n    Features:\n        - Initial search page fetching\n        - Automatic pagination for large result sets\n        - Token-based continuation for additional results\n        - Configurable result limits\n    \"\"\"\n    \n    def __init__(self, rate_limit_delay: float = None, http_client: str = None):\n        \"\"\"Initialize SearchScraper with HTTP client and parser.\n        \n        Args:\n            rate_limit_delay: Delay between requests in seconds\n            http_client: HTTP client to use for requests\n        \"\"\"\n        self.http_client = HttpClient(rate_limit_delay, http_client)\n        self.parser = SearchParser()\n\n    def fetch_playstore_search(self, query: str, count: int, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> str:\n        \"\"\"Fetch search page HTML from Google Play Store.\n        \n        Args:\n            query: Search query string\n            count: Number of results needed\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            HTML content of search page\n            \n        Raises:\n            InvalidAppIdError: If query is invalid\n        \"\"\"\n        if not query or not isinstance(query, str):\n            raise InvalidAppIdError(Config.ERROR_MESSAGES[\"INVALID_QUERY\"])\n        \n        if count <= 0:\n            return \"\"\n        \n        return self.http_client.fetch_search_page(query=query, lang=lang, country=country)\n\n    @validate_inputs()\n    @handle_network_errors()\n    @handle_parsing_errors()\n    def scrape_play_store_data(self, query: str, count: int = Config.DEFAULT_SEARCH_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> Dict:\n        \"\"\"Scrape search results with automatic pagination support.\n        \n        Args:\n            query: Search query string\n            count: Total number of results 
to fetch\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            Dictionary containing all search results\n            \n        Raises:\n            DataParsingError: If parsing fails\n        \"\"\"\n        html_content = self.fetch_playstore_search(query, count, lang, country)\n        \n        dataset = self.parser.parse_html_content(html_content)\n        \n        if count <= Config.DEFAULT_SEARCH_COUNT // 5:\n            return dataset\n\n        token = self.parser.extract_pagination_token(dataset)\n        \n        all_results = []\n        initial_results = self._get_nested_value(dataset.get(\"ds:1\", []), [0, 1, 0, 0, 0], [])\n        all_results.extend(initial_results)\n\n        while len(all_results) < count and token:\n            needed = min(Config.DEFAULT_REVIEWS_BATCH_SIZE * 2, count - len(all_results))\n            try:\n                response_text = self.http_client.fetch_search_page(token=token, needed=needed, lang=lang, country=country)\n                data = json.loads(response_text[5:])\n                parsed_data = json.loads(data[0][2])\n                if parsed_data:\n                    paginated_results = self._get_nested_value(parsed_data, [0, 0, 0], [])\n                    all_results.extend(paginated_results)\n                    token = self._get_nested_value(parsed_data, [0, 0, 7, 1])\n                else:\n                    break\n            except (json.JSONDecodeError, IndexError, KeyError, TypeError):\n                break\n\n        if \"ds:1\" in dataset:\n            dataset[\"ds:1\"][0][1][0][0][0] = all_results[:count]\n        \n        return dataset\n\n    def _get_nested_value(self, data, path, default=None):\n        \"\"\"Safely get nested value from data structure.\n        \n        Args:\n            data: Data structure to traverse\n            path: List of keys/indices to follow\n            default: Default value if path not found\n          
  \n        Returns:\n            Value at path or default\n        \"\"\"\n        try:\n            for key in path:\n                data = data[key]\n            return data\n        except (KeyError, IndexError, TypeError):\n            return default\n\n\nclass ReviewsScraper:\n    \"\"\"Scraper for fetching user reviews from Google Play Store.\n    \n    Handles extraction of user reviews using Google Play's internal API.\n    Supports different sorting options and pagination for large review sets.\n    \n    Features:\n        - Multiple sort orders (newest, relevant, rating)\n        - Batch processing for large review counts\n        - Pagination token management\n        - Configurable batch sizes\n    \"\"\"\n    \n    def __init__(self, rate_limit_delay: float = None, http_client: str = None):\n        \"\"\"Initialize ReviewsScraper with HTTP client.\n        \n        Args:\n            rate_limit_delay: Delay between requests in seconds\n            http_client: HTTP client to use for API requests\n        \"\"\"\n        self.http_client = HttpClient(rate_limit_delay, http_client)\n\n    def fetch_reviews_batch(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY, \n                           sort: int = Config.DEFAULT_REVIEWS_SORT, batch_count: int = Config.DEFAULT_REVIEWS_BATCH_SIZE, token: str = None) -> str:\n        \"\"\"Fetch single batch of reviews from API.\n        \n        Args:\n            app_id: Google Play app ID\n            lang: Language code\n            country: Country code\n            sort: Sort order (NEWEST, RELEVANT, RATING)\n            batch_count: Number of reviews per batch\n            token: Pagination token for next batch\n            \n        Returns:\n            Raw API response content\n        \"\"\"\n        sort_value = SORT_NAMES.get(sort, sort) if isinstance(sort, str) else sort\n        return self.http_client.fetch_reviews_batch(app_id, lang, country, 
sort_value, batch_count, token)\n\n    @validate_inputs()\n    @handle_network_errors()\n    @handle_parsing_errors()\n    def scrape_reviews_data(self, app_id: str, count: int = Config.DEFAULT_REVIEWS_COUNT, lang: str = Config.DEFAULT_LANGUAGE, \n                           country: str = Config.DEFAULT_COUNTRY, sort: int = Config.DEFAULT_REVIEWS_SORT) -> Dict:\n        \"\"\"Scrape multiple batches of reviews.\n        \n        Args:\n            app_id: Google Play app ID\n            count: Total number of reviews to fetch\n            lang: Language code\n            country: Country code\n            sort: Sort order\n            \n        Returns:\n            Dictionary containing all review responses\n        \"\"\"\n        all_responses = []\n        token = None\n        batch_size = Config.DEFAULT_REVIEWS_BATCH_SIZE\n        \n        while len(all_responses) * batch_size < count:\n            remaining = count - (len(all_responses) * batch_size)\n            fetch_count = min(batch_size, remaining)\n            \n            response = self.fetch_reviews_batch(app_id, lang, country, sort, fetch_count, token)\n            \n            if not response:\n                break\n                \n            all_responses.append(response)\n            \n            try:\n                regex = re.compile(r\"\\)]}'\\n\\n([\\s\\S]+)\")\n                matches = regex.findall(response)\n                if matches:\n                    data = json.loads(matches[0])\n                    parsed_data = json.loads(data[0][2])\n                    \n                    # Check if we got any reviews in this batch\n                    if not parsed_data or not parsed_data[0]:\n                        break\n                    \n                    # Extract next token safely\n                    try:\n                        if len(parsed_data) >= 2 and parsed_data[-2]:\n                            token = parsed_data[-2][-1]\n                        else:\n                            token = None\n                    except (IndexError, TypeError, AttributeError):\n                        token = None\n                        \n                    if not token or not isinstance(token, str):\n                        break\n                else:\n                    break\n            except (json.JSONDecodeError, IndexError, KeyError, TypeError):\n                break\n        \n        return {\"reviews\": all_responses}\n\n\nclass DeveloperScraper:\n    \"\"\"Scraper for fetching a developer's portfolio from Google Play Store.\n    \n    Extracts all apps published by a specific developer, supporting both\n    numeric developer IDs and string-based developer names.\n    \n    Features:\n        - Numeric developer ID support (e.g., '5700313618786177705')\n        - String developer name support (e.g., 'Google LLC')\n        - Complete app portfolio extraction\n        - Developer metadata collection\n    \"\"\"\n    \n    def __init__(self, rate_limit_delay: float = None, http_client: str = None):\n        \"\"\"Initialize DeveloperScraper with HTTP client.\n        \n        Args:\n            rate_limit_delay: Delay between requests in seconds\n            http_client: HTTP client to use for requests\n        \"\"\"\n        self.http_client = HttpClient(rate_limit_delay, http_client)\n\n    def fetch_developer_page(self, dev_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> str:\n        \"\"\"Fetch developer page HTML from Google Play Store.\n        \n        Args:\n            dev_id: Developer ID (numeric or string)\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            HTML content of developer page\n        \"\"\"\n        return self.http_client.fetch_developer_page(dev_id, lang, 
country)\n\n    @validate_inputs()\n    @handle_network_errors()\n    @handle_parsing_errors()\n    def scrape_play_store_data(self, dev_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> Dict:\n        \"\"\"Extract dataset from developer page HTML.\n        \n        Args:\n            dev_id: Developer ID\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            Dictionary containing ds:3 dataset and dev_id\n            \n        Raises:\n            DataParsingError: If dataset not found\n        \"\"\"\n        html_content = self.fetch_developer_page(dev_id, lang, country)\n        \n        ds_match = re.search(r'AF_initDataCallback\\s*\\(\\s*({\\s*key:\\s*[\"\\']ds:3[\"\\'][\\s\\S]*?})\\s*\\)\\s*;', html_content, re.DOTALL)\n        if ds_match:\n            ds3_data = ds_match.group(1)\n        else:\n            all_callbacks = re.findall(r'AF_initDataCallback\\s*\\(\\s*({[\\s\\S]*?})\\s*\\)\\s*;', html_content, re.DOTALL)\n            ds3_data = \"\"\n            for callback in all_callbacks:\n                if \"'ds:3'\" in callback or '\"ds:3\"' in callback:\n                    ds3_data = callback\n                    break\n        \n        if not ds3_data:\n            raise DataParsingError(Config.ERROR_MESSAGES[\"DS3_NOT_FOUND\"])\n        \n        return {\"ds:3\": ds3_data, \"dev_id\": dev_id}\n\n\nclass SimilarScraper:\n    \"\"\"Scraper for fetching similar apps from Google Play Store.\n    \n    Extracts similar/related apps by finding cluster URLs from app pages\n    and fetching the corresponding collection pages.\n    \n    Features:\n        - Cluster URL extraction from app pages\n        - Similar app collection fetching\n        - Related app recommendations\n        - Competitive analysis data\n    \"\"\"\n    \n    def __init__(self, rate_limit_delay: float = None, http_client: str = None):\n        \"\"\"Initialize SimilarScraper with 
HTTP client.\n        \n        Args:\n            rate_limit_delay: Delay between requests in seconds\n            http_client: HTTP client to use for requests\n        \"\"\"\n        self.http_client = HttpClient(rate_limit_delay, http_client)\n\n    def fetch_similar_page(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> str:\n        \"\"\"Fetch app page HTML to extract similar apps cluster URL.\n        \n        Args:\n            app_id: Google Play app ID\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            HTML content of app page\n        \"\"\"\n        return self.http_client.fetch_app_page(app_id, lang, country)\n\n    @validate_inputs()\n    @handle_network_errors()\n    @handle_parsing_errors()\n    def scrape_play_store_data(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> Dict:\n        \"\"\"Extract similar apps dataset from cluster page.\n        \n        Args:\n            app_id: Google Play app ID\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            Dictionary containing ds:3 dataset\n            \n        Raises:\n            DataParsingError: If dataset not found\n        \"\"\"\n        html_content = self.fetch_similar_page(app_id, lang, country)\n        \n        pattern1 = r'&quot;(/store/apps/collection/cluster\\?gsr=[^&]+)&quot;'\n        matches1 = re.findall(pattern1, html_content)\n        pattern2 = r'\"(/store/apps/collection/cluster\\?gsr=[^\"]+)\"'\n        matches2 = re.findall(pattern2, html_content)\n        all_matches = list(set(matches1 + matches2))\n        \n        if not all_matches:\n            return {\"ds:3\": None}\n        \n        cluster_url = all_matches[0].replace('&amp;', '&')\n        cluster_html = self.http_client.fetch_cluster_page(cluster_url, lang, country)\n        \n        ds_match 
= re.search(r'AF_initDataCallback\\s*\\(\\s*({\\s*key:\\s*[\"\\']ds:3[\"\\'][\\s\\S]*?})\\s*\\)\\s*;', cluster_html, re.DOTALL)\n        if ds_match:\n            ds3_data = ds_match.group(1)\n        else:\n            all_callbacks = re.findall(r'AF_initDataCallback\\s*\\(\\s*({[\\s\\S]*?})\\s*\\)\\s*;', cluster_html, re.DOTALL)\n            ds3_data = \"\"\n            for callback in all_callbacks:\n                if \"'ds:3'\" in callback or '\"ds:3\"' in callback:\n                    ds3_data = callback\n                    break\n        \n        if not ds3_data:\n            raise DataParsingError(Config.ERROR_MESSAGES[\"DS3_NOT_FOUND\"])\n        \n        return {\"ds:3\": ds3_data}\n\n\nclass ListScraper:\n    \"\"\"Scraper for fetching top charts from Google Play Store.\n    \n    Handles extraction of ranked app lists including top free, top paid,\n    and top grossing apps across different categories.\n    \n    Features:\n        - Multiple collection types (free, paid, grossing)\n        - Category-specific charts (games, social, productivity, etc.)\n        - Configurable result counts\n        - Regional chart variations\n    \"\"\"\n    \n    def __init__(self, rate_limit_delay: float = None, http_client: str = None):\n        \"\"\"Initialize ListScraper with HTTP client.\n        \n        Args:\n            rate_limit_delay: Delay between requests in seconds\n            http_client: HTTP client to use for API requests\n        \"\"\"\n        self.http_client = HttpClient(rate_limit_delay, http_client)\n\n    @handle_network_errors()\n    @handle_parsing_errors()\n    def scrape_play_store_data(self, collection: str, category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> Dict:\n        \"\"\"Scrape top charts data from Google Play Store.\n        \n        Args:\n            collection: Collection type (TOP_FREE, TOP_PAID, 
TOP_GROSSING)\n            category: App category (e.g., GAME, SOCIAL)\n            count: Number of apps to fetch\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            Dictionary containing collection data\n            \n        Raises:\n            DataParsingError: If JSON parsing fails\n        \"\"\"\n        cluster = CLUSTER_NAMES.get(collection, collection)\n        response_text = self.http_client.fetch_list_page(cluster, category, count, lang, country)\n        \n        try:\n            lines = response_text.strip().split('\\n')\n            data = json.loads(lines[2])\n            collection_data = json.loads(data[0][2])\n            return {\"collection_data\": collection_data}\n        except (json.JSONDecodeError, IndexError, KeyError) as e:\n            raise DataParsingError(Config.ERROR_MESSAGES[\"JSON_PARSE_FAILED\"].format(error=str(e)))\n\n\nclass SuggestScraper:\n    \"\"\"Scraper for fetching search suggestions from Google Play Store.\n    \n    Provides autocomplete functionality for search terms, useful for\n    keyword research and ASO (App Store Optimization) analysis.\n    \n    Features:\n        - Real-time search suggestions\n        - Keyword research capabilities\n        - ASO analysis data\n        - Popular search term discovery\n    \"\"\"\n    \n    def __init__(self, rate_limit_delay: float = None, http_client: str = None):\n        \"\"\"Initialize SuggestScraper with HTTP client.\n        \n        Args:\n            rate_limit_delay: Delay between requests in seconds\n            http_client: HTTP client to use for API requests\n        \"\"\"\n        self.http_client = HttpClient(rate_limit_delay, http_client)\n\n    @validate_inputs()\n    @handle_network_errors()\n    @handle_parsing_errors()\n    def scrape_suggestions(self, term: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> Dict:\n        \"\"\"Scrape search suggestions from Google Play Store.\n        \n        Args:\n            term: Search term for suggestions\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            Dictionary containing list of suggestions\n            \n        Raises:\n            DataParsingError: If JSON parsing fails\n        \"\"\"\n        if not term:\n            return {\"suggestions\": []}\n        \n        response_text = self.http_client.fetch_suggest_page(term, lang, country)\n        \n        try:\n            input_data = json.loads(response_text[5:])\n            data = json.loads(input_data[0][2])\n            \n            if data is None:\n                return {\"suggestions\": []}\n            \n            suggestions = [s[0] for s in data[0][0]]\n            return {\"suggestions\": suggestions}\n        except (json.JSONDecodeError, IndexError, KeyError, TypeError) as e:\n            raise DataParsingError(Config.ERROR_MESSAGES[\"JSON_PARSE_FAILED\"].format(error=str(e)))\n\n"
  },
  {
    "path": "gplay_scraper/exceptions.py",
    "content": "\"\"\"Custom exceptions for GPlay Scraper.\n\nThis module defines all custom exceptions used throughout the library.\n\"\"\"\n\n\nclass GPlayScraperError(Exception):\n    \"\"\"Base exception for all GPlay Scraper errors.\"\"\"\n    pass\n\n\nclass InvalidAppIdError(GPlayScraperError):\n    \"\"\"Raised when an invalid app ID, dev ID, or query is provided.\"\"\"\n    pass\n\n\nclass AppNotFoundError(GPlayScraperError):\n    \"\"\"Raised when an app, developer, or resource is not found (404 error).\"\"\"\n    pass\n\n\nclass RateLimitError(GPlayScraperError):\n    \"\"\"Raised when rate limiting is triggered by Google Play Store.\"\"\"\n    pass\n\n\nclass NetworkError(GPlayScraperError):\n    \"\"\"Raised when network requests fail.\"\"\"\n    pass\n\n\nclass DataParsingError(GPlayScraperError):\n    \"\"\"Raised when parsing JSON or HTML data fails.\"\"\"\n    pass"
  },
  {
    "path": "gplay_scraper/models/__init__.py",
    "content": ""
  },
  {
    "path": "gplay_scraper/models/element_specs.py",
"content": "\"\"\"Element specifications for data extraction from Google Play Store.\n\nThis module defines the ElementSpec class and ElementSpecs for all 7 method types.\nEach spec defines how to extract specific fields from raw JSON data.\n\"\"\"\n\nfrom typing import Any, Callable, List, Optional, Dict, Union\nimport html\nfrom datetime import datetime\nfrom ..utils.helpers import unescape_text\nfrom ..config import Config\n\n\ndef parse_permissions(perms_data: Any) -> Dict[str, List[str]]:\n    \"\"\"Parse permissions from various Google Play Store data formats.\n    \n    Google Play Store uses complex nested data structures for app permissions.\n    This function handles all known formats and extracts human-readable permission\n    descriptions organized by category.\n    \n    Data Structure Patterns:\n        - Format 1: [[[category, [...], [[null, description]], [...]]]] \n        - Format 2: [[category, [...], [description1, description2]]]\n        - Format 3: Mixed with \"Other\" category for uncategorized permissions\n        - Format 4: Empty/null data for apps with no permissions\n    \n    Args:\n        perms_data: Raw permissions data from Play Store JSON (nested lists/dicts)\n        \n    Returns:\n        Dictionary mapping permission categories to lists of permission descriptions\n        Example: {\"Location\": [\"GPS access\"], \"Storage\": [\"Read files\", \"Write files\"]}\n        \n    Examples:\n        >>> parse_permissions(None)\n        {}\n        >>> parse_permissions([[[[\"Location\", [...], [[None, \"GPS access\"]], [...]]]])\n        {\"Location\": [\"GPS access\"]}\n        >>> parse_permissions([[\"Storage\", [...], [\"Read files\", \"Write files\"]]])\n        {\"Storage\": [\"Read files\", \"Write files\"]}\n    \"\"\"\n    if not perms_data:\n        return {}\n    \n    permissions = {}\n    \n    try:\n        if isinstance(perms_data, list) and len(perms_data) > 2:\n            sections = perms_data[2] if len(perms_data) 
> 2 else []\n            if isinstance(sections, list):\n                for section in sections:\n                    if not isinstance(section, list):\n                        continue\n                    for perm_group in section:\n                        if not isinstance(perm_group, list) or len(perm_group) < 3:\n                            continue\n                        category = None\n                        if isinstance(perm_group[0], str):\n                            category = perm_group[0]\n                        elif isinstance(perm_group[0], list) and len(perm_group[0]) > 0:\n                            category = perm_group[0][0] if isinstance(perm_group[0][0], str) else \"Other\"\n                        if not category:\n                            category = \"Other\"\n                        details = []\n                        perm_details = perm_group[2] if len(perm_group) > 2 else []\n                        if isinstance(perm_details, list):\n                            for detail in perm_details:\n                                if isinstance(detail, list) and len(detail) > 1:\n                                    if detail[1] and isinstance(detail[1], str):\n                                        details.append(detail[1])\n                                elif isinstance(detail, str):\n                                    details.append(detail)\n                        if details:\n                            if category in permissions:\n                                permissions[category].extend(details)\n                            else:\n                                permissions[category] = details\n                if len(sections) > 2:\n                    additional_perms = sections[2] if len(sections) > 2 else []\n                    if isinstance(additional_perms, list):\n                        other_perms = []\n                        for item in additional_perms:\n                            if isinstance(item, list) and 
len(item) > 1 and isinstance(item[1], str):\n                                other_perms.append(item[1])\n                        if other_perms:\n                            if \"Other\" in permissions:\n                                permissions[\"Other\"].extend(other_perms)\n                            else:\n                                permissions[\"Other\"] = other_perms\n        elif isinstance(perms_data, list):\n            for item in perms_data:\n                if isinstance(item, list) and len(item) > 2:\n                    category = item[0] if isinstance(item[0], str) else \"Other\"\n                    details = []\n                    if isinstance(item[2], list):\n                        for detail in item[2]:\n                            if isinstance(detail, list) and len(detail) > 1 and isinstance(detail[1], str):\n                                details.append(detail[1])\n                    if details:\n                        permissions[category] = details\n        permissions = {k: v for k, v in permissions.items() if v}\n    except (IndexError, KeyError, TypeError, AttributeError):\n        pass\n    \n    return permissions\n\n\ndef nested_lookup(obj: Any, key_list: List) -> Any:\n    \"\"\"Safely navigate nested dictionary/list structure.\n    \n    Traverses complex nested data structures (mix of dicts and lists) following\n    a path of keys/indices. 
Returns None if any step in the path fails.\n    \n    Args:\n        obj: Object to navigate (dict, list, or any nested structure)\n        key_list: List of keys/indices to follow (e.g., [0, 'data', 1, 'title'])\n        \n    Returns:\n        Value at the nested location or None if path doesn't exist\n        \n    Examples:\n        >>> data = {'users': [{'name': 'John'}, {'name': 'Jane'}]}\n        >>> nested_lookup(data, ['users', 1, 'name'])\n        'Jane'\n        >>> nested_lookup(data, ['users', 5, 'name'])  # Index out of range\n        None\n        >>> nested_lookup(data, ['invalid', 'path'])\n        None\n    \"\"\"\n    current = obj\n    for key in key_list:\n        try:\n            current = current[key]\n        except (IndexError, KeyError, TypeError):\n            return None\n    return current\n\n\ndef format_image_url(url: str, size: str = None) -> str:\n    \"\"\"Format image URL with size parameter.\n    \n    Google Play Store images can be resized by appending size parameters.\n    This function adds the appropriate size parameter to get images in desired resolution.\n    \n    Args:\n        url: Base image URL from Google Play Store\n        size: Size parameter - SMALL (512px), MEDIUM (1024px), LARGE (2048px), ORIGINAL (max)\n        \n    Returns:\n        Formatted URL with size parameter appended, or None if url is empty\n        \n    Examples:\n        >>> format_image_url('https://play-lh.googleusercontent.com/abc123', 'LARGE')\n        'https://play-lh.googleusercontent.com/abc123=w2048'\n        >>> format_image_url('https://example.com/image.jpg', 'SMALL')\n        'https://example.com/image.jpg=w512'\n        >>> format_image_url('', 'LARGE')\n        None\n    \"\"\"\n    if not url:\n        return None\n    size_param = Config.get_image_size(size)\n    return f\"{url}={size_param}\"\n\n\nclass ElementSpec:\n    \"\"\"Specification for extracting a single field from raw data.\n    \n    Defines how to extract a 
 specific piece of information from Google Play Store's\n    complex nested JSON data structures. Each spec contains a navigation path and\n    optional processing logic.\n    \n    The extraction process:\n        1. Navigate through nested data using data_map path\n        2. Apply post_processor function if specified\n        3. Return fallback_value if extraction fails\n        4. Handle asset sizing for image URLs\n    \n    Attributes:\n        ds_num: Dataset number (legacy, kept for compatibility)\n        data_map: List of keys/indices to navigate to the field (e.g., [1, 2, 0, 0])\n        post_processor: Optional function to process extracted value (e.g., unescape_text)\n        fallback_value: Value to return if extraction fails (can be another ElementSpec)\n        assets: Asset size parameter for image URLs\n        \n    Examples:\n        # Simple field extraction\n        title_spec = ElementSpec(\"raw\", [1, 2, 0, 0])\n        \n        # With post-processing\n        price_spec = ElementSpec(\"raw\", [1, 2, 57, 0], lambda x: x / 1000000)\n        \n        # With fallback\n        version_spec = ElementSpec(\"raw\", [1, 2, 140, 0], fallback_value=\"Unknown\")\n    \"\"\"\n    \n    def __init__(\n        self,\n        ds_num: Optional[Union[int, str]],\n        data_map: List[Union[int, str]],\n        post_processor: Callable = None,\n        fallback_value: Any = None,\n        assets: str = None,\n    ):\n        \"\"\"Initialize ElementSpec with extraction parameters.\"\"\"\n        self.ds_num = ds_num\n        self.data_map = data_map\n        self.post_processor = post_processor\n        self.fallback_value = fallback_value\n        self.assets = assets\n\n    def extract_content(self, source: dict, assets: str = None) -> Any:\n        \"\"\"Extract content from source using data_map.\n        \n        Performs the actual data extraction by following the navigation path,\n        applying post-processing, and handling fallbacks.\n        \n        Args:\n            source: Source dictionary/list (Google Play Store JSON data)\n            assets: Override asset size for this extraction (SMALL, MEDIUM, LARGE, ORIGINAL)\n            \n        Returns:\n            Extracted and processed value, or fallback_value if extraction fails\n            \n        Process:\n            1. Navigate through source data using data_map path\n            2. Apply post_processor function if available\n            3. Handle image URL formatting for asset-related fields\n            4. Return fallback_value if any step fails\n            \n        Examples:\n            >>> spec = ElementSpec(\"raw\", [1, 2, 0, 0])\n            >>> spec.extract_content({1: {2: [['App Title']]}})\n            'App Title'\n            >>> spec.extract_content({'invalid': 'data'})\n            None  # or fallback_value if specified\n        \"\"\"\n        try:\n            result = nested_lookup(source, self.data_map)\n            \n            if self.post_processor is not None:\n                try:\n                    if hasattr(self.post_processor, '__name__') and 'image' in self.post_processor.__name__:\n                        result = self.post_processor(result, assets or self.assets)\n                    else:\n                        result = self.post_processor(result)\n                except Exception:\n                    pass\n        except (KeyError, IndexError, TypeError, AttributeError):\n            result = None\n            \n        if result is None and self.fallback_value is not None:\n            if isinstance(self.fallback_value, ElementSpec):\n                result = self.fallback_value.extract_content(source, assets)\n            else:\n                result = self.fallback_value\n        return result\n\n\nclass ElementSpecs:\n    \"\"\"Collection of element specifications for all method types.\n    \n    Central registry of data extraction specifications for all Google Play Store\n    data types. 
Each specification defines exactly how to extract specific fields\n    from the complex nested JSON structures returned by Google's APIs.\n    \n    Data Categories:\n        - App: 65+ fields for complete app details (ratings, installs, permissions, etc.)\n        - Search: Fields for search results (title, developer, price, etc.)\n        - Review: Fields for user reviews (content, rating, timestamp, etc.)\n        - Developer: Fields for developer app listings\n        - Similar: Fields for similar/related apps\n        - List: Fields for top chart apps (rankings, categories, etc.)\n        \n    Usage Pattern:\n        Each category contains ElementSpec objects that define:\n        - Navigation path through JSON data\n        - Post-processing functions for data transformation\n        - Fallback values for missing data\n        - Asset sizing for images\n        \n    Example:\n        >>> app_title = ElementSpecs.App['title'].extract_content(app_data)\n        >>> search_results = [ElementSpecs.Search['title'].extract_content(item) for item in results]\n    \"\"\"\n    # App Data Specifications - 65+ fields for complete app analysis\n    App = {\n        \"title\": ElementSpec(\"raw\", [1, 2, 0, 0]),\n        \"description\": ElementSpec(\n            \"raw\",\n            [1, 2],\n            lambda s: (lambda desc_text: unescape_text(desc_text) if desc_text else None)(\n                nested_lookup(s, [72, 0, 0]) or nested_lookup(s, [72, 0, 1])\n            ),\n        ),\n        \"summary\": ElementSpec(\"raw\", [1, 2, 73, 0, 1], unescape_text),\n        \"installs\": ElementSpec(\"raw\", [1, 2, 13, 0]),\n        \"minInstalls\": ElementSpec(\"raw\", [1, 2, 13, 1]),\n        \"realInstalls\": ElementSpec(\"raw\", [1, 2, 13, 2]),\n        \"score\": ElementSpec(\"raw\", [1, 2, 51, 0, 1]),\n        \"ratings\": ElementSpec(\"raw\", [1, 2, 51, 2, 1]),\n        \"reviews\": ElementSpec(\"raw\", [1, 2, 51, 3, 1]),\n        \"histogram\": ElementSpec(\n       
     \"raw\",\n            [1, 2, 51, 1],\n            lambda container: [\n                container[1][1],\n                container[2][1],\n                container[3][1],\n                container[4][1],\n                container[5][1],\n            ],\n            [0, 0, 0, 0, 0], \n        ),\n        \"price\": ElementSpec(\n            \"raw\", [1, 2, 57, 0, 0, 0, 0, 1, 0, 0], \n            lambda price: (price / 1000000) or 0 \n        ),\n        \"free\": ElementSpec(\"raw\", [1, 2, 57, 0, 0, 0, 0, 1, 0, 0], lambda s: s == 0), \n        \"currency\": ElementSpec(\"raw\", [1, 2, 57, 0, 0, 0, 0, 1, 0, 1]),\n        \"sale\": ElementSpec(\"raw\", [1, 2, 57, 0, 0, 0, 0, 14, 0, 0], bool, False),\n        \"originalPrice\": ElementSpec(\"raw\", [1, 2, 57, 0, 0, 0, 0, 1, 1, 0], lambda price: (price / 1000000) if price else None),\n        \"offersIAP\": ElementSpec(\"raw\", [1, 2, 19, 0], bool, False),\n        \"inAppProductPrice\": ElementSpec(\"raw\", [1, 2, 19, 0]),\n        \"developer\": ElementSpec(\"raw\", [1, 2, 68, 0]),\n        \"developerId\": ElementSpec(\"raw\", [1, 2, 68, 1, 4, 2], lambda s: s.split(\"id=\")[1] if s and \"id=\" in s else None),\n        \"developerEmail\": ElementSpec(\"raw\", [1, 2, 69, 1, 0]),\n        \"developerWebsite\": ElementSpec(\"raw\", [1, 2, 69, 0, 5, 2]),\n        \"developerAddress\": ElementSpec(\"raw\", [1, 2, 69, 4, 2, 0]),\n        \"developerPhone\": ElementSpec(\"raw\", [1, 2, 69, 4, 3]),\n        \"privacyPolicy\": ElementSpec(\"raw\", [1, 2, 99, 0, 5, 2]),\n        \"genre\": ElementSpec(\"raw\", [1, 2, 79, 0, 0, 0]),\n        \"genreId\": ElementSpec(\"raw\", [1, 2, 79, 0, 0, 2]),\n        \"categories\": ElementSpec(\"raw\", [1, 2, 79, 0, 0, 0], lambda cat: [cat] if cat else [], []),\n        \"icon\": ElementSpec(\"raw\", [1, 2, 95, 0, 3, 2]),\n        \"headerImage\": ElementSpec(\"raw\", [1, 2, 96, 0, 3, 2]),\n        \"screenshots\": ElementSpec(\"raw\", [1, 2, 78, 0], lambda container: [item[3][2] 
for item in container] if container else [], []),\n        \"video\": ElementSpec(\"raw\", [1, 2, 100, 0, 0, 3, 2]),\n        \"videoImage\": ElementSpec(\"raw\", [1, 2, 100, 1, 0, 3, 2]),\n        \"contentRating\": ElementSpec(\"raw\", [1, 2, 9, 0]),\n        \"contentRatingDescription\": ElementSpec(\"raw\", [1, 2, 9, 6, 1], fallback_value=ElementSpec(\"raw\", [1, 2, 9, 2, 1], fallback_value=ElementSpec(\"raw\", [1, 2, 9, 0]))),\n        \"appId\": ElementSpec(\"raw\", [1, 2, 1, 0, 0]),\n        \"adSupported\": ElementSpec(\"raw\", [1, 2, 48], bool),\n        \"containsAds\": ElementSpec(\"raw\", [1, 2, 48], bool, False),\n        \"released\": ElementSpec(\"raw\", [1, 2, 10, 0]),\n        \"updated\": ElementSpec(\"raw\", [1, 2, 145, 0, 1, 0], fallback_value=ElementSpec(\"raw\", [1, 2, 103, \"146\", 0, 0], fallback_value=ElementSpec(\"raw\", [1, 2, 145, 0, 0], fallback_value=ElementSpec(\"raw\", [1, 2, 112, \"146\", 0, 0], fallback_value=ElementSpec(\"raw\", [1, 2, 103, \"146\", 0, 1, 0], fallback_value=\"Never updated\"))))),\n        \"version\": ElementSpec(\"raw\", [1, 2, 140, 0, 0, 0], fallback_value=ElementSpec(\"raw\", [1, 2, 103, \"141\", 0, 0, 0], fallback_value=\"Varies with device\")),\n        \"androidVersion\": ElementSpec(\"raw\", [1, 2, 140, 1, 1, 0, 0, 1], fallback_value=ElementSpec(\"raw\", [1, 2, 103, \"155\", 1, 2], fallback_value=ElementSpec(\"raw\", [1, 2, 112, \"141\", 1, 1, 0, 0, 0], fallback_value=\"Varies with device\"))),\n        \"permissions\": ElementSpec(\"raw\", [1, 2, 74], parse_permissions),\n        \"dataSafety\": ElementSpec(\"raw\", [1, 2, 136], lambda data: [item[1] for item in data[1] if item and len(item) > 1] if data and len(data) > 1 and data[1] else []),\n        \"appBundle\": ElementSpec(\"raw\", [1, 2, 77, 0]),\n        \"maxandroidapi\": ElementSpec(\"raw\", [1, 2, 140, 1, 0, 0, 0], fallback_value=ElementSpec(\"raw\", [1, 2, 103, \"141\", 1, 0, 0, 0], fallback_value=ElementSpec(\"raw\", [1, 2, 112, \"141\", 1, 
0, 0, 0], fallback_value=\"Varies with device\"))),\n        \"minandroidapi\": ElementSpec(\"raw\", [1, 2, 140, 1, 1, 0, 0, 0], fallback_value=ElementSpec(\"raw\", [1, 2, 103, \"141\", 1, 1, 0, 0, 0], fallback_value=ElementSpec(\"raw\", [1, 2, 112, \"141\", 1, 1, 0, 0, 0], fallback_value=\"Varies with device\"))),\n        \"whatsNew\": ElementSpec(\"raw\", [1, 2, 144, 1, 1], lambda x: [line.strip() for line in html.unescape(x).split('<br>') if line.strip()] if x else []),\n        \"available\": ElementSpec(\"raw\", [1, 2, 18, 0], bool, False),\n        \"url\": ElementSpec(\"raw\", [1, 2, 1, 0, 0], lambda app_id: f\"https://play.google.com/store/apps/details?id={app_id}\" if app_id else None),\n    }\n    \n    # Search Results Specifications - Fields for app search results\n    Search = {\n        \"title\": ElementSpec(\"raw\", [2]),\n        \"appId\": ElementSpec(\"raw\", [12, 0]),\n        \"icon\": ElementSpec(\"raw\", [1, 1, 0, 3, 2]),\n        \"developer\": ElementSpec(\"raw\", [4, 0, 0, 0]),\n        \"currency\": ElementSpec(\"raw\", [7, 0, 3, 2, 1, 0, 1]),\n        \"price\": ElementSpec(\"raw\", [7, 0, 3, 2, 1, 0, 0], lambda price: (price / 1000000) if price else 0),\n        \"free\": ElementSpec(\"raw\", [7, 0, 3, 2, 1, 0, 0], lambda s: s == 0),\n        \"summary\": ElementSpec(\"raw\", [4, 1, 1, 1, 1], unescape_text),\n        \"scoreText\": ElementSpec(\"raw\", [6, 0, 2, 1, 0]),\n        \"score\": ElementSpec(\"raw\", [6, 0, 2, 1, 1]),\n        \"url\": ElementSpec(\"raw\", [12, 0], lambda app_id: f\"https://play.google.com/store/apps/details?id={app_id}\" if app_id else None),\n    }\n    \n    # Review Data Specifications - Fields for user reviews\n    Review = {\n        \"reviewId\": ElementSpec(\"raw\", [0]), \n        \"userName\": ElementSpec(\"raw\", [1, 0]), \n        \"userImage\": ElementSpec(\"raw\", [1, 1, 3, 2]),\n        \"content\": ElementSpec(\"raw\", [4], unescape_text),\n        \"score\": ElementSpec(\"raw\", [2]),\n       
 \"thumbsUpCount\": ElementSpec(\"raw\", [6]),\n        \"at\": ElementSpec(\"raw\", [5, 0], lambda timestamp: datetime.fromtimestamp(timestamp).isoformat() if timestamp else None),\n        \"appVersion\": ElementSpec(\"raw\", [10]),\n    }\n    \n    # Developer App Specifications - Fields for developer's app listings\n    Developer = {\n        \"appId\": ElementSpec(\"raw\", [0, 0]),\n        \"title\": ElementSpec(\"raw\", [3]), \n        \"icon\": ElementSpec(\"raw\", [1, 3, 2]),\n        \"developer\": ElementSpec(\"raw\", [14]),\n        \"description\": ElementSpec(\"raw\", [13, 1], unescape_text),\n        \"score\": ElementSpec(\"raw\", [4, 1]),\n        \"scoreText\": ElementSpec(\"raw\", [4, 0]),\n        \"price\": ElementSpec(\"raw\", [8, 1, 0, 0], lambda price: (price / 1000000) if price else 0),\n        \"currency\": ElementSpec(\"raw\", [8, 1, 0, 1]),\n        \"free\": ElementSpec(\"raw\", [8, 1, 0, 0], lambda s: s == 0),\n        \"url\": ElementSpec(\"raw\", [10, 4, 2], lambda path: f\"https://play.google.com{path}\" if path else None),\n    }\n    \n    # Similar Apps Specifications - Fields for related/similar apps\n    Similar = {\n        \"appId\": ElementSpec(\"raw\", [0, 0]), \n        \"title\": ElementSpec(\"raw\", [3]),\n        \"icon\": ElementSpec(\"raw\", [1, 3, 2]),\n        \"developer\": ElementSpec(\"raw\", [14]),\n        \"description\": ElementSpec(\"raw\", [13, 1], unescape_text),\n        \"score\": ElementSpec(\"raw\", [4, 1]),\n        \"scoreText\": ElementSpec(\"raw\", [4, 0]),\n        \"price\": ElementSpec(\"raw\", [8, 1, 0, 0], lambda price: (price / 1000000) if price else 0),\n        \"currency\": ElementSpec(\"raw\", [8, 1, 0, 1]),\n        \"free\": ElementSpec(\"raw\", [8, 1, 0, 0], lambda s: s == 0),\n        \"url\": ElementSpec(\"raw\", [10, 4, 2], lambda path: f\"https://play.google.com{path}\" if path else None),\n    }\n    \n    # Top Charts Specifications - Fields for ranked app lists\n    List = {\n 
       \"title\": ElementSpec(\"raw\", [0, 3]),\n        \"appId\": ElementSpec(\"raw\", [0, 0, 0]), \n        \"icon\": ElementSpec(\"raw\", [0, 1, 3, 2]),\n        \"screenshots\": ElementSpec(\"raw\", [0, 2], lambda container: [s[3][2] for s in container if s and len(s) > 3] if container else [], []),\n        \"developer\": ElementSpec(\"raw\", [0, 14]),\n        \"genre\": ElementSpec(\"raw\", [0, 5]),\n        \"installs\": ElementSpec(\"raw\", [0, 15]),\n        \"currency\": ElementSpec(\"raw\", [0, 8, 1, 0, 1]),\n        \"price\": ElementSpec(\"raw\", [0, 8, 1, 0, 0], lambda price: (price / 1000000) if price else 0),\n        \"free\": ElementSpec(\"raw\", [0, 8, 1, 0, 0], lambda s: s == 0),\n        \"description\": ElementSpec(\"raw\", [0, 13, 1], unescape_text),\n        \"scoreText\": ElementSpec(\"raw\", [0, 4, 0]),\n        \"score\": ElementSpec(\"raw\", [0, 4, 1]),\n        \"url\": ElementSpec(\"raw\", [0, 10, 4, 2], lambda path: f\"https://play.google.com{path}\" if path else None),\n    }"
  },
  {
    "path": "gplay_scraper/utils/__init__.py",
    "content": "from .helpers import *\n\n__all__ = [\n    'nested_lookup', 'unescape_text', 'extract_categories', 'get_categories',\n    'parse_release_date', 'calculate_app_age', 'parse_installs_string',\n    'calculate_daily_installs', 'calculate_monthly_installs', 'clean_json_string'\n]"
  },
  {
    "path": "gplay_scraper/utils/constants.py",
    "content": "# Review sort options\nSORT_NAMES = {\n    'RELEVANT': 1,  # Most relevant reviews\n    'NEWEST': 2,    # Newest reviews first\n    'RATING': 3     # Sorted by rating\n}\n\n# List collection types\nCLUSTER_NAMES = {\n    'TOP_FREE': 'topselling_free',      # Top free apps\n    'TOP_PAID': 'topselling_paid',      # Top paid apps\n    'TOP_GROSSING': 'topgrossing'       # Top grossing apps\n}\n\n# Phone country code mappings\nPHONE_PREFIXES = [\n    (1201, 'us', 'united states'),\n    (1202, 'us', 'united states'),\n    (1203, 'us', 'united states'),\n    (1204, 'ca', 'canada'),\n    (1205, 'us', 'united states'),\n    (1206, 'us', 'united states'),\n    (1207, 'us', 'united states'),\n    (1208, 'us', 'united states'),\n    (1209, 'us', 'united states'),\n    (1210, 'us', 'united states'),\n    (1212, 'us', 'united states'),\n    (1213, 'us', 'united states'),\n    (1214, 'us', 'united states'),\n    (1215, 'us', 'united states'),\n    (1216, 'us', 'united states'),\n    (1217, 'us', 'united states'),\n    (1218, 'us', 'united states'),\n    (1219, 'us', 'united states'),\n    (1224, 'us', 'united states'),\n    (1225, 'us', 'united states'),\n    (1226, 'ca', 'canada'),\n    (1228, 'us', 'united states'),\n    (1229, 'us', 'united states'),\n    (1231, 'us', 'united states'),\n    (1234, 'us', 'united states'),\n    (1236, 'ca', 'canada'),\n    (1239, 'us', 'united states'),\n    (1240, 'us', 'united states'),\n    (1242, 'bs', 'bahamas'),\n    (1246, 'bb', 'barbados'),\n    (1248, 'us', 'united states'),\n    (1249, 'ca', 'canada'),\n    (1250, 'ca', 'canada'),\n    (1251, 'us', 'united states'),\n    (1252, 'us', 'united states'),\n    (1253, 'us', 'united states'),\n    (1254, 'us', 'united states'),\n    (1256, 'us', 'united states'),\n    (1260, 'us', 'united states'),\n    (1262, 'us', 'united states'),\n    (1264, 'ai', 'anguilla'),\n    (1267, 'us', 'united states'),\n    (1268, 'ag', 'antigua and barbuda'),\n    (1269, 'us', 'united 
states'),\n    (1270, 'us', 'united states'),\n    (1272, 'us', 'united states'),\n    (1274, 'us', 'united states'),\n    (1276, 'us', 'united states'),\n    (1281, 'us', 'united states'),\n    (1284, 'vg', 'british virgin islands'),\n    (1289, 'ca', 'canada'),\n    (1301, 'us', 'united states'),\n    (1302, 'us', 'united states'),\n    (1303, 'us', 'united states'),\n    (1304, 'us', 'united states'),\n    (1305, 'us', 'united states'),\n    (1306, 'ca', 'canada'),\n    (1307, 'us', 'united states'),\n    (1308, 'us', 'united states'),\n    (1309, 'us', 'united states'),\n    (1310, 'us', 'united states'),\n    (1312, 'us', 'united states'),\n    (1313, 'us', 'united states'),\n    (1314, 'us', 'united states'),\n    (1315, 'us', 'united states'),\n    (1316, 'us', 'united states'),\n    (1317, 'us', 'united states'),\n    (1318, 'us', 'united states'),\n    (1319, 'us', 'united states'),\n    (1320, 'us', 'united states'),\n    (1321, 'us', 'united states'),\n    (1323, 'us', 'united states'),\n    (1325, 'us', 'united states'),\n    (1330, 'us', 'united states'),\n    (1331, 'us', 'united states'),\n    (1334, 'us', 'united states'),\n    (1336, 'us', 'united states'),\n    (1337, 'us', 'united states'),\n    (1339, 'us', 'united states'),\n    (1340, 'vi', 'u.s. 
virgin islands'),\n    (1343, 'ca', 'canada'),\n    (1345, 'ky', 'cayman islands'),\n    (1346, 'us', 'united states'),\n    (1347, 'us', 'united states'),\n    (1351, 'us', 'united states'),\n    (1352, 'us', 'united states'),\n    (1360, 'us', 'united states'),\n    (1361, 'us', 'united states'),\n    (1364, 'us', 'united states'),\n    (1365, 'ca', 'canada'),\n    (1385, 'us', 'united states'),\n    (1386, 'us', 'united states'),\n    (1401, 'us', 'united states'),\n    (1402, 'us', 'united states'),\n    (1403, 'ca', 'canada'),\n    (1404, 'us', 'united states'),\n    (1405, 'us', 'united states'),\n    (1406, 'us', 'united states'),\n    (1407, 'us', 'united states'),\n    (1408, 'us', 'united states'),\n    (1409, 'us', 'united states'),\n    (1410, 'us', 'united states'),\n    (1412, 'us', 'united states'),\n    (1413, 'us', 'united states'),\n    (1414, 'us', 'united states'),\n    (1415, 'us', 'united states'),\n    (1416, 'ca', 'canada'),\n    (1417, 'us', 'united states'),\n    (1418, 'ca', 'canada'),\n    (1419, 'us', 'united states'),\n    (1423, 'us', 'united states'),\n    (1424, 'us', 'united states'),\n    (1425, 'us', 'united states'),\n    (1430, 'us', 'united states'),\n    (1431, 'ca', 'canada'),\n    (1432, 'us', 'united states'),\n    (1434, 'us', 'united states'),\n    (1435, 'us', 'united states'),\n    (1437, 'ca', 'canada'),\n    (1438, 'ca', 'canada'),\n    (1440, 'us', 'united states'),\n    (1441, 'bm', 'bermuda'),\n    (1442, 'us', 'united states'),\n    (1443, 'us', 'united states'),\n    (1450, 'ca', 'canada'),\n    (1457, 'ca', 'canada'),\n    (1458, 'us', 'united states'),\n    (1469, 'us', 'united states'),\n    (1470, 'us', 'united states'),\n    (1473, 'gd', 'grenada'),\n    (1475, 'us', 'united states'),\n    (1478, 'us', 'united states'),\n    (1479, 'us', 'united states'),\n    (1480, 'us', 'united states'),\n    (1484, 'us', 'united states'),\n    (1500, 'us', 'united states'),\n    (1501, 'us', 'united states'),\n    
(1502, 'us', 'united states'),\n    (1503, 'us', 'united states'),\n    (1504, 'us', 'united states'),\n    (1505, 'us', 'united states'),\n    (1506, 'ca', 'canada'),\n    (1507, 'us', 'united states'),\n    (1508, 'us', 'united states'),\n    (1509, 'us', 'united states'),\n    (1510, 'us', 'united states'),\n    (1512, 'us', 'united states'),\n    (1513, 'us', 'united states'),\n    (1514, 'ca', 'canada'),\n    (1515, 'us', 'united states'),\n    (1516, 'us', 'united states'),\n    (1517, 'us', 'united states'),\n    (1518, 'us', 'united states'),\n    (1519, 'ca', 'canada'),\n    (1520, 'us', 'united states'),\n    (1530, 'us', 'united states'),\n    (1531, 'us', 'united states'),\n    (1533, 'us', 'united states'),\n    (1534, 'us', 'united states'),\n    (1539, 'us', 'united states'),\n    (1540, 'us', 'united states'),\n    (1541, 'us', 'united states'),\n    (1544, 'us', 'united states'),\n    (1551, 'us', 'united states'),\n    (1559, 'us', 'united states'),\n    (1561, 'us', 'united states'),\n    (1562, 'us', 'united states'),\n    (1563, 'us', 'united states'),\n    (1566, 'us', 'united states'),\n    (1567, 'us', 'united states'),\n    (1570, 'us', 'united states'),\n    (1571, 'us', 'united states'),\n    (1573, 'us', 'united states'),\n    (1574, 'us', 'united states'),\n    (1575, 'us', 'united states'),\n    (1577, 'us', 'united states'),\n    (1579, 'ca', 'canada'),\n    (1580, 'us', 'united states'),\n    (1581, 'ca', 'canada'),\n    (1585, 'us', 'united states'),\n    (1586, 'us', 'united states'),\n    (1587, 'ca', 'canada'),\n    (1600, 'ca', 'canada'),\n    (1601, 'us', 'united states'),\n    (1602, 'us', 'united states'),\n    (1603, 'us', 'united states'),\n    (1604, 'ca', 'canada'),\n    (1605, 'us', 'united states'),\n    (1606, 'us', 'united states'),\n    (1607, 'us', 'united states'),\n    (1608, 'us', 'united states'),\n    (1609, 'us', 'united states'),\n    (1610, 'us', 'united states'),\n    (1612, 'us', 'united states'),\n    
(1613, 'ca', 'canada'),\n    (1614, 'us', 'united states'),\n    (1615, 'us', 'united states'),\n    (1616, 'us', 'united states'),\n    (1617, 'us', 'united states'),\n    (1618, 'us', 'united states'),\n    (1619, 'us', 'united states'),\n    (1620, 'us', 'united states'),\n    (1623, 'us', 'united states'),\n    (1626, 'us', 'united states'),\n    (1628, 'us', 'united states'),\n    (1629, 'us', 'united states'),\n    (1630, 'us', 'united states'),\n    (1631, 'us', 'united states'),\n    (1636, 'us', 'united states'),\n    (1639, 'ca', 'canada'),\n    (1641, 'us', 'united states'),\n    (1646, 'us', 'united states'),\n    (1647, 'ca', 'canada'),\n    (1649, 'tc', 'turks and caicos islands'),\n    (1650, 'us', 'united states'),\n    (1651, 'us', 'united states'),\n    (1657, 'us', 'united states'),\n    (1660, 'us', 'united states'),\n    (1661, 'us', 'united states'),\n    (1662, 'us', 'united states'),\n    (1664, 'ms', 'montserrat'),\n    (1667, 'us', 'united states'),\n    (1669, 'us', 'united states'),\n    (1670, 'mp', 'northern mariana islands'),\n    (1671, 'gu', 'guam'),\n    (1678, 'us', 'united states'),\n    (1681, 'us', 'united states'),\n    (1682, 'us', 'united states'),\n    (1684, 'as', 'american samoa'),\n    (1700, 'us', 'united states'),\n    (1701, 'us', 'united states'),\n    (1702, 'us', 'united states'),\n    (1703, 'us', 'united states'),\n    (1704, 'us', 'united states'),\n    (1705, 'ca', 'canada'),\n    (1706, 'us', 'united states'),\n    (1707, 'us', 'united states'),\n    (1708, 'us', 'united states'),\n    (1709, 'ca', 'canada'),\n    (1710, 'us', 'united states'),\n    (1712, 'us', 'united states'),\n    (1713, 'us', 'united states'),\n    (1714, 'us', 'united states'),\n    (1715, 'us', 'united states'),\n    (1716, 'us', 'united states'),\n    (1717, 'us', 'united states'),\n    (1718, 'us', 'united states'),\n    (1719, 'us', 'united states'),\n    (1720, 'us', 'united states'),\n    (1721, 'sx', 'sint maarten'),\n    (1724, 
'us', 'united states'),\n    (1725, 'us', 'united states'),\n    (1727, 'us', 'united states'),\n    (1731, 'us', 'united states'),\n    (1732, 'us', 'united states'),\n    (1734, 'us', 'united states'),\n    (1737, 'us', 'united states'),\n    (1740, 'us', 'united states'),\n    (1747, 'us', 'united states'),\n    (1754, 'us', 'united states'),\n    (1757, 'us', 'united states'),\n    (1758, 'lc', 'saint lucia'),\n    (1760, 'us', 'united states'),\n    (1762, 'us', 'united states'),\n    (1763, 'us', 'united states'),\n    (1765, 'us', 'united states'),\n    (1767, 'dm', 'dominica'),\n    (1769, 'us', 'united states'),\n    (1770, 'us', 'united states'),\n    (1772, 'us', 'united states'),\n    (1773, 'us', 'united states'),\n    (1774, 'us', 'united states'),\n    (1775, 'us', 'united states'),\n    (1778, 'ca', 'canada'),\n    (1779, 'us', 'united states'),\n    (1780, 'ca', 'canada'),\n    (1781, 'us', 'united states'),\n    (1782, 'ca', 'canada'),\n    (1784, 'vc', 'saint vincent and the grenadines'),\n    (1785, 'us', 'united states'),\n    (1786, 'us', 'united states'),\n    (1787, 'pr', 'puerto rico'),\n    (1800, 'us', 'united states'),\n    (1801, 'us', 'united states'),\n    (1802, 'us', 'united states'),\n    (1803, 'us', 'united states'),\n    (1804, 'us', 'united states'),\n    (1805, 'us', 'united states'),\n    (1806, 'us', 'united states'),\n    (1807, 'ca', 'canada'),\n    (1808, 'us', 'united states'),\n    (1809, 'do', 'dominican republic'),\n    (1810, 'us', 'united states'),\n    (1812, 'us', 'united states'),\n    (1813, 'us', 'united states'),\n    (1814, 'us', 'united states'),\n    (1815, 'us', 'united states'),\n    (1816, 'us', 'united states'),\n    (1817, 'us', 'united states'),\n    (1818, 'us', 'united states'),\n    (1819, 'ca', 'canada'),\n    (1825, 'ca', 'canada'),\n    (1828, 'us', 'united states'),\n    (1829, 'do', 'dominican republic'),\n    (1830, 'us', 'united states'),\n    (1831, 'us', 'united states'),\n    (1832, 'us', 
'united states'),\n    (1843, 'us', 'united states'),\n    (1844, 'us', 'united states'),\n    (1845, 'us', 'united states'),\n    (1847, 'us', 'united states'),\n    (1848, 'us', 'united states'),\n    (1849, 'do', 'dominican republic'),\n    (1850, 'us', 'united states'),\n    (1855, 'us', 'united states'),\n    (1856, 'us', 'united states'),\n    (1857, 'us', 'united states'),\n    (1858, 'us', 'united states'),\n    (1859, 'us', 'united states'),\n    (1860, 'us', 'united states'),\n    (1862, 'us', 'united states'),\n    (1863, 'us', 'united states'),\n    (1864, 'us', 'united states'),\n    (1865, 'us', 'united states'),\n    (1866, 'us', 'united states'),\n    (1867, 'ca', 'canada'),\n    (1868, 'tt', 'trinidad and tobago'),\n    (1869, 'kn', 'saint kitts and nevis'),\n    (1870, 'us', 'united states'),\n    (1872, 'us', 'united states'),\n    (1873, 'ca', 'canada'),\n    (1876, 'jm', 'jamaica'),\n    (1877, 'us', 'united states'),\n    (1878, 'us', 'united states'),\n    (1888, 'us', 'united states'),\n    (1900, 'us', 'united states'),\n    (1901, 'us', 'united states'),\n    (1902, 'ca', 'canada'),\n    (1903, 'us', 'united states'),\n    (1904, 'us', 'united states'),\n    (1905, 'ca', 'canada'),\n    (1906, 'us', 'united states'),\n    (1907, 'us', 'united states'),\n    (1908, 'us', 'united states'),\n    (1909, 'us', 'united states'),\n    (1910, 'us', 'united states'),\n    (1912, 'us', 'united states'),\n    (1913, 'us', 'united states'),\n    (1914, 'us', 'united states'),\n    (1915, 'us', 'united states'),\n    (1916, 'us', 'united states'),\n    (1917, 'us', 'united states'),\n    (1918, 'us', 'united states'),\n    (1919, 'us', 'united states'),\n    (1920, 'us', 'united states'),\n    (1925, 'us', 'united states'),\n    (1928, 'us', 'united states'),\n    (1929, 'us', 'united states'),\n    (1930, 'us', 'united states'),\n    (1931, 'us', 'united states'),\n    (1935, 'us', 'united states'),\n    (1936, 'us', 'united states'),\n    (1937, 
'us', 'united states'),\n    (1938, 'us', 'united states'),\n    (1939, 'pr', 'puerto rico'),\n    (1940, 'us', 'united states'),\n    (1941, 'us', 'united states'),\n    (1947, 'us', 'united states'),\n    (1949, 'us', 'united states'),\n    (1951, 'us', 'united states'),\n    (1952, 'us', 'united states'),\n    (1954, 'us', 'united states'),\n    (1956, 'us', 'united states'),\n    (1959, 'us', 'united states'),\n    (1970, 'us', 'united states'),\n    (1971, 'us', 'united states'),\n    (1972, 'us', 'united states'),\n    (1973, 'us', 'united states'),\n    (1978, 'us', 'united states'),\n    (1979, 'us', 'united states'),\n    (1980, 'us', 'united states'),\n    (1984, 'us', 'united states'),\n    (1985, 'us', 'united states'),\n    (1989, 'us', 'united states'),\n    (20, 'eg', 'egypt'),\n    (211, 'ss', 'south sudan'),\n    (212, 'ma', 'morocco'),\n    (213, 'dz', 'algeria'),\n    (216, 'tn', 'tunisia'),\n    (218, 'ly', 'libya'),\n    (220, 'gm', 'gambia'),\n    (221, 'sn', 'senegal'),\n    (222, 'mr', 'mauritania'),\n    (223, 'ml', 'mali'),\n    (224, 'gn', 'guinea'),\n    (225, 'ci', 'ivory coast'),\n    (226, 'bf', 'burkina faso'),\n    (227, 'ne', 'niger'),\n    (228, 'tg', 'togo'),\n    (229, 'bj', 'benin'),\n    (230, 'mu', 'mauritius'),\n    (231, 'lr', 'liberia'),\n    (232, 'sl', 'sierra leone'),\n    (233, 'gh', 'ghana'),\n    (234, 'ng', 'nigeria'),\n    (235, 'td', 'chad'),\n    (236, 'cf', 'central african republic'),\n    (237, 'cm', 'cameroon'),\n    (238, 'cv', 'cape verde'),\n    (239, 'st', 'sao tome and principe'),\n    (240, 'gq', 'equatorial guinea'),\n    (241, 'ga', 'gabon'),\n    (242, 'cg', 'republic of the congo'),\n    (243, 'cd', 'democratic republic of the congo'),\n    (244, 'ao', 'angola'),\n    (245, 'gw', 'guinea-bissau'),\n    (246, 'io', 'british indian ocean territory'),\n    (247, 'ac', 'ascension island'),\n    (248, 'sc', 'seychelles'),\n    (249, 'sd', 'sudan'),\n    (250, 'rw', 'rwanda'),\n    (251, 'et', 
'ethiopia'),\n    (252, 'so', 'somalia'),\n    (253, 'dj', 'djibouti'),\n    (254, 'ke', 'kenya'),\n    (255, 'tz', 'tanzania'),\n    (256, 'ug', 'uganda'),\n    (257, 'bi', 'burundi'),\n    (258, 'mz', 'mozambique'),\n    (260, 'zm', 'zambia'),\n    (261, 'mg', 'madagascar'),\n    (262, 're', 'réunion'),\n    (262269, 'yt', 'mayotte'),\n    (262639, 'yt', 'mayotte'),\n    (263, 'zw', 'zimbabwe'),\n    (264, 'na', 'namibia'),\n    (265, 'mw', 'malawi'),\n    (266, 'ls', 'lesotho'),\n    (267, 'bw', 'botswana'),\n    (268, 'sz', 'eswatini'),\n    (269, 'km', 'comoros'),\n    (27, 'za', 'south africa'),\n    (290, 'sh', 'saint helena'),\n    (291, 'er', 'eritrea'),\n    (297, 'aw', 'aruba'),\n    (298, 'fo', 'faroe islands'),\n    (299, 'gl', 'greenland'),\n    (30, 'gr', 'greece'),\n    (31, 'nl', 'netherlands'),\n    (32, 'be', 'belgium'),\n    (33, 'fr', 'france'),\n    (34, 'es', 'spain'),\n    (350, 'gi', 'gibraltar'),\n    (351, 'pt', 'portugal'),\n    (352, 'lu', 'luxembourg'),\n    (353, 'ie', 'ireland'),\n    (354, 'is', 'iceland'),\n    (355, 'al', 'albania'),\n    (356, 'mt', 'malta'),\n    (357, 'cy', 'cyprus'),\n    (358, 'fi', 'finland'),\n    (35818, 'ax', 'åland islands'),\n    (359, 'bg', 'bulgaria'),\n    (36, 'hu', 'hungary'),\n    (370, 'lt', 'lithuania'),\n    (371, 'lv', 'latvia'),\n    (372, 'ee', 'estonia'),\n    (373, 'md', 'moldova'),\n    (374, 'am', 'armenia'),\n    (375, 'by', 'belarus'),\n    (376, 'ad', 'andorra'),\n    (377, 'mc', 'monaco'),\n    (378, 'sm', 'san marino'),\n    (379, 'va', 'vatican city'),\n    (380, 'ua', 'ukraine'),\n    (381, 'rs', 'serbia'),\n    (382, 'me', 'montenegro'),\n    (385, 'hr', 'croatia'),\n    (386, 'si', 'slovenia'),\n    (387, 'ba', 'bosnia and herzegovina'),\n    (389, 'mk', 'north macedonia'),\n    (39, 'it', 'italy'),\n    (40, 'ro', 'romania'),\n    (41, 'ch', 'switzerland'),\n    (420, 'cz', 'czech republic'),\n    (421, 'sk', 'slovakia'),\n    (423, 'li', 'liechtenstein'),\n    (43, 'at', 
'austria'),\n    (441481, 'gg', 'guernsey'),\n    (441624, 'im', 'isle of man'),\n    (441534, 'je', 'jersey'),\n    (44, 'gb', 'united kingdom'),\n    (45, 'dk', 'denmark'),\n    (46, 'se', 'sweden'),\n    (47, 'no', 'norway'),\n    (4779, 'sj', 'svalbard and jan mayen'),\n    (48, 'pl', 'poland'),\n    (49, 'de', 'germany'),\n    (500, 'fk', 'falkland islands'),\n    (501, 'bz', 'belize'),\n    (502, 'gt', 'guatemala'),\n    (503, 'sv', 'el salvador'),\n    (504, 'hn', 'honduras'),\n    (505, 'ni', 'nicaragua'),\n    (506, 'cr', 'costa rica'),\n    (507, 'pa', 'panama'),\n    (508, 'pm', 'saint pierre and miquelon'),\n    (509, 'ht', 'haiti'),\n    (51, 'pe', 'peru'),\n    (52, 'mx', 'mexico'),\n    (53, 'cu', 'cuba'),\n    (54, 'ar', 'argentina'),\n    (55, 'br', 'brazil'),\n    (56, 'cl', 'chile'),\n    (57, 'co', 'colombia'),\n    (58, 've', 'venezuela'),\n    (590, 'gp', 'guadeloupe'),\n    (591, 'bo', 'bolivia'),\n    (592, 'gy', 'guyana'),\n    (593, 'ec', 'ecuador'),\n    (594, 'gf', 'french guiana'),\n    (595, 'py', 'paraguay'),\n    (596, 'mq', 'martinique'),\n    (597, 'sr', 'suriname'),\n    (598, 'uy', 'uruguay'),\n    (5993, 'bq', 'bonaire, sint eustatius and saba'),\n    (5994, 'bq', 'bonaire, sint eustatius and saba'),\n    (5997, 'bq', 'bonaire, sint eustatius and saba'),\n    (5999, 'cw', 'curaçao'),\n    (60, 'my', 'malaysia'),\n    (61, 'au', 'australia'),\n    (6189164, 'cx', 'christmas island'),\n    (6189162, 'cc', 'cocos (keeling) islands'),\n    (62, 'id', 'indonesia'),\n    (63, 'ph', 'philippines'),\n    (64, 'nz', 'new zealand'),\n    (65, 'sg', 'singapore'),\n    (66, 'th', 'thailand'),\n    (670, 'tl', 'timor-leste'),\n    (6721, 'aq', 'antarctica'),\n    (6723, 'nf', 'norfolk island'),\n    (673, 'bn', 'brunei'),\n    (674, 'nr', 'nauru'),\n    (675, 'pg', 'papua new guinea'),\n    (676, 'to', 'tonga'),\n    (677, 'sb', 'solomon islands'),\n    (678, 'vu', 'vanuatu'),\n    (679, 'fj', 'fiji'),\n    (680, 'pw', 'palau'),\n    (681, 
'wf', 'wallis and futuna'),\n    (682, 'ck', 'cook islands'),\n    (683, 'nu', 'niue'),\n    (685, 'ws', 'samoa'),\n    (686, 'ki', 'kiribati'),\n    (687, 'nc', 'new caledonia'),\n    (688, 'tv', 'tuvalu'),\n    (689, 'pf', 'french polynesia'),\n    (690, 'tk', 'tokelau'),\n    (691, 'fm', 'micronesia'),\n    (692, 'mh', 'marshall islands'),\n    (7, 'ru', 'russia'),\n    (76, 'kz', 'kazakhstan'),\n    (77, 'kz', 'kazakhstan'),\n    (800, 'xt', 'international toll-free'),\n    (808, 'xs', 'shared-cost service'),\n    (81, 'jp', 'japan'),\n    (82, 'kr', 'south korea'),\n    (84, 'vn', 'vietnam'),\n    (850, 'kp', 'north korea'),\n    (852, 'hk', 'hong kong'),\n    (853, 'mo', 'macao'),\n    (855, 'kh', 'cambodia'),\n    (856, 'la', 'laos'),\n    (86, 'cn', 'china'),\n    (870, 'xn', 'inmarsat'),\n    (878, 'xp', 'universal personal telecommunications'),\n    (880, 'bd', 'bangladesh'),\n    (881, 'xg', 'global mobile satellite system'),\n    (882, 'xv', 'international networks'),\n    (883, 'xv', 'international networks'),\n    (886, 'tw', 'taiwan'),\n    (90, 'tr', 'turkey'),\n    (91, 'in', 'india'),\n    (92, 'pk', 'pakistan'),\n    (93, 'af', 'afghanistan'),\n    (94, 'lk', 'sri lanka'),\n    (95, 'mm', 'myanmar'),\n    (960, 'mv', 'maldives'),\n    (961, 'lb', 'lebanon'),\n    (962, 'jo', 'jordan'),\n    (963, 'sy', 'syria'),\n    (964, 'iq', 'iraq'),\n    (965, 'kw', 'kuwait'),\n    (966, 'sa', 'saudi arabia'),\n    (967, 'ye', 'yemen'),\n    (968, 'om', 'oman'),\n    (970, 'ps', 'palestine'),\n    (971, 'ae', 'united arab emirates'),\n    (972, 'il', 'israel'),\n    (973, 'bh', 'bahrain'),\n    (974, 'qa', 'qatar'),\n    (975, 'bt', 'bhutan'),\n    (976, 'mn', 'mongolia'),\n    (977, 'np', 'nepal'),\n    (98, 'ir', 'iran'),\n    (992, 'tj', 'tajikistan'),\n    (993, 'tm', 'turkmenistan'),\n    (994, 'az', 'azerbaijan'),\n    (995, 'ge', 'georgia'),\n    (996, 'kg', 'kyrgyzstan'),\n    (998, 'uz', 'uzbekistan')\n]\n\n"
  },
  {
    "path": "gplay_scraper/utils/error_handling.py",
    "content": "\"\"\"Unified error handling decorators for all gplay_scraper methods.\"\"\"\n\nimport time\nimport logging\nimport json\nfrom functools import wraps\nfrom ..config import Config\nfrom ..exceptions import AppNotFoundError, NetworkError, DataParsingError, RateLimitError, InvalidAppIdError\n\nlogger = logging.getLogger(__name__)\n\ndef retry_on_not_found(max_retries=Config.DEFAULT_RETRY_COUNT, delay=1.0):\n    \"\"\"Decorator to retry methods on AppNotFoundError with all HTTP clients.\n    \n    This decorator implements automatic retry logic when apps are not found,\n    cycling through different HTTP clients to overcome potential blocking.\n    \n    Args:\n        max_retries: Maximum number of retry attempts (default: 3)\n        delay: Delay in seconds between retries (default: 1.0)\n        \n    Returns:\n        Decorated function with retry logic\n        \n    Example:\n        @retry_on_not_found(max_retries=5, delay=2.0)\n        def fetch_app_data(self, app_id):\n            # Method implementation\n            pass\n    \"\"\"\n    def decorator(func):\n        @wraps(func)\n        def wrapper(self, *args, **kwargs):\n            for attempt in range(max_retries):\n                try:\n                    return func(self, *args, **kwargs)\n                except AppNotFoundError as e:\n                    if attempt < max_retries - 1:\n                        logger.warning(f\"Attempt {attempt + 1} failed: {e}. 
Retrying in {delay}s...\")\n                        time.sleep(delay)\n                        # Switch to next HTTP client for retry\n                        if hasattr(self, 'scraper') and hasattr(self.scraper, 'http_client'):\n                            self.scraper.http_client._try_next_client()\n                    continue\n                except Exception as e:\n                    # Re-raise non-recoverable exceptions immediately\n                    raise e\n            logger.error(f\"All {max_retries} attempts failed. Skipping.\")\n            return None\n        return wrapper\n    return decorator\n\ndef handle_network_errors(return_empty=False):\n    \"\"\"Decorator to handle network errors gracefully.\n    \n    Catches network-related exceptions and provides graceful degradation\n    instead of crashing the application.\n    \n    Args:\n        return_empty: If True, return empty list/dict on errors instead of None\n        \n    Returns:\n        Decorated function with network error handling\n        \n    Example:\n        @handle_network_errors(return_empty=True)\n        def fetch_search_results(self, query):\n            # Method implementation that may fail due to network issues\n            pass\n    \"\"\"\n    def decorator(func):\n        @wraps(func)\n        def wrapper(self, *args, **kwargs):\n            try:\n                return func(self, *args, **kwargs)\n            except NetworkError as e:\n                logger.warning(f\"Network error in {func.__name__}: {e}\")\n                return [] if return_empty else None\n            except Exception as e:\n                logger.error(f\"Unexpected error in {func.__name__}: {e}\")\n                return [] if return_empty else None\n        return wrapper\n    return decorator\n\ndef handle_parsing_errors(return_empty=False):\n    \"\"\"Decorator to handle data parsing errors gracefully.\n    \n    Catches JSON parsing and data extraction errors that may occur when\n    
Google Play Store changes their data structure.\n    \n    Args:\n        return_empty: If True, return empty list/dict on errors instead of None\n        \n    Returns:\n        Decorated function with parsing error handling\n        \n    Example:\n        @handle_parsing_errors(return_empty=True)\n        def parse_app_data(self, raw_data):\n            # Method implementation that may fail due to data format changes\n            pass\n    \"\"\"\n    def decorator(func):\n        @wraps(func)\n        def wrapper(self, *args, **kwargs):\n            try:\n                return func(self, *args, **kwargs)\n            except DataParsingError as e:\n                logger.warning(f\"Parsing error in {func.__name__}: {e}\")\n                return [] if return_empty else None\n            except Exception as e:\n                logger.error(f\"Unexpected error in {func.__name__}: {e}\")\n                return [] if return_empty else None\n        return wrapper\n    return decorator\n\ndef handle_rate_limit():\n    \"\"\"Decorator to handle rate limiting with exponential backoff.\n    \n    Implements exponential backoff strategy when rate limits are encountered,\n    gradually increasing delay between retries to avoid overwhelming the server.\n    \n    Returns:\n        Decorated function with rate limit handling\n        \n    Example:\n        @handle_rate_limit()\n        def make_api_request(self, endpoint):\n            # Method implementation that may trigger rate limits\n            pass\n            \n    Note:\n        Uses exponential backoff: 1s, 2s, 4s, 8s, etc.\n    \"\"\"\n    def decorator(func):\n        @wraps(func)\n        def wrapper(self, *args, **kwargs):\n            max_attempts = Config.DEFAULT_RETRY_COUNT\n            base_delay = Config.RATE_LIMIT_DELAY\n            \n            for attempt in range(max_attempts):\n                try:\n                    return func(self, *args, **kwargs)\n                except RateLimitError as 
e:\n                    if attempt < max_attempts - 1:\n                        # Exponential backoff: 1s, 2s, 4s, 8s...\n                        delay = base_delay * (2 ** attempt)\n                        logger.warning(f\"Rate limited. Waiting {delay}s before retry...\")\n                        time.sleep(delay)\n                        continue\n                    logger.error(f\"Rate limit exceeded after {max_attempts} attempts\")\n                    return None\n                except Exception as e:\n                    # Re-raise non-rate-limit exceptions\n                    raise e\n        return wrapper\n    return decorator\n\ndef validate_inputs():\n    \"\"\"Decorator to validate method inputs.\n    \n    Performs basic input validation to ensure app IDs, queries, and other\n    parameters are valid before processing.\n    \n    Returns:\n        Decorated function with input validation\n        \n    Raises:\n        InvalidAppIdError: When input validation fails\n        \n    Example:\n        @validate_inputs()\n        def fetch_app_details(self, app_id):\n            # Method implementation with validated inputs\n            pass\n            \n    Note:\n        Validates that first argument is non-empty string\n    \"\"\"\n    def decorator(func):\n        @wraps(func)\n        def wrapper(self, *args, **kwargs):\n            try:\n                # Validate first argument (usually app_id, query, etc.)\n                if args and not args[0]:\n                    raise InvalidAppIdError(\"Input cannot be empty\")\n                if args and not isinstance(args[0], str):\n                    raise InvalidAppIdError(\"Input must be a string\")\n                return func(self, *args, **kwargs)\n            except InvalidAppIdError:\n                # Re-raise validation errors as-is\n                raise\n            except Exception as e:\n                logger.error(f\"Input validation error in {func.__name__}: {e}\")\n                
raise InvalidAppIdError(f\"Invalid input: {e}\")\n        return wrapper\n    return decorator\n\ndef safe_print():\n    \"\"\"Decorator to handle Unicode errors in print methods.\n    \n    Handles Unicode encoding issues when printing data containing\n    special characters from different languages.\n    \n    Returns:\n        Decorated function with Unicode-safe printing\n        \n    Example:\n        @safe_print()\n        def print_app_data(self, app_id):\n            # Method implementation that prints data with Unicode characters\n            pass\n            \n    Note:\n        Falls back to ASCII encoding if Unicode printing fails\n    \"\"\"\n    def decorator(func):\n        @wraps(func)\n        def wrapper(self, *args, **kwargs):\n            try:\n                return func(self, *args, **kwargs)\n            except UnicodeEncodeError:\n                # Fallback: get data and print with ASCII encoding\n                data = getattr(self, func.__name__.replace('print', 'analyze'))(*args, **kwargs)\n                if data:\n                    print(json.dumps(data, indent=2, ensure_ascii=True))\n                else:\n                    print(\"No data available\")\n            except Exception as e:\n                print(f\"Error: {e}\")\n        return wrapper\n    return decorator\n\ndef comprehensive_error_handler(return_empty=False):\n    \"\"\"Comprehensive decorator combining all error handling strategies.\n    \n    This decorator provides unified error handling for all scraper methods,\n    combining input validation, retry logic, network error handling,\n    and graceful degradation in a single decorator.\n    \n    Args:\n        return_empty: If True, return empty list/dict on errors instead of None\n        \n    Returns:\n        Decorated function with comprehensive error handling\n        \n    Example:\n        @comprehensive_error_handler(return_empty=True)\n        def scrape_app_data(self, app_id):\n            # Method 
implementation with full error protection\n            pass\n            \n    Features:\n        - Input validation\n        - Automatic retries with HTTP client fallback\n        - Network and parsing error handling\n        - Rate limit management\n        - Graceful degradation\n    \"\"\"\n    def decorator(func):\n        @wraps(func)\n        def wrapper(self, *args, **kwargs):\n            try:\n                # Input validation\n                if args and not args[0]:\n                    raise InvalidAppIdError(\"Input cannot be empty\")\n                if args and not isinstance(args[0], str):\n                    raise InvalidAppIdError(\"Input must be a string\")\n                    \n                # Retry logic with HTTP client fallback\n                max_retries = Config.DEFAULT_RETRY_COUNT\n                for attempt in range(max_retries):\n                    try:\n                        return func(self, *args, **kwargs)\n                    except AppNotFoundError as e:\n                        if attempt < max_retries - 1:\n                            logger.warning(f\"Attempt {attempt + 1} failed: {e}. 
Retrying...\")\n                            time.sleep(Config.RATE_LIMIT_DELAY)\n                            # Switch to next HTTP client for retry\n                            if hasattr(self, 'scraper') and hasattr(self.scraper, 'http_client'):\n                                self.scraper.http_client._try_next_client()\n                            continue\n                        logger.error(f\"All {max_retries} attempts failed\")\n                        return [] if return_empty else None\n                    except (NetworkError, DataParsingError, RateLimitError) as e:\n                        # Handle recoverable errors gracefully\n                        logger.warning(f\"Recoverable error in {func.__name__}: {e}\")\n                        return [] if return_empty else None\n                    except Exception as e:\n                        # Handle unexpected errors\n                        logger.error(f\"Unexpected error in {func.__name__}: {e}\")\n                        return [] if return_empty else None\n                        \n            except InvalidAppIdError:\n                # Re-raise validation errors\n                raise \n            except Exception as e:\n                # Handle critical errors\n                logger.error(f\"Critical error in {func.__name__}: {e}\")\n                return [] if return_empty else None\n        return wrapper\n    return decorator"
  },
  {
    "path": "gplay_scraper/utils/helpers.py",
    "content": "\"\"\"Helper functions for data processing and manipulation.\n\nThis module contains utility functions for:\n- Text unescaping and cleaning\n- JSON string cleaning\n- Date parsing and calculations\n- Install metrics calculations\n\"\"\"\n\nimport re\nimport json\nimport os\nfrom html import unescape\nfrom typing import Any, List, Optional, Dict\nfrom datetime import datetime, timezone\n\nfrom urllib.parse import urlparse\nfrom .constants import PHONE_PREFIXES\n\ndef unescape_text(s: Optional[str]) -> Optional[str]:\n    \"\"\"Unescape HTML entities and remove HTML tags from text.\n    \n    Args:\n        s: Input string with HTML\n        \n    Returns:\n        Cleaned text without HTML tags\n    \"\"\"\n    if s is None:\n        return None\n    \n    text = s.replace(\"<br>\", \"\\n\").replace(\"<br/>\", \"\\n\").replace(\"<br />\", \"\\n\")\n    text = text.replace(\"<b>\", \"\").replace(\"</b>\", \"\")\n    text = text.replace(\"<i>\", \"\").replace(\"</i>\", \"\")\n    text = text.replace(\"<u>\", \"\").replace(\"</u>\", \"\")\n    text = text.replace(\"<strong>\", \"\").replace(\"</strong>\", \"\")\n    text = text.replace(\"<em>\", \"\").replace(\"</em>\", \"\")\n    \n    text = re.sub(r'<[^>]+>', '', text)\n    \n    return unescape(text).strip()\n\n\ndef clean_json_string(json_str: str) -> str:\n    \"\"\"Clean malformed JSON string from Google Play Store.\n    \n    Args:\n        json_str: Raw JSON string\n        \n    Returns:\n        Cleaned JSON string\n    \"\"\"\n    json_str = re.sub(r',\\s*sideChannel:\\s*\\{\\}', '', json_str)\n    \n    json_str = re.sub(r'([{,]\\s*)([a-zA-Z_$][a-zA-Z0-9_$]*)\\s*:', r'\\1\"\\2\":', json_str)\n    \n    json_str = re.sub(r'\\bfunction\\s*\\([^)]*\\)\\s*\\{[^}]*\\}', 'null', json_str)\n    json_str = re.sub(r'\\bundefined\\b', 'null', json_str)\n    \n    json_str = re.sub(r\":\\s*'([^']*)'\", r': \"\\1\"', json_str)\n    \n    json_str = re.sub(r'(\\])\\s*(\\[)', r'\\1,\\2', json_str)\n    
json_str = re.sub(r'(\\})\\s*(\\{)', r'\\1,\\2', json_str)\n    \n    json_str = re.sub(r',(\\s*[}\\]])', r'\\1', json_str)\n    \n    json_str = re.sub(r',,+', ',', json_str)\n    \n    json_str = re.sub(r':\\s*\\$([0-9.]+)', r': \"$\\1\"', json_str)\n    \n    json_str = re.sub(r'\"version\"\\s*:\\s*([0-9.]+)(?=\\s*[,}])', r'\"version\": \"\\1\"', json_str)\n    \n    return json_str\n\n\ndef alternative_json_clean(json_str: str) -> str:\n    \"\"\"Alternative JSON cleaning method using bracket matching.\n    \n    Fallback method for cleaning malformed JSON when the primary cleaning fails.\n    Uses bracket counting to extract valid JSON arrays from complex structures.\n    \n    Args:\n        json_str: Raw JSON string from Google Play Store\n        \n    Returns:\n        Cleaned JSON string ready for parsing\n        \n    Process:\n        1. Find 'data:' marker in the string\n        2. Use bracket counting to extract complete array\n        3. Wrap in standard ds:5 format\n        4. 
Apply basic cleaning as fallback\n        \n    Example:\n        >>> alternative_json_clean('data: [1,2,3] extra content')\n        '{\"key\": \"ds:5\", \"hash\": \"13\", \"data\": [1, 2, 3]}'\n    \"\"\"\n    # Look for data array marker\n    data_start = json_str.find('data:')\n    if data_start != -1:\n        bracket_start = json_str.find('[', data_start)\n        if bracket_start != -1:\n            bracket_count = 0\n            pos = bracket_start\n            \n            # Count brackets to find complete array\n            while pos < len(json_str):\n                if json_str[pos] == '[':\n                    bracket_count += 1\n                elif json_str[pos] == ']':\n                    bracket_count -= 1\n                    if bracket_count == 0:\n                        data_end = pos + 1\n                        break\n                pos += 1\n            \n            # Extract and parse the complete array\n            if bracket_count == 0:\n                data_array = json_str[bracket_start:data_end]\n                \n                try:\n                    parsed_array = json.loads(data_array)\n                    \n                    # Wrap in standard ds:5 format\n                    return json.dumps({\n                        \"key\": \"ds:5\",\n                        \"hash\": \"13\",\n                        \"data\": parsed_array\n                    })\n                except json.JSONDecodeError:\n                    pass\n    \n    # Fallback: basic cleaning\n    json_str = re.sub(r'\\bNaN\\b', 'null', json_str)\n    return clean_json_string(json_str)\n\ndef parse_release_date(release_date_str: Optional[str]) -> Optional[datetime]:\n    \"\"\"Parse release date string to datetime object.\n    \n    Converts Google Play Store date format to Python datetime object\n    for date calculations and comparisons.\n    \n    Args:\n        release_date_str: Date string in format 'Mon DD, YYYY' (e.g., 'Jan 15, 2020')\n        \n    
Returns:\n        Datetime object or None if parsing fails\n        \n    Example:\n        >>> parse_release_date('Jan 15, 2020')\n        datetime.datetime(2020, 1, 15, 0, 0)\n        >>> parse_release_date('invalid date')\n        None\n    \"\"\"\n    if release_date_str is None:\n        return None\n    try:\n        # Parse Google Play Store date format\n        return datetime.strptime(release_date_str, \"%b %d, %Y\")\n    except (ValueError, TypeError):\n        # Return None for invalid date formats\n        return None\n\n\ndef calculate_app_age(release_date_str: Optional[str], current_date: datetime) -> Optional[int]:\n    \"\"\"Calculate app age in days since release.\n    \n    Computes the number of days between app release date and current date,\n    useful for analyzing app maturity and growth metrics.\n    \n    Args:\n        release_date_str: Release date string (e.g., 'Jan 15, 2020')\n        current_date: Current date for calculation (usually datetime.now())\n        \n    Returns:\n        Number of days since release (non-negative) or None if date invalid\n        \n    Example:\n        >>> from datetime import datetime\n        >>> current = datetime(2023, 1, 15)\n        >>> calculate_app_age('Jan 15, 2020', current)\n        1096  # 3 years = 1096 days (2020 was a leap year)\n    \"\"\"\n    release_date = parse_release_date(release_date_str)\n    if release_date is None:\n        return None\n    \n    # Handle timezone differences\n    if current_date.tzinfo is not None and release_date.tzinfo is None:\n        release_date = release_date.replace(tzinfo=timezone.utc)\n    \n    # Calculate days difference\n    days_since_release = (current_date - release_date).days\n    # Ensure non-negative result\n    return max(0, days_since_release)\n\n\ndef parse_installs_string(installs_str: Optional[str]) -> Optional[int]:\n    \"\"\"Parse install count string to integer.\n    \n    Converts Google Play Store install count strings (with commas and plus signs)\n    to numeric 
values for calculations and comparisons.\n    \n    Args:\n        installs_str: Install count string (e.g., '1,000,000+', '500,000')\n        \n    Returns:\n        Integer install count or None if parsing fails\n        \n    Example:\n        >>> parse_installs_string('1,000,000+')\n        1000000\n        >>> parse_installs_string('500,000')\n        500000\n        >>> parse_installs_string('invalid')\n        None\n    \"\"\"\n    if installs_str is None:\n        return None\n    \n    # Remove formatting characters\n    cleaned_str = installs_str.replace(',', '').replace('+', '')\n    try:\n        return int(cleaned_str)\n    except (ValueError, TypeError):\n        return None\n\n\ndef calculate_daily_installs(install_count, release_date_str: Optional[str], current_date: datetime) -> Optional[int]:\n    \"\"\"Calculate average daily installs since release.\n    \n    Computes the average number of installs per day since app release,\n    providing insight into app growth rate and popularity trends.\n    \n    Args:\n        install_count: Total install count (int or string like '1,000,000+')\n        release_date_str: Release date string (e.g., 'Jan 15, 2020')\n        current_date: Current date for calculation\n        \n    Returns:\n        Average daily installs (integer) or None if calculation impossible\n        \n    Example:\n        >>> from datetime import datetime\n        >>> current = datetime(2023, 1, 15)\n        >>> calculate_daily_installs(1000000, 'Jan 15, 2020', current)\n        912  # ~1M installs over 1096 days\n    \"\"\"\n    # Convert string install count to integer if needed\n    if isinstance(install_count, str):\n        install_count = parse_installs_string(install_count)\n    \n    if install_count is None or release_date_str is None:\n        return None\n    \n    release_date = parse_release_date(release_date_str)\n    if release_date is None:\n        return None\n    \n    # Handle timezone differences\n    if 
current_date.tzinfo is not None and release_date.tzinfo is None:\n        release_date = release_date.replace(tzinfo=timezone.utc)\n    \n    days_since_release = (current_date - release_date).days\n    if days_since_release <= 0:\n        return 0\n    \n    # Calculate average daily installs\n    return int(install_count / days_since_release)\n\n\ndef calculate_monthly_installs(install_count, release_date_str: Optional[str], current_date: datetime) -> Optional[int]:\n    \"\"\"Calculate average monthly installs since release.\n    \n    Computes the average number of installs per month since app release,\n    useful for understanding monthly growth patterns and trends.\n    \n    Args:\n        install_count: Total install count (int or string like '1,000,000+')\n        release_date_str: Release date string (e.g., 'Jan 15, 2020')\n        current_date: Current date for calculation\n        \n    Returns:\n        Average monthly installs (integer) or None if calculation impossible\n        \n    Note:\n        Uses 30.44 days per month (365.25/12) for accurate monthly calculations\n        \n    Example:\n        >>> from datetime import datetime\n        >>> current = datetime(2023, 1, 15)\n        >>> calculate_monthly_installs(1000000, 'Jan 15, 2020', current)\n        27773  # ~1M installs over ~36 months\n    \"\"\"\n    # Convert string install count to integer if needed\n    if isinstance(install_count, str):\n        install_count = parse_installs_string(install_count)\n    \n    if install_count is None or release_date_str is None:\n        return None\n    \n    release_date = parse_release_date(release_date_str)\n    if release_date is None:\n        return None\n    \n    # Handle timezone differences\n    if current_date.tzinfo is not None and release_date.tzinfo is None:\n        release_date = release_date.replace(tzinfo=timezone.utc)\n    \n    days_since_release = (current_date - release_date).days\n    if days_since_release <= 0:\n        
return 0\n    \n    # Convert days to months (using average month length)\n    months_since_release = days_since_release / 30.44  # 365.25/12\n    return int(install_count / months_since_release)\n\n\ndef tamp_to_date(value) -> Any:\n    \"\"\"Convert timestamp to date format 'Jul 21, 2023' if value is a timestamp.\n    \n    Detects Unix timestamps and converts them to human-readable date format.\n    Non-timestamp values are returned unchanged.\n    \n    Args:\n        value: Value to check and convert (int, float, or any other type)\n        \n    Returns:\n        Formatted date string (e.g., 'Jul 21, 2023') if timestamp, otherwise original value\n        \n    Note:\n        datetime.fromtimestamp() uses the local timezone, so the exact\n        date can vary by system timezone.\n        \n    Example:\n        >>> tamp_to_date(1642780800)  # local time\n        'Jan 21, 2022'\n        >>> tamp_to_date('not a timestamp')\n        'not a timestamp'\n    \"\"\"\n    # Check if value looks like a Unix timestamp (> 1 billion = after 2001)\n    if isinstance(value, (int, float)) and value > 1000000000:\n        try:\n            # Note: fromtimestamp() converts using the local timezone\n            dt = datetime.fromtimestamp(value)\n            return dt.strftime(\"%b %d, %Y\")\n        except (ValueError, OSError):\n            # Invalid timestamp, return original value\n            pass\n    return value\n\n\ndef get_publisher_country(phone: Optional[str], address: Optional[str]) -> str:\n    \"\"\"Determine publisher country from phone and address information.\n    \n    Analyzes developer contact information to determine their likely country\n    of origin by extracting country codes from phone numbers and addresses.\n    \n    Args:\n        phone: Developer phone number, digits only after '+' (e.g., '+442012345678')\n        address: Developer address string (country name or code on its own line)\n        \n    Returns:\n        Country name(s) or 'Unknown' if cannot be determined\n        \n    Examples (assuming matching entries in PHONE_PREFIXES):\n        >>> get_publisher_country('+442012345678', None)\n        'United Kingdom'\n        >>> get_publisher_country(None, 'Berlin\\nGermany')\n        'Germany'\n        >>> get_publisher_country('+442012345678', 'Berlin\\nGermany')\n        'United Kingdom/Germany'\n    \"\"\"\n    # Extract country codes from phone and address\n    phone_code = pho_count(phone) if phone else None\n    address_code = add_count(address) if address else None\n    \n    # Convert country codes to readable names\n    code_to_name = {item[1]: item[2].title() for item in PHONE_PREFIXES}\n    \n    phone_country = code_to_name.get(phone_code) if phone_code else None\n    address_country = code_to_name.get(address_code) if address_code else None\n    \n    # Determine final country based on available information\n    if not phone_country and not address_country:\n        return \"Unknown\"\n    elif phone_country and not address_country:\n        return phone_country\n    elif address_country and not phone_country:\n        return address_country\n    elif phone_country == address_country:\n        return phone_country\n    else:\n        # Different countries detected, show both\n        return f\"{phone_country}/{address_country}\"\n\ndef add_count(address):\n    \"\"\"Extract country code from address string.\n    \n    Parses address text to find a country name or country code on its own\n    line and returns the corresponding country code.\n    \n    Args:\n        address: Address string with country information (e.g., 'London\\nUnited Kingdom')\n        \n    Returns:\n        Country code (e.g., 'gb') or None if not found\n        \n    Note:\n        Matching is exact per line, so 'London, UK' on a single line matches\n        nothing; the country must appear on its own line. Results depend on\n        the entries in PHONE_PREFIXES.\n        \n    Example:\n        >>> add_count('10 Downing St\\nLondon\\nUnited Kingdom')\n        'gb'\n        >>> add_count('Berlin\\nGermany')\n        'de'\n        >>> add_count('Unknown location')\n        None\n    \"\"\"\n    if not address:\n        return None\n    \n    # Split address into parts (lines)\n    parts = address.split('\\n')\n    \n    # Create lookup dictionaries from phone prefixes data\n    code_to_code = {item[1].lower(): item[1] for item in PHONE_PREFIXES}\n    name_to_code = {item[2].lower(): item[1] for item in PHONE_PREFIXES}\n    
\n    # Check each part of the address for country matches\n    for part in parts:\n        part_lower = part.strip().lower()\n        # Check for exact country code match\n        if part_lower in code_to_code:\n            return code_to_code[part_lower]\n        # Check for country name match\n        if part_lower in name_to_code:\n            return name_to_code[part_lower]\n    \n    return None\n\n\ndef pho_count(phone):\n    \"\"\"Extract country code from phone number.\n    \n    Analyzes phone number format to determine the country code by matching\n    against known international phone prefixes.\n    \n    Args:\n        phone: Phone number string, digits only after '+' (e.g., '+442012345678')\n        \n    Returns:\n        Country code (e.g., 'us', 'gb') or None if not found\n        \n    Note:\n        Numbers starting with '1' are matched on their first four digits\n        ('1' plus the three-digit area code) to distinguish NANP countries;\n        other numbers are matched longest-prefix-first. Separators such as\n        '-' break the numeric prefix parsing, and results depend on the\n        entries in PHONE_PREFIXES.\n        \n    Example:\n        >>> pho_count('+442012345678')\n        'gb'\n        >>> pho_count('invalid')\n        None\n    \"\"\"\n    if not isinstance(phone, str) or len(phone) < 10:\n        return None\n    \n    # Remove leading '+' if present\n    phone = phone.lstrip('+')\n    \n    # Create lookup dictionary from phone prefixes\n    prefix_to_code = {item[0]: item[1] for item in PHONE_PREFIXES}\n    \n    # Handle North American numbers (starting with 1)\n    if phone.startswith('1'):\n        try:\n            # NANP prefix: '1' plus the 3-digit area code (4 digits total)\n            prefix = int(phone[:4])\n            return prefix_to_code.get(prefix)\n        except ValueError:\n            return None\n    else:\n        # Try different prefix lengths (longest first)\n        for length in range(7, 0, -1):\n            try:\n                prefix = int(phone[:length])\n                if prefix in prefix_to_code:\n                    return prefix_to_code[prefix]\n            except ValueError:\n                continue\n        return None\n"
  },
  {
    "path": "gplay_scraper/utils/http_client.py",
    "content": "\"\"\"HTTP client with support for 7 different libraries and automatic fallback.\n\nThis module provides a unified HTTP client interface that supports:\n- requests\n- curl_cffi\n- tls_client\n- httpx\n- urllib3\n- cloudscraper\n- aiohttp\n\nWith automatic fallback if the primary client fails.\n\"\"\"\n\nimport time\nimport logging\nfrom typing import Optional\nfrom urllib.parse import quote\n\nfrom ..config import Config\nfrom ..exceptions import AppNotFoundError, NetworkError\n\nlogger = logging.getLogger(__name__)\n\nclass HttpClient:\n    \"\"\"HTTP client with automatic fallback support for 7 libraries.\n    \n    Provides a unified interface for making HTTP requests using various client libraries.\n    Automatically falls back to alternative clients if the preferred one fails or is unavailable.\n    \n    Supported Clients:\n        - requests: Most compatible, widely used\n        - curl_cffi: Best for bypassing anti-bot protections\n        - tls_client: Advanced TLS fingerprinting\n        - urllib3: Low-level HTTP with connection pooling\n        - cloudscraper: Automatic Cloudflare bypass\n        - aiohttp: Asynchronous HTTP operations\n        - httpx: Modern HTTP client with HTTP/2 support\n    \n    Features:\n        - Automatic client fallback on failures\n        - Rate limiting to prevent blocks\n        - Browser impersonation capabilities\n        - Connection pooling and reuse\n        - Comprehensive error handling\n    \"\"\"\n    def __init__(self, rate_limit_delay: float = None, client_type: str = None):\n        \"\"\"Initialize HTTP client with specified or default client type.\n        \n        Args:\n            rate_limit_delay: Delay between requests in seconds (default: 1.0)\n            client_type: HTTP client to use - options:\n                        'requests', 'curl_cffi', 'tls_client', 'urllib3',\n                        'cloudscraper', 'aiohttp', 'httpx' (default: 'requests')\n        \"\"\"\n        
self.headers = Config.get_headers()\n        self.timeout = Config.DEFAULT_TIMEOUT\n        self.rate_limit_delay = rate_limit_delay or Config.RATE_LIMIT_DELAY\n        self.last_request_time = 0\n        self.client_type = client_type or Config.DEFAULT_HTTP_CLIENT\n        self.available_clients = [\"requests\", \"curl_cffi\", \"tls_client\", \"urllib3\", \"cloudscraper\", \"aiohttp\", \"httpx\"]\n        self.current_client_index = 0\n        self._setup_client()\n    \n    def _setup_client(self):\n        \"\"\"Setup HTTP client based on client_type with automatic fallback.\n        \n        Initializes the specified HTTP client library. If the requested client\n        is not available, automatically falls back to the next available client\n        in the priority order.\n        \n        Priority Order:\n            1. requests (default, most compatible)\n            2. curl_cffi (best for bypassing blocks)\n            3. tls_client (advanced TLS fingerprinting)\n            4. urllib3 (low-level HTTP)\n            5. cloudscraper (anti-bot protection)\n            6. aiohttp (async support)\n            7. 
httpx (modern HTTP client)\n        \"\"\"\n        # Try to initialize the specified client, fallback to next if unavailable\n        if self.client_type == \"requests\" or self.client_type is None:\n            self._try_requests()\n        elif self.client_type == \"curl_cffi\":\n            self._try_curl_cffi()\n        elif self.client_type == \"tls_client\":\n            self._try_tls_client()\n        elif self.client_type == \"urllib3\":\n            self._try_urllib3()\n        elif self.client_type == \"cloudscraper\":\n            self._try_cloudscraper()\n        elif self.client_type == \"aiohttp\":\n            self._try_aiohttp()\n        elif self.client_type == \"httpx\":\n            self._try_httpx()\n        else:\n            # Default fallback to requests\n            self._try_requests()\n    \n    def _try_requests(self):\n        \"\"\"Try to initialize requests client, fallback to curl_cffi if unavailable.\n        \n        Requests is the most widely used HTTP library for Python, providing\n        simple and reliable HTTP functionality. 
Used as the default client.\n        \n        Fallback: curl_cffi (if requests unavailable)\n        \"\"\"\n        try:\n            import requests\n            self.client = requests\n            self.client_type = \"requests\"\n        except ImportError:\n            # Fallback to next available client\n            self._try_curl_cffi()\n    \n    def _try_curl_cffi(self):\n        \"\"\"Try to initialize curl_cffi client, fallback to tls_client if unavailable.\n        \n        curl_cffi provides libcurl bindings with browser impersonation capabilities,\n        making it excellent for bypassing anti-bot protections by mimicking real browsers.\n        \n        Features:\n            - Browser impersonation (Chrome 110)\n            - Advanced TLS fingerprinting\n            - Better success rate against blocks\n        \n        Fallback: tls_client (if curl_cffi unavailable)\n        \"\"\"\n        try:\n            from curl_cffi import requests as curl_requests\n            # Impersonate Chrome 110 for better compatibility\n            self.client = curl_requests.Session(impersonate=\"chrome110\")\n            self.client_type = \"curl_cffi\"\n        except ImportError:\n            # Fallback to next available client\n            self._try_tls_client()\n    \n    def _try_tls_client(self):\n        \"\"\"Try to initialize tls_client, fallback to urllib3 if unavailable.\n        \n        tls_client provides advanced TLS fingerprinting and browser simulation\n        capabilities, useful for bypassing sophisticated detection systems.\n        \n        Features:\n            - Chrome 112 client identifier\n            - Random TLS extension ordering\n            - Advanced fingerprint randomization\n        \n        Fallback: urllib3 (if tls_client unavailable)\n        \"\"\"\n        try:\n            import tls_client\n            self.client = tls_client.Session(\n                client_identifier=\"chrome112\",  # Simulate Chrome 112\n         
       random_tls_extension_order=True  # Randomize TLS fingerprint\n            )\n            self.client_type = \"tls_client\"\n        except ImportError:\n            # Fallback to next available client\n            self._try_urllib3()\n    \n    def _try_urllib3(self):\n        \"\"\"Try to initialize urllib3 client, fallback to cloudscraper if unavailable.\n        \n        urllib3 is a powerful HTTP client library that provides connection pooling,\n        thread safety, and many other features. It's the foundation for requests.\n        \n        Features:\n            - Connection pooling for better performance\n            - Thread-safe operations\n            - Low-level HTTP control\n        \n        Fallback: cloudscraper (if urllib3 unavailable)\n        \"\"\"\n        try:\n            import urllib3\n            # Use PoolManager for connection pooling\n            self.client = urllib3.PoolManager()\n            self.client_type = \"urllib3\"\n        except ImportError:\n            # Fallback to next available client\n            self._try_cloudscraper()\n    \n    def _try_cloudscraper(self):\n        \"\"\"Try to initialize cloudscraper client, fallback to aiohttp if unavailable.\n        \n        cloudscraper is designed to bypass Cloudflare's anti-bot protection\n        and other similar security measures automatically.\n        \n        Features:\n            - Automatic Cloudflare bypass\n            - JavaScript challenge solving\n            - Anti-bot protection circumvention\n        \n        Fallback: aiohttp (if cloudscraper unavailable)\n        \"\"\"\n        try:\n            import cloudscraper\n            # Create scraper with automatic anti-bot bypass\n            self.client = cloudscraper.create_scraper()\n            self.client_type = \"cloudscraper\"\n        except ImportError:\n            # Fallback to next available client\n            self._try_aiohttp()\n\n    def _try_aiohttp(self):\n        \"\"\"Try to 
initialize aiohttp client, fallback to httpx if unavailable.\n        \n        aiohttp provides asynchronous HTTP client/server functionality,\n        allowing for better performance with concurrent requests.\n        \n        Features:\n            - Asynchronous operations\n            - Better performance for multiple requests\n            - WebSocket support\n        \n        Note: Used synchronously via asyncio.run() in this implementation\n        \n        Fallback: httpx (if aiohttp unavailable)\n        \"\"\"\n        try:\n            import aiohttp\n            # Store aiohttp module for async operations\n            self.client = aiohttp\n            self.client_type = \"aiohttp\"\n        except ImportError:\n            # Fallback to next available client\n            self._try_httpx()\n    \n    def _try_httpx(self):\n        \"\"\"Try to initialize httpx client, raise error if unavailable.\n        \n        httpx is a modern HTTP client library with async support and HTTP/2 capabilities.\n        It provides a requests-compatible API with additional features.\n        \n        Features:\n            - HTTP/2 support\n            - Async and sync APIs\n            - Modern Python features\n            - Requests-compatible interface\n        \n        Raises:\n            ImportError: If no HTTP client libraries are available\n        \"\"\"\n        try:\n            import httpx\n            # Create client with configured timeout\n            self.client = httpx.Client(timeout=self.timeout)\n            self.client_type = \"httpx\"\n        except ImportError:\n            # No more fallback options available\n            raise ImportError(Config.ERROR_MESSAGES[\"NO_HTTP_CLIENT\"])\n    \n    def fetch_app_page(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> str:\n        \"\"\"Fetch app details page from Google Play Store.\n        \n        Retrieves the HTML content of an app's details 
page, which contains\n        all the app information including ratings, reviews, description, etc.\n        \n        Args:\n            app_id: Google Play app ID (e.g., 'com.whatsapp')\n            lang: Language code for localization (e.g., 'en', 'es')\n            country: Country code for regional content (e.g., 'us', 'uk')\n            \n        Returns:\n            HTML content of app page containing embedded JSON data\n            \n        Raises:\n            AppNotFoundError: If app not found (404 error)\n            NetworkError: If request fails due to network issues\n            \n        Example:\n            html = client.fetch_app_page('com.whatsapp', 'en', 'us')\n        \"\"\"\n        self.rate_limit()\n        \n        url = f\"{Config.PLAY_STORE_BASE_URL}{Config.APP_DETAILS_ENDPOINT}?id={app_id}&hl={lang}&gl={country}\"\n        \n        try:\n            response = self._make_request(\"GET\", url)\n            return response.text\n        except Exception as e:\n            if self._is_404_error(e):\n                raise AppNotFoundError(Config.ERROR_MESSAGES[\"APP_NOT_FOUND\"].format(app_id=app_id))\n            # Retry without country parameter\n            url = f\"{Config.PLAY_STORE_BASE_URL}{Config.APP_DETAILS_ENDPOINT}?id={app_id}&hl={lang}\"\n            try:\n                response = self._make_request(\"GET\", url)\n                return response.text\n            except Exception as e2:\n                if self._is_404_error(e2):\n                    raise AppNotFoundError(Config.ERROR_MESSAGES[\"APP_NOT_FOUND\"].format(app_id=app_id))\n                # Report the retry's error, not the original one\n                logger.error(Config.ERROR_MESSAGES[\"APP_FETCH_FAILED\"].format(app_id=app_id, error=e2))\n                raise NetworkError(Config.ERROR_MESSAGES[\"APP_FETCH_FAILED\"].format(app_id=app_id, error=e2))\n    \n    def fetch_app_page_no_locale(self, app_id: str) -> str:\n        \"\"\"Fetch app page without hl/gl parameters for fallback data.\n        \n        Args:\n           
 app_id: Google Play app ID\n            \n        Returns:\n            HTML content of app page\n        \"\"\"\n        self.rate_limit()\n        \n        url = f\"{Config.PLAY_STORE_BASE_URL}{Config.APP_DETAILS_ENDPOINT}?id={app_id}\"\n        \n        try:\n            response = self._make_request(\"GET\", url)\n            return response.text\n        except Exception as e:\n            logger.error(f\"Fallback fetch failed for {app_id}: {e}\")\n            return \"\"\n\n    def fetch_search_page(self, query: str = None, token: str = None, needed: int = None, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> str:\n        \"\"\"Fetch search results from Google Play Store (initial or paginated).\n        \n        Args:\n            query: Search query string (for initial search)\n            token: Pagination token (for paginated search)\n            needed: Number of results needed (for pagination)\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            HTML content (initial) or raw API response (pagination)\n            \n        Raises:\n            AppNotFoundError: If search fails\n            NetworkError: If request fails\n        \"\"\"\n        self.rate_limit()\n        \n        # Pagination request\n        if token and needed:\n            url = f\"{Config.PLAY_STORE_BASE_URL}/_/PlayStoreUi/data/batchexecute\"\n            params = f\"rpcids=qnKhOb&source-path=%2Fwork%2Fsearch&hl={lang}&gl={country}\"\n            \n            body = f'f.req=%5B%5B%5B%22qnKhOb%22%2C%22%5B%5Bnull%2C%5B%5B10%2C%5B10%2C{needed}%5D%5D%2Ctrue%2Cnull%2C%5B96%2C27%2C4%2C8%2C57%2C30%2C110%2C79%2C11%2C16%2C49%2C1%2C3%2C9%2C12%2C104%2C55%2C56%2C51%2C10%2C34%2C77%5D%5D%2Cnull%2C%5C%22{token}%5C%22%5D%5D%22%2Cnull%2C%22generic%22%5D%5D%5D'\n            \n            headers = {\n                **self.headers,\n                \"Content-Type\": 
\"application/x-www-form-urlencoded;charset=UTF-8\"\n            }\n            \n            try:\n                response = self._make_request(\"POST\", f\"{url}?{params}\", data=body, headers=headers)\n                return response.text\n            except Exception as e:\n                logger.error(Config.ERROR_MESSAGES[\"SEARCH_PAGINATION_FAILED\"].format(error=e))\n                raise NetworkError(Config.ERROR_MESSAGES[\"SEARCH_PAGINATION_FAILED\"].format(error=e))\n        \n        # Initial search request\n        elif query:\n            encoded_query = quote(query)\n            url = f\"{Config.PLAY_STORE_BASE_URL}/work/search?q={encoded_query}&hl={lang}&gl={country}&price=0\"\n            \n            try:\n                response = self._make_request(\"GET\", url)\n                return response.text\n            except Exception as e:\n                if self._is_404_error(e):\n                    raise AppNotFoundError(Config.ERROR_MESSAGES[\"SEARCH_NOT_FOUND\"].format(query=query))\n                # Retry without country parameter\n                url = f\"{Config.PLAY_STORE_BASE_URL}/work/search?q={encoded_query}&hl={lang}&price=0\"\n                try:\n                    response = self._make_request(\"GET\", url)\n                    return response.text\n                except Exception as e2:\n                    if self._is_404_error(e2):\n                        raise AppNotFoundError(Config.ERROR_MESSAGES[\"SEARCH_NOT_FOUND\"].format(query=query))\n                    # Report the retry's error, not the original one\n                    logger.error(Config.ERROR_MESSAGES[\"SEARCH_FETCH_FAILED\"].format(query=query, error=e2))\n                    raise NetworkError(Config.ERROR_MESSAGES[\"SEARCH_FETCH_FAILED\"].format(query=query, error=e2))\n        \n        else:\n            raise ValueError(\"Either query or (token and needed) must be provided\")\n\n    def fetch_reviews_batch(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY,\n                           sort: int = Config.DEFAULT_REVIEWS_SORT, 
batch_count: int = Config.DEFAULT_REVIEWS_BATCH_SIZE, token: str = None) -> str:\n        \"\"\"Fetch single batch of reviews from Google Play Store API.\n        \n        Args:\n            app_id: Google Play app ID\n            lang: Language code\n            country: Country code\n            sort: Sort order (1=RELEVANT, 2=NEWEST, 3=RATING)\n            batch_count: Number of reviews per batch\n            token: Pagination token for next batch\n            \n        Returns:\n            Raw API response text\n            \n        Raises:\n            AppNotFoundError: If reviews not found\n            NetworkError: If request fails\n        \"\"\"\n        self.rate_limit()\n        \n        url = f\"{Config.PLAY_STORE_BASE_URL}{Config.BATCHEXECUTE_ENDPOINT}?hl={lang}&gl={country}\"\n        \n        headers = {\n            **self.headers,\n            \"content-type\": \"application/x-www-form-urlencoded\"\n        }\n        \n        if token:\n            payload = f\"f.req=%5B%5B%5B%22oCPfdb%22%2C%22%5Bnull%2C%5B2%2C{sort}%2C%5B{batch_count}%2Cnull%2C%5C%22{token}%5C%22%5D%2Cnull%2C%5Bnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%5D%5D%2C%5B%5C%22{app_id}%5C%22%2C7%5D%5D%22%2Cnull%2C%22generic%22%5D%5D%5D\"\n        else:\n            payload = f\"f.req=%5B%5B%5B%22oCPfdb%22%2C%22%5Bnull%2C%5B2%2C{sort}%2C%5B{batch_count}%5D%2Cnull%2C%5Bnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%5D%5D%2C%5B%5C%22{app_id}%5C%22%2C7%5D%5D%22%2Cnull%2C%22generic%22%5D%5D%5D\"\n        \n        try:\n            response = self._make_request(\"POST\", url, data=payload, headers=headers)\n            return response.text\n        except Exception as e:\n            if self._is_404_error(e):\n                raise AppNotFoundError(Config.ERROR_MESSAGES[\"REVIEWS_NOT_FOUND\"].format(app_id=app_id))\n            logger.error(Config.ERROR_MESSAGES[\"REVIEWS_FETCH_FAILED\"].format(app_id=app_id, error=e))\n            raise 
NetworkError(Config.ERROR_MESSAGES[\"REVIEWS_FETCH_FAILED\"].format(app_id=app_id, error=e))\n\n    def fetch_developer_page(self, dev_id: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> str:\n        \"\"\"Fetch developer portfolio page from Google Play Store.\n        \n        Args:\n            dev_id: Developer ID (numeric or string)\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            HTML content of developer page\n            \n        Raises:\n            AppNotFoundError: If developer not found\n            NetworkError: If request fails\n        \"\"\"\n        self.rate_limit()\n        \n        if dev_id.isdigit():\n            url = f\"{Config.PLAY_STORE_BASE_URL}{Config.DEVELOPER_NUMERIC_ENDPOINT}?id={quote(dev_id)}&hl={lang}&gl={country}\"\n        else:\n            url = f\"{Config.PLAY_STORE_BASE_URL}{Config.DEVELOPER_STRING_ENDPOINT}?id={quote(dev_id)}&hl={lang}&gl={country}\"\n        \n        try:\n            response = self._make_request(\"GET\", url)\n            return response.text\n        except Exception as e:\n            if self._is_404_error(e):\n                raise AppNotFoundError(Config.ERROR_MESSAGES[\"DEVELOPER_NOT_FOUND\"].format(dev_id=dev_id))\n            if dev_id.isdigit():\n                url = f\"{Config.PLAY_STORE_BASE_URL}{Config.DEVELOPER_NUMERIC_ENDPOINT}?id={quote(dev_id)}&hl={lang}\"\n            else:\n                url = f\"{Config.PLAY_STORE_BASE_URL}{Config.DEVELOPER_STRING_ENDPOINT}?id={quote(dev_id)}&hl={lang}\"\n            try:\n                response = self._make_request(\"GET\", url)\n                return response.text\n            except Exception as e2:\n                if self._is_404_error(e2):\n                    raise AppNotFoundError(Config.ERROR_MESSAGES[\"DEVELOPER_NOT_FOUND\"].format(dev_id=dev_id))\n                
logger.error(Config.ERROR_MESSAGES[\"DEVELOPER_FETCH_FAILED\"].format(dev_id=dev_id, error=e2))\n                raise NetworkError(Config.ERROR_MESSAGES[\"DEVELOPER_FETCH_FAILED\"].format(dev_id=dev_id, error=e2))\n\n    def fetch_cluster_page(self, cluster_url: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> str:\n        \"\"\"Fetch cluster page (similar apps collection) from Google Play Store.\n        \n        Args:\n            cluster_url: Cluster URL path, including its query string (gl/hl are appended with '&')\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            HTML content of cluster page\n            \n        Raises:\n            AppNotFoundError: If cluster not found\n            NetworkError: If request fails\n        \"\"\"\n        self.rate_limit()\n        \n        url = f\"{Config.PLAY_STORE_BASE_URL}{cluster_url}&gl={country}&hl={lang}\"\n        \n        try:\n            response = self._make_request(\"GET\", url)\n            return response.text\n        except Exception as e:\n            if self._is_404_error(e):\n                raise AppNotFoundError(Config.ERROR_MESSAGES[\"CLUSTER_NOT_FOUND\"].format(cluster_url=cluster_url))\n            logger.error(Config.ERROR_MESSAGES[\"CLUSTER_FETCH_FAILED\"].format(error=e))\n            raise NetworkError(Config.ERROR_MESSAGES[\"CLUSTER_FETCH_FAILED\"].format(error=e))\n\n    def fetch_list_page(self, collection: str, category: str = Config.DEFAULT_LIST_CATEGORY, count: int = Config.DEFAULT_LIST_COUNT, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> str:\n        \"\"\"Fetch top charts list page from Google Play Store.\n        \n        Args:\n            collection: Collection type (topselling_free, topselling_paid, topgrossing)\n            category: App category\n            count: Number of apps to fetch\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            Raw 
API response text\n            \n        Raises:\n            AppNotFoundError: If list not found\n            NetworkError: If request fails\n        \"\"\"\n        self.rate_limit()\n        \n        body = f'f.req=%5B%5B%5B%22vyAe2%22%2C%22%5B%5Bnull%2C%5B%5B8%2C%5B20%2C{count}%5D%5D%2Ctrue%2Cnull%2C%5B64%2C1%2C195%2C71%2C8%2C72%2C9%2C10%2C11%2C139%2C12%2C16%2C145%2C148%2C150%2C151%2C152%2C27%2C30%2C31%2C96%2C32%2C34%2C163%2C100%2C165%2C104%2C169%2C108%2C110%2C113%2C55%2C56%2C57%2C122%5D%2C%5Bnull%2Cnull%2C%5B%5B%5Btrue%5D%2Cnull%2C%5B%5Bnull%2C%5B%5D%5D%5D%2Cnull%2Cnull%2Cnull%2Cnull%2C%5Bnull%2C2%5D%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2C%5B1%5D%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2C%5B1%5D%5D%2C%5Bnull%2C%5B%5Bnull%2C%5B%5D%5D%5D%5D%2C%5Bnull%2C%5B%5Bnull%2C%5B%5D%5D%5D%2Cnull%2C%5Btrue%5D%5D%2C%5Bnull%2C%5B%5Bnull%2C%5B%5D%5D%5D%5D%2Cnull%2Cnull%2Cnull%2Cnull%2C%5B%5B%5Bnull%2C%5B%5D%5D%5D%5D%2C%5B%5B%5Bnull%2C%5B%5D%5D%5D%5D%5D%2C%5B%5B%5B%5B7%2C1%5D%2C%5B%5B1%2C73%2C96%2C103%2C97%2C58%2C50%2C92%2C52%2C112%2C69%2C19%2C31%2C101%2C123%2C74%2C49%2C80%2C38%2C20%2C10%2C14%2C79%2C43%2C42%2C139%5D%5D%5D%5D%5D%5D%2Cnull%2Cnull%2C%5B%5B%5B1%2C2%5D%2C%5B10%2C8%2C9%5D%2C%5B%5D%2C%5B%5D%5D%5D%5D%2C%5B2%2C%5C%22{collection}%5C%22%2C%5C%22{category}%5C%22%5D%5D%5D%22%2Cnull%2C%22generic%22%5D%5D%5D&at=AFSRYlx8XZfN8-O-IKASbNBDkB6T%3A1655531200971&'\n        \n        url = f\"{Config.PLAY_STORE_BASE_URL}{Config.BATCHEXECUTE_ENDPOINT}?rpcids=vyAe2&source-path=%2Fstore%2Fapps&hl={lang}&gl={country}\"\n        \n        headers = {\n            **self.headers,\n            \"Content-Type\": \"application/x-www-form-urlencoded;charset=UTF-8\"\n        }\n        \n        try:\n            response = self._make_request(\"POST\", url, data=body, headers=headers)\n            return response.text\n        except Exception as e:\n            if self._is_404_error(e):\n                raise 
AppNotFoundError(Config.ERROR_MESSAGES[\"LIST_NOT_FOUND\"].format(collection=collection, category=category))\n            logger.error(Config.ERROR_MESSAGES[\"LIST_FETCH_FAILED\"].format(error=e))\n            raise NetworkError(Config.ERROR_MESSAGES[\"LIST_FETCH_FAILED\"].format(error=e))\n\n    def fetch_suggest_page(self, term: str, lang: str = Config.DEFAULT_LANGUAGE, country: str = Config.DEFAULT_COUNTRY) -> str:\n        \"\"\"Fetch search suggestions from Google Play Store.\n        \n        Args:\n            term: Search term for suggestions\n            lang: Language code\n            country: Country code\n            \n        Returns:\n            Raw API response text\n            \n        Raises:\n            AppNotFoundError: If suggestions not found\n            NetworkError: If request fails\n        \"\"\"\n        self.rate_limit()\n        \n        encoded_term = quote(term)\n        url = f\"{Config.PLAY_STORE_BASE_URL}{Config.BATCHEXECUTE_ENDPOINT}?rpcids=IJ4APc&f.sid=-697906427155521722&bl=boq_playuiserver_20190903.08_p0&hl={lang}&gl={country}&authuser&soc-app=121&soc-platform=1&soc-device=1&_reqid=1065213\"\n        \n        body = f\"f.req=%5B%5B%5B%22IJ4APc%22%2C%22%5B%5Bnull%2C%5B%5C%22{encoded_term}%5C%22%5D%2C%5B10%5D%2C%5B2%5D%2C4%5D%5D%22%5D%5D%5D\"\n        \n        headers = {\n            **self.headers,\n            \"Content-Type\": \"application/x-www-form-urlencoded;charset=UTF-8\"\n        }\n        \n        try:\n            response = self._make_request(\"POST\", url, data=body, headers=headers)\n            return response.text\n        except Exception as e:\n            if self._is_404_error(e):\n                raise AppNotFoundError(Config.ERROR_MESSAGES[\"SUGGEST_NOT_FOUND\"].format(term=term))\n            logger.error(Config.ERROR_MESSAGES[\"SUGGEST_FETCH_FAILED\"].format(term=term, error=e))\n            raise NetworkError(Config.ERROR_MESSAGES[\"SUGGEST_FETCH_FAILED\"].format(term=term, error=e))\n\n    
def _make_request(self, method: str, url: str, **kwargs):\n        \"\"\"Make HTTP request with automatic client fallback.\n        \n        Attempts to make an HTTP request using the configured client. If the request\n        fails, automatically tries other available HTTP clients in priority order\n        until one succeeds or all clients are exhausted.\n        \n        Args:\n            method: HTTP method (GET or POST)\n            url: Request URL\n            **kwargs: Additional request parameters (data, headers, etc.)\n            \n        Returns:\n            Response object with .text attribute\n            \n        Raises:\n            Exception: If all HTTP clients fail to make the request\n            \n        Example:\n            response = self._make_request(\"GET\", \"https://example.com\")\n            content = response.text\n        \"\"\"\n        # Build list of clients to try, starting with preferred client\n        clients_to_try = [self.client_type]\n        all_clients = [\"requests\", \"curl_cffi\", \"tls_client\", \"urllib3\", \"cloudscraper\", \"aiohttp\", \"httpx\"]\n        \n        # Add remaining clients as fallback options\n        for client in all_clients:\n            if client != self.client_type:\n                clients_to_try.append(client)\n        \n        last_error = None\n        \n        # Try each client until one succeeds\n        for client_type in clients_to_try:\n            try:\n                return self._try_request_with_client(client_type, method, url, **kwargs)\n            except Exception as e:\n                last_error = e\n                logger.warning(Config.ERROR_MESSAGES[\"CLIENT_FAILED_TRYING_NEXT\"].format(client_type=client_type, error=e))\n                continue\n        \n        # All clients failed, raise the last error\n        raise last_error\n    \n    def _try_request_with_client(self, client_type: str, method: str, url: str, **kwargs):\n        \"\"\"Attempt request with 
specific HTTP client.\n        \n        Tries to make an HTTP request using the specified client library.\n        Each client has its own implementation details and capabilities.\n        \n        Args:\n            client_type: HTTP client name (requests, curl_cffi, etc.)\n            method: HTTP method (GET or POST)\n            url: Request URL\n            **kwargs: Additional request parameters (data, headers, timeout)\n            \n        Returns:\n            Response object with .text attribute and .status_code\n            \n        Raises:\n            Exception: If client unavailable or request fails\n            \n        Note:\n            Different clients may have different response object structures,\n            so this method normalizes them to a common interface.\n        \"\"\"\n        headers = kwargs.get('headers', self.headers)\n        \n        if client_type == \"requests\":\n            try:\n                import requests\n                if method == \"GET\":\n                    response = requests.get(url, headers=headers, timeout=self.timeout)\n                else:\n                    response = requests.post(url, data=kwargs.get('data'), headers=headers, timeout=self.timeout)\n                response.raise_for_status()\n                return response\n            except ImportError:\n                raise Exception(Config.ERROR_MESSAGES[\"HTTP_CLIENT_NOT_AVAILABLE\"].format(client=\"requests\"))\n        \n        elif client_type == \"curl_cffi\":\n            try:\n                from curl_cffi import requests as curl_requests\n                session = curl_requests.Session(impersonate=\"chrome110\")\n                if method == \"GET\":\n                    response = session.get(url, headers=headers, timeout=self.timeout)\n                else:\n                    response = session.post(url, data=kwargs.get('data'), headers=headers, timeout=self.timeout)\n                response.raise_for_status()\n             
   return response\n            except ImportError:\n                raise Exception(Config.ERROR_MESSAGES[\"HTTP_CLIENT_NOT_AVAILABLE\"].format(client=\"curl_cffi\"))\n        \n        elif client_type == \"tls_client\":\n            try:\n                import tls_client\n                session = tls_client.Session(client_identifier=\"chrome112\", random_tls_extension_order=True)\n                if method == \"GET\":\n                    response = session.get(url, headers=headers)\n                else:\n                    response = session.post(url, data=kwargs.get('data'), headers=headers)\n                if response.status_code >= 400:\n                    raise Exception(Config.ERROR_MESSAGES[\"HTTP_ERROR\"].format(status_code=response.status_code))\n                return response\n            except ImportError:\n                raise Exception(Config.ERROR_MESSAGES[\"HTTP_CLIENT_NOT_AVAILABLE\"].format(client=\"tls_client\"))\n        \n        elif client_type == \"httpx\":\n            try:\n                import httpx\n                with httpx.Client(timeout=self.timeout) as client:\n                    if method == \"GET\":\n                        response = client.get(url, headers=headers)\n                    else:\n                        response = client.post(url, data=kwargs.get('data'), headers=headers)\n                    response.raise_for_status()\n                    return response\n            except ImportError:\n                raise Exception(Config.ERROR_MESSAGES[\"HTTP_CLIENT_NOT_AVAILABLE\"].format(client=\"httpx\"))\n        \n        elif client_type == \"urllib3\":\n            try:\n                import urllib3\n                http = urllib3.PoolManager()\n                if method == \"GET\":\n                    response = http.request('GET', url, headers=headers)\n                else:\n                    response = http.request('POST', url, body=kwargs.get('data'), headers=headers)\n                if 
response.status >= 400:\n                    raise Exception(Config.ERROR_MESSAGES[\"HTTP_ERROR\"].format(status_code=response.status))\n                class MockResponse:\n                    def __init__(self, data, status):\n                        self.text = data.decode('utf-8')\n                        self.status_code = status\n                    def raise_for_status(self):\n                        pass\n                return MockResponse(response.data, response.status)\n            except ImportError:\n                raise Exception(Config.ERROR_MESSAGES[\"HTTP_CLIENT_NOT_AVAILABLE\"].format(client=\"urllib3\"))\n        \n        elif client_type == \"cloudscraper\":\n            try:\n                import cloudscraper\n                scraper = cloudscraper.create_scraper()\n                if method == \"GET\":\n                    response = scraper.get(url, headers=headers, timeout=self.timeout)\n                else:\n                    response = scraper.post(url, data=kwargs.get('data'), headers=headers, timeout=self.timeout)\n                response.raise_for_status()\n                return response\n            except ImportError:\n                raise Exception(Config.ERROR_MESSAGES[\"HTTP_CLIENT_NOT_AVAILABLE\"].format(client=\"cloudscraper\"))\n        \n\n        elif client_type == \"aiohttp\":\n            try:\n                import asyncio\n                return asyncio.run(self._async_request(method, url, **kwargs))\n            except ImportError:\n                raise Exception(Config.ERROR_MESSAGES[\"HTTP_CLIENT_NOT_AVAILABLE\"].format(client=\"aiohttp\"))\n        \n        raise Exception(Config.ERROR_MESSAGES[\"UNKNOWN_CLIENT_TYPE\"].format(client_type=client_type))\n    \n    async def _async_request(self, method: str, url: str, **kwargs):\n        \"\"\"Async HTTP request using aiohttp.\n        \n        Args:\n            method: HTTP method (GET or POST)\n            url: Request URL\n            **kwargs: 
Additional request parameters\n            \n        Returns:\n            MockResponse object with text attribute\n        \"\"\"\n        import aiohttp\n        headers = kwargs.get('headers', self.headers)\n        \n        class MockResponse:\n            \"\"\"Minimal wrapper exposing the same .text interface as sync clients.\"\"\"\n            def __init__(self, text):\n                self.text = text\n            def raise_for_status(self):\n                pass\n        \n        async with aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=self.timeout)) as session:\n            if method == \"GET\":\n                async with session.get(url, headers=headers) as response:\n                    response.raise_for_status()\n                    return MockResponse(await response.text())\n            else:\n                async with session.post(url, data=kwargs.get('data'), headers=headers) as response:\n                    response.raise_for_status()\n                    return MockResponse(await response.text())\n    \n    def _is_404_error(self, error: Exception) -> bool:\n        \"\"\"Check if error is a 404 not found error.\n        \n        Analyzes exception messages to determine if the error indicates\n        that the requested resource (app, developer, etc.) 
was not found.\n        \n        Args:\n            error: Exception to check\n            \n        Returns:\n            True if 404 error, False otherwise\n            \n        Example:\n            if self._is_404_error(exception):\n                raise AppNotFoundError(\"App not found\")\n        \"\"\"\n        error_str = str(error).lower()\n        # Check for common 404 error indicators\n        return \"404\" in error_str or \"not found\" in error_str\n    \n    def _try_next_client(self):\n        \"\"\"Switch to next available HTTP client for retry.\n        \n        Cycles through available HTTP clients when the current one fails,\n        providing automatic fallback functionality for improved reliability.\n        \n        Process:\n            1. Move to next client in the list\n            2. Log the client switch\n            3. Reinitialize with new client\n            \n        Note:\n            Called automatically by error handling decorators when retries are needed\n        \"\"\"\n        # Cycle to next available client\n        self.current_client_index = (self.current_client_index + 1) % len(self.available_clients)\n        next_client = self.available_clients[self.current_client_index]\n        \n        logger.info(f\"Switching to HTTP client: {next_client}\")\n        \n        # Update client type and reinitialize\n        self.client_type = next_client\n        self._setup_client()\n    \n    def rate_limit(self):\n        \"\"\"Apply rate limiting delay between requests.\n        \n        Implements rate limiting to prevent overwhelming the Google Play Store\n        servers and avoid getting blocked. 
Calculates the time since the last\n        request and sleeps if necessary to maintain the configured delay.\n        \n        Rate Limiting Strategy:\n            - Tracks time of last request\n            - Enforces minimum delay between requests\n            - Prevents rapid-fire requests that could trigger blocks\n            \n        Note:\n            Called automatically before each HTTP request\n        \"\"\"\n        current_time = time.time()\n        time_since_last = current_time - self.last_request_time\n        \n        # Check if we need to wait before making the next request\n        if time_since_last < self.rate_limit_delay:\n            sleep_time = self.rate_limit_delay - time_since_last\n            logger.debug(Config.ERROR_MESSAGES[\"RATE_LIMIT_SLEEP\"].format(sleep_time=sleep_time))\n            time.sleep(sleep_time)\n        \n        # Update last request time\n        self.last_request_time = time.time()"
  },
  {
    "path": "output/app_example.json",
    "content": "{\n  \"appId\": \"com.playdead.limbo.full\",\n  \"title\": \"LIMBO\",\n  \"summary\": \"Uncertain of his sister's fate, a boy enters LIMBO.\",\n  \"description\": \"What the press said: \\n\\n“Limbo is as close to perfect at what it does as a game can get.” \\n10/10 – Destructoid \\n\\n“The game is a masterpiece.” \\n5/5 – GiantBomb \\n\\n“Limbo is genius. Freaky, weird genius. Disturbing, uncomfortable genius.” \\n5/5 – The Escapist \\n\\n“Dark, disturbing, yet eerily beautiful, Limbo is a world that deserves to be explored.” \\n5/5 – Joystiq \\n\\nWinner of more than 100 awards, including: \\n\\nGameinformer’s “Best Downloadable” \\nGamespot’s “Best Puzzle Game” \\nKotaku’s “The Best Indie Game” \\nGameReactor’s “Digital Game of the Year” \\nSpike TV’s “Best Independent Game” \\nX-Play’s “Best Downloadable Game” \\nIGN’s “Best Horror Game”\\n\\nLimbo is an award-winning indie adventure, critically acclaimed for its captivating puzzle design and immersive sound and visuals. 
Its dark, misty spaces and haunting narrative will stay with you forever.\",\n  \"genre\": \"Adventure\",\n  \"genreId\": \"GAME_ADVENTURE\",\n  \"categories\": [\n    \"Adventure\"\n  ],\n  \"available\": true,\n  \"released\": \"Feb 11, 2015\",\n  \"appAgeDays\": 3898,\n  \"lastUpdated\": \"Mar 11, 2025\",\n  \"updatedTimestamp\": 1741708881,\n  \"icon\": \"https://play-lh.googleusercontent.com/FJ8e7NYhyjzrjuROUSpigJ1TQNnZKUDh6AZc1SFjiD665bZsxr_7zus0DzlHIrC6Lgk=w9999\",\n  \"headerImage\": \"https://play-lh.googleusercontent.com/dPW585vpaYb9oNLGwCc9mrMc90NwUjzxYb-pJK07sUBhAmRib6DW5P3zSeA7DecMEw=w9999\",\n  \"screenshots\": [\n    \"https://play-lh.googleusercontent.com/GaptorFLFNZRTHSaV4Wh3R4nnhnd_LCCW1fNIwLERCyNcI3X5LOlK3TxeCKZLeoZowE=w9999\",\n    \"https://play-lh.googleusercontent.com/mAQtHG90gFvktB5AkGhqPqZEW6s-Ghqql4Jq6_Az4Y2hqOl7JN5oDG3MNHJJEEr4Rg=w9999\",\n    \"https://play-lh.googleusercontent.com/yxKadViR_JwU7Q5_s6OCEx3yU9WLR6Bo--kkkFThwUx5vNgxSPf3zbDDkjCWu_y0VQ=w9999\",\n    \"https://play-lh.googleusercontent.com/K1oLWUAPvUNSjbYQI4wHrDTwS2og0yDy5Hg2p2fCUlyOx9FchJzISBFyWj6Agip0Cms=w9999\",\n    \"https://play-lh.googleusercontent.com/Kj9lmcNbHpiaqJJLKvqwI9oPUKlO6Uqq0Alx5LjOZx7MrmCYA1JTQPtAO5FMpqUXcs2S=w9999\",\n    \"https://play-lh.googleusercontent.com/15Omhs7Wf3nNV0OL6SeAn7S-IL7_w0wRJjFcTWySrJWDXVnAWDvnepe44NtpdP4V-DRw=w9999\",\n    \"https://play-lh.googleusercontent.com/sMcfQtOUE_DgUOYY9vPVW6SCuAlwNSdbBcfmEMuNbjj9cFcAObSVV_06MOCMHNNJ_A=w9999\",\n    \"https://play-lh.googleusercontent.com/IJyIN5LH_WE9Wg8wurdKYH_mZZ3uZoc8qMIl4dcGAsorEDeSBnGGy7R9BW3VT_uwE4Q=w9999\",\n    \"https://play-lh.googleusercontent.com/zID8lHY-aSu8XwlNFbGeyYL9wHW4wcO3cpXb4nL6VzK404wE-Q164TL6a5uytSrbV0o=w9999\",\n    \"https://play-lh.googleusercontent.com/ofwJOH4PAJa1_-1YBopixmk1ps6A2Fk66yLj6TsW3aIEXLzLzhWoXuEKhyHc2kw72A=w9999\",\n    \"https://play-lh.googleusercontent.com/-72WGAXqv2DwytmhHKtXITNWD-nRrx1bsm7SFkckFFO0DoClhBxhukB0EhjBJ2bGDA=w9999\",\n    
\"https://play-lh.googleusercontent.com/Ry0IjKLkNJpGCHHz17aW_dFa9geqgQyHJYyshAWhgzQCCQ55nUJgC8bkyE_TTxBxNQ=w9999\",\n    \"https://play-lh.googleusercontent.com/E-jL0foVeggCKVnuT8Y04jgnMZtpwxUqcPkEn29dDnI72rZk-r8N5yW5gW8tduLGFw=w9999\",\n    \"https://play-lh.googleusercontent.com/MBqe75pbbkCna_Nd3e8fa0nxKDwLrW17Q9o_y939DEXiiCxIQ_3DSxaRNto_AheG3XU=w9999\",\n    \"https://play-lh.googleusercontent.com/kBUx7Fp1gPWIyLpSo-VPf-o0inzbTUqQLpZ7mhCjLz4SR377SBCfDC4M3AZ78TRA2b4=w9999\",\n    \"https://play-lh.googleusercontent.com/_Uw03W-2-u3OnLPY7s5qaHzgdOPXGFezuXGHW9wQNbQSS82dzs-grm0VCBJPhcht-Q0O=w9999\",\n    \"https://play-lh.googleusercontent.com/xY4o2dmT4W8bVUthLgglH4IvZL-8Hn_7CSlgPbzLcJIuqdvJzcR7abQtN_TPxT1AvW0=w9999\",\n    \"https://play-lh.googleusercontent.com/-jMy1IndLzUj5YYaAMFyoeKALVNFgzH5giNhm4_45XcmOUXGXnr_1RNwjm_7q4X1g2vs=w9999\",\n    \"https://play-lh.googleusercontent.com/5sV41ilEro7ryUlt73TBfhnd5yqsY-Chv3_VxT2-qvRVbJM1E0vDYw5nK7Y_fvXh=w9999\",\n    \"https://play-lh.googleusercontent.com/Bd4BUBcKHDJCdPQ65jy-cVuZSldpIiCyXCprH3ZWjWGEMadCIf1j2VUd6_Q35luJ7zs=w9999\",\n    \"https://play-lh.googleusercontent.com/SZz2ehvolv0mNQBW6cZJ2Tm9mSVWl_S_2F2Kmojk8Kc-koWxOTyM5ZEx_Fd5uTo8Bi0=w9999\",\n    \"https://play-lh.googleusercontent.com/EgaZGAiGZ9fXz3BHY00j9-J32hLCRZ17tgG_LiJ-PPB2Kt160VOc_MT5wmWqma-y5GY=w9999\",\n    \"https://play-lh.googleusercontent.com/s3l_JhA1CbvglJO5XUN3Wlsfgc3c7BKI4xDaL8tjchPhU-Pb-uF9U82s8nOgvgi9KA=w9999\",\n    \"https://play-lh.googleusercontent.com/foTC5sz9VzPmFwsmDOZVHJL44LSwqcXHlgJdkPnKn38J-mOSr2KPXTvtZkSoDOaA4A=w9999\"\n  ],\n  \"video\": \"https://play.google.com/video/lava/web/player/yt:movie:Y4HSyVXKYz8?autoplay=1&embed=play\",\n  \"videoImage\": \"https://play-lh.googleusercontent.com/dPW585vpaYb9oNLGwCc9mrMc90NwUjzxYb-pJK07sUBhAmRib6DW5P3zSeA7DecMEw=w9999\",\n  \"installs\": \"1,000,000+\",\n  \"minInstalls\": 1000000,\n  \"realInstalls\": 3292879,\n  \"dailyInstalls\": 256,\n  \"minDailyInstalls\": 256,\n  \"realDailyInstalls\": 
844,\n  \"monthlyInstalls\": 7809,\n  \"minMonthlyInstalls\": 7809,\n  \"realMonthlyInstalls\": 25714,\n  \"score\": 4.376488,\n  \"ratings\": 80722,\n  \"reviews\": 3899,\n  \"histogram\": [\n    6965,\n    2879,\n    2879,\n    8047,\n    59936\n  ],\n  \"adSupported\": false,\n  \"containsAds\": false,\n  \"version\": \"1.21\",\n  \"androidVersion\": 7,\n  \"maxAndroidApi\": 35,\n  \"minAndroidApi\": 21,\n  \"appBundle\": \"com.playdead.limbo.full\",\n  \"contentRating\": \"Teen\",\n  \"contentRatingDescription\": null,\n  \"whatsNew\": [\n    \"Bug fix for Snapdragon 8 Elite/Adreno 830 phones\",\n    \"Google API updates and crash fix\",\n    \"Update to packaging format (AAB)\",\n    \"Small icon update\"\n  ],\n  \"permissions\": {\n    \"Photos/Media/Files\": [\n      \"read the contents of your USB storage\"\n    ],\n    \"Storage\": [\n      \"read the contents of your USB storage\"\n    ],\n    \"Other\": [\n      \"view network connections\",\n      \"Google Play license check\"\n    ]\n  },\n  \"dataSafety\": [\n    \"No data shared with third parties\",\n    \"No data collected\"\n  ],\n  \"price\": 3.99,\n  \"currency\": \"USD\",\n  \"free\": false,\n  \"offersIAP\": false,\n  \"inAppProductPrice\": null,\n  \"sale\": false,\n  \"originalPrice\": null,\n  \"developer\": \"Playdead\",\n  \"developerId\": \"9183662361966484245\",\n  \"developerEmail\": \"support@playdead.com\",\n  \"developerWebsite\": \"http://playdead.com\",\n  \"developerAddress\": \"Flæsketorvet 41\\n1711 København V\\nDenmark\",\n  \"developerPhone\": \"+45 53 76 41 00\",\n  \"privacyPolicy\": \"https://playdead.com/privacypolicy/index.php\",\n  \"appUrl\": \"https://play.google.com/store/apps/details?id=com.playdead.limbo.full\"\n}"
  },
  {
    "path": "output/developer_example.json",
    "content": "[\n  {\n    \"appId\": \"com.google.android.apps.bard\",\n    \"title\": \"Google Gemini\",\n    \"description\": \"Supercharge your creativity and productivity with Gemini, your AI assistant from Google.\\n\\nGemini gives you direct access to Google’s best family of AI models on your phone so you can:\\n\\n- Go Live with Gemini to brainstorm ideas, simplify complex topics, and rehearse for important moments. Just click on the Gemini Live button in your Gemini app\\n- Connect with your favorite Google apps like Search, YouTube, Google Maps, Gmail, and more\\n- Study smarter and explore any topic with interactive visuals and real-world examples\\n- Turn any file into a podcast that you can listen to anytime, anywhere\\n- Create stunning images from just a few words\\n- Plan trips better and faster\\n- Get summaries, deep dives, and source links, all in one place\\n- Brainstorm new ideas, or improve existing ones\\n\\nTry Nano Banana: state of the art image generation and editing built on Gemini 2.5 Flash\\n\\nLevel-up your Gemini app experience by upgrading to the Pro plan–unlock new and powerful features to tackle complex tasks and projects and enjoy industry-leading 1M token context window (enabling Gemini to process up to 1,500 pages of text or 30k lines of code), and:\\n- Get more access our most powerful model, like 2.5 Pro\\n- Generate and dive into detailed reports on any topic with Deep Research powered by 2.5 Pro\\n- Turn words into high-quality, 8 second video clips with video generation with Veo 3, and more. \\n\\nGoogle AI Pro is available in 150 countries and territories, and includes additional benefits. Gemini app, as part of Google AI Pro, will continue to be available to qualifying Google Workspace business and education plans. Learn more: https://gemini.google/subscriptions/\\n\\nGet the best of Gemini app by upgrading to the Ultra plan–unlock the highest level of access and exclusive features to turn anything into anything. 
Get the highest access to Google’s most powerful model, like 2.5 Pro, and features like video generation with Veo 3 and Deep Research. You’ll also get early access to try our newest AI innovations as they become available, including Agent Mode.\\n\\nGemini in Google AI Ultra is available in the US and includes additional benefits as part of a Google AI Ultra subscription. Google AI Ultra is not currently available to Google Workspace business and education customers. Learn more: https://gemini.google/subscriptions/\\n\\nIf you opt in to the Gemini app, it will replace your Google Assistant as the primary assistant on the phone. Some Google Assistant voice features aren't available through the Gemini app yet. You can switch back to Google Assistant in settings.\\n\\nReview the Gemini Apps Privacy Notice:\\nhttps://support.google.com/gemini?p=privacy_notice\",\n    \"icon\": \"https://play-lh.googleusercontent.com/bTpNtZ6rYYX2SeI-wC4cnr7MJnOh2hjtgYu3UIrSxE09lM3GPl_Uhf9_Ih2Smje2bc0V\",\n    \"developer\": \"Google LLC\",\n    \"score\": 4.605983,\n    \"scoreText\": \"4.6\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.google.android.apps.bard\"\n  },\n  {\n    \"appId\": \"com.google.android.apps.youtube.unplugged\",\n    \"title\": \"YouTube TV: Live TV & more\",\n    \"description\": \"YouTube TV is now the exclusive home of NFL Sunday Ticket. Watch every out-of-market Sunday game* on your TV and supported devices. Learn more: https://yt.be/nflsundayticket\\n\\nWatch cable-free live TV. Download to watch & record live TV from 100+ networks, including local sports & news, as part of your monthly membership. Cancel anytime.\\n\\n+Cable-free live TV. 
No cable box required.\\n+Stream major broadcast and cable networks, including ABC, CBS, FOX, NBC, NFL Network, ESPN, HGTV, TNT, AMC, and more, including your local sports & news channels.\\n+Watch on your smartphone, tablet, computer, and TV\\n+Cloud DVR without DVR storage space limits. Each recording will be stored for 9 months.\\n+6 YouTube TV accounts per household. Everyone gets their own login, recommendations and DVR.\\n+Monthly pay-as-you-go membership; cancel anytime.\\n\\nOver 100 networks are available in YouTube TV:\\n\\nBROADCAST\\nABC, CBS, FOX, NBC, NFL Network, PBS, and more\\n\\nSPORTS\\nCBS Sports Network, NBC Sports RSN (regional), NFL Network, ESPN, ESPN2, ESPNews, ESPNU, Golf Channel, NBA TV, SEC Network, and more\\n\\nENTERTAINMENT & LIFESTYLE\\nAMC, Animal Planet, BBC America, BET, Bravo, Cheddar, CMT, Comedy Central, Comet, Cozi TV, Decades, Discovery, E!, Food Network, Freeform, FX, FXM, FXX, IFC, Investigation Discovery, HGTV, MotorTrend, MTV, Nat Geo, Nat Geo Wild, Oxygen, Paramount Network, Pop, Smithsonian Channel, SundanceTV, SyFy, TBS, TCM, TLC, TNT, Travel Channel, TruTV, TV Land, USA, VH1, WE tv, YouTube Originals, and more\\n\\nNEWS\\nBBC World News, Cheddar Big News, CNBC, CNN, HLN, MSNBC, and more\\n\\nKIDS\\nCartoon Network, Disney Channel, Disney Junior, Disney XD, Nickelodeon, PBS Kids, Universal Kids\\n\\n\\nAvailability:\\nYouTube TV is available nationwide in the United States.\\n\\nFor more information, please visit our Help Center.\\n\\nYour membership will automatically continue for as long as you choose to remain a member. Your membership is a month-to-month subscription that begins at sign up. You can easily cancel anytime, online, 24 hours a day. There are no long-term contracts or cancellation fees.\\n\\nSubscriptions automatically renew unless auto-renew is turned off at least 24-hours before the end of the current period. 
Account will be charged for renewal within 24-hours prior to the end of the current period.\\n\\nSubscriptions may be managed by the user and auto-renewal may be turned off by going to the user's Account Settings on the device.\\n\\nTerms of service: tv.youtube.com/tv/terms\\nPaid terms of service: tv.youtube.com/tv/paidterms\\nPrivacy policy: tv.youtube.com/tv/privacy\\n\\nEnjoy cable-free live tv now!\\n\\n*Commercial use excluded. Locally broadcast Fox and CBS games, Sunday Night Football on NBC, select digital-only games and international games excluded from NFL Sunday Ticket.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/C-fxk_9e65qoZ9V9rb5uU8udyAnJU3IWnSldnoMqfFzk3wm4jCM9drsO2afVXGwXKyU\",\n    \"developer\": \"Google LLC\",\n    \"score\": 3.9264245,\n    \"scoreText\": \"3.9\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.google.android.apps.youtube.unplugged\"\n  },\n  {\n    \"appId\": \"com.google.android.apps.translate\",\n    \"title\": \"Google Translate\",\n    \"description\": \"• Text translation: Translate between 108 languages by typing\\n• Tap to Translate: Copy text in any app and tap the Google Translate icon to translate (all languages)\\n• Offline: Translate with no internet connection (59 languages)\\n• Instant camera translation: Translate text in images instantly by just pointing your camera (94 languages)\\n• Photos: Take or import photos for higher quality translations (90 languages)\\n• Conversations: Translate bilingual conversations on the fly (70 languages)\\n• Handwriting: Draw text characters instead of typing (96 languages)\\n• Phrasebook: Star and save translated words and phrases for future reference (all languages)\\n• Cross-device syncing: Login to sync phrasebook between app and desktop\\n• Transcribe: Continuously translate someone speaking a different language in near real-time (8 languages)\\n\\nTranslations between the 
following languages are supported:\\nAfrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Aymara, Azerbaijani, Bambara, Basque, Belarusian, Bengali, Bhojpuri, Bosnian, Bulgarian, Catalan, Cebuano, Chichewa, Chinese (Simplified), Chinese (Traditional), Corsican, Croatian, Czech, Danish, Dhivehi, Dogri, Dutch, English, Esperanto, Estonian, Ewe, Filipino, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Ilocano, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Kinyarwanda, Konkani, Korean, Krio, Kurdish (Kurmanji), Kurdish (Sorani), Kyrgyz, Lao, Latin, Latvian, Lingala, Lithuanian, Luganda, Luxembourgish, Macedonian, Maithili, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Meiteilon (Manipuri), Mizo, Mongolian, Myanmar (Burmese), Nepali, Norwegian, Odia (Oriya), Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Quechua, Romanian, Russian, Samoan, Sanskrit, Scots Gaelic, Sepedi, Serbian, Sesotho, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Tatar, Telugu, Thai, Tigrinya, Tsonga, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Welsh, Xhosa, Yiddish, Yoruba, Zulu\\n\\nPermissions Notice\\nGoogle Translate may ask for the following optional permissions*:\\n• Microphone for speech translation\\n• Camera for translating text via the camera\\n• External storage for downloading offline translation data\\n• Contacts for setup and management of your account\\n\\n*Note: The app may be used even if optional permissions are not granted.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/ZrNeuKthBirZN7rrXPN1JmUbaG8ICy3kZSHt-WgSnREsJzo2txzCzjIoChlevMIQEA\",\n    \"developer\": \"Google LLC\",\n    \"score\": 4.2877836,\n    \"scoreText\": \"4.3\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": 
\"https://play.google.com/store/apps/details?id=com.google.android.apps.translate\"\n  },\n  {\n    \"appId\": \"com.google.android.apps.walletnfcrel\",\n    \"title\": \"Google Wallet\",\n    \"description\": \"Google Wallet gives you fast, secure access to your everyday essentials. Tap to pay everywhere Google Pay is accepted, board a flight, go to a movie, and more –  all with just your phone. Keep everything protected in one place, no matter where you go.\\n\\nCONVENIENT\\n\\nGet what you need fast\\n+ Three quick ways for accessing your everyday essentials: use your phone’s quick settings for fast access, open the Wallet app from your homescreen or use Google Assistant when your hands are busy.\\n\\nAccess Google Wallet from your Wear OS watch\\n+ Get instant access to Wallet on the Wear OS main watch face with complications.\\n\\nCarry cards, tickets, passes, and more\\n+ Catch a train, see a concert, or earn rewards at your favorite stores  with a digital wallet that carries more\\n+ [US Only] Unlock the world around you with a digital wallet that carries your drivers license and digital car keys\\n\\nWhat you need, right when you need it\\n+ Your Wallet can suggest what you need, right when you need it. 
Get a notification for your boarding pass on the day of travel, so you’ll never have to fumble in your bag again.\\n\\nHELPFUL\\n\\nKeep track of receipts\\n+ Easily find transaction details in Wallet, including smart details like location pulled from Google Maps\\n\\nSeamless integration across Google\\n+ Sync your Wallet to keep your Calendar and Assistant up to date with the latest info like flight updates and event notifications\\n+ Shop smarter by seeing your point balances and loyalty benefits in Maps, Shopping, and more\\n\\nGet started in a snap\\n+ Set up is seamless with the ability to import cards, transit passes, loyalty cards and more that you’ve saved on Gmail.\\n\\nStay in the know on the go\\n+ Make boarding flights a breeze with the latest information pulled from Google Search. Google Wallet can keep you posted on gate changes or unexpected flight delays.\\n\\nSAFE & PRIVATE\\n\\nA secure way to carry it all\\n+ Security and privacy are built into every part of Google Wallet to keep all your essentials protected.\\n\\nAndroid security you can count on\\n+ Keep your data and essentials secure with advanced Android security features like 2-Step Verification, Find My Phone, and remotely erasing data.\\n\\nTap to pay keeps your card secure\\nALT: + When you tap to pay with your Android phone, Google Pay doesn’t share your real credit card number with the business, so your payment info stays safe.\\n\\nYou’re in control of your data\\n+ Easy to use privacy controls allow you to opt-in to sharing information across Google products for a tailored experience.\\n\\nGoogle Wallet is available on all Android phones (Lollipop 5.0+), Wear OS and Fitbit devices.\\nNot all features are available for supervised accounts. Learn more about Wallet for supervised accounts here: https://support.google.com/wallet?p=about_wallet_supervised.\\nStill have questions? 
Head over to support.google.com/wallet.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/DHBlQKvUNbopIS-VjQb3fUKQ_QH0Em-Q66AwG6LwD1Sach3lUvEWDb6hh8xNvKGmctU\",\n    \"developer\": \"Google LLC\",\n    \"score\": 4.4905887,\n    \"scoreText\": \"4.5\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.google.android.apps.walletnfcrel\"\n  },\n  {\n    \"appId\": \"com.google.android.apps.youtube.kids\",\n    \"title\": \"YouTube Kids\",\n    \"description\": \"Inspire your kids to uncover their unique interests\\nHelp your kids explore video content they love and parents trust, in an app made just for kids. With easy navigation tools and a suite of features, you can help your kids spend time online uncovering new interests, unleashing their imagination, and building their confidence in their own unique world.\\n\\nHelp your kids grow at their own pace\\nYour kids are unique, so they should only see content they're ready to explore. Decide what videos will help them make the most of their time online, then personalize individual profiles using custom content filters as they grow.\\n\\n- Help your youngest kids learn their ABCs, nurture their curiosity, and more in the Preschool mode.\\n- Expand your kids interests to songs, cartoons, or DIY crafts in Younger mode.\\n- Give your older kids the freedom to search popular music and gaming videos in Older mode.\\n- Or hand-pick the videos, channels, and collections your kids can see in Approved Content Only mode.\\n\\nRewatch videos and bond over favorites\\nQuickly find your kids’ favorite videos and the content you’ve shared with them in the Watch it Again tab.\\n\\nShape your kids’ viewing experience with Parental Controls\\nParental Control features help you limit what your kids watch and better guide their viewing experience. 
Our blocking process aims to help keep videos on YouTube Kids family-friendly and safe – but each family's preferences are unique. Don't like a video or channel, or see inappropriate content? Flag it for our team to review.\\n\\nSet a screen-time limit\\nEncourage your kids to take a break in between exploring content. Use the Timer feature to freeze the app when screen time is up so your kids can apply their new skills to the real world.\\n\\nSee important information\\n- Parental setup is needed to ensure the best experience for your family.\\n- Kids may see commercial content from YouTube creators that are not paid ads.\\n- See The Privacy Notice for Google Accounts managed with Family Link for information on our privacy practices for signing in with a Google Account.\\n- If your kids use the app without signing in with their Google Account, the YouTube Kids Privacy Notice applies.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/OxNGx8LU6gm8aLfJcwcJxunvj2a7zDgDyPOD4J9HRSIc6N_1O1iZ2dLr3xQMbuMy_wE\",\n    \"developer\": \"Google LLC\",\n    \"score\": 4.1885204,\n    \"scoreText\": \"4.2\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.google.android.apps.youtube.kids\"\n  },\n  {\n    \"appId\": \"com.google.android.apps.authenticator2\",\n    \"title\": \"Google Authenticator\",\n    \"description\": \"Google Authenticator adds an extra layer of security to your online accounts by adding a second step of verification when you sign in. This means that in addition to your password, you'll also need to enter a code that is generated by the Google Authenticator app on your phone.  The verification code can be generated by the Google Authenticator app on your phone, even if you don't have a network or cellular connection.  \\n* Sync your Authenticator codes to your Google Account and across your devices. This way, you can always access them even if you lose your phone. 
\\n* Set up your Authenticator accounts automatically with a QR code. This is quick and easy, and it helps to ensure that your codes are set up correctly. \\n* Support for multiple accounts. You can use the Authenticator app to manage multiple accounts, so you don't have to switch between apps every time you need to sign in. \\n* Support for time-based and counter-based code generation. You can choose the type of code generation that best suits your needs. \\n* Transfer accounts between devices with a QR code. This is a convenient way to move your accounts to a new device. \\n* To use Google Authenticator with Google, you need to enable 2-Step Verification on your Google Account. To get started visit http://www.google.com/2step  Permission notice: Camera: Needed to add accounts using QR codes\",\n    \"icon\": \"https://play-lh.googleusercontent.com/NntMALIH4odanPPYSqUOXsX8zy_giiK2olJiqkcxwFIOOspVrhMi9Miv6LYdRnKIg-3R\",\n    \"developer\": \"Google LLC\",\n    \"score\": 3.7759008,\n    \"scoreText\": \"3.8\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2\"\n  },\n  {\n    \"appId\": \"com.google.android.apps.adm\",\n    \"title\": \"Google’s Find Hub\",\n    \"description\": \"For your devices and items\\n  • View your phone, tablet, headphones, and other accessories on a map–even if they’re offline.\\n  • Play a sound to locate your lost device if it’s nearby.\\n  • If you’ve lost a device, you can remotely secure or erase it. You can also add a custom message to display on the lock screen in case someone finds your device.\\n  • All location data in the Find Hub network is encrypted. 
This location data is not visible even to Google.\\n\\n  For location sharing\\n  • Share your live location to coordinate a meetup with a friend or check on family to make sure they got home safe.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/GCsSBgR93cedwf2weP7s6VPsBitwir9ioOO0DYjLydIjdCkfQEv0GQzK34ky96L6XMc\",\n    \"developer\": \"Google LLC\",\n    \"score\": 4.3170166,\n    \"scoreText\": \"4.3\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.google.android.apps.adm\"\n  },\n  {\n    \"appId\": \"com.google.android.apps.docs.editors.docs\",\n    \"title\": \"Google Docs\",\n    \"description\": \"Create, edit, and collaborate with others on documents from your Android phone or tablet with the Google Docs app. \\n\\nWork together in real time\\n• Share documents with your team\\n• Edit, comment, and add action items in real time\\n\\nCreate anywhere, anytime—even offline\\n• Capture spontaneous ideas on the fly\\n• Get things done, even on the go, with offline mode\\n• Save time and add polish with easy-to-use templates\\n\\nEdit and share multiple file types\\n• Open a variety of files, including Microsoft Word files, right in Google Docs\\n• Frictionless collaboration, no matter which application your teammates use\\n• Convert and export files seamlessly\\n\\nGoogle Docs is part of Google Workspace: where teams of any size can chat, create, and collaborate.\\nGoogle Workspace paid subscribers have access to additional Google Docs features, including:\\n• Use Gemini in Docs to quickly draft and edit content\\n• Draft outlines, blog posts, briefs, and more on the go\\n• Create unique images to customize your documents\\n• Improve your writing with AI-powered suggestions\\n\\nLearn more about Google Docs: https://workspace.google.com/products/docs/\\n\\nFollow us for more:\\n• X: https://x.com/googleworkspace\\n• Linkedin: 
https://www.linkedin.com/showcase/googleworkspace\\n• Facebook: https://www.facebook.com/googleworkspace\",\n    \"icon\": \"https://play-lh.googleusercontent.com/emmbClh_hm0WpWZqJ0X59B8Pz1mKoB9HVLkYMktxhGE6_-30SdGoa-BmYW73RJ8MGZQ\",\n    \"developer\": \"Google LLC\",\n    \"score\": 4.2061143,\n    \"scoreText\": \"4.2\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.google.android.apps.docs.editors.docs\"\n  },\n  {\n    \"appId\": \"com.google.earth\",\n    \"title\": \"Google Earth\",\n    \"description\": \"Create and collaborate on immersive, data-driven maps from anywhere, with the new Google Earth. See the world from above with high-resolution satellite imagery, explore 3D terrain and buildings in hundreds of cities, and dive in to streets and neighborhoods with Street View's 360° perspectives.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/9ORDOmn8l9dh-j4Sg3_S7CLcy0RRAI_wWt5jZtJOPztwnEkQ4y7mmGgoSYqbFR5jTc3m\",\n    \"developer\": \"Google LLC\",\n    \"score\": 3.8520932,\n    \"scoreText\": \"3.9\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.google.earth\"\n  },\n  {\n    \"appId\": \"com.google.android.apps.chromecast.app\",\n    \"title\": \"Google Home\",\n    \"description\": \"The Google Home app helps you get the most out of Gemini for Home.\\n\\n See your home at a glance \\n The Google Home app is designed to show you the status of your home and keep you up to date with what you may have missed. \\n\\nKeep up with what’s important\\nUpdated design and streamlined organization help you group your devices into dashboards and easily navigate your settings. 
Plus you can check in on your home anytime.\\n\\n Scan camera events quickly\\n The camera live view and history interface makes it easier than ever to see what happened.\\n\\n Search or ask your home\\n Control your home in a brand new way. Just say what you want your devices to do with Gemini for Home.\\n\\n* Some products and features may not be available in all regions. Compatible devices required.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/-UJfyCGXOGNlA_7ys13IRTRxnrA5NGick84oQx-dBTr2swpki2xGIlHPcQaF46H21-u4\",\n    \"developer\": \"Google LLC\",\n    \"score\": 4.159965,\n    \"scoreText\": \"4.2\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.google.android.apps.chromecast.app\"\n  }\n]"
  },
  {
    "path": "output/list_example.json",
    "content": "[\n  {\n    \"appId\": \"com.block.juggle\",\n    \"title\": \"Block Blast!\",\n    \"description\": \"Enter the world of Block Blast, a fun and free block puzzle game where every move challenges your brain and strategy, rewarding you with satisfying crush effects as you blast cubes away. Whether you're seeking offline relaxation or looking to enhance your logic skills, Block Blast delivers an engaging puzzle experience.\\n\\n🌟 Why You'll Love Block Blast\\n🔸 Vibrant Puzzle Adventure: Match colorful blocks on an 8x8 board, solve logic puzzles, and watch the cubes crush in a cascade of color.\\n🔹 Strategic Combos & Streaks: \\nUse your brain to clear multiple lines of blocks in one move for powerful combos. Maintain your streaks to achieve massive scores!\\n🔸 Casual Yet Challenging: Play at your own pace, relax your brain, or apply deep logic strategies to beat your highest score.\\n🔹 Offline Fun, No WiFi Needed: Enjoy free block puzzle game anytime and anywhere, perfect for endless fun on-the-go.\\n\\n💥 Features of Block Blast\\n● Puzzle Gameplay: Strategically place blocks and cubes, match colors, and solve each challenging level.\\n● Adventure Mode: Take on progressively challenging puzzles, each level with unique theme jigsaw and visual styles.\\n● Daily Challenges: Keep your brain sharp with daily logic puzzles and earn exclusive achievements.\\n● Optimized for All Devices: Smooth, seamless gameplay experience with minimal memory usage—perfect for phones and tablets.\\n\\n🎮 How to Play\\n● Drag & Match: Strategically place cubes on the 8x8 board.\\n● Crush & Score: Complete rows or columns to crush blocks away and earn points.\\n● Chase Combos: Plan your moves and trigger massive combos for bonus points.\\n● Stay Strategic: Apply logic and strategy to prevent your board from filling up.\\n\\n✨ Pro Tips for Puzzle Masters\\n● Plan Ahead: Visualize your moves to keep the board open for larger cubes.\\n● Maximize Combos: Match multiple rows and 
blocks simultaneously to boost your combos.\\n● Master the Streak: Consistent clears will maintain your fun streaks and enhance your score rewards.\\n\\n🔥 Jump into the Free Block Puzzle Game!\\nReady to test your brain, sharpen your logic, and enjoy hours of offline puzzle fun? Download Block Blast now and embark on an unforgettable block puzzle adventure.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/R0qgNDYYHbRhw6JFsdEbDMqONplEvJx0m0W9wzYVvY3eNF1c2rfBWYjQxW0sLEzFe1E\",\n    \"screenshots\": [\n      \"https://play-lh.googleusercontent.com/JH136Ry9BOxHs9cIpcw5yo7A5UaSsNJz9Ovj_vqqytRjJuSPtEZTF51dtpyJtZcxdg\",\n      \"https://play-lh.googleusercontent.com/Iu9-uepyJdBINl618OEI4SGUoA1rj0QzUGPlhY855UDDNppNZ3J77CXIQn14imoWuw\",\n      \"https://play-lh.googleusercontent.com/wXreIezMLn2ozcR1ENWNIynZaxG3MbY2OjZDN6Rd868uN-09grdxmFdMLChhEaGyY3Y\",\n      \"https://play-lh.googleusercontent.com/PY0e_rJe6FlDmvBq8TVTUP3leYptQkXotS4doXK9vzbVatXJ6kYkQcJbVPQjYOCNVco\",\n      \"https://play-lh.googleusercontent.com/EjrWRONeRO8EYhV6tWb2cyb3wRYWqbDxy_B8vVqCsnJTJ3hMeceI0wdESsyip9lTrkU\",\n      \"https://play-lh.googleusercontent.com/hE9WNCB4Rxsa_H938nAuWk8AuLV4agpgKWO93vRsy3UT7_XGmSn0Alwidb6ue9xTYKo\",\n      \"https://play-lh.googleusercontent.com/k0so2CHbsaTGhMQL8jc0r68y9LAQu_zH6nrsFRp3-DSah3d1dpLt3l3hsa8n36kmWBA\",\n      \"https://play-lh.googleusercontent.com/bRBFNo-YL5jw2Pqg6xfg64AQoMpJgzRpZC2bJ0hGPVpDu_j1jByZnTBUDpak-kb4M9U\",\n      \"https://play-lh.googleusercontent.com/mwc7--BK6xtanBvV1nDXiIs0LfmK6eLavzQR3yf51UyNjDfFDHn9H_KueRcB99IQC9Y\",\n      \"https://play-lh.googleusercontent.com/ohIoigvvKwLfKHRclTAUYzgtwGWuk8EvoA7QdqH8GVtQ3vIT-DBtQpn95c0w0S_gxO8j\",\n      \"https://play-lh.googleusercontent.com/7mJin-3Gov7TEjaNAAboEEXqpZCdI1I8MOlAVht55WO1qS2MxmhFakmNerQaB4xXBQU\",\n      \"https://play-lh.googleusercontent.com/8GGwXesJ2MlKRvxnPdIHk5R8CjIR7BscsYk_vDIeVqY7bI1Mh4BeIxy62zEKkuTzMVI\",\n      
\"https://play-lh.googleusercontent.com/DpK-36oHtNE0aKFY928s5SWKkbc0AHfp8AFAT1t2ByEczaTruknrM6lqdPc_7yAb__4\",\n      \"https://play-lh.googleusercontent.com/V_L-yGwRnrPOg5WNvKooDioYnlzFW24EkXO71MyyRt2OaQPW6V12uVzApqFY9r-OpQ\",\n      \"https://play-lh.googleusercontent.com/YxbhB1CWMdpVJdmUKfLx1VkJBbKl8kcUbMxplDBHFE4S0-BWA2hpQnfSTwoux4eig4g\",\n      \"https://play-lh.googleusercontent.com/RpTpLs0KyvlkgTfxxV0Mb1OmuSa9ztMvEdxQvCIPPROO0nNPA50O8hEfctPl2cZXkXJA\",\n      \"https://play-lh.googleusercontent.com/IFKlsGcUwVpsk0ZC8bdqyLazRfC9dsmNjM8GGLFNUaMIPi27A5u0hT2po9LgQnJ1aDY\",\n      \"https://play-lh.googleusercontent.com/kpa6YPXIhsHfpHhuIp8zcqv4DsN-l5g_JuTMzCLyFwfChkd9CjrlbOg0fWvSiS-PZZU\",\n      \"https://play-lh.googleusercontent.com/mu3aB1q44fGu-Dw8zChd5jA_BPbIp8j0JHwq48szrJ1LXftLbxmkW08ZZM0_LBgBAdQ\",\n      \"https://play-lh.googleusercontent.com/N6y3WtmTuEbkHgIsU-kqiVkWSC5JAmDeUplJGflPjn4BLYSUUBHQVQJgufCDb7bCa6vj\",\n      \"https://play-lh.googleusercontent.com/KQwgw62ch32NPRON-GQmsZfvbyEtaJ3zgKLCkeQ9NHJAbkOETQUIpcT7-tQ4xSh22Y8\",\n      \"https://play-lh.googleusercontent.com/pghY7dgd8j6orApxNpbmf9H4f9uEHgyFG0-EeWRRcWex2sovygRDoKVQqvDn0wRjfTc\",\n      \"https://play-lh.googleusercontent.com/MNBGZQp3syHpVfc2VP35Hg0H6GFXOC3h-BQOl9_IHOLOmy55tfdoK5o4p0cacvuSpQ\",\n      \"https://play-lh.googleusercontent.com/FY5ZkuQ-sxW97T9XDlXouB0JWKOlwe2FsGhxyAEWrY8JyJjCTXcioDcEhaHzWza1wdk\"\n    ],\n    \"developer\": \"HungryStudio\",\n    \"genre\": \"Puzzle\",\n    \"score\": 4.845834,\n    \"scoreText\": \"4.8\",\n    \"installs\": \"500,000,000+\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.block.juggle\"\n  },\n  {\n    \"appId\": \"com.gameberry.sorry.card.board.game\",\n    \"title\": \"Sorry! World - Board game\",\n    \"description\": \"Sorry! is Online Now\\n\\nNow you can enjoy the classic Sorry! 
game online for free with Sorry World, a digital adaptation of Hasbro's popular board game. \\n\\nSorry World features pawns, a game board, a modified deck of cards, and a designated Home zone. The goal is to move all of your pawns across the board into the Home zone, which is a safe area. The player who successfully gets all of their pawns Home first is the winner.\\n\\nHow To Play\\n\\nSorry World is a family-friendly board game for 2 to 4 players where the goal is to move all three of your pawns from Start to Home before your opponents. \\nHere's how to play:\\n\\n1. Setup: Each player selects a color and places their three pawns in the Start area. Shuffle the deck of cards and place it face down.\\n\\n2. Objective: The first player to move all three of their pawns around the board and into their Home space wins the game.\\n\\n3. Starting: Players take turns drawing a card from the deck and move their pawns according to the card’s instructions. The deck includes cards that allow players to move forward, backward, or swap places with an opponent.\\n\\n4. Sorry Card: Drawing a \\\"Sorry!\\\" card lets you replace any opponent's pawn on the board with one of your own, sending their pawn back to Start.\\n\\n5. Landing on Opponents: If you land on a space occupied by another player's pawn, that pawn is bumped back to Start.\\n\\n6. Safety Zones and Home: Pawns must enter their Home space by exact count, and the final stretch leading to Home is a \\\"safe zone\\\" where opponents can’t bump you out.\\n\\nSorry World combines strategy, luck, and opportunities to foil opponents’ plans, making each game competitive and exciting.\\n\\nSorry World is a fun, free to play online board game. 
It is very similar to Ludo, Parcheesi, like board games.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/rr_m94Psm7I9MB83viB-MSn3Sh4p_ZmVCkLBZFLKx9SQIkbE0sb7K9WJx2cuYVf-OCk\",\n    \"screenshots\": [\n      \"https://play-lh.googleusercontent.com/WUEznddfz88QtmOd2Dbhvs8Ox5mwdO4PsIKK9_TahQN5haxAKPeUXCNnGZY1MvjoGOty3tM3FlFDJQa2kA2W\",\n      \"https://play-lh.googleusercontent.com/CvK50Kp7ED_SKDKHJnodtbAZwoGyFuW1WP4EaMu2AF5LpIq1H-PhOM5tGVXb-YRe_A33BQNyWPB8yn_-xe2bGQ\",\n      \"https://play-lh.googleusercontent.com/nImu-z3SVbbzQ1vcl8FoAMlqWHBWq7fgy1spc7T16JxLwnOlGGcXughyxgvq08pphpB172itvbtN3Z3vMU-i0A\",\n      \"https://play-lh.googleusercontent.com/WBGxESr1IPf393lIOLG9yjGDVhkwtoP8Sm8IkMWB5OfFrsFASQ5B9Dz679ehPRqa4TNpDG2AwvRUvDv_a0ioIsU\",\n      \"https://play-lh.googleusercontent.com/kv8G4djWURJ5Mqc1P8CAxGJDrLUjHD2CsfqbZhyiFdWRxRASZfydSi8F0MVRYz2Nvm42u6do4mycBhUbGl2B\",\n      \"https://play-lh.googleusercontent.com/MiZO1fgcS_BtvrSdBgTc_2pJvGCuSa7RWT_7fmlolIFVrrgboiNe0ztl4mLbdpQp4b4mcA7POyfVciyDOS2JEA\",\n      \"https://play-lh.googleusercontent.com/yHn2U48tWNYs-IatnQV-iVn1X4p7x5EvpfFE_Ez2Ons2HjwFXwgo4RgjEcogx6kjvbGbDBNb1WM5_a2L_YY9vg\",\n      \"https://play-lh.googleusercontent.com/ylr42iSdiBp-LrjfKzvhI1JJKzu3shbKtH_7NSu-MlcTSwH9Nd-pG-lF-4WrTJAUKXn91bJlhVKTO2whbYVNAA\",\n      \"https://play-lh.googleusercontent.com/WUEznddfz88QtmOd2Dbhvs8Ox5mwdO4PsIKK9_TahQN5haxAKPeUXCNnGZY1MvjoGOty3tM3FlFDJQa2kA2W\",\n      \"https://play-lh.googleusercontent.com/CvK50Kp7ED_SKDKHJnodtbAZwoGyFuW1WP4EaMu2AF5LpIq1H-PhOM5tGVXb-YRe_A33BQNyWPB8yn_-xe2bGQ\",\n      \"https://play-lh.googleusercontent.com/nImu-z3SVbbzQ1vcl8FoAMlqWHBWq7fgy1spc7T16JxLwnOlGGcXughyxgvq08pphpB172itvbtN3Z3vMU-i0A\",\n      \"https://play-lh.googleusercontent.com/WBGxESr1IPf393lIOLG9yjGDVhkwtoP8Sm8IkMWB5OfFrsFASQ5B9Dz679ehPRqa4TNpDG2AwvRUvDv_a0ioIsU\",\n      \"https://play-lh.googleusercontent.com/kv8G4djWURJ5Mqc1P8CAxGJDrLUjHD2CsfqbZhyiFdWRxRASZfydSi8F0MVRYz2Nvm42u6do4mycBhUbGl2B\",\n      
\"https://play-lh.googleusercontent.com/MiZO1fgcS_BtvrSdBgTc_2pJvGCuSa7RWT_7fmlolIFVrrgboiNe0ztl4mLbdpQp4b4mcA7POyfVciyDOS2JEA\",\n      \"https://play-lh.googleusercontent.com/yHn2U48tWNYs-IatnQV-iVn1X4p7x5EvpfFE_Ez2Ons2HjwFXwgo4RgjEcogx6kjvbGbDBNb1WM5_a2L_YY9vg\",\n      \"https://play-lh.googleusercontent.com/ylr42iSdiBp-LrjfKzvhI1JJKzu3shbKtH_7NSu-MlcTSwH9Nd-pG-lF-4WrTJAUKXn91bJlhVKTO2whbYVNAA\",\n      \"https://play-lh.googleusercontent.com/WUEznddfz88QtmOd2Dbhvs8Ox5mwdO4PsIKK9_TahQN5haxAKPeUXCNnGZY1MvjoGOty3tM3FlFDJQa2kA2W\",\n      \"https://play-lh.googleusercontent.com/CvK50Kp7ED_SKDKHJnodtbAZwoGyFuW1WP4EaMu2AF5LpIq1H-PhOM5tGVXb-YRe_A33BQNyWPB8yn_-xe2bGQ\",\n      \"https://play-lh.googleusercontent.com/nImu-z3SVbbzQ1vcl8FoAMlqWHBWq7fgy1spc7T16JxLwnOlGGcXughyxgvq08pphpB172itvbtN3Z3vMU-i0A\",\n      \"https://play-lh.googleusercontent.com/WBGxESr1IPf393lIOLG9yjGDVhkwtoP8Sm8IkMWB5OfFrsFASQ5B9Dz679ehPRqa4TNpDG2AwvRUvDv_a0ioIsU\",\n      \"https://play-lh.googleusercontent.com/kv8G4djWURJ5Mqc1P8CAxGJDrLUjHD2CsfqbZhyiFdWRxRASZfydSi8F0MVRYz2Nvm42u6do4mycBhUbGl2B\",\n      \"https://play-lh.googleusercontent.com/MiZO1fgcS_BtvrSdBgTc_2pJvGCuSa7RWT_7fmlolIFVrrgboiNe0ztl4mLbdpQp4b4mcA7POyfVciyDOS2JEA\",\n      \"https://play-lh.googleusercontent.com/yHn2U48tWNYs-IatnQV-iVn1X4p7x5EvpfFE_Ez2Ons2HjwFXwgo4RgjEcogx6kjvbGbDBNb1WM5_a2L_YY9vg\",\n      \"https://play-lh.googleusercontent.com/ylr42iSdiBp-LrjfKzvhI1JJKzu3shbKtH_7NSu-MlcTSwH9Nd-pG-lF-4WrTJAUKXn91bJlhVKTO2whbYVNAA\"\n    ],\n    \"developer\": \"Gameberry Labs\",\n    \"genre\": \"Board\",\n    \"score\": 4.61758,\n    \"scoreText\": \"4.6\",\n    \"installs\": \"500,000+\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.gameberry.sorry.card.board.game\"\n  },\n  {\n    \"appId\": \"com.roblox.client\",\n    \"title\": \"Roblox\",\n    \"description\": \"Roblox is the ultimate virtual universe that lets you create, share 
experiences with friends, and be anything you can imagine. Join millions of people and discover an infinite variety of immersive experiences created by a global community!\\n\\nAlready have an account? Log in with your existing Roblox account and explore the infinite metaverse of Roblox.\\n\\nMILLIONS OF EXPERIENCES\\n\\nIn the mood for an epic adventure? Want to compete against rivals worldwide? Or do you just want to hang out and chat with your friends online? A growing library of experiences created by the community means there’s always something new and exciting for you every day.\\n\\nEXPLORE TOGETHER ANYTIME, ANYWHERE\\n\\nTake the fun on the go. Roblox features full cross-platform support, meaning you can join your friends and millions of other people on their computers, mobile devices, Xbox One, or VR headsets.\\n\\nBE ANYTHING YOU CAN IMAGINE\\n\\nBe creative and show off your unique style! Customize your avatar with tons of hats, shirts, faces, gear, and more. With an ever-expanding catalog of items, there’s no limit to the looks you can create.\\n\\nCHAT WITH PEOPLE YOU KNOW\\n\\nParty is a seamless way for up to six friends to group up and jump into an experience together. Join people you know and stay together as you move across experiences. 13+ users can also use Party to chat through voice or text. It's never been easier to coordinate and communicate on Roblox.\\n\\nCREATE YOUR OWN EXPERIENCES: https://www.roblox.com/develop\\nSUPPORT: https://en.help.roblox.com/hc/en-us\\nCONTACT: https://corp.roblox.com/contact/\\nPRIVACY POLICY: https://www.roblox.com/info/privacy\\nPARENT’S GUIDE: https://corp.roblox.com/parents/\\nTERMS OF USE: https://en.help.roblox.com/hc/en-us/articles/115004647846\\n\\nPLEASE NOTE: A network connection is required to join. 
Roblox works best over Wi-Fi.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/7cIIPlWm4m7AGqVpEsIfyL-HW4cQla4ucXnfalMft1TMIYQIlf2vqgmthlZgbNAQoaQ\",\n    \"screenshots\": [\n      \"https://play-lh.googleusercontent.com/qUcj9ZD2zwc43jbDwpF2BpGPE6PeVKvrJYPaPdYBf9Hn3MSjZcDvKVSFTWlNT0Q75J2ur-rlSxKSud5fjm6Y_yQ\",\n      \"https://play-lh.googleusercontent.com/WkH4eVcLSN3Wt9FE2wA5Us_FKsZkQag_XKVij50MWUgCS4lcxMl8Qhz5KHwUeREVBQB49QQTlFVra-grTWA\",\n      \"https://play-lh.googleusercontent.com/DWcwhYgqQeWlyMaIzkWjdty00t4otXy6qcCRVsTT-weay9uqhnNYM9x_127dA92vWTY2ujxFRo2UQIFxHZlWRmk\",\n      \"https://play-lh.googleusercontent.com/CtOd6lRcb_4EpkF6fXdabp2wuUssau6a2e_oBebJs3_xDBeTFLO9rPikfEO6pUdnO5un75_OviGepFo6zk3FNm8\",\n      \"https://play-lh.googleusercontent.com/c5Ix5Ct0yy0Jv3QqYA1MWSp5sSW_8OlnWRYWwximhHLDlBmTwteJAi_pl51DYPV_wZK0E5YEukRC7pQFLfQE\",\n      \"https://play-lh.googleusercontent.com/qUcj9ZD2zwc43jbDwpF2BpGPE6PeVKvrJYPaPdYBf9Hn3MSjZcDvKVSFTWlNT0Q75J2ur-rlSxKSud5fjm6Y_yQ\",\n      \"https://play-lh.googleusercontent.com/WkH4eVcLSN3Wt9FE2wA5Us_FKsZkQag_XKVij50MWUgCS4lcxMl8Qhz5KHwUeREVBQB49QQTlFVra-grTWA\",\n      \"https://play-lh.googleusercontent.com/DWcwhYgqQeWlyMaIzkWjdty00t4otXy6qcCRVsTT-weay9uqhnNYM9x_127dA92vWTY2ujxFRo2UQIFxHZlWRmk\",\n      \"https://play-lh.googleusercontent.com/CtOd6lRcb_4EpkF6fXdabp2wuUssau6a2e_oBebJs3_xDBeTFLO9rPikfEO6pUdnO5un75_OviGepFo6zk3FNm8\",\n      \"https://play-lh.googleusercontent.com/c5Ix5Ct0yy0Jv3QqYA1MWSp5sSW_8OlnWRYWwximhHLDlBmTwteJAi_pl51DYPV_wZK0E5YEukRC7pQFLfQE\",\n      \"https://play-lh.googleusercontent.com/qUcj9ZD2zwc43jbDwpF2BpGPE6PeVKvrJYPaPdYBf9Hn3MSjZcDvKVSFTWlNT0Q75J2ur-rlSxKSud5fjm6Y_yQ\",\n      \"https://play-lh.googleusercontent.com/WkH4eVcLSN3Wt9FE2wA5Us_FKsZkQag_XKVij50MWUgCS4lcxMl8Qhz5KHwUeREVBQB49QQTlFVra-grTWA\",\n      \"https://play-lh.googleusercontent.com/DWcwhYgqQeWlyMaIzkWjdty00t4otXy6qcCRVsTT-weay9uqhnNYM9x_127dA92vWTY2ujxFRo2UQIFxHZlWRmk\",\n      
\"https://play-lh.googleusercontent.com/CtOd6lRcb_4EpkF6fXdabp2wuUssau6a2e_oBebJs3_xDBeTFLO9rPikfEO6pUdnO5un75_OviGepFo6zk3FNm8\",\n      \"https://play-lh.googleusercontent.com/c5Ix5Ct0yy0Jv3QqYA1MWSp5sSW_8OlnWRYWwximhHLDlBmTwteJAi_pl51DYPV_wZK0E5YEukRC7pQFLfQE\",\n      \"https://play-lh.googleusercontent.com/qUcj9ZD2zwc43jbDwpF2BpGPE6PeVKvrJYPaPdYBf9Hn3MSjZcDvKVSFTWlNT0Q75J2ur-rlSxKSud5fjm6Y_yQ\",\n      \"https://play-lh.googleusercontent.com/WkH4eVcLSN3Wt9FE2wA5Us_FKsZkQag_XKVij50MWUgCS4lcxMl8Qhz5KHwUeREVBQB49QQTlFVra-grTWA\",\n      \"https://play-lh.googleusercontent.com/DWcwhYgqQeWlyMaIzkWjdty00t4otXy6qcCRVsTT-weay9uqhnNYM9x_127dA92vWTY2ujxFRo2UQIFxHZlWRmk\",\n      \"https://play-lh.googleusercontent.com/CtOd6lRcb_4EpkF6fXdabp2wuUssau6a2e_oBebJs3_xDBeTFLO9rPikfEO6pUdnO5un75_OviGepFo6zk3FNm8\",\n      \"https://play-lh.googleusercontent.com/c5Ix5Ct0yy0Jv3QqYA1MWSp5sSW_8OlnWRYWwximhHLDlBmTwteJAi_pl51DYPV_wZK0E5YEukRC7pQFLfQE\"\n    ],\n    \"developer\": \"Roblox Corporation\",\n    \"genre\": \"Adventure\",\n    \"score\": 4.417539,\n    \"scoreText\": \"4.4\",\n    \"installs\": \"1,000,000,000+\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.roblox.client\"\n  },\n  {\n    \"appId\": \"com.dreamgames.royalkingdom\",\n    \"title\": \"Royal Kingdom\",\n    \"description\": \"From the creators of Royal Match comes a brand new match 3 puzzle adventure in Royal Kingdom, starring the extended Royal Family!\\n\\nYou’ll meet King Richard, King Robert's younger brother, as well as a charming cast of new characters, including Princess Bella and the Wizard, to embark on a journey to build legendary kingdoms! Solve match 3 puzzles to explore new lands and defeat the Dark King & his army!\\n\\nMASTER MATCH 3 PUZZLES\\nTest your skills and become the ultimate match 3 expert by solving fun yet challenging puzzles! 
Beat thrilling levels and overcome unique obstacles!\\n\\nBUILD AND EXPLORE KINGDOMS\\nWith the help of the Builder, craft a kingdom befitting royalty. Solve puzzles, earn coins, and unlock diverse districts - from the Parliament Square to the University and the Princess Tower.\\n\\nCONQUER THE DARK KING\\nDefend the kingdom from the Dark King’s attack by solving match 3 puzzles - destroy his castles and evil minions to see him fall. Victory is one match away!\\n\\nEXPAND YOUR RULING\\nRise through the ranks and claim the top spot on the leaderboard, master your puzzle solving skills for generous rewards, and expand your kingdom by uncovering uncharted lands as you play!\\n\\nENJOY THE FINEST VISUALS\\nImmerse yourself in Royal Kingdom's stunning graphics and smooth animations. A puzzle game experience like never before - captivating and seamless.\\n\\nWhat are you waiting for? Download Royal Kingdom and join the ranks of the noble adventurers! With hours of fun, challenging gameplay, and a magical world, this puzzle game is fit for royalty!\",\n    \"icon\": \"https://play-lh.googleusercontent.com/nAAF2bxrD2BeoM1IkUZ6qeRG_QGG_Rekrrh4VYQSMDeRa7FR8sWMWaHClID547XJ3D4\",\n    \"screenshots\": [\n      \"https://play-lh.googleusercontent.com/GVMQQ6FoNEuF4cJJNk2Rn6Tgagc4zvI8L7_YmBazeLvoiTcrAdLsHjo5QJlr0QJL2G68\",\n      \"https://play-lh.googleusercontent.com/R0kpGTgMTYqsZr4ivwVlkRHFXGXdaVyiLJ7mZvTWlNdSBQB3mMpSM-V2XIXl2xacYtY4\",\n      \"https://play-lh.googleusercontent.com/xsoi6B61S3SpYnF035o1bdijETm9E_Ff-Z-mqUH76jmGovj_7Za-7z5B3FSRweMPPY4\",\n      \"https://play-lh.googleusercontent.com/vbXGl-v3lQRefMG5PwT1bGeWSe0MYBDtSzD7OO24cnZRMmzmFo5u4Q85bBivzJqNbMpH\",\n      \"https://play-lh.googleusercontent.com/Tm86QGILOzisu1czGXBnpOIXbrf-Yt11R09EmkEiqRXmYGS6XdWp7uXCIgWxLLc6nw\",\n      \"https://play-lh.googleusercontent.com/g28RpdvTloLnI9uO4hZgcSKD58Rs7tSjv3CpwwwisciuEhHJq5ADPy4Ao00PxKL05h5p\",\n      
\"https://play-lh.googleusercontent.com/R3j0OQDK5ybQssm4gTzSrPLhrTk-y1KDJjqC3ipOutd5Feo1yrnE6vnYXPinre4f7w\",\n      \"https://play-lh.googleusercontent.com/ll3r1Nic53TS6xQo-KfHr7Mx1UklrXpywpQuJ-l672QVpMFYWfAc34K692rpAnjO_bon\",\n      \"https://play-lh.googleusercontent.com/vJz9J6bHwrONLFBIvtC8u3Imkw4hNlpLlZrbbOpQPW6OG0BVZFTc4KITBD3TBQyWpSE\",\n      \"https://play-lh.googleusercontent.com/tTF-7VQIFEgQnW_dlijwIr1f4oltoV3I-_H7gRv8SSaU-uds6-SKMF-2dH_i3wrU2Q\",\n      \"https://play-lh.googleusercontent.com/ItehiqS4Mw6K_cQsKVkNXqn3C6DA8SH3oBVPOC3-Lnlhu0y85VbtqfjygQvNPR0bNIdd\",\n      \"https://play-lh.googleusercontent.com/IU5-2OPurC0IezgkRqBJ1JhqatPVeBHVik1XL4gPw1joyqfd-FQxzJGtdI2R4lELiw\",\n      \"https://play-lh.googleusercontent.com/Vl-BjUFcI_AEzH9oP-94Nzxs3nQfMeVyfUiDbL8lFRo7Jw4Se2YWZZakvgyYTGWAmUI\",\n      \"https://play-lh.googleusercontent.com/TuGo_ZwivBos_X2aUJ2scwu0Jux_rpczRIEfRoEvub_1_9H6CfIimycVWmr2gK7wKrI\",\n      \"https://play-lh.googleusercontent.com/83sNVPSUjvASONLwNg_Kt_UPcJY_FxEqZGGFz69qpEIRmKR-AkkRhQmAZckGdMhRrOg\",\n      \"https://play-lh.googleusercontent.com/OjgLAF7x6BPS_le-YbewdeHhC-iWxdWBg5hQLSS0SSgfgTfBI4mfgrJETpRKZtIZcrs\",\n      \"https://play-lh.googleusercontent.com/mm3eq8gnqU75nxIKumYi7_1tk21FlmbYtuXsTOgiU3AapX34PrAItfzME8g4eBtYMSc\",\n      \"https://play-lh.googleusercontent.com/jOyXPjIN0FWP3SOH98Yi2YcfSKNH_3mqOMWEqwH-haZJqYqQv_vD2DePP95ArZMiWWbT\",\n      \"https://play-lh.googleusercontent.com/VnE7CTLw-LJNDGYux1YKDRcuGX-5u5I7KzTfZOwcFzQaxi5vQb__k_4JPq8n7LgFipfZ\",\n      \"https://play-lh.googleusercontent.com/2Xcmz31MStvxfty8Zz7JbnFqdKgwOjo4B0O2UhfYudrBGF01tkM0swlzpCURMHZOTXsj\",\n      \"https://play-lh.googleusercontent.com/IBfFxYssBlzlw4D9FqErfLjz5i9vXbVF6wbaYOAVm0tavwiKF7ggDOI6XE0RMjXcJTA\"\n    ],\n    \"developer\": \"Dream Games, Ltd.\",\n    \"genre\": \"Puzzle\",\n    \"score\": 4.629104,\n    \"scoreText\": \"4.6\",\n    \"installs\": \"10,000,000+\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": 
true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.dreamgames.royalkingdom\"\n  },\n  {\n    \"appId\": \"com.tfgco.apps.coloring.free.color.by.number\",\n    \"title\": \"Color by Number：Coloring Games\",\n    \"description\": \"🎨 Discover a world of relaxation, creativity, and diversity with Color by Number: Coloring Game - the ultimate stress-relieving pixel art game! Explore our vast collection of color by number images from various cultures and artists worldwide or import your own pictures to create a personalized color by number experience. Join millions of users and share your masterpieces with our vibrant community.\\n\\n🌟 Key Features:\\n• Easy color by number: Dive into a wide variety of pixel art images designed for both beginners and experienced colorists, featuring popular themes and unique content.\\n• Create unique images: Import pictures from your gallery or snap a photo to turn your memories into color by number masterpieces.\\n• Community sharing: Share your color by number creations and discover other users' pixel art in our thriving community of coloring enthusiasts.\\n• Diverse painting tools: Experiment with different tools to make coloring and drawing more fun and efficient.\\n• Inclusive and diverse art: Immerse yourself in a rich variety of color by number art from different cultures and artists, promoting creativity and inclusiveness.\\n• Antistress and relaxation: Enjoy the calming effects of art therapy as you unwind with our engaging and satisfying color by number game.\\n\\n🖌️ Get ready to relax and express your creativity with this free coloring game that combines the joy of art with the benefits of stress relief. Whether you're a seasoned artist or just looking for a peaceful pastime, Color by Number: Coloring Game offers an engaging and satisfying experience for everyone. 
Download now and start your colorful journey today!\",\n    \"icon\": \"https://play-lh.googleusercontent.com/7BIu-nLPuxUMTDATJ_mZu1wVMZAaxMGjJuQFrGxUS7-pb4IXZqGRq8LKJEXzJrejB3Tf\",\n    \"screenshots\": [\n      \"https://play-lh.googleusercontent.com/iLsVJPqCXBrQexaEgePloGuwmHFeXA_C6eNwHdiUh9GnAWY0EHyeYyxcIkQQ4YfqSXuW\",\n      \"https://play-lh.googleusercontent.com/ZfqyCfBt77R_6mtBjDthtKcqFK6gbDv0GaPMvSV7Ghosr1miLdt4C9eT-0vZ8X5zDKU\",\n      \"https://play-lh.googleusercontent.com/xfWILkgd_NR3FOh_6Kk6hsPPE6-DrwoCPkieYB5j5SUj7xQt3WsCf36-19C87RIeFw\",\n      \"https://play-lh.googleusercontent.com/38UtJpiKd2StyNnWqmaZL3BFMDXCyfwILczdDkmFjpAv_X4fIR7Y_SxoXXx9l_M5s8uh\",\n      \"https://play-lh.googleusercontent.com/BWSmX9LCO13zVDwsIztA0z61cNymwmuGmAFm0yKZ1pnhlse4nJ2Z_l0z1bPLboKT_g\",\n      \"https://play-lh.googleusercontent.com/KjcL67-nw7RkfHENmoA0YzEVi0m4uSo5a1bBhq_-GHbMJZMJNCDEku6crY-gRqcQGnGC\",\n      \"https://play-lh.googleusercontent.com/pcPJlKTDDCBJ-3mywRGZ-2h0E6rHPU0OcQedx5kTBnFzrAbLa6waMh_We5pimSV5xNa0\",\n      \"https://play-lh.googleusercontent.com/tnd2cISvUULqLPYLdxC1ladXxHYZqQ0U9O6_IY0lMHwLjrMv1WJ_m5jVfcv2T00nMRb-\",\n      \"https://play-lh.googleusercontent.com/AF5HnTkbCczub7OsV6J4ubQLg08Jq5juaYet7kW_XFKHdp_7oQInn3LIcXYLULeFohfK\",\n      \"https://play-lh.googleusercontent.com/Knsw9oZVTYvaeNEXsRvc7db2KrW925bUeUws_ZqvIGRfDDPRMNd-dGtoJCcQm3aRli8r\",\n      \"https://play-lh.googleusercontent.com/Xk4MVHqiqpaNwwj5EIYp9-GaF8yrET0Ob3lI1LO73JpmqEwcqyvH1lYYThfEG91V5K8\",\n      \"https://play-lh.googleusercontent.com/dNilqHW9D3ZYUTwG9OvwecSIhvjL5gVKQZDNpdmumymReg2CnTpYE31o0FShdqQ4OwE\",\n      \"https://play-lh.googleusercontent.com/OSp2zlX6cJfm6dxnwArXg0wdbL2IHM5Jvvt0ukEe_-t_AXIkH7F2qfRQwl9zJkHo6ho\",\n      \"https://play-lh.googleusercontent.com/7-e4VD3Jj6z5vwf8F4vWuITM0Mun-cQz0NuoLw53Lp5DofmzSzDJiIKi4KnN0RnBJMU\",\n      \"https://play-lh.googleusercontent.com/yaB56aFLGCrGC9HUTSRHrjRlOQzsLVMBfKhW3tMgRvn5N-B-MFC1vjbNRWUUulo47bo\",\n      
\"https://play-lh.googleusercontent.com/T5vaIf8H7qsBVlF_JkJW8LzeFjvICkyjMB240oMdTQ0XjGXtDwrYoMWE6Hk0nD6Qi9M\",\n      \"https://play-lh.googleusercontent.com/B0ue2cfyYB_sjKCeRG-lmlm0UTy2U8Jp4An0-YPvbXlWGeazlcEE7AjY8IElwOSVV-4\",\n      \"https://play-lh.googleusercontent.com/55BLcOJh2rQE3HYCzl9-LYrAWo-PCIgjYk2zLVDxQfdKxabO9MModtKhR6i1Tfin5Pc\",\n      \"https://play-lh.googleusercontent.com/BIuj-TOEXm4zOf-3QQDNwgUoh_A2KMd7onLEpXlgk52OTDrKGU8Zg1s1U9rU24ZOuw\",\n      \"https://play-lh.googleusercontent.com/PMg3SGh1t3eVlmUYW2-b0vOlQBqRb6r-_x1PHuJQq-EgE6j_9fmVlRsIRD8wbgYYuQ\",\n      \"https://play-lh.googleusercontent.com/LPZ7C5hDUfbIuDwBC1vqzKgDOAw88pfzGpsklI2GZRYbRUiw36qrWW34x7jp90hB9Rg\",\n      \"https://play-lh.googleusercontent.com/jefU3f4ZliV4IQzcBCeGaLrkEpE-2PJRcxuKHEoaVkA17c7aINs1Gl1SwjY-WqWvxA\",\n      \"https://play-lh.googleusercontent.com/mxlJtS05Uv3JU-vnTcmbuq6b3p7ODePrJmRQHzMG3KX3pd1Wn1LANTwFDQ_BnotZBhXO\",\n      \"https://play-lh.googleusercontent.com/JEV9d-M5b52WluoY2Q-XCuEzH5xwHpkGpEncsaq23xhCX3m3lep11WDl8qyelPcN9kY\"\n    ],\n    \"developer\": \"Wildlife Studios\",\n    \"genre\": \"Board\",\n    \"score\": 4.662116,\n    \"scoreText\": \"4.7\",\n    \"installs\": \"50,000,000+\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.tfgco.apps.coloring.free.color.by.number\"\n  },\n  {\n    \"appId\": \"com.jamcity.pdt\",\n    \"title\": \"Disney Magic Match 3D\",\n    \"description\": \"Dive into the enchanting world of Disney Magic Match 3D, a captivating puzzle adventure with beloved Disney treasures! Disney Magic Match 3D is a brand-new 3D matching game where you’ll solve captivating Disney puzzles by sorting and collecting iconic items from beloved Disney and Pixar films including Moana, Aladdin and Toy Story. The ancient Disney Book of Magic, filled with iconic artifacts, has been opened, releasing a whirlwind of nostalgia. 
It's your mission to return these cherished items to their rightful place by sorting and matching the pieces in magical puzzles. \\n\\nDISCOVER ICONIC 3D DISNEY TREASURES!\\n\\nExplore stunning 3D levels inspired by your favorite Disney films and discover a vast array of nostalgic items. From Mickey Mouse's iconic gloves to Aladdin's magic lamp or Woody’s boots from Toy Story, each 3D piece is brimming with Disney charm. Unlock new chapters in the magic book and uncover even more Disney treasures in Enchanted Levels! Enchanted Levels include themed pieces from your favorite films or nostalgic characters, featuring classics like Mickey Mouse’s shoes, Minnie Mouse’s bow, and Donald Duck’s hat. \\n\\nSORT AND MATCH IN ICONIC DISNEY PUZZLES!\\n\\nTest your skills with a variety of engaging Disney puzzles that will challenge your mind and invoke nostalgia. Match and sort scattered Disney items from the Heart of Te Fiti to a vast array of princess dresses. By identifying and sorting scattered items, each completed match brings you closer to restoring order to the universe. Feel the satisfaction of sorting the treasures back into their correct places in Disney Magic Match 3D! 
\\n\\nDownload Disney Magic Match 3D and experience enchanting Disney puzzle levels with Aladdin, Toy Story, Moana and more!\",\n    \"icon\": \"https://play-lh.googleusercontent.com/nJlQmDZ8r33Gr9DY04eREaokemxlsy3SA59kFrkTzy3cyq_xUUX6Qt6AhqWIR-QNCycJ0KwquGPPO7KsLNLzKMQ\",\n    \"screenshots\": [\n      \"https://play-lh.googleusercontent.com/nKBxYzDNM2WqGBaZFJV3Mk8Ki-qb4YVzkcs5ONAsMkDHX0FhMSXRClwZqAUveQC0owhpqDtBmzSSTD0CQ4FzOg\",\n      \"https://play-lh.googleusercontent.com/Dbh2WnpaDXxo8JIOSF1OUu6W3ZeMibRZmVozUtCW1P5TAx8DuQ22Px9ogzBu6kT5-HJQiAVjElYvbFJUmevzoUY\",\n      \"https://play-lh.googleusercontent.com/BzNObgKE1bPjzaHekXEOPaBN1Z91mkYCo_3OTNG9CZwMbRNk4Jk2brUPrnPO9yVl9kX8xdsQf-jNd8xQJr5mO1M\",\n      \"https://play-lh.googleusercontent.com/p7H30il41lJc28JkqpZdLpnZ4H4fM5g_R6pNUB-FYLTVZ7RhG7Os996rJF1phAO_AyojFIzXIQYBcCvrUP-tcm4\",\n      \"https://play-lh.googleusercontent.com/7CD3AEJE2kVwxVOiKAZaBjr0-UPFoTCjALZvyQOQRdIjNqcppKz0C5esgtMhg3GwJnJIK8BcpdAiLITg9SC6jQ\",\n      \"https://play-lh.googleusercontent.com/DX8g_26YI8L2ebLEdVtHain5E3UDjbwM5zd9wofgsWzVxh_uU__MVrzUSLoPHzbiEw\",\n      \"https://play-lh.googleusercontent.com/nKBxYzDNM2WqGBaZFJV3Mk8Ki-qb4YVzkcs5ONAsMkDHX0FhMSXRClwZqAUveQC0owhpqDtBmzSSTD0CQ4FzOg\",\n      \"https://play-lh.googleusercontent.com/Dbh2WnpaDXxo8JIOSF1OUu6W3ZeMibRZmVozUtCW1P5TAx8DuQ22Px9ogzBu6kT5-HJQiAVjElYvbFJUmevzoUY\",\n      \"https://play-lh.googleusercontent.com/BzNObgKE1bPjzaHekXEOPaBN1Z91mkYCo_3OTNG9CZwMbRNk4Jk2brUPrnPO9yVl9kX8xdsQf-jNd8xQJr5mO1M\",\n      \"https://play-lh.googleusercontent.com/p7H30il41lJc28JkqpZdLpnZ4H4fM5g_R6pNUB-FYLTVZ7RhG7Os996rJF1phAO_AyojFIzXIQYBcCvrUP-tcm4\",\n      \"https://play-lh.googleusercontent.com/7CD3AEJE2kVwxVOiKAZaBjr0-UPFoTCjALZvyQOQRdIjNqcppKz0C5esgtMhg3GwJnJIK8BcpdAiLITg9SC6jQ\",\n      \"https://play-lh.googleusercontent.com/PRbZBM1C2o86ZnmjZQydtNoBLYX4cjciATPWfjqq5eN34P-t2Bgcvtb4NAbrgJ_znw\",\n      
\"https://play-lh.googleusercontent.com/nKBxYzDNM2WqGBaZFJV3Mk8Ki-qb4YVzkcs5ONAsMkDHX0FhMSXRClwZqAUveQC0owhpqDtBmzSSTD0CQ4FzOg\",\n      \"https://play-lh.googleusercontent.com/Dbh2WnpaDXxo8JIOSF1OUu6W3ZeMibRZmVozUtCW1P5TAx8DuQ22Px9ogzBu6kT5-HJQiAVjElYvbFJUmevzoUY\",\n      \"https://play-lh.googleusercontent.com/BzNObgKE1bPjzaHekXEOPaBN1Z91mkYCo_3OTNG9CZwMbRNk4Jk2brUPrnPO9yVl9kX8xdsQf-jNd8xQJr5mO1M\",\n      \"https://play-lh.googleusercontent.com/p7H30il41lJc28JkqpZdLpnZ4H4fM5g_R6pNUB-FYLTVZ7RhG7Os996rJF1phAO_AyojFIzXIQYBcCvrUP-tcm4\",\n      \"https://play-lh.googleusercontent.com/7CD3AEJE2kVwxVOiKAZaBjr0-UPFoTCjALZvyQOQRdIjNqcppKz0C5esgtMhg3GwJnJIK8BcpdAiLITg9SC6jQ\",\n      \"https://play-lh.googleusercontent.com/2xrFn5QMt39xAKYUMi5_B1CVhKh9-q60GDEyWmW0P51K6sZsDjDxMqxYuhEY_z49ww\"\n    ],\n    \"developer\": \"Jam City, Inc.\",\n    \"genre\": \"Puzzle\",\n    \"score\": 4.751634,\n    \"scoreText\": \"4.8\",\n    \"installs\": \"500,000+\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.jamcity.pdt\"\n  },\n  {\n    \"appId\": \"com.supercell.clashroyale\",\n    \"title\": \"Clash Royale\",\n    \"description\": \"Enter the Arena! Build your Battle Deck and outsmart the enemy in fast real-time PvP tower defense card games. From the creators of CLASH OF CLANS comes a real-time multiplayer card battle game starring your favourite Clash® characters and more. Start battling against players from around the world!\\n\\nBECOME A MASTER OF STRATEGY, TOWER DEFENSE AND DECK BUILDING\\nChoose unique Cards for your Battle Deck and head to the Arena for multiplayer PvP strategy games!\\nPlace your Cards right and knock down the enemy King and Princesses from their Tower defenses in strategic, fast-paced matches.\\n\\nCOLLECT AND UPGRADE 100+ CARDS\\nHog Rider! Collect and upgrade 100+ Cards featuring the Clash of Clans troops, spells and defences you know and love. 
Win multiplayer PvP card battle games and progress to new Arenas to unlock powerful new Cards for your collection!\\n\\nBATTLE YOUR WAY TO THE TOP\\nStrengthen your tower defense, fine-tune your strategy and card battle your way to the League games and Global Tournaments! Match against the best players in the world and compete in multiplayer PvP battles for glory and rewards!\\n\\nSEASONAL EVENTS\\nUnlock new Seasonal items like Tower Skins, Emotes and powerful Magic Items with the Season Pass and participate in fun Challenges that put your card battle and tower defense skills to the test!\\n\\nJOIN A CLAN AND GO TO WAR\\nJoin or form a Clan with other players to share Cards, and battle in multiplayer Clan Wars card games for BIG rewards!\\n\\nSee you in the Arena!\\n\\nPLEASE NOTE! Clash Royale is free to download and play, however, some game items can also be purchased for real money. If you do not want to use this feature, please set up password protection for purchases in the settings of your Google Play Store app. Also, under our Terms of Service and Privacy Policy, you must be at least 13 years of age to play or download Clash Royale.\\n\\nA network connection is also required.\\n\\nSupport\\nAre you having problems? 
Visit http://supercell.helpshift.com/a/clash-royale/ or http://supr.cl/ClashRoyaleForum or contact us in game by going to Settings > Help and Support.\\n\\nPrivacy Policy:\\nhttp://supercell.com/en/privacy-policy/\\n\\nTerms of Service:\\nhttp://supercell.com/en/terms-of-service/\\n\\nParent’s Guide:\\nhttp://supercell.com/en/parents/\",\n    \"icon\": \"https://play-lh.googleusercontent.com/gnSC6s8-6Tjc4uhvDW7nfrSJxpbhllzYhgX8y374N1LYvWBStn2YhozS9XXaz1T_Pi2q\",\n    \"screenshots\": [\n      \"https://play-lh.googleusercontent.com/UOJ0N42bDu2lUbZIx4n9UCnHtnY5IEyG1jOLXByCbbCvi6wammxVR4XC9endWA5rAA\",\n      \"https://play-lh.googleusercontent.com/-H9tX-JiL-u169fW5aPKBOO2kIO20HIPdLcuSE-VKH3UPzhg5225q6OPy-If0pu_c1o\",\n      \"https://play-lh.googleusercontent.com/KJyB5262htBywP40LmNgt3WCr0yGjXDkiaY2t1T3FL71Ph_Dk5iYqd5UJC6NIaUbf-w\",\n      \"https://play-lh.googleusercontent.com/4xvy3712rgooE8zG-b3ePxPXU5eWFa78q57TClvKQcHtEn9yFVHq_By3do9wqzyy18Ig\",\n      \"https://play-lh.googleusercontent.com/KMU1IyCAbHzfNVzihvyedCDXDW3uIpKbaWNj2BPSxhrTyCnzWH88lRUIX2BnMyRD888\",\n      \"https://play-lh.googleusercontent.com/ve38p_0sUn23Be3ou6p1BLJXuhrADDN97V7hTr_7Hdvj5-LyMusyaRHqn_oEMHqClKo\",\n      \"https://play-lh.googleusercontent.com/4spxVWDXk34CQGpAXvyp2GRhOuPhJquP21iGUwgbl0QbPisEMU_waQGFttSNkety4SQ\",\n      \"https://play-lh.googleusercontent.com/HKLKrTnr3COplkpdKN5tsdDKz0K38EOQUQAI1EuQBOOCpFIN7rGhRQwdkTI0VAA8c_8\",\n      \"https://play-lh.googleusercontent.com/0lxyuOHmLJ0nU446lyJxMB-YPqhnHAtcfXBAzD3cdg_L2jtr0-19IDKKtZQfzWUho_M\",\n      \"https://play-lh.googleusercontent.com/QsLLNH_rDEpTLJplnyIHNlyRShGDEnL8bziYdxsP55moVU-SzQ-iSq-ZWJz7CSPoRt27\",\n      \"https://play-lh.googleusercontent.com/QJ9ykyt6fihGGdYjs6SRqAx2kK3KMqpNoxL80ibkRu2t9HqiApaGvyPy6AmgdQK9MIQ\",\n      \"https://play-lh.googleusercontent.com/fzxOuYpsjRyHzm81NZUHstQsJEKBTcAm4UrFiAcBtf1fJAyTRFudFXXQBlWpGIeXPzA\",\n      
\"https://play-lh.googleusercontent.com/Ze81cW27HS-ABTlyyT0Y1XjN3fv-wVXHtAPvdYTOBqptV6DuLzGR7-xVZnGjr3HLGA3T\",\n      \"https://play-lh.googleusercontent.com/XRXQjoDacywQQs66Ea1rbywlZ9hoq8pvigIxQhI_UMQQITfLN9CTH-6YcGubz1OQ3_jO\",\n      \"https://play-lh.googleusercontent.com/BTSNh21ZUoXvEXpAlVhXCsnKpk_8BCaa4ofU0Q9DxNR-pERChD7TpTw0GXPrat7mPA\",\n      \"https://play-lh.googleusercontent.com/vPmwD3jbwKj2W_xYiKgoV7wLmoqnceDVHVE6r6AoZQpBlolCkRNRG_0ZgMJfySQLKg\",\n      \"https://play-lh.googleusercontent.com/OTIPaWEQ2STGiqlh7bCTnIUtXliKYVBu1JN6A1z31xjlCKmNnpbM1ZR6J_vNOn1zNzTb\",\n      \"https://play-lh.googleusercontent.com/5VYLM_j_5sFd0pqL8j95Ms8Jkvd4RTE1jzqQPmcRRpRNCfFmP6-Dh-men3apSlDlfWA\"\n    ],\n    \"developer\": \"Supercell\",\n    \"genre\": \"Strategy\",\n    \"score\": 4.435455,\n    \"scoreText\": \"4.4\",\n    \"installs\": \"500,000,000+\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.supercell.clashroyale\"\n  },\n  {\n    \"appId\": \"com.kiloo.subwaysurf\",\n    \"title\": \"Subway Surfers\",\n    \"description\": \"DASH as fast as you can! \\nDODGE the oncoming trains! \\n\\nHelp Jake, Tricky & Fresh escape from the grumpy Guard and his dog. \\n\\n★ Grind trains with your cool crew! \\n★ Colorful and vivid HD graphics! \\n★ Hoverboard Surfing! \\n★ Paint powered jetpack! \\n★ Lightning fast swipe acrobatics! \\n★ Challenge and help your friends! \\n\\nJoin the most daring chase! 
\\n\\nA Universal App with HD optimized graphics.\\n\\nOriginally co-developed by SYBO and Kiloo.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/2j-z3t20D8yLBLScuzp3sFfLCR40PHmQxdVO-O2JrFtcDA7pmg0ln-MmEDcm7tUsLn4\",\n    \"screenshots\": [\n      \"https://play-lh.googleusercontent.com/aRRu8u4--Ssh6rtXEeZYKvc-mK6x_xoAOssFP_v0JJrGZp0VgO9jXWEh2bDyZT30Ww\",\n      \"https://play-lh.googleusercontent.com/gwQSt5GSRzvmnAW3-W3h8YCV295XbVg3iijuTHiYKGViPSb97Fk3Qd_4WJbe5CcU35iR\",\n      \"https://play-lh.googleusercontent.com/Gu-RYFUPkwbeCbfHa1hrrEqIO8SMOIpvRMiw8ELnar1zjmda__2VmqPkVT_VtKxHBA\",\n      \"https://play-lh.googleusercontent.com/ecqSKEAsKcGvcszd6g28h3qBYzqpHp_6DPet-sURQBOmFVK6cg30LXPKr2wpPafEvBQ\",\n      \"https://play-lh.googleusercontent.com/OSHM1XAxe1z8UYAn57p5YPTv-6i8C3sahAToA80ywjMaU46XAz1fybb0I7r0eB3Fcr0\",\n      \"https://play-lh.googleusercontent.com/QYENgJVrNTT8zAFGiRvnxcdULnVlhLtnijKP8ALLRVhJ_ISRYOhgjK-LtJF_aSU6CQ\",\n      \"https://play-lh.googleusercontent.com/L6R-9z8rMZzTCFMeQnwKX1VP1hHfVy9zFQXszgm2agR1IO-ulSKdYngM3MyC91a_dM0\",\n      \"https://play-lh.googleusercontent.com/e384H7-gEctPIPFKDj4BODEvWYKP60efW18crVFv5QoDKd2zQWZcFSrhIqTUIRvetVU\",\n      \"https://play-lh.googleusercontent.com/osgNROHm5FqAIMv30Ss4lfkFaKJSrxYGO0dhxIsCPksnsGQoKYHdRJorTf0VCWP0Tzc\",\n      \"https://play-lh.googleusercontent.com/pp9XR7B6NXVXzi17sSMpCR-SIgx8BQh1No79Y2njSzbKQ_0aX1PnEVmP_L9TEuUfYKe2\",\n      \"https://play-lh.googleusercontent.com/5NBH4_1q8VVuqePC_nXlhRN78wBJANSZvu-VnsR-OZkK_26Q4oUTv5r9JlqzGiWWsuiU\",\n      \"https://play-lh.googleusercontent.com/BM5MraXpqcSHjMrPIjl17qdkAI9bfp-8Og9FdgsDlrmWbCNNSOZKoiuzAlBDkAbcQbM\",\n      \"https://play-lh.googleusercontent.com/PAoDaUr7QJPCXSluO6-INdqGySnhR-Wrgou6An7wGhYdLA-hsAQ35LTpYKb74OsqMQA\",\n      \"https://play-lh.googleusercontent.com/w6ePMD1wpQQazdud2ES1u6O2KHuH0qbH6UQpiStmsIGqzMrcywORNeX4K-6EnJxhJVr5\",\n      
\"https://play-lh.googleusercontent.com/qgoitoEsf5j3JcazVxx3fNpBDId7ZsSRJJ_xVjf4GySe7Kanf6vEdNTteLilP_YywuaK\",\n      \"https://play-lh.googleusercontent.com/hBi_b1JHC5keSM5-tDekL6F-IKl5vr03jC8ERffMnjzPn0R6b_N9vUmYCEN51sxX-Q\",\n      \"https://play-lh.googleusercontent.com/dtTDm-sy2hzhSOTojaqWUbXLhUYpB3UmXKN04pFqSezREptORcP28cl0fi3kMxQGRNc\",\n      \"https://play-lh.googleusercontent.com/3CgA6eyTNGmkFGii0OVjtTawnSYB3klgVN8sXoXGVtEvBozM3HYPsDVj-kLFlrI5rOkr\",\n      \"https://play-lh.googleusercontent.com/9tjEiO2gr6kwvdzpLEaZvlDqvdjZvMvLyXctmRhePLHVXE6a6zRdq2aTy9mSqOInDqI\",\n      \"https://play-lh.googleusercontent.com/a-cT5PeAAQgX1HnrYrhT6mZf7JgV3R9xGz1_ey9KUiiR0Qro8dpdNleHvJ6OCAf6nR8\",\n      \"https://play-lh.googleusercontent.com/3ie8HMeVTyjTL9ACFrssPuKK9kvPImof-6HN5rTzF_uGkbhCrNjcpejvUpVbWmaZ0M8\",\n      \"https://play-lh.googleusercontent.com/4oLzN8lDSbSQmvspQqrL00uKD501B2HqfTvk9kUzb_q13XgJ7bs99sEfU3a-9F-Xy7JA\",\n      \"https://play-lh.googleusercontent.com/8wIDXrpESoPfAhiDxBQXaFuzeeRe9Hrm9Ablu9yiixQwOxpdS9bnmAD79xj2Czy1W8s\",\n      \"https://play-lh.googleusercontent.com/H6hk3ZusKbJOjppMrDd-mTDVtbgl6-Ik9Yz6qq6RzYhUh-4Q24yhUAcslarR4byO5fA\"\n    ],\n    \"developer\": \"SYBO Games\",\n    \"genre\": \"Arcade\",\n    \"score\": 4.5551095,\n    \"scoreText\": \"4.6\",\n    \"installs\": \"1,000,000,000+\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.kiloo.subwaysurf\"\n  },\n  {\n    \"appId\": \"com.scopely.monopolygo\",\n    \"title\": \"MONOPOLY GO!\",\n    \"description\": \"Hit GO! Roll the dice! Earn MONOPOLY money, interact with your friends, family members and fellow Tycoons from around the world as you explore the expanding universe of MONOPOLY GO! 
It’s the new way to play the classic board game - board flipping cleanup not required in these fun board games!\\n\\nTake a Break!\\nEscape, enjoy, dream, scheme, gain rewards and stay in touch with this newly reimagined twist on MONOPOLY online games! Let everyone’s favorite zillionaire, Mr. MONOPOLY, guide you through new boards themed after world-famous cities, fantastical lands and imaginative locales.  Each adventure begins with a roll of the dice - whether you're a board game beginner or one of the strategy masters, there's fun to be had for everyone!\\n\\nSo MONOPOLY GO!\\n· Experience classic fun and visuals with gameplay fit for your phone! Collect Properties, build Houses and Hotels, pull Chance Cards, and of course, roll the dice to earn that MONOPOLY Money!\\n· Play with your favorite game Tokens such as the Racecar, Top Hat, Battleship, and more. Earn more tokens as you go to keep the win going in these board games!\\n· See classic MONOPOLY icons like Mr. M, Scottie and Ms. MONOPOLY come to life, and brand new characters too!\\n\\nYour Family Table!\\n· Help or hinder! - Play dice & pick a card - You and friends can earn easy money with Community Chest and co-op events! Or heist their banks to help yourself get to the top. Don’t feel bad – that’s just how the dice roll!   \\n· Collect and trade story-filled Stickers with friends and family around the world and in our MONOPOLY GO game! Facebook Trading Groups! Complete gorgeous, sticker albums to win huge rewards! The more Stickers you collect, the closer you get to candy sweet exclusive bonuses and limited-edition collections!\\n\\nFeatures!\\n\\nBUY & BUILD YOUR WAY TO THE TOP\\nCollect Property Tile Sets to build Houses and upgrade your Houses to Hotels to get even more rent from friends! All you have to do is hit GO and roll the dice! Build your empire and be lord of the board! 
Whether you're in it for casual fun or to prove you're one of the true masters of the game – the board awaits!\\n\\nENJOY THAT CLASSIC MONOPOLY ATMOSPHERE\\nRoll the dice to enjoy the classic game of MONOPOLY. Featuring familiar faces such as MR. MONOPOLY, familiar spaces such as jail (womp womp!), Railroads, Properties, Tokens, and familiar elements like drawing the perfect lucky card and more!\\n\\nPLAY WITH FRIENDS AND FAMILY\\nGet social! With a variety of fun games, you can play with friends around the world to take full advantage of new multiplayer mini-games such as Community Chest – where you and friends take a break from mischief and work together for fun and rewards!\\n\\nNEW OPPORTUNITIES EVERY DAY\\nPlay Tournaments, the Prize Drop plinko mini-game, the Cash Grab mini-game and follow our Events for big rewards. New Events run every hour, there are new ways to play and win every day! Compete in every tournament to climb the ranks. Keep an eye out for our time-limited games, perfect to challenge even seasoned tournament champions and MONOPOLY masters alike. Every roll of the dice counts - will it land you bonus money, a valuable Sticker, or a big build upgrade in new lands from a candy factory to Martian land?!\\n\\nMONOPOLY GO! is free to play, though some in-game items can also be purchased for real money. Internet connection is required to play the game.\\n\\nThe MONOPOLY name and logo, the distinctive design of the game board, the four corner squares, the MR. MONOPOLY name and character, as well as each of the distinctive elements of board and playing pieces are trademarks of Hasbro, Inc. for its property trading game and game equipment. 
© 1935, 2023 Hasbro.\\n\\nPrivacy Policy: https://scopely.com/privacy/\\n\\nTerms of Service: http://scopely.com/tos/\\n\\nAdditional Information, Rights, and Choices Available to California Players: https:scopely.com/privacy/#additionalinfo-california\\n  \\nBy installing this game you agree to the terms of the license agreements.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/DfYkSl-nQoMNLX2bec7EwHemrvyDYmDgzIR1jcsyt0ZAcmO_SKjuu0a1o1iSwtnl8_g\",\n    \"screenshots\": [\n      \"https://play-lh.googleusercontent.com/WNKQ6fTz9CGyaCRGu7s-Bte4RMpblAruUx_RE7QQ1OmaywMASbYg0MBxRZu-42njpRTd\",\n      \"https://play-lh.googleusercontent.com/rr1qJMrWD-dKQhPef-XR9v-Sy_hiyXsvr0fauB3McVoK_k7kwn8otr5v8xEjkSElN30\",\n      \"https://play-lh.googleusercontent.com/btLf4QGAB24NJYON3S1dipSO6qmBrrBy1g0KjX-5PPpTXX6fJbPMAYLXHtx_Rca4jww\",\n      \"https://play-lh.googleusercontent.com/v5r7Wb7sq11nMFl4WEjkeSGd3mnti_gCmoQPzI6WJ0vYjllGhmVu9qF_Pn_vQtyvxQGW\",\n      \"https://play-lh.googleusercontent.com/wJoBNVNl0dryHECtUV8B-BlddxVcd76YkJwJ1y7KIfDH-fYNOHlCx1TzA7hFx6y_rB0\",\n      \"https://play-lh.googleusercontent.com/M-XKIP-T-UyJd3IyTPTon7RFDUTm1qNNlu7bUmkqwuycFyg1LthPZ1aVwsuKvqgFnRU\",\n      \"https://play-lh.googleusercontent.com/UfaRoz251VtP8-5wGw16GhTkopL47rqmOzoBoQexcZyxsCgALF4q6W9ycLhBFmcCoBk\",\n      \"https://play-lh.googleusercontent.com/UbcoF0euH86amf-EKoLaocRG8bAb601pN3ODrW6Zsa3KMwNbqcJ8yP3uJf_yT_QHVg\",\n      \"https://play-lh.googleusercontent.com/ZRds25BCinCMjFE6KPXBsHPxeF2qf33XXhCPYbep63dvKxGoEhxBJiwASq7p1kTh2ahz\",\n      \"https://play-lh.googleusercontent.com/iuGZbyGeS-8wZi_xw1B1d3A8zTaEj1SxvXns0n-yD5hojYItDcw4OBt_6v2HCkqMQw\",\n      \"https://play-lh.googleusercontent.com/D2kHrhJgWsGxquJjCRy_whhx_k4gsPEf6hhzVvP0Slb6lyZqmfXHOH4StHbSndPGwg0\",\n      \"https://play-lh.googleusercontent.com/n_cHy8m8Bv_xYJ4kZ3p0Yk5kuWCzw-ypbCtmWropBfUhMhwyVJULs1HngfLNAf3FcLI\",\n      
\"https://play-lh.googleusercontent.com/_OZxTwMrIv8H_yGtvn1e6Q46vs0w8KeRrDGay_nzYy62Yx6omLmeuK6X9CKHxkt9mFs\",\n      \"https://play-lh.googleusercontent.com/uyGSikeWJmOnLtStNucCN2k25THwO6hVo7dyMVVTCjo5z6COzAOefPIsJyQdxVojS20\",\n      \"https://play-lh.googleusercontent.com/nhfYCYHZrXLlVCZCZHuVvejuHBgs40OnWfWnRRumjAqjdJVrBOQLU3TzIpg_xTS5jKU\",\n      \"https://play-lh.googleusercontent.com/6Se8VezTdt2XRz4wxMdwSv18DYhH8B3dgQvGU5euhnBP3ggdlAM5SSkP6EzE2e0JPg\",\n      \"https://play-lh.googleusercontent.com/9KWlnQCIHz1m7IERSEPDWYAx3J8oNUQ47yfqv7MOdyrKqQ68rNJJtgyiWvKxm85OEEU\",\n      \"https://play-lh.googleusercontent.com/7kSLZ_MwqXQpdxw8GwFcnTyD6IIK_VwJOOnU3RpthD_nOS8rhyrGSBTf6PJZR_Aib-s\",\n      \"https://play-lh.googleusercontent.com/t4flDlmdC1rUJbNZAEAtSku6ZzGk0K9xZEFkjKAhD2avEYEm1VXjVTnq3_3Y7GNTBw\",\n      \"https://play-lh.googleusercontent.com/7GfvHqDJMvtS4cMhjVQGGvEwhC6oOdI6PNx_PUNDVw8SZWAoFmLt3XLhYzTfMdOttg\",\n      \"https://play-lh.googleusercontent.com/eHA0WcX8Q8aHjM5_1idtwlOYGN5QqAglO9_QmHFjDEOKl0BUGy_51WBqnbwHiDJgU9M\",\n      \"https://play-lh.googleusercontent.com/orPNnERkLcQc1gH2y_X41r2CkBHJ7C4Xo60LD8zj8mIIcT-HoswT35V6dVpVskwHtNE\",\n      \"https://play-lh.googleusercontent.com/JTDnVbve9Scy9BubgXTGl8MiWHuO4t_H2Eqqb-qHBi5DseIICcwgPtZS3D4d3LED_kM\",\n      \"https://play-lh.googleusercontent.com/8zODvpEX2n9W2lfAJ-g0oYLLthFjVkNr6jJhZNxzDTHeCEvQxSkUTsbe6IymrZtsvA\"\n    ],\n    \"developer\": \"Scopely\",\n    \"genre\": \"Board\",\n    \"score\": 4.6633234,\n    \"scoreText\": \"4.7\",\n    \"installs\": \"100,000,000+\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.scopely.monopolygo\"\n  },\n  {\n    \"appId\": \"io.voodoo.holeio\",\n    \"title\": \"Hole.io\",\n    \"description\": \"Hole.io – Swallow Everything & Dominate the City!\\n\\nEnter the ultimate black hole battle and compete to become the biggest hole in town! 
Move your hungry black hole, swallow buildings, cars, and even opponents to grow bigger before time runs out. The more you absorb, the stronger you become. Can you outsize the competition and take over the arena?\\n\\nKey Features:\\n- Addictive black hole gameplay – Swallow objects and expand\\n- Real-time multiplayer battles – Compete against other players\\n- Time-based challenges – Grow fast before the clock runs out\\n- Custom skins – Choose your favorite black hole design\\n\\nDownload Hole.io now and prove you’re the ultimate hole master in this fast-paced, city-eating battle!\",\n    \"icon\": \"https://play-lh.googleusercontent.com/VmbS-Smui9i1h9mxAVxnehbLI4HP4g-wsdWeJ2k9MbOTFK5wsiZifDkxdkm4f4wEIsE\",\n    \"screenshots\": [\n      \"https://play-lh.googleusercontent.com/l17Dh8NbbMAy9IjqLaWLgpexLyUtgcRTCjLC9lIBSS6KZyJAJc8t1L0IC-pdkp_EPMA\",\n      \"https://play-lh.googleusercontent.com/cHZjod0BGZyzPmxU47x1kTV5qjTkiOxHnSnU_mh8JA84Gu1JlfaGrjEPRNeHfX_NZg\",\n      \"https://play-lh.googleusercontent.com/ASB65zbzKNq4E1qbaddMVd-8p6yRjBisTONCtH9Ezdp4h9VQLHS2J7Tz5ifcasQFy4Zl\",\n      \"https://play-lh.googleusercontent.com/m34rd3EHiVhKljvhzvZEOn0TNO0gn1gssj0cNSj8BAMTtlv4c0g2EXTkjEkaLGYLni4\",\n      \"https://play-lh.googleusercontent.com/U8X6FE9YH2auqG7gmiOLnhVdWHM1aBBRHKPk-YhJyl30605oyF74icpBrYnD5CdrKzbN\",\n      \"https://play-lh.googleusercontent.com/ePjsmRoAypHloOSmlMDheHD-yRwJK0gDuyYXytKtkDMirxXdxnmQzwr7_zsfir2IIA\",\n      \"https://play-lh.googleusercontent.com/WSgXQ93MCO-8hXSz4N2Wlnj7wARPaGbpqbIpw0Pd3SldIOETo6cZvznQ-GY76fh0o-Q\",\n      \"https://play-lh.googleusercontent.com/Pyxg9HsuKHmqc0toUUXXNfv9drxAvnYEjd5ZMFJtjPSYd8uyMUv6jXe6PCh1Vr0KR3ww\",\n      \"https://play-lh.googleusercontent.com/-zYzl17Q2h8ToJIrYkaXC499PGEETGgwPUzpjAUeTM2plVXvac_4SX8NfUKSNg9L4g\",\n      \"https://play-lh.googleusercontent.com/wiEF1waTb8W1hecf7aCwCubP7mlLdCJkhXlXRlYJ8-rPZdABigdTARRqgOX2kgM2UA\",\n      
\"https://play-lh.googleusercontent.com/BmONoE5HRs3UofP4HGfotSYIuZXch1RIHM3qr_Z-sNZmRGyIj_R1IyNnEZ6fJFKuEA\",\n      \"https://play-lh.googleusercontent.com/5KZ4ssKf6g8BfwCig8sw7IxHRAXQdqOIBbgZSTcoZmh3950CDgx1YIAPzX0JN3EDW88\",\n      \"https://play-lh.googleusercontent.com/gCzvKOhE7IEvitfSqn81M7Cevw80UzQtxIwkkoKzsXW5aL9DnB1yqKMD5Oo4unRURFUc\",\n      \"https://play-lh.googleusercontent.com/n6ec3XKolKYFtimDGyh8u_jIw5aWyn7W9ieb8nARdJZnEs8Od_D5LTd-Mz-yA5fSVA\",\n      \"https://play-lh.googleusercontent.com/J5e8ofyy9hpon5o5d_bw-ZZN62ITjdCdeVuY7hpV8cGr9tOZPB8Htimcaw702tuOMPE\",\n      \"https://play-lh.googleusercontent.com/EYl0uHi8GtLsNAqSpXiheOJ-5wwLOV7E1lE3oXZaak4qsNUoigYXaHQA1Npz6HEkMyk\",\n      \"https://play-lh.googleusercontent.com/w7frGmcLkDK-VhqKFNhEqbrj-6nAmhN5u3cz74uE-CzwZoSjrEuTH5WwX01wyq0ElFQ\",\n      \"https://play-lh.googleusercontent.com/M_1W90HoWOFkWUG5sLsgzJX0u1hBwptimTnVNdv1IQwnP0Dn7MK8tDdqbexZbJeV_0Q\",\n      \"https://play-lh.googleusercontent.com/_DJsKJotpIs43RudiUu9a6lq1tLUKcOYYQUlt362b_0MbXMI984CJEMkngC2g3-A47Pa\",\n      \"https://play-lh.googleusercontent.com/PFpYV_GNhunDKtkg55HUT7FtT_59g57oFdePKFfcOT1L9nVADmEK13iX7B1LlqeB5cA\",\n      \"https://play-lh.googleusercontent.com/Ib4lIp0U7K7lWrAF2M52JFBDcqwmMvTVeZnzo9j0KoyG8fQDuaikvTlN-FP4XTLb1suq\",\n      \"https://play-lh.googleusercontent.com/HALmyMfC0TKhLo4aYQkN8VJR4jw5nJIrAdDDkhTaky7SrwaBFS3XKB6AqTMuEI7v3Ug\",\n      \"https://play-lh.googleusercontent.com/b9gZMcPQkeRALpXg42Z5T14ASA3lpk8lwORYczdDzCwW6C_03JKwbsZI2D-IuVDWpqWa\",\n      \"https://play-lh.googleusercontent.com/AkkyxFJRMkGwyIdX7qYhCeOsbrsO6ct7j9zQ3DJvfAFHBEUR_l0lPxCmYcg_1wPYu0U\"\n    ],\n    \"developer\": \"VOODOO\",\n    \"genre\": \"Arcade\",\n    \"score\": 3.0541568,\n    \"scoreText\": \"3.1\",\n    \"installs\": \"100,000,000+\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=io.voodoo.holeio\"\n  }\n]"
  },
  {
    "path": "output/reviews_example.json",
    "content": "[\n  {\n    \"reviewId\": \"89d0170a-6e0a-4fd3-8f9c-b52ed975b701\",\n    \"userName\": \"Athul L Kumar\",\n    \"userImage\": \"https://play-lh.googleusercontent.com/a/ACg8ocIht0HMzD3U5L-Ck8nnt4caVTACxTkF4w8clfPx1hrFQQOwXw=mo\",\n    \"score\": 1,\n    \"content\": \"chapter 35 got a bug\",\n    \"thumbsUpCount\": 0,\n    \"appVersion\": \"1.21\",\n    \"at\": \"2025-10-13T10:42:05\"\n  },\n  {\n    \"reviewId\": \"ae919cd4-aee9-432e-833f-d63201dff153\",\n    \"userName\": \"Gourab Bhowal\",\n    \"userImage\": \"https://play-lh.googleusercontent.com/a-/ALV-UjVFglo1B-E1qE58LCO75T_YVZft2WsWbicrY2ClfJzY1U0JSxsX\",\n    \"score\": 5,\n    \"content\": \"Hands down one of the best games on the app...\",\n    \"thumbsUpCount\": 0,\n    \"appVersion\": \"1.21\",\n    \"at\": \"2025-10-12T09:37:21\"\n  },\n  {\n    \"reviewId\": \"839ae108-0edc-49ae-9f1b-6d5885a2705d\",\n    \"userName\": \"Ryl Ino\",\n    \"userImage\": \"https://play-lh.googleusercontent.com/a/ACg8ocLeqm0FMKCuVXygngB5zk3hu05YAbAzfYhyxPSejLtgWXZ0=mo\",\n    \"score\": 5,\n    \"content\": \"I enjoyed this game a lot and the cliff hanger ending was beautiful yet makes you want for more. ❤️\",\n    \"thumbsUpCount\": 0,\n    \"appVersion\": null,\n    \"at\": \"2025-10-12T08:01:08\"\n  },\n  {\n    \"reviewId\": \"ad0505b9-2c37-44d7-9ac8-dce40c125d96\",\n    \"userName\": \"Diederik Greef\",\n    \"userImage\": \"https://play-lh.googleusercontent.com/a-/ALV-UjWk0CkHOqbZSiznb4pSaA-RqLlTtF9nAU11vP2UpkCttYk41JU\",\n    \"score\": 4,\n    \"content\": \"very nice game I like it a lot but I just need more levels because I completed all of those levels with in a very small time limit is perfect please add more levels. game play experience some of the level's was looking like if it's impossible to play it but it's actually eazy when you see the puzzle in front of you but for sure need more levels it's not enough to keep someone playing. 
thank you it's more of a brain testing this game but it's good I recommend. but need more levels please.\",\n    \"thumbsUpCount\": 0,\n    \"appVersion\": \"1.21\",\n    \"at\": \"2025-10-12T02:29:57\"\n  },\n  {\n    \"reviewId\": \"007acb04-ffb8-4c38-974b-b8986509405f\",\n    \"userName\": \"Thinker Goat\",\n    \"userImage\": \"https://play-lh.googleusercontent.com/a-/ALV-UjVrMycBA0ee8fZbohoX2ifOssMp8edFwril5AQY3iAadHcBc9W6xA\",\n    \"score\": 5,\n    \"content\": \"This game is art\",\n    \"thumbsUpCount\": 0,\n    \"appVersion\": \"1.21\",\n    \"at\": \"2025-10-11T19:40:04\"\n  },\n  {\n    \"reviewId\": \"83f849df-f5a8-4df1-a0d2-2a83d5010d9a\",\n    \"userName\": \"Dinu Thomas\",\n    \"userImage\": \"https://play-lh.googleusercontent.com/a-/ALV-UjUFmqkINGFA75ErquMDMbvNZWyle6U-c-XU82Q9XUQH8fJ-fkjO\",\n    \"score\": 5,\n    \"content\": \"amazing game... i like it... difficult puzzles nice 💗\",\n    \"thumbsUpCount\": 0,\n    \"appVersion\": \"1.21\",\n    \"at\": \"2025-10-11T08:47:59\"\n  },\n  {\n    \"reviewId\": \"1120eabb-c7d5-4473-9693-4f950bf79fc0\",\n    \"userName\": \"Victor\",\n    \"userImage\": \"https://play-lh.googleusercontent.com/a-/ALV-UjWfgWOZ2uY5sF8jCIguoWYmL6-1TEe8emfpfb-Pbvx0dbBhQkdh\",\n    \"score\": 5,\n    \"content\": \"Definitely irritating to say the least, but amazing\",\n    \"thumbsUpCount\": 0,\n    \"appVersion\": \"1.21\",\n    \"at\": \"2025-10-11T08:47:49\"\n  },\n  {\n    \"reviewId\": \"6885b0da-07ff-474b-bca6-5e1abaeceefd\",\n    \"userName\": \"Ђорђе Вучковић\",\n    \"userImage\": \"https://play-lh.googleusercontent.com/a-/ALV-UjV8TBgfQqx-QYiRlntJ5L_PWO0P2ktdSLJKPvUpzM7ONa4uIP92Fg\",\n    \"score\": 5,\n    \"content\": \"A masterpiece.\",\n    \"thumbsUpCount\": 0,\n    \"appVersion\": \"1.21\",\n    \"at\": \"2025-10-10T19:25:33\"\n  },\n  {\n    \"reviewId\": \"de6ff595-c0c3-48b2-a2b6-83468ffd534f\",\n    \"userName\": \"Rajat Koparde\",\n    \"userImage\": 
\"https://play-lh.googleusercontent.com/a-/ALV-UjV1Q6yEdZyiaXX2NrQpgQJVTFRqjeUiNwWnTn7wvPi8iG6kSMmU\",\n    \"score\": 5,\n    \"content\": \"This game is very well made, I played it like 5-6 years ago, playing it now feels so nostalgic\",\n    \"thumbsUpCount\": 0,\n    \"appVersion\": \"1.21\",\n    \"at\": \"2025-10-09T19:59:39\"\n  },\n  {\n    \"reviewId\": \"d2d37b8a-ac73-4450-b78b-ea2ad7b39fcf\",\n    \"userName\": \"Retshepile Phori\",\n    \"userImage\": \"https://play-lh.googleusercontent.com/a-/ALV-UjVCgeoENmL3hLpr0AWTa1_jdA6vXqS8KO8ND1-6Qra4PEDtvek\",\n    \"score\": 5,\n    \"content\": \"motsamai\",\n    \"thumbsUpCount\": 0,\n    \"appVersion\": null,\n    \"at\": \"2025-10-09T08:16:21\"\n  }\n]"
  },
  {
    "path": "output/search_example.json",
    "content": "[\n  {\n    \"appId\": \"com.instagram.android\",\n    \"title\": \"Instagram\",\n    \"description\": \"Create & share photos, stories, & reels with friends you love\",\n    \"icon\": \"https://play-lh.googleusercontent.com/VRMWkE5p3CkWhJs6nv-9ZsLAs1QOg5ob1_3qg-rckwYW7yp1fMrYZqnEFpk0IoVP4LM\",\n    \"developer\": \"Instagram\",\n    \"score\": 4.4554186,\n    \"scoreText\": \"4.5\",\n    \"currency\": null,\n    \"price\": 0,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.instagram.android\"\n  },\n  {\n    \"appId\": \"com.twitter.android\",\n    \"title\": \"X\",\n    \"description\": \"Breaking News & Social Media\",\n    \"icon\": \"https://play-lh.googleusercontent.com/A-Rnrh0J7iKmABskTonqFAANRLGTGUg_nuE4PEMYwJavL3nPt5uWsU2WO_DSgV_mOOM\",\n    \"developer\": \"X Corp.\",\n    \"score\": 4.052508,\n    \"scoreText\": \"4.1\",\n    \"currency\": null,\n    \"price\": 0,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.twitter.android\"\n  },\n  {\n    \"appId\": \"com.snapchat.android\",\n    \"title\": \"Snapchat\",\n    \"description\": \"Share the moment!\",\n    \"icon\": \"https://play-lh.googleusercontent.com/KxeSAjPTKliCErbivNiXrd6cTwfbqUJcbSRPe_IBVK_YmwckfMRS1VIHz-5cgT09yMo\",\n    \"developer\": \"Snap Inc\",\n    \"score\": 3.9922094,\n    \"scoreText\": \"4.0\",\n    \"currency\": null,\n    \"price\": 0,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.snapchat.android\"\n  },\n  {\n    \"appId\": \"com.facebook.katana\",\n    \"title\": \"Facebook\",\n    \"description\": \"Explore the things you love\",\n    \"icon\": \"https://play-lh.googleusercontent.com/KCMTYuiTrKom4Vyf0G4foetVOwhKWzNbHWumV73IXexAIy5TTgZipL52WTt8ICL-oIo\",\n    \"developer\": \"Meta Platforms, Inc.\",\n    \"score\": 4.4118733,\n    \"scoreText\": \"4.4\",\n    \"currency\": null,\n    \"price\": 0,\n    \"free\": false,\n    \"url\": 
\"https://play.google.com/store/apps/details?id=com.facebook.katana\"\n  },\n  {\n    \"appId\": \"com.zhiliaoapp.musically\",\n    \"title\": \"TikTok\",\n    \"description\": \"Videos, Music & Live Streams\",\n    \"icon\": \"https://play-lh.googleusercontent.com/waX_CXnriskbccUeOevisOQwwR-tlPRPX0hiFZ4X-w2nHDTb_I2yeBliFX5VAqfetw\",\n    \"developer\": \"TikTok Pte. Ltd.\",\n    \"score\": 4.2110004,\n    \"scoreText\": \"4.2\",\n    \"currency\": null,\n    \"price\": 0,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.zhiliaoapp.musically\"\n  },\n  {\n    \"appId\": \"com.pinterest\",\n    \"title\": \"Pinterest\",\n    \"description\": \"One destination for a world of inspiration.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/6CFQQ0b9r5fzF1v6f0gIirWsOGL7sGWkJifuUQxxhbCMcBx5aSG_cNXpjDKDn5c1jwjq\",\n    \"developer\": \"Pinterest\",\n    \"score\": 4.6440973,\n    \"scoreText\": \"4.6\",\n    \"currency\": null,\n    \"price\": 0,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.pinterest\"\n  },\n  {\n    \"appId\": \"com.tumblr\",\n    \"title\": \"Tumblr - Social Media Fandom\",\n    \"description\": \"Dive into diverse fandoms. Connect, create, reblog.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/G4R3eZm3sDGJk0lVCOr72BwLFIYV0Jg5G7_PBOf9ZMpWwZjXaUdoZMyjFbJwxcmF5qBH\",\n    \"developer\": \"Tumblr, Inc\",\n    \"score\": 4.4767933,\n    \"scoreText\": \"4.5\",\n    \"currency\": null,\n    \"price\": 0,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.tumblr\"\n  },\n  {\n    \"appId\": \"com.reddit.frontpage\",\n    \"title\": \"Reddit\",\n    \"description\": \"Find your community. Forums, threads, debates. 
Not just social networking fluff.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/NaFAbO7ExS4NRAvt2GYkNY6OQf9oVXwmdMTZzA6zrgjjSxhQuTCnjHyf7TgYcoSGqQ\",\n    \"developer\": \"reddit Inc.\",\n    \"score\": 4.6419964,\n    \"scoreText\": \"4.6\",\n    \"currency\": null,\n    \"price\": 0,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.reddit.frontpage\"\n  },\n  {\n    \"appId\": \"com.instagram.barcelona\",\n    \"title\": \"Threads\",\n    \"description\": \"Connect and share ideas\",\n    \"icon\": \"https://play-lh.googleusercontent.com/G6jK9S77RN0laf9_6nhDo3AVxbRP9SgMmt8ZmQjKQ2hibn9xhOY-W5YFn_7stJD1CA\",\n    \"developer\": \"Instagram\",\n    \"score\": 4.322976,\n    \"scoreText\": \"4.3\",\n    \"currency\": null,\n    \"price\": 0,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.instagram.barcelona\"\n  },\n  {\n    \"appId\": \"com.discord\",\n    \"title\": \"Discord - Talk, Play, Hang Out\",\n    \"description\": \"Group Chat That’s Fun & Games\",\n    \"icon\": \"https://play-lh.googleusercontent.com/0oO5sAneb9lJP6l8c6DH4aj6f85qNpplQVHmPmbbBxAukDnlO7DarDW0b-kEIHa8SQ\",\n    \"developer\": \"Discord Inc.\",\n    \"score\": 4.3991246,\n    \"scoreText\": \"4.4\",\n    \"currency\": null,\n    \"price\": 0,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.discord\"\n  }\n]"
  },
  {
    "path": "output/similar_example.json",
    "content": "[\n  {\n    \"appId\": \"com.playdigious.littlenightmare\",\n    \"title\": \"Little Nightmares\",\n    \"description\": \"First available on PC and consoles, the horror adventure tale Little Nightmares is available on mobile!\\nImmerse yourself in Little Nightmares, a dark whimsical tale that will confront you with your childhood fears!\\nHelp Six escape The Maw – a vast, mysterious vessel inhabited by corrupted souls looking for their next meal.\\nAs you progress on your journey, explore the most disturbing dollhouse offering a prison to escape from and a playground full of secrets to discover.\\nReconnect with your inner child to unleash your imagination and find the way out!\\nLittle Nightmares features a subtle mix of action and puzzle-platformer mechanics rooted in an eerie artistic direction and creepy sound design.\\nSneak your way out of the Maw’s dreary maze and run from its corrupted inhabitants to escape your childhood fears. \\n\\nFEATURES\\n\\n- Tiptoe your way through a dark and thrilling adventure\\n- Rediscover your childhood fears inside a haunting vessel and escape its eerie inhabitants\\n- Climb, crawl and hide through nightmarish environments to solve tricky platform puzzles\\n- Immerse yourself in the Maw through its creepy sound design\\n\\nPlease make sure your device is connected to Wifi to download the game for the first time.\\n\\nIf you run into a problem, please contact us at https://playdigious.helpshift.com/hc/en/12-playdigious/ with as much information as possible on the issue.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/vEJAwZwhOv7Wzf1Md7PsMzOyo087y0Z4rhRgidtfv03c682RicoBz3BOsrigdiXA-7I\",\n    \"developer\": \"Playdigious\",\n    \"score\": 4.27,\n    \"scoreText\": \"4.3\",\n    \"currency\": \"USD\",\n    \"price\": 8.99,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.playdigious.littlenightmare\"\n  },\n  {\n    \"appId\": \"com.agaming.reporter\",\n    
\"title\": \"Reporter - Scary Horror Game\",\n    \"description\": \"Discover heart-thumping terror, chase and scary creatures in this atmospheric horror game. But no matter what, never play alone in the dark.\\n\\nIs Your mind is hungry for tricky puzzles and nerves are suffering without ticklish situations? This action-horror \\\"Reporter\\\" from \\\"AGaming+\\\" will shake you to the core! Turn off the light and get your earphones! Be attentive, because it’s the only thing that can help you to get out from the paws of horror that is happening here. \\n\\nAll the story started in a small town. Once on a wonderful day, the town was shocked by series of horrible kills under terrifying and inexplicable circumstances. Police have tried to hide the facts, but some information leaked and was published by local press. Trying to understand what has happened there, you start your search for the truth. But suddenly, you become the part of the story that you'll remember till the end of your life. It all depends on you, will you unravel this plexus of horror and chaos and survive?\\n\\nWarning: for perfect experience and correct game functioning 1GB of RAM is required.\",\n    \"icon\": \"https://play-lh.googleusercontent.com/ApOs3N1Qgf4bZGpaBI4DK7DKokTa-db309_RHCoSGuY2cKq6JpX59Sg1C2WhJDKirw\",\n    \"developer\": \"AGaming+\",\n    \"score\": 3.61,\n    \"scoreText\": \"3.6\",\n    \"currency\": \"USD\",\n    \"price\": 0.99,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.agaming.reporter\"\n  },\n  {\n    \"appId\": \"com.cowcat.brok\",\n    \"title\": \"BROK the InvestiGator\",\n    \"description\": \"BROK is an innovative adventure mixed with beat 'em up and RPG elements. 
In a grim world where animals have replaced mankind, what kind of detective will you be?\\n\\nIn a futuristic \\\"light cyberpunk\\\" world where animals have replaced humans, privileged citizens live under a protective dome from the ambient air pollution while others struggle to make a living on the outside.\\n\\nBrok, a private detective and former boxer, lives with Graff, the son of his deceased wife. Although he could never elucidate her accident, recent events may shed some light on an even more tragic outcome... one that may be linked to their own existence. \\nWill they be able to withstand the threats of this corrupted world and face their own destiny? \\n\\n--------------------\\nFEATURES\\n--------------------\\n- Solve puzzles with your wits... or muscles!\\n- Make choices impacting gameplay and/or story\\n- Relaxed mode for pure \\\"Point & Click\\\" gameplay (fights can be skipped)\\n- Level up to beat enemies and bosses\\n- Combine clues to uncover the truth!\\n- In-game hints\\n- Two playable characters, switch at any time\\n- 15 to 20 hours long on first playthrough \\n- Multiple distinct endings to unlock \\n- Fully voice acted (23,000 lines)\\n- Optimized for touch screens (fight using touch swipes or virtual buttons)\\n- Compatible with most Bluetooth controllers\\n- Play the adventure with friends in local co-op (up to 4 players)\\n- Text fully translated into 10 languages\\n\\n---------------------------------\\nACCESSIBILITY\\n---------------------------------\\nBROK is the first full-fledged adventure game to be fully playable by blind or visually impaired players!\\n\\n- Fully narrated via quality text-to-speech and audiodescriptions (characters, locations and scenes.)\\n- Puzzles adapted for blindness.\\n- All puzzles and fights can be skipped.\\n- Adapted tutorials.\\n- Ability to repeat the last voice speech and instructions.\\n- Positional audio for fights.\\n- No online connectivity required (after the download).\\n- No specific device 
required\\n- Additional options: larger fonts and increased contrast (backgrounds and enemies.)\\n\\nTo enter the accessibility menu, press two fingers on the title screen, then follow the audio instructions.\\n\\nIMPORTANT: Accessibility speeches are only available in English.\\n\\n---------------------------------\\nMONETIZATION\\n---------------------------------\\n- Chapter 1 is entirely free (2 to 3 hours of gameplay)\\n- Each additional chapter is $1.99\\n- An alternative Premium option to buy all chapters at once is $7.99 (the game has 6 chapters)\",\n    \"icon\": \"https://play-lh.googleusercontent.com/XQFeA6xQ9TgaQmKYI-OSkEOPlZpnvCdHUxT-x2GqagCbDkIByAl2V2JgcXx9-ZnEoA\",\n    \"developer\": \"Breton Fabrice\",\n    \"score\": 4.65,\n    \"scoreText\": \"4.7\",\n    \"currency\": \"USD\",\n    \"price\": 0,\n    \"free\": true,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.cowcat.brok\"\n  },\n  {\n    \"appId\": \"com.MorionStudio.ConquistadorioFull\",\n    \"title\": \"Conquistadorio\",\n    \"description\": \"Embark on an enthralling journey with Conquistodoro, a captivating point-and-click adventure that beckons you to unravel mysteries, solve puzzles, and conquer challenging trials. In this immersive world, every click propels you deeper into the heart of the adventure, where the thrill of exploration meets the satisfaction of clever point-and-click interactions.\\n\\n🗺️ Discover a World of Adventure\\n\\nSet in a mysterious world, Conquistodoro promises a rich narrative filled with twists and turns. Your protagonist, a daring bandit, faces expulsion from his coffin and sets forth on an epic quest to find a new resting place. Navigate through the story, guided by helpful souls and driven by the need to secure a mysterious cup that holds the key to reviving zombies in ancient tombs.\\n\\n🧩 Point-and-Click Challenges Await\\n\\nAs you progress, encounter a variety of challenges and puzzles that will test your point-and-click skills. 
Each decision you make shapes the outcome, adding layers of complexity to the adventure. Conquistodoro ensures that your journey is not just a visual feast but a mental challenge, with every click contributing to the unfolding saga.\\n\\n🤠 The Hero's Quest Unfolds\\n\\nJoin forces with a charismatic protagonist as he faces adversity, completes missions, and navigates through a visually stunning world. The commander, zombies, and a boss beetle become integral parts of this thrilling adventure, where every point-and-click action propels the hero closer to his ultimate goal.\\n\\n🎨 Stunning Visuals and Animations\\n\\nImmerse yourself in Conquistodoro's visually captivating world, where stunning graphics and animations bring the narrative to life. The meticulously crafted design enhances the overall point-and-click experience, ensuring that every interaction is a visual treat.\\n\\n🎶 Adventure with an Enchanting Score\\n\\nAccompanying your quest is an enchanting musical score that heightens the atmosphere and emotion of the game. The soundtrack is thoughtfully curated to elevate the overall adventure, making Conquistodoro a symphony of point-and-click excitement and immersive storytelling.\\n\\n📲 Download Now and Embark on Your Adventure!\\n\\nReady to experience the magic of point-and-click adventure? Download Conquistodoro now and dive into a world where each click takes you closer to solving puzzles, overcoming challenges, and unraveling the mysteries that await. 
Your adventure begins – click your way to triumph in Conquistodoro!\",\n    \"icon\": \"https://play-lh.googleusercontent.com/MC1Dspw44XFvGizJfpH90LjlCyuUHuPSxg37m2Wbq6dbzO4KsLcYDNAB8EGVge7L_yc\",\n    \"developer\": \"Morion Studio\",\n    \"score\": 4.32,\n    \"scoreText\": \"4.3\",\n    \"currency\": \"USD\",\n    \"price\": 4.99,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=com.MorionStudio.ConquistadorioFull\"\n  },\n  {\n    \"appId\": \"eu.bandainamcoent.verylittlenightmares\",\n    \"title\": \"Very Little Nightmares\",\n    \"description\": \"Enter the world of Very Little Nightmares, a puzzle adventure game that mixes a cute and creepy universe. 👻\\n\\nHelp the Girl in the Yellow Raincoat survive in a hostile house and find a way to get her out. 💛\\n\\nAs she awakens in an unknown mansion, you must guide her through each room. What a fate to fall here, a place where everything wants to see her dead. 💀\\n\\nHer life is in your hands, avoid enemies, discover intriguing puzzles to finally pierce the secrets of this strange house. 🏚\\n\\n=====\\n\\nEXPLORE 🔎\\nThe Nest, a vast maze filled with life-threatening traps.\\n\\nSOLVE 💡\\nThe challenging puzzles that bar your way. Use your wits and any resources at your disposal.\\n\\nSURVIVE 😬\\nThe frightening enemies that will do everything to capture you.\\n\\nDISCOVER 🕵‍♀\\nA dark universe in this original prequel story of the events in Little Nightmares.\\n\\nKeep in touch & unveil the mysteries of this world:\\n\\nFacebook: https://www.facebook.com/LittleNightmaresEU/ \\nTwitter: https://twitter.com/LittleNights \\nInstagram: https://www.instagram.com/little__nightmares/ \\n\\nYou are purchasing a license for digital goods. For full terms and conditions, please see the License Agreement below.\\n\\n⭐ SUPPORT: Having problems? 
Let us know at http://bnent.eu/msupportvln\\n⭐ PRIVACY POLICY: http://bnent.eu/mprivacy\\n⭐ TERMS OF USE: http://bnent.eu/mterms\",\n    \"icon\": \"https://play-lh.googleusercontent.com/aSfffFeiMMNDux_Vet9VyQ97_Pt0XBJwZeCOVPBF6swycKN1sdc_gCDyuwXeR4CVvMc\",\n    \"developer\": \"BANDAI NAMCO Entertainment Europe\",\n    \"score\": 4.4785895,\n    \"scoreText\": \"4.5\",\n    \"currency\": \"USD\",\n    \"price\": 6.99,\n    \"free\": false,\n    \"url\": \"https://play.google.com/store/apps/details?id=eu.bandainamcoent.verylittlenightmares\"\n  }\n]"
  },
  {
    "path": "output/suggest_example.json",
    "content": "{\n    \"photo editor\": [\n        \"photo editor\",\n        \"photo editor free\",\n        \"photo editor app\",\n        \"photo editor 2023\",\n        \"photo editor free app android\"\n    ],\n    \"photo editor free\": [\n        \"photo editor free\",\n        \"photo editor free app android\",\n        \"photo editor free app\",\n        \"photo editor free app android free\",\n        \"photo editor free 2025\"\n    ],\n    \"photo editor app\": [\n        \"photo editor app\",\n        \"photo editor app new style 2020\",\n        \"photo editor app free\",\n        \"photo editor app new style 2025\",\n        \"photo editor app 2025\"\n    ],\n    \"photo editor 2023\": [\n        \"photo editor 2023\",\n        \"photo editor 2023 video song\",\n        \"photo editor 2023 background\",\n        \"photo editor 2023 new app\",\n        \"photo editor 2023 background change\"\n    ],\n    \"photo editor free app android\": [\n        \"photo editor free app android\",\n        \"photo editor free app android offline\",\n        \"photo editor free app android free\",\n        \"photo editor free app android 2025\"\n    ]\n}"
  },
  {
    "path": "requirements.txt",
    "content": "# Core dependencies\nrequests>=2.25.0\nbeautifulsoup4>=4.9.0\n\n# HTTP client libraries\ncurl-cffi>=0.5.0\ntls-client>=1.0.0\nurllib3>=1.26.0\ncloudscraper>=1.2.0\naiohttp>=3.8.0\nhttpx>=0.24.0\n\n# Development dependencies\npytest>=6.0.0\n\n# Documentation dependencies\nsphinx>=4.0.0\nsphinx-rtd-theme>=1.0.0"
  },
  {
    "path": "setup.py",
    "content": "from setuptools import setup, find_packages\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\n    long_description = fh.read()\n\nwith open(\"requirements.txt\", \"r\", encoding=\"utf-8\") as fh:\n    requirements = [line.strip() for line in fh if line.strip() and not line.startswith(\"#\")]\n\nsetup(\n    name=\"gplay-scraper\",\n    version=\"1.0.6\",\n    description=\"🚀 Advanced Google Play Store Scraper - Extract 65+ app fields, reviews, ratings, ASO data, developer info, top charts, search results with 7 HTTP clients & unlimited pagination support\",\n    long_description=long_description,\n    long_description_content_type=\"text/markdown\",\n    author=\"Mohammed Cha\",\n    author_email=\"contact@mohammedcha.com\",\n    maintainer=\"Mohammed Cha\",\n    maintainer_email=\"contact@mohammedcha.com\",\n    url=\"https://github.com/mohammedcha/gplay-scraper\",\n    download_url=\"https://github.com/mohammedcha/gplay-scraper/archive/v1.0.6.tar.gz\",\n    project_urls={\n        \"Homepage\": \"https://github.com/mohammedcha/gplay-scraper\",\n        \"Documentation\": \"https://mohammedcha.github.io/gplay-scraper/\",\n        \"Source Code\": \"https://github.com/mohammedcha/gplay-scraper\",\n        \"Bug Reports\": \"https://github.com/mohammedcha/gplay-scraper/issues\",\n        \"Feature Requests\": \"https://github.com/mohammedcha/gplay-scraper/issues\",\n        \"Changelog\": \"https://github.com/mohammedcha/gplay-scraper/blob/main/CHANGELOG.md\",\n        \"Examples\": \"https://github.com/mohammedcha/gplay-scraper/tree/main/examples\",\n        \"PyPI\": \"https://pypi.org/project/gplay-scraper/\",\n    },\n    packages=find_packages(exclude=[\"tests*\", \"docs*\", \"examples*\"]),\n    install_requires=requirements,\n    extras_require={\n        \"dev\": [\"pytest>=7.0.0\", \"pytest-cov>=4.0.0\", \"black>=22.0.0\", \"flake8>=5.0.0\"],\n        \"http-clients\": [\n            \"curl-cffi>=0.5.0\",\n            
\"tls-client>=0.2.0\", \n            \"httpx>=0.24.0\",\n            \"urllib3>=1.26.0\",\n            \"cloudscraper>=1.2.0\",\n            \"aiohttp>=3.8.0\",\n        ],\n        \"all\": [\n            \"pytest>=7.0.0\", \"pytest-cov>=4.0.0\", \"black>=22.0.0\", \"flake8>=5.0.0\",\n            \"curl-cffi>=0.5.0\", \"tls-client>=0.2.0\", \"httpx>=0.24.0\", \n            \"urllib3>=1.26.0\", \"cloudscraper>=1.2.0\", \"aiohttp>=3.8.0\",\n        ],\n    },\n    python_requires=\">=3.8\",\n    keywords=\"google-play-scraper, playstore-scraper, android-scraper, gplay-scraper, google-play-store, play-store-api, app-data-extraction, app-analytics, mobile-analytics, aso-tools, app-store-optimization, mobile-seo, app-marketing, keyword-research, competitor-analysis, market-research, app-reviews, user-reviews, review-scraper, rating-analysis, sentiment-analysis, developer-tools, api-scraping, web-scraping, data-mining, python-scraper, automation-tools, business-intelligence, market-intelligence, competitive-intelligence, app-monitoring, trend-analysis, performance-tracking, install-tracking, revenue-analysis\",\n    classifiers=[\n        \"Development Status :: 5 - Production/Stable\",\n        \"Intended Audience :: Developers\",\n        \"Intended Audience :: Information Technology\",\n        \"Intended Audience :: Science/Research\",\n        \"Intended Audience :: Financial and Insurance Industry\",\n        \"License :: OSI Approved :: MIT License\",\n        \"Operating System :: OS Independent\",\n        \"Operating System :: Microsoft :: Windows\",\n        \"Operating System :: POSIX :: Linux\",\n        \"Operating System :: MacOS\",\n        \"Programming Language :: Python :: 3\",\n        \"Programming Language :: Python :: 3.8\",\n        \"Programming Language :: Python :: 3.9\",\n        \"Programming Language :: Python :: 3.10\",\n        \"Programming Language :: Python :: 3.11\",\n        \"Programming Language :: Python :: 3.12\",\n        
\"Programming Language :: Python :: 3 :: Only\",\n        \"Topic :: Internet :: WWW/HTTP :: Dynamic Content\",\n        \"Topic :: Internet :: WWW/HTTP :: Indexing/Search\",\n        \"Topic :: Software Development :: Libraries :: Python Modules\",\n        \"Topic :: Software Development :: Libraries :: Application Frameworks\",\n        \"Topic :: Utilities\",\n        \"Topic :: Scientific/Engineering :: Information Analysis\",\n        \"Topic :: Office/Business :: Financial :: Investment\",\n        \"Topic :: Internet :: WWW/HTTP :: Browsers\",\n        \"Topic :: Text Processing :: Markup :: HTML\",\n        \"Topic :: Database\",\n        \"Natural Language :: English\",\n        \"Environment :: Console\",\n        \"Environment :: Web Environment\",\n        \"Framework :: AsyncIO\",\n        \"Typing :: Typed\",\n    ],\n    platforms=[\"any\"],\n    include_package_data=True,\n    zip_safe=False,\n)"
  },
  {
    "path": "tests/__init__.py",
    "content": "# Tests package"
  },
  {
    "path": "tests/test_app_methods.py",
    "content": "import unittest\nimport warnings\nimport time\nfrom gplay_scraper import GPlayScraper\nfrom gplay_scraper.exceptions import GPlayScraperError, NetworkError, RateLimitError\n\n\nclass TestAppMethods(unittest.TestCase):\n    \n    @classmethod\n    def setUpClass(cls):\n        cls.scraper = GPlayScraper()  # Initialize scraper\n        cls.app_id = \"com.whatsapp\"  # WhatsApp app ID for testing\n        cls.lang = \"en\"  # Language\n        cls.country = \"us\"  # Country\n    \n    def test_app_analyze(self):\n        \"\"\"Test app_analyze returns dictionary with data or handles errors gracefully\"\"\"\n        time.sleep(2)  # Wait 2 seconds before request\n        try:\n            result = self.scraper.app_analyze(self.app_id, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, dict)\n            if result:  # Only check if we got data\n                self.assertIn('title', result)\n                print(f\"\\n✅ App data retrieved for {self.app_id}:\")\n                print(f\"Title: {result.get('title', 'N/A')}\")\n                print(f\"Score: {result.get('score', 'N/A')}\")\n                print(f\"Installs: {result.get('installs', 'N/A')}\")\n                print(f\"Developer: {result.get('developer', 'N/A')}\")\n                print(f\"Total fields: {len(result)}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_app_analyze: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_app_get_field(self):\n        \"\"\"Test app_get_field returns single field value or handles errors gracefully\"\"\"\n        time.sleep(2)  # Wait 2 seconds before request\n        try:\n            result = self.scraper.app_get_field(self.app_id, \"title\", lang=self.lang, country=self.country)\n            if result is not None:\n                self.assertIsInstance(result, str)\n               
 self.assertTrue(len(result) > 0)\n                print(f\"\\n✅ Single field 'title': {result}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_app_get_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_app_get_fields(self):\n        \"\"\"Test app_get_fields returns multiple fields or handles errors gracefully\"\"\"\n        time.sleep(2)  # Wait 2 seconds before request\n        fields = [\"title\", \"score\", \"installs\"]\n        try:\n            result = self.scraper.app_get_fields(self.app_id, fields, lang=self.lang, country=self.country)\n            if result:\n                self.assertIsInstance(result, dict)\n                print(f\"\\n✅ Multiple fields retrieved:\")\n                for field, value in result.items():\n                    print(f\"  {field}: {value}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_app_get_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_app_print_field(self):\n        \"\"\"Test app_print_field executes without error or handles errors gracefully\"\"\"\n        time.sleep(2)  # Wait 2 seconds before request\n        try:\n            print(f\"\\n✅ app_print_field output:\")\n            self.scraper.app_print_field(self.app_id, \"title\", lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_app_print_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"app_print_field raised unexpected {type(e).__name__}: {e}\")\n    \n    def test_app_print_fields(self):\n        \"\"\"Test app_print_fields executes without error or 
handles errors gracefully\"\"\"\n        time.sleep(2)  # Wait 2 seconds before request\n        try:\n            print(f\"\\n✅ app_print_fields output:\")\n            self.scraper.app_print_fields(self.app_id, [\"title\", \"score\"], lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_app_print_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"app_print_fields raised unexpected {type(e).__name__}: {e}\")\n    \n    def test_app_print_all(self):\n        \"\"\"Test app_print_all executes without error or handles errors gracefully\"\"\"\n        time.sleep(2)  # Wait 2 seconds before request\n        try:\n            print(f\"\\n✅ app_print_all output:\")\n            self.scraper.app_print_all(self.app_id, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_app_print_all: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"app_print_all raised unexpected {type(e).__name__}: {e}\")\n\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/test_basic.py",
    "content": "import unittest\nimport sys\nimport os\n\n# Add the parent directory to the path to import gplay_scraper\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\nfrom gplay_scraper import GPlayScraper\n\n\nclass TestBasicFunctionality(unittest.TestCase):\n    \"\"\"Basic tests that don't require network access\"\"\"\n    \n    def test_import_gplay_scraper(self):\n        \"\"\"Test that GPlayScraper can be imported\"\"\"\n        from gplay_scraper import GPlayScraper\n        self.assertTrue(GPlayScraper is not None)\n    \n    def test_scraper_initialization(self):\n        \"\"\"Test that GPlayScraper can be initialized\"\"\"\n        scraper = GPlayScraper()\n        self.assertIsInstance(scraper, GPlayScraper)\n    \n    def test_scraper_initialization_with_http_client(self):\n        \"\"\"Test that GPlayScraper can be initialized with different HTTP clients\"\"\"\n        # Test with requests (default)\n        scraper1 = GPlayScraper(http_client=\"requests\")\n        self.assertIsInstance(scraper1, GPlayScraper)\n        \n        # Test with curl_cffi\n        try:\n            scraper2 = GPlayScraper(http_client=\"curl_cffi\")\n            self.assertIsInstance(scraper2, GPlayScraper)\n        except ImportError:\n            # curl_cffi might not be installed in CI\n            pass\n    \n    def test_scraper_has_required_methods(self):\n        \"\"\"Test that GPlayScraper has all required methods\"\"\"\n        scraper = GPlayScraper()\n        \n        # App methods\n        self.assertTrue(hasattr(scraper, 'app_analyze'))\n        self.assertTrue(hasattr(scraper, 'app_get_field'))\n        self.assertTrue(hasattr(scraper, 'app_get_fields'))\n        self.assertTrue(hasattr(scraper, 'app_print_field'))\n        self.assertTrue(hasattr(scraper, 'app_print_fields'))\n        self.assertTrue(hasattr(scraper, 'app_print_all'))\n        \n        # Search methods\n        self.assertTrue(hasattr(scraper, 
'search_analyze'))\n        self.assertTrue(hasattr(scraper, 'search_get_field'))\n        self.assertTrue(hasattr(scraper, 'search_get_fields'))\n        self.assertTrue(hasattr(scraper, 'search_print_field'))\n        self.assertTrue(hasattr(scraper, 'search_print_fields'))\n        self.assertTrue(hasattr(scraper, 'search_print_all'))\n        \n        # Reviews methods\n        self.assertTrue(hasattr(scraper, 'reviews_analyze'))\n        self.assertTrue(hasattr(scraper, 'reviews_get_field'))\n        self.assertTrue(hasattr(scraper, 'reviews_get_fields'))\n        self.assertTrue(hasattr(scraper, 'reviews_print_field'))\n        self.assertTrue(hasattr(scraper, 'reviews_print_fields'))\n        self.assertTrue(hasattr(scraper, 'reviews_print_all'))\n        \n        # Developer methods\n        self.assertTrue(hasattr(scraper, 'developer_analyze'))\n        self.assertTrue(hasattr(scraper, 'developer_get_field'))\n        self.assertTrue(hasattr(scraper, 'developer_get_fields'))\n        self.assertTrue(hasattr(scraper, 'developer_print_field'))\n        self.assertTrue(hasattr(scraper, 'developer_print_fields'))\n        self.assertTrue(hasattr(scraper, 'developer_print_all'))\n        \n        # List methods\n        self.assertTrue(hasattr(scraper, 'list_analyze'))\n        self.assertTrue(hasattr(scraper, 'list_get_field'))\n        self.assertTrue(hasattr(scraper, 'list_get_fields'))\n        self.assertTrue(hasattr(scraper, 'list_print_field'))\n        self.assertTrue(hasattr(scraper, 'list_print_fields'))\n        self.assertTrue(hasattr(scraper, 'list_print_all'))\n        \n        # Similar methods\n        self.assertTrue(hasattr(scraper, 'similar_analyze'))\n        self.assertTrue(hasattr(scraper, 'similar_get_field'))\n        self.assertTrue(hasattr(scraper, 'similar_get_fields'))\n        self.assertTrue(hasattr(scraper, 'similar_print_field'))\n        self.assertTrue(hasattr(scraper, 'similar_print_fields'))\n        
self.assertTrue(hasattr(scraper, 'similar_print_all'))\n        \n        # Suggest methods (only 4 methods, not 6)\n        self.assertTrue(hasattr(scraper, 'suggest_analyze'))\n        self.assertTrue(hasattr(scraper, 'suggest_nested'))\n        self.assertTrue(hasattr(scraper, 'suggest_print_all'))\n        self.assertTrue(hasattr(scraper, 'suggest_print_nested'))\n\n\nif __name__ == '__main__':\n    unittest.main()"
  },
  {
    "path": "tests/test_developer_methods.py",
    "content": "\"\"\"\nUnit tests for Developer Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScraper\nfrom gplay_scraper.exceptions import GPlayScraperError, NetworkError, RateLimitError\n\n\nclass TestDeveloperMethods(unittest.TestCase):\n    \"\"\"Test suite for developer methods.\"\"\"\n    \n    def setUp(self):\n        \"\"\"Set up test fixtures before each test method.\"\"\"\n        self.scraper = GPlayScraper()  # Initialize scraper\n        self.dev_id = \"5700313618786177705\"  # Google Inc. developer ID\n        self.count = 10  # Number of items to fetch\n        self.lang = \"en\"  # Language\n        self.country = \"us\"  # Country\n    \n    def test_developer_analyze(self):\n        \"\"\"Test developer_analyze returns list of apps.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.developer_analyze(self.dev_id, count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                self.assertGreater(len(result), 0)\n                print(f\"\\n✅ Developer apps ({len(result)} apps):\")\n                for i, app in enumerate(result[:3]):  # Show first 3 apps\n                    print(f\"  {i+1}. 
{app.get('title', 'N/A')} - {app.get('score', 'N/A')} stars\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_developer_analyze: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_developer_get_field(self):\n        \"\"\"Test developer_get_field returns list of field values.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.developer_get_field(self.dev_id, \"title\", count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                print(f\"\\n✅ Developer app titles ({len(result)} apps):\")\n                for i, title in enumerate(result[:3]):\n                    print(f\"  {i+1}. {title}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_developer_get_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_developer_get_fields(self):\n        \"\"\"Test developer_get_fields returns list of dictionaries.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.developer_get_fields(self.dev_id, [\"title\", \"score\"], count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                print(f\"\\n✅ Developer app fields ({len(result)} apps):\")\n                for i, app in enumerate(result[:3]):\n                    print(f\"  {i+1}. 
{app.get('title', 'N/A')} - {app.get('score', 'N/A')} stars\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_developer_get_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_developer_print_field(self):\n        \"\"\"Test developer_print_field executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ developer_print_field output:\")\n            self.scraper.developer_print_field(self.dev_id, \"title\", count=self.count, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_developer_print_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"developer_print_field raised unexpected {e}\")\n    \n    def test_developer_print_fields(self):\n        \"\"\"Test developer_print_fields executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ developer_print_fields output:\")\n            self.scraper.developer_print_fields(self.dev_id, [\"title\", \"score\"], count=self.count, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_developer_print_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"developer_print_fields raised unexpected {e}\")\n    \n    def test_developer_print_all(self):\n        \"\"\"Test developer_print_all executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ developer_print_all output:\")\n            self.scraper.developer_print_all(self.dev_id, count=self.count, 
lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_developer_print_all: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"developer_print_all raised unexpected {e}\")\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/test_list_methods.py",
    "content": "\"\"\"\nUnit tests for List Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScraper\nfrom gplay_scraper.exceptions import GPlayScraperError, NetworkError, RateLimitError\n\n\nclass TestListMethods(unittest.TestCase):\n    \"\"\"Test suite for list methods (top charts).\"\"\"\n    \n    def setUp(self):\n        \"\"\"Set up test fixtures before each test method.\"\"\"\n        self.scraper = GPlayScraper()  # Initialize scraper\n        self.collection = \"TOP_FREE\"  # Top free apps collection\n        self.category = \"GAME\"  # Game category\n        self.count = 10  # Number of items to fetch\n        self.lang = \"en\"  # Language\n        self.country = \"us\"  # Country\n    \n    def test_list_analyze(self):\n        \"\"\"Test list_analyze returns list of top apps.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.list_analyze(self.collection, self.category, count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                self.assertGreater(len(result), 0)\n                print(f\"\\n✅ Top {self.collection} {self.category} apps ({len(result)} apps):\")\n                for i, app in enumerate(result[:3]):  # Show first 3 apps\n                    print(f\"  {i+1}. 
{app.get('title', 'N/A')} - {app.get('developer', 'N/A')}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_list_analyze: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_list_get_field(self):\n        \"\"\"Test list_get_field returns list of field values.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.list_get_field(self.collection, self.category, \"title\", count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                print(f\"\\n✅ Top chart app titles ({len(result)} apps):\")\n                for i, title in enumerate(result[:3]):\n                    print(f\"  {i+1}. {title}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_list_get_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_list_get_fields(self):\n        \"\"\"Test list_get_fields returns list of dictionaries.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.list_get_fields(self.collection, self.category, [\"title\", \"score\"], count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                print(f\"\\n✅ Top chart app fields ({len(result)} apps):\")\n                for i, app in enumerate(result[:3]):\n                    print(f\"  {i+1}. 
{app.get('title', 'N/A')} - {app.get('score', 'N/A')} stars\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_list_get_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_list_print_field(self):\n        \"\"\"Test list_print_field executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ list_print_field output:\")\n            self.scraper.list_print_field(self.collection, self.category, \"title\", count=self.count, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_list_print_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"list_print_field raised unexpected {e}\")\n    \n    def test_list_print_fields(self):\n        \"\"\"Test list_print_fields executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ list_print_fields output:\")\n            self.scraper.list_print_fields(self.collection, self.category, [\"title\", \"score\"], count=self.count, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_list_print_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"list_print_fields raised unexpected {e}\")\n    \n    def test_list_print_all(self):\n        \"\"\"Test list_print_all executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ list_print_all output:\")\n            self.scraper.list_print_all(self.collection, self.category, count=self.count, lang=self.lang, country=self.country)\n   
     except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_list_print_all: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"list_print_all raised unexpected {e}\")\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/test_package.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nSimple test script to verify gplay-scraper package functionality\nThis script tests basic import and initialization without network calls\n\"\"\"\n\nimport unittest\nimport sys\nimport os\n\n# Add the parent directory to the path to import gplay_scraper\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\nfrom gplay_scraper import GPlayScraper\n\n\nclass TestPackageFunctionality(unittest.TestCase):\n    \"\"\"Package functionality tests that don't require network access\"\"\"\n    \n    def test_import_gplay_scraper(self):\n        \"\"\"Test that GPlayScraper can be imported\"\"\"\n        from gplay_scraper import GPlayScraper\n        self.assertTrue(GPlayScraper is not None)\n    \n    def test_scraper_initialization(self):\n        \"\"\"Test that GPlayScraper can be initialized\"\"\"\n        scraper = GPlayScraper()\n        self.assertIsInstance(scraper, GPlayScraper)\n    \n    def test_http_clients(self):\n        \"\"\"Test different HTTP client initializations\"\"\"\n        clients = [\"requests\", \"curl_cffi\", \"tls_client\", \"httpx\", \"urllib3\", \"cloudscraper\", \"aiohttp\"]\n        success_count = 0\n        \n        for client in clients:\n            try:\n                scraper = GPlayScraper(http_client=client)\n                self.assertIsInstance(scraper, GPlayScraper)\n                success_count += 1\n            except ImportError:\n                # Optional dependency not available\n                pass\n        \n        self.assertGreater(success_count, 0, \"At least one HTTP client should work\")\n    \n    def test_all_methods_exist(self):\n        \"\"\"Test that all expected methods exist\"\"\"\n        scraper = GPlayScraper()\n        \n        method_groups = [\n            (\"app\", [\"analyze\", \"get_field\", \"get_fields\", \"print_field\", \"print_fields\", \"print_all\"]),\n            (\"search\", [\"analyze\", \"get_field\", 
\"get_fields\", \"print_field\", \"print_fields\", \"print_all\"]),\n            (\"reviews\", [\"analyze\", \"get_field\", \"get_fields\", \"print_field\", \"print_fields\", \"print_all\"]),\n            (\"developer\", [\"analyze\", \"get_field\", \"get_fields\", \"print_field\", \"print_fields\", \"print_all\"]),\n            (\"similar\", [\"analyze\", \"get_field\", \"get_fields\", \"print_field\", \"print_fields\", \"print_all\"]),\n            (\"list\", [\"analyze\", \"get_field\", \"get_fields\", \"print_field\", \"print_fields\", \"print_all\"]),\n            (\"suggest\", [\"analyze\", \"nested\", \"print_all\", \"print_nested\"]),\n        ]\n        \n        total_methods = 0\n        for group, methods in method_groups:\n            for method in methods:\n                method_name = f\"{group}_{method}\"\n                if hasattr(scraper, method_name):\n                    print(f\"✓ Method {method_name} exists\")\n                    total_methods += 1\n                else:\n                    print(f\"✗ Method {method_name} missing\")\n                    self.fail(f\"Method {method_name} missing\")\n        \n        print(f\"\\n✅ All {total_methods} methods found and working!\")\n        self.assertEqual(total_methods, 40, \"Should have exactly 40 methods\")\n\n\nif __name__ == '__main__':\n    unittest.main()"
  },
  {
    "path": "tests/test_reviews_methods.py",
    "content": "\"\"\"\nUnit tests for Reviews Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScraper\nfrom gplay_scraper.exceptions import GPlayScraperError, NetworkError, RateLimitError\n\n\nclass TestReviewsMethods(unittest.TestCase):\n    \"\"\"Test suite for reviews methods.\"\"\"\n    \n    def setUp(self):\n        \"\"\"Set up test fixtures before each test method.\"\"\"\n        self.scraper = GPlayScraper()  # Initialize scraper\n        self.app_id = \"com.whatsapp\"  # WhatsApp app ID for testing\n        self.count = 10  # Number of items to fetch\n        self.lang = \"en\"  # Language\n        self.country = \"us\"  # Country\n        self.sort = \"NEWEST\"  # Sort order for reviews\n    \n    def test_reviews_analyze(self):\n        \"\"\"Test reviews_analyze returns list of reviews.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.reviews_analyze(self.app_id, count=self.count, sort=self.sort, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                self.assertGreater(len(result), 0)\n                print(f\"\\n✅ Reviews for {self.app_id} ({len(result)} reviews):\")\n                for i, review in enumerate(result[:2]):  # Show first 2 reviews\n                    print(f\"  {i+1}. 
{review.get('userName', 'Anonymous')} - {review.get('score', 'N/A')} stars\")\n                    print(f\"     {review.get('content', 'No content')[:100]}...\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_reviews_analyze: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_reviews_get_field(self):\n        \"\"\"Test reviews_get_field returns list of field values.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.reviews_get_field(self.app_id, \"userName\", count=self.count, sort=self.sort, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                print(f\"\\n✅ Review usernames ({len(result)} reviews):\")\n                for i, username in enumerate(result[:3]):\n                    print(f\"  {i+1}. {username or 'Anonymous'}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_reviews_get_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_reviews_get_fields(self):\n        \"\"\"Test reviews_get_fields returns list of dictionaries.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.reviews_get_fields(self.app_id, [\"userName\", \"score\"], count=self.count, sort=self.sort, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                print(f\"\\n✅ Review fields ({len(result)} reviews):\")\n                for i, review in enumerate(result[:3]):\n                    print(f\"  {i+1}. 
{review.get('userName', 'Anonymous')} - {review.get('score', 'N/A')} stars\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_reviews_get_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_reviews_print_field(self):\n        \"\"\"Test reviews_print_field executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ reviews_print_field output:\")\n            self.scraper.reviews_print_field(self.app_id, \"userName\", count=self.count, sort=self.sort, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_reviews_print_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"reviews_print_field raised unexpected {e}\")\n    \n    def test_reviews_print_fields(self):\n        \"\"\"Test reviews_print_fields executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ reviews_print_fields output:\")\n            self.scraper.reviews_print_fields(self.app_id, [\"userName\", \"score\"], count=self.count, sort=self.sort, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_reviews_print_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"reviews_print_fields raised unexpected {e}\")\n    \n    def test_reviews_print_all(self):\n        \"\"\"Test reviews_print_all executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ reviews_print_all output:\")\n            self.scraper.reviews_print_all(self.app_id, 
count=self.count, sort=self.sort, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_reviews_print_all: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"reviews_print_all raised unexpected {type(e).__name__}: {e}\")\n\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/test_search_methods.py",
    "content": "\"\"\"\nUnit tests for Search Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScraper\nfrom gplay_scraper.exceptions import GPlayScraperError, NetworkError, RateLimitError\n\n\nclass TestSearchMethods(unittest.TestCase):\n    \n    @classmethod\n    def setUpClass(cls):\n        cls.scraper = GPlayScraper()  # Initialize scraper\n        cls.query = \"social media\"  # Search query\n        cls.count = 10  # Number of items to fetch\n        cls.lang = \"en\"  # Language\n        cls.country = \"us\"  # Country\n    \n    def test_search_analyze(self):\n        \"\"\"Test search_analyze returns list of results\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.search_analyze(self.query, count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                self.assertGreater(len(result), 0)\n                self.assertIn('title', result[0])\n                print(f\"\\n✅ Search results for '{self.query}' ({len(result)} apps):\")\n                for i, app in enumerate(result[:3]):  # Show first 3 results\n                    print(f\"  {i+1}. 
{app.get('title', 'N/A')} - {app.get('developer', 'N/A')}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_search_analyze: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_search_get_field(self):\n        \"\"\"Test search_get_field returns list of field values\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.search_get_field(self.query, \"title\", count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                self.assertGreater(len(result), 0)\n                print(f\"\\n✅ App titles from search ({len(result)} results):\")\n                for i, title in enumerate(result[:3]):\n                    print(f\"  {i+1}. {title}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_search_get_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_search_get_fields(self):\n        \"\"\"Test search_get_fields returns list of dictionaries\"\"\"\n        time.sleep(2)\n        fields = [\"title\", \"score\"]\n        try:\n            result = self.scraper.search_get_fields(self.query, fields, count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                self.assertGreater(len(result), 0)\n                for field in fields:\n                    self.assertIn(field, result[0])\n                print(f\"\\n✅ Multiple fields from search ({len(result)} results):\")\n                for i, app in enumerate(result[:3]):\n                    print(f\"  {i+1}. 
{app.get('title', 'N/A')} - Score: {app.get('score', 'N/A')}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_search_get_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_search_print_field(self):\n        \"\"\"Test search_print_field executes without error\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ search_print_field output for '{self.query}':\")\n            self.scraper.search_print_field(self.query, \"title\", count=self.count, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_search_print_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"search_print_field raised unexpected {type(e).__name__}: {e}\")\n    \n    def test_search_print_fields(self):\n        \"\"\"Test search_print_fields executes without error\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ search_print_fields output for '{self.query}':\")\n            self.scraper.search_print_fields(self.query, [\"title\", \"score\"], count=self.count, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_search_print_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"search_print_fields raised unexpected {type(e).__name__}: {e}\")\n    \n    def test_search_print_all(self):\n        \"\"\"Test search_print_all executes without error\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ search_print_all output for '{self.query}':\")\n            
self.scraper.search_print_all(self.query, count=self.count, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_search_print_all: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"search_print_all raised unexpected {type(e).__name__}: {e}\")\n\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/test_similar_methods.py",
    "content": "\"\"\"\nUnit tests for Similar Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScraper\nfrom gplay_scraper.exceptions import GPlayScraperError, NetworkError, RateLimitError\n\n\nclass TestSimilarMethods(unittest.TestCase):\n    \"\"\"Test suite for similar methods (find related apps).\"\"\"\n    \n    def setUp(self):\n        \"\"\"Set up test fixtures before each test method.\"\"\"\n        self.scraper = GPlayScraper()  # Initialize scraper\n        self.app_id = \"com.whatsapp\"  # WhatsApp app ID for testing\n        self.count = 10  # Number of items to fetch\n        self.lang = \"en\"  # Language\n        self.country = \"us\"  # Country\n    \n    def test_similar_analyze(self):\n        \"\"\"Test similar_analyze returns list of similar apps.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.similar_analyze(self.app_id, count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                self.assertGreater(len(result), 0)\n                print(f\"\\n✅ Similar apps to {self.app_id} ({len(result)} apps):\")\n                for i, app in enumerate(result[:3]):  # Show first 3 apps\n                    print(f\"  {i+1}. 
{app.get('title', 'N/A')} - {app.get('developer', 'N/A')}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_similar_analyze: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_similar_get_field(self):\n        \"\"\"Test similar_get_field returns list of field values.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.similar_get_field(self.app_id, \"title\", count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                print(f\"\\n✅ Similar app titles ({len(result)} apps):\")\n                for i, title in enumerate(result[:3]):\n                    print(f\"  {i+1}. {title}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_similar_get_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_similar_get_fields(self):\n        \"\"\"Test similar_get_fields returns list of dictionaries.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.similar_get_fields(self.app_id, [\"title\", \"score\"], count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                print(f\"\\n✅ Similar app fields ({len(result)} apps):\")\n                for i, app in enumerate(result[:3]):\n                    print(f\"  {i+1}. 
{app.get('title', 'N/A')} - {app.get('score', 'N/A')} stars\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_similar_get_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_similar_print_field(self):\n        \"\"\"Test similar_print_field executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ similar_print_field output:\")\n            self.scraper.similar_print_field(self.app_id, \"title\", count=self.count, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_similar_print_field: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"similar_print_field raised unexpected {e}\")\n    \n    def test_similar_print_fields(self):\n        \"\"\"Test similar_print_fields executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ similar_print_fields output:\")\n            self.scraper.similar_print_fields(self.app_id, [\"title\", \"score\"], count=self.count, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_similar_print_fields: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"similar_print_fields raised unexpected {e}\")\n    \n    def test_similar_print_all(self):\n        \"\"\"Test similar_print_all executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ similar_print_all output:\")\n            self.scraper.similar_print_all(self.app_id, count=self.count, lang=self.lang, country=self.country)\n        
except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_similar_print_all: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"similar_print_all raised unexpected {type(e).__name__}: {e}\")\n\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/test_suggest_methods.py",
    "content": "\"\"\"\nUnit tests for Suggest Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScraper\nfrom gplay_scraper.exceptions import GPlayScraperError, NetworkError, RateLimitError\n\n\nclass TestSuggestMethods(unittest.TestCase):\n    \"\"\"Test suite for suggest methods (search suggestions).\"\"\"\n    \n    def setUp(self):\n        \"\"\"Set up test fixtures before each test method.\"\"\"\n        self.scraper = GPlayScraper()  # Initialize scraper\n        self.term = \"fitness\"  # Search term for testing\n        self.count = 10  # Number of items to fetch\n        self.lang = \"en\"  # Language\n        self.country = \"us\"  # Country\n    \n    def test_suggest_analyze(self):\n        \"\"\"Test suggest_analyze returns list of suggestions.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.suggest_analyze(self.term, count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, list)\n            if result:\n                self.assertGreater(len(result), 0)\n                print(f\"\\n✅ Search suggestions for '{self.term}' ({len(result)} suggestions):\")\n                for i, suggestion in enumerate(result):\n                    print(f\"  {i+1}. 
{suggestion}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_suggest_analyze: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_suggest_nested(self):\n        \"\"\"Test suggest_nested returns nested suggestions.\"\"\"\n        time.sleep(2)\n        try:\n            result = self.scraper.suggest_nested(self.term, count=self.count, lang=self.lang, country=self.country)\n            self.assertIsInstance(result, dict)\n            if result:\n                print(f\"\\n✅ Nested suggestions for '{self.term}':\")\n                for key, suggestions in list(result.items())[:2]:  # Show first 2 nested\n                    print(f\"  '{key}' -> {suggestions[:3]}\")\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_suggest_nested: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n    \n    def test_suggest_print_nested(self):\n        \"\"\"Test suggest_print_nested executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ suggest_print_nested output:\")\n            self.scraper.suggest_print_nested(self.term, count=self.count, lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_suggest_print_nested: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"suggest_print_nested raised unexpected {e}\")\n    \n    def test_suggest_print_all(self):\n        \"\"\"Test suggest_print_all executes without error.\"\"\"\n        time.sleep(2)\n        try:\n            print(f\"\\n✅ suggest_print_all output:\")\n            self.scraper.suggest_print_all(self.term, count=self.count, 
lang=self.lang, country=self.country)\n        except (NetworkError, RateLimitError, GPlayScraperError) as e:\n            warnings.warn(f\"Network/Rate limit error in test_suggest_print_all: {e}\")\n            self.skipTest(f\"Skipping due to network/rate limit: {e}\")\n        except Exception as e:\n            self.fail(f\"suggest_print_all raised unexpected {type(e).__name__}: {e}\")\n\n\nif __name__ == '__main__':\n    unittest.main()\n"
  }
]